Project dependencies may have API risk issues #7

Open
PyDeps opened this issue Oct 27, 2022 · 1 comment

@PyDeps commented Oct 27, 2022

Hi. In NetblockTool, inappropriate dependency version constraints can introduce risk.

Below are the dependencies and version constraints that the project is using:

netaddr
bs4
lxml
requests
fuzzywuzzy
tqdm

The == constraint introduces a risk of dependency conflicts because it pins dependencies too strictly.
Constraints with no upper bound, or *, introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs that the project calls.

After further analysis of this project:
The version constraint of requests can be changed to >=2.4.0,<=2.15.1.
The version constraint of tqdm can be changed to >=4.36.0,<=4.64.0.

These suggestions minimize dependency conflicts while still allowing the newest versions that do not trigger missing-API errors in the project.
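
For reference, here is a sketch of how the suggested ranges could be written in a requirements.txt (assuming the project uses one; the unpinned entries simply mirror the dependency list above):

```
netaddr
bs4
lxml
requests>=2.4.0,<=2.15.1
fuzzywuzzy
tqdm>=4.36.0,<=4.64.0
```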

The project invokes all of the following methods (a short usage sketch of the requests and tqdm calls follows the list).

Methods called from requests:
requests.packages.urllib3.disable_warnings
requests.post
requests.get
Methods called from tqdm:
tqdm.tqdm
All methods called in the project:
round
sub_list.join.lower.replace.replace
context_ref.startswith
test_string.rstrip
ranges.append
temp.replace.lower
result_list3.append
sub_list.str.lower
edgar.Company.get_documents
last_row.getchildren.getchildren
netaddr.IPAddress
replace
hyphen_check_string.split.lower
company.lower.rstrip.startswith
company.get_all_filings.xpath
blacklist.append
document.getchildren
potential_nets.append
get_nethandle
self.get_10Ks
process_company_name
random.uniform
zip
str.replace
re.sub
s.datetime.strptime.date
len
int.lower
companies_page_path.open.read
sys.exit
url.replace.split
f.endswith
retrieved_pocs.append
process_potential_company
result_list.clear
result_list.append
all_companies_page.content.decode
sub_list.join.lower
address.split.rstrip
get_org_address_info
possible_companies.append
self._group_document_type
Company.get_request
name.lower
url.replace.replace
netaddr.iprange_to_cidrs
haystack.split
lxml.etree.XMLParser
arg_target.replace.replace
tqdm.tqdm
return_string.replace.replace
csv.writer.writerow
tag.text.split
ranges.sort
last_row.getchildren.getchildren.replace
process_output_file
tree.find_class.find_class
a.text.str.strip
blacklist6.append
str.split
cls._clean_text_
get_asn_subnets
Company.get_documents
arg_address_grep.lower.lower
USER_AGENT.edgar.Edgar.find_company_name
get_asn_info
content_page.find_class.getchildren
process_dedup_filter.append
process_poc_output
elem.attrib.get
self._document_urls.append
last_row.getchildren.getchildren.split
re.findall
ranges6.sort
elem.text_content
html.text_content
self.attrib.get
os.path.dirname
tag.text.str.count
edgar.Company.replace
edgar.Company.title
company.lower
links.append
main
company.lower.rstrip
sorted
get_asn_subnets.append
grouped.append
time.sleep
socket.inet_aton
warnings.filterwarnings
properties.get
super
hyphen_check_string.split.split
context_ref_to_date_text
returnName.replace.replace
company.lower.rstrip.endswith
netaddr.IPNetwork
tag.text.split.lower
page.xpath.getchildren
int.replace
tag.text.str.split.split
process_dedup_filter
document.Documents
XBRLElement
elem.getchildren
result.append
process_ip_count
elem.tag.find
get_google_networks
temp.lower.split
cls.get_request
process_company_extension.append
elem.xpath
dict
address.split.split
argparse.ArgumentParser.add_argument
arin_org_addresses.append
company_name.lower.check_string.lower.split.str.isalpha
e.isalnum
Company.get_all_filings
format
enumerate
company.lower.replace
isinstance
get_statistics
all
self.all_companies_dict.items
urllib.parse.quote_plus
argparse.ArgumentParser
self.__parse_base_elem__
process_output_addresses
item.endswith
context.getchildren
edgar.Company
get_net_info
all_companies_page.content.decode.split
self._get
self.get_company_info
join
companyInfo.getchildren.getchildren
no_ip_data.append
ranges6.append
words.lower.lower
company_name.lower
href.split.split
get_org_poc_info
self.__get_text_from_list__
lxml.html.fromstring.find_class
company.replace.replace
company_name.replace.lower
lxml.html.fromstring.xpath
sub_list.append
lxml.html.fromstring
table.getchildren.getchildren
val.text_content
process_duplicate_ranges
requests.get.xpath
get_key_tag
findnth
sub_company_list.append
edgar.Company.get_all_filings
all_tags.count
process_output_name
edgar.Company.lower
USER_AGENT.edgar.Edgar.get_cik_by_company_name
process_netblock_confidence
get_poc_info
bs4.BeautifulSoup.findAll
Company
html.unescape
bs4.BeautifulSoup
poc_list_unique.append
line.split.lstrip
result_list2.append
item.split
names_list.append
range
fuzzywuzzy.fuzz.partial_ratio
XBRL.is_parsable
context_ref.split
company.lower.endswith
str
XBRL.clean_tag
sub_list.split
datetime.datetime.strptime
unique_tags.append
csv.writer
join.isalnum
target.lower
get_customer_info
os.path.isfile
process_geolocation
context_ref.find
self.child.text.replace.strip
child.attrib.get
edgar.Company.rstrip
recent.append
doc.tostring.decode
tag.text.str.split.rstrip
s.datetime.strptime.date.strftime
unique_companies.append
check_string.lower.split
os.path.basename
data_file.write
self.get_all_filings
tag.text.str.split
get_usage
requests.packages.urllib3.disable_warnings
elem.xpath.getchildren
asn_dups.append
process_addresses
company.title.rstrip
arin_pocs.append
self.__parse_context__
elem.name.lower
req.text.encode
potential_name.replace
open
sorted.append
int
main.append
int.append
name_check.lower
dups.append
i.parsed.str.rstrip
list_companies.append
get_arin_objects
list
words.lower.split
html.text_content.replace
ip.split.split
requests.post
row.getchildren
operator.itemgetter
process_output_name.endswith
get_ip_coordinates
doc.getchildren
lxml.etree.fromstring
super.__init__
sub_list.join.lower.replace
get_subsidiaries
lxml.etree.tostring
self.child.getchildren
set
process_url_encode
all_tags.append
float
edgar.Edgar
Company.__get_documents_from_element__
json.loads
self.context_ref.get
self.get_filings_url
warnings.catch_warnings
company_strings.append
tuple
Company.get_request.find_class
self.getchildren
cls.get_request.xpath
print
glob.glob
row.getchildren.getchildren
i.isdigit
url.replace.endswith
TXTML.get_HTML_from_document
requests.get
process_company_name.lower
argparse.ArgumentParser.parse_args
re.match
int.split
bs4.BeautifulSoup.find_all
all_companies_array_rev.append
self.child.text.replace
input
csv_headers.append
company_name.replace
basic_ranges.append
process_confidence_threshold
process_company_extension
address_list.count
Edgar.split_raw_string_to_cik_name
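
For illustration, here is a minimal sketch of the requests and tqdm calls flagged above. It is not taken from NetblockTool; the function, URL list, and loop are hypothetical placeholders:

```python
# Sketch only: illustrates the flagged API surface, not NetblockTool's code.
import requests
from tqdm import tqdm

# Suppress the InsecureRequestWarning emitted for verify=False requests.
requests.packages.urllib3.disable_warnings()

def fetch_pages(urls):
    results = []
    for url in tqdm(urls):  # tqdm.tqdm: progress bar over the iterable
        resp = requests.get(url, verify=False)  # requests.get
        results.append(resp.text)
    # requests.post is used the same way for form submissions, e.g.:
    # requests.post("https://example.com/search", data={"q": "acme"})
    return results
```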

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

@gopinaath

I faced a similar problem with Python 3.9.x. To remediate the error, I ran:

python -m pip install requests "urllib3<2"
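
If the project tracks its dependencies in a requirements file, the equivalent pin (my assumption, simply mirroring the command above) would be:

```
requests
urllib3<2
```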
