Read text files from website with Python - python-3.x

Hello, I have a problem. I want to get all the data from a web page, but it is too large to hold in a variable. I currently retrieve the data like this:
from urllib.request import urlopen
from bs4 import BeautifulSoup

r = urlopen("http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-list-v4_2_0.txt")
r = BeautifulSoup(r, "lxml")
r = r.p.get_text()
# ... some operations
This worked fine until I had to get data from this file:
http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-description-file-v4_2_0.txt
When I run the same code as above on this file, my program hangs at the line
r = BeautifulSoup(r, "lxml")
and it takes forever; nothing happens. I don't know how to get all of this data without saving it to a file, so that I can search it for keywords and print the matches. I can't save it to a file; I have to read it straight from the website.
I will be very thankful for any help.

I think the code below can do what you want. As mentioned in a comment by @alecxe, you don't need BeautifulSoup here. This is really a question about retrieving the contents of a text file from a URL, which is answered in Given a URL to a text file, what is the simplest way to read the contents of the text file?
from urllib.request import urlopen

r = urlopen("http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-list-v4_2_0.txt")
for line in r:
    do_something(line)  # replace with your own processing
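For the keyword search described in the question, the same streaming loop works and avoids loading the whole file into memory. A minimal sketch (the keyword string and the UTF-8 decoding are assumptions for illustration):

from urllib.request import urlopen

url = "http://download.cathdb.info/cath/releases/all-releases/v4_2_0/cath-classification-data/cath-domain-description-file-v4_2_0.txt"
keyword = "1.10.8.10"  # hypothetical keyword, just for illustration

with urlopen(url) as r:
    for raw_line in r:
        # urlopen yields bytes; decode each line before searching it
        line = raw_line.decode("utf-8", errors="replace")
        if keyword in line:
            print(line.rstrip())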

Related

When I parse a large XML sitemap on Beautifulsoup in Python, it only parses part of the file

I have written code that pulls the URLs out of a very large sitemap XML file (10 MB) using Beautiful Soup, and it works exactly how I want it to, but it only seems to process a small portion of the overall file. This is my code:
sitemap = "sitemap1.xml"

from bs4 import BeautifulSoup as bs
import lxml

content = []
with open(sitemap, "r") as file:
    # Read each line in the file; readlines() returns a list of lines
    content = file.readlines()
    # Combine the lines in the list into a string
    content = "".join(content)

bs_content = bs(content, "xml")
result = bs_content.find_all("loc")
for loc in result:
    print(loc.text)
I have changed my IDE settings to allow for larger files, but it still seems to start at a seemingly random point towards the end of the XML file and only extracts from there on.
I just wanted to say I ended up sorting this out. I used the read_xml function in pandas and it worked well. It turned out the original XML file was corrupted.
... I also realised that the console was just printing from a certain point because it's such a large file, and it was still actually processing the whole file.
Sorry about this - I'm new :)
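For reference, a minimal sketch of the pandas approach mentioned above, assuming a standard sitemap that uses the sitemaps.org namespace and pandas >= 1.3 (which introduced read_xml):

import pandas as pd

# Parse the sitemap into a DataFrame; the "sm" prefix is an assumption here
# and simply maps to the standard sitemap namespace.
df = pd.read_xml(
    "sitemap1.xml",
    xpath=".//sm:url",
    namespaces={"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"},
)

# Each <loc> child becomes a column; print every URL.
for url in df["loc"]:
    print(url)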

How to print the content of multiple URLs into one single .txt file?

Good afternoon, I am new to stack overflow so I apologize in advance if my question is not in the right format.
I have a list of URLs such as these (but many more),
master_urls = [
    'https://www.sec.gov/Archives/edgar/daily-index/2020/QTR1/master.20190102.idx',
    'https://www.sec.gov/Archives/edgar/daily-index/2020/QTR1/master.20190103.idx',
]
and I want to write the content of all of them into one single .txt file.
Using just one of these URLs works perfectly fine. I do the steps below to achieve it:
import requests

file_url = r"https://www.sec.gov/Archives/edgar/daily-index/2019/QTR2/master.20190401.idx"
content = requests.get(file_url).content
with open('master_20190401.txt', 'wb') as f:
    f.write(content)
The .txt file looks like this (just a small sample; the rest follows the same format, with different company names, etc.):
CIK|Company Name|Form Type|Date Filed|File Name
--------------------------------------------------------------------------------
1000045|NICHOLAS FINANCIAL INC|8-K|20190401|edgar/data/1000045/0001193125-19-093800.txt
1000209|MEDALLION FINANCIAL CORP|SC 13D/A|20190401|edgar/data/1000209/0001193125-19-094732.txt
1000228|HENRY SCHEIN INC|4|20190401|edgar/data/1000228/0001209191-19-021970.txt
1000275|ROYAL BANK OF CANADA|424B2|20190401|edgar/data/1000275/0001140361-19-006199.txt
I tried the following code to write the content of all the URLs into one text file:
for file in master_urls:
    content = requests.get(file).content
    with open('complete_list.txt', 'w') as f:
        f.write(content)
but it does not work.
Can anyone help me get the content of each URL in my list of URLs onto one single text file?
Thank you in advance.
Since you are re-opening the file inside the loop in write mode for every URL, the file gets overwritten on each iteration.
Try this:
with open('complete_list.txt', 'wb') as f:
    for url in master_urls:
        content = requests.get(url).content
        f.write(content)
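Alternatively, if you want to keep the original loop structure, opening the file in append mode also avoids the overwrite. A small sketch, assuming master_urls is defined as in the question:

import requests

for url in master_urls:
    content = requests.get(url).content
    # 'ab' appends binary content instead of truncating the file each time
    with open('complete_list.txt', 'ab') as f:
        f.write(content)
        f.write(b'\n')  # optional separator between the daily index files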

Unknown encoding of files in a resulting Beautiful Soup txt file

I downloaded 13,000 files (10-K reports from different companies) and I need to extract a specific part of these files (section 1A, Risk Factors). The problem is that I can open these files in Word easily and they look fine, but when I open them in a plain text editor, each document appears to be HTML with tons of encoded strings at the end (EDIT: I suspect this is due to the XBRL format of these files). The same thing happens in the output from BeautifulSoup.
I've tried using online decoders, because I thought it might be connected to Base64 encoding, but none of the known encodings seem to help. I noticed that at the beginning of some files there is something like "created with Certent Disclosure Management 6.31.0.1" (and other programs), and I thought maybe that causes the encoding. Nevertheless, Word can open these files, so I guess there must be a known key to it. This is a sample of the encoded data:
M1G2RBE#MN)T='1,SC4,]%$$Q71T3<XU#[AHMB9#*E1=E_U5CKG&(77/*(LY9
ME$N9MY/U9DC,- ZY:4Z0EWF95RMQY#J!ZIB8:9RWF;\"S+1%Z*;VZPV#(MO
MUCHFYAJ'V#6O8*[R9L<VI8[I8KYQB7WSC#DMFGR[E6+;7=2R)N)1Q\24XQ(K
MYQDS$>UJ65%MV4+(KBRHJ3HFIAR76#G/F$%=*9FOU*DM-6TSTC$Q\[C$YC$/
And here is a sample file from the 13,000 that I downloaded.
Below is the BeautifulSoup code that I use to extract the text. It does its job, but I need to figure out what this encoded string is and somehow decode it in the Python code below.
from bs4 import BeautifulSoup

with open("98752-TOROTEL INC-10-K-2019-07-23", "r") as f:
    contents = f.read()

soup = BeautifulSoup(contents, 'html.parser')
print(soup.getText())

with open("extracted_test.txt", "w", encoding="utf-8") as f:
    f.write(soup.getText())
What I want to achieve is to decode this string at the end of the file.
Ok, this is going to be somewhat messy, but will get you close enough to what you are looking for, without using regex (which is notoriously problematic with html). The fundamental problem you'll be facing is that EDGAR filings are VERY inconsistent in their formatting, so what may work for one 10Q (or 10K or 8K) filing may not work with a similar filing (even from the same filer...) For example, the word 'item' may appear in either lower or uppercase (or mixed), hence the use of the string.lower() method, etc. So there's going to be some cleanup, under all circumstances.
Having said that, the code below should get you the RISK FACTORS sections from both filings (including the one which has none):
url = [one of these two]

import requests
from bs4 import BeautifulSoup as bs

response = requests.get(url)
soup = bs(response.content, 'html.parser')

risks = soup.find_all('a')
for risk in risks:
    if 'item' in str(risk.attrs).lower() and '1a' in str(risk.attrs).lower():
        for i in risk.findAllNext():
            if 'item' in str(i.attrs).lower():
                break
            else:
                print(i.text.strip())
Good luck with your project!

How to save the output of text from selenium chrome (Python)

I'm using Selenium to extract comments from YouTube.
Everything went well, but when I print comment.text, the output is only the last sentence.
I don't know how to save it for further analysis (cleaning and tokenization).
from selenium import webdriver
import time

path = "/mnt/c/Users/xxx/chromedriver.exe"
This is the path where I saved the chromedriver I downloaded.
chrome = webdriver.Chrome(path)
url = "https://www.youtube.com/watch?v=WPni755-Krg"
chrome.get(url)
chrome.maximize_window()

# scroll down
sleep = 5
chrome.execute_script('window.scrollTo(0, 500);')
time.sleep(sleep)
chrome.execute_script('window.scrollTo(0, 1080);')
time.sleep(sleep)

text_comment = chrome.find_element_by_xpath('//*[@id="contents"]')
comments = text_comment.find_elements_by_xpath('//*[@id="content-text"]')
comment_ids = []
Try this approach for getting the text of all comments (the for-loop part was edited; there was no indentation in the previous code):
for comment in comments:
    comment_ids.append(comment.get_attribute('id'))
    print(comment.text)
When I print, I can see all the texts here, but how can I open them for further study? Should I always use a for loop? I want to tokenize the texts, but the output is only the last sentence. Is there a way to save the whole text to a file and open it again? I googled a lot but wasn't successful.
So it sounds like you're just trying to store these comments to reference later. Your current solution is to append them to a string and use a token to create substrings? I'm not that familiar with Python's data structures, but this sounds like a great job for an array or a list, depending on how you plan to reference the data.
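For example, a minimal sketch of the list-based approach, assuming comments holds the elements found in the question's code (the filename youtube_comments.txt is just an illustration):

# Collect every comment's text in a list, then write it all to a file so it
# can be re-opened later for cleaning and tokenization.
all_comments = [comment.text for comment in comments]

with open("youtube_comments.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(all_comments))

# Read the saved comments back for further analysis:
with open("youtube_comments.txt", encoding="utf-8") as f:
    saved_comments = f.read().splitlines()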

Processing all values of an array with get_text

(Disclaimer: I'm a newbie, I'm sorry if this problem is really obvious)
Hello,
I built a little script that first finds certain parts of the HTML markup in a local file and then displays the information without HTML tags.
I used bs4 and find_all / get_text for this. Take a look:
from bs4 import BeautifulSoup

with open("/Users/user1/Desktop/testdatapython.html") as fp:
    soup = BeautifulSoup(fp, "lxml")

titleResults = soup.find_all('span', attrs={'class':'caption-subject'})
firstResult = titleResults[0]
firstStripped = firstResult.get_text()
print(firstStripped)
This actually works so far, but I want to do this for all values of titleResults, not only the first one. However, I can't call get_text on the whole list.
What would be the best way to accomplish this? The number of values in titleResults is always changing, since the local HTML file is only a sample.
Thank you in advance!
P.S. I already looked up this related thread, but sadly it was not enough to understand or solve the problem:
BeautifulSoup get_text from find_all
find_all returns a list, so you can simply iterate over it:
for result in titleResults:
    stripped = result.get_text()
    print(stripped)
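If you want to keep the stripped titles around instead of just printing them, a list comprehension works too (a small sketch, with titleResults as defined in the question):

# Collect the text of every matched <span class="caption-subject"> into a list.
strippedTitles = [result.get_text() for result in titleResults]
print(strippedTitles)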
