Unable to access text from a class using selenium on python - python-3.x

I am trying to parse https://2gis.kz, and I ran into a problem: I get an error whenever I use .text or any other method to extract text from an element found by class.
I type in a search query such as "fitness".
My window variable is obtained like this:
all_cards = driver.find_elements(By.CLASS_NAME, "_1hf7139")
for card_ in all_cards:
    card_.click()
    window = driver.find_element(By.CLASS_NAME, "_18lzknl")
This is a rather simplified version of how I open the mini-window with all of the essential information inside it. Below is the piece of code where I try to extract text from the phone number holder.
texts = window.find_elements(By.CLASS_NAME, '_b0ke8')
print(texts)  # this prints out something from where I am concluding that this thing is accessible
try:
    print(texts.text)
except:
    print(".text")
try:
    print(texts.text())
except:
    print(".text()")
try:
    print(texts.get_attribute("innerHTML"))
except:
    print('getAttribute("innerHTML")')
try:
    print(texts.get_attribute("textContent"))
except:
    print('getAttribute("textContent")')
try:
    print(texts.get_attribute("outerHTML"))
except:
    print('getAttribute("outerHTML")')
Hi guys, I solved the issue. The .text method was not working for some reason; I guess the developers somehow managed to protect the information from being read that way. I used
get_attribute("innerHTML")  # afaik this lets us get the HTML inside a particular element
and now it works like a charm.
import io
import re

texts = window.find_elements(By.TAG_NAME, "bdo")
with io.open("t.txt", "a", encoding="utf-8") as f:
    for text in texts:
        nums = re.sub("[^0-9]", "", text.get_attribute("innerHTML"))
        f.write(nums + '\n')
So the problem was that:
I was trying to print a list of elements just by using print(texts)
Even when I tried to print each element of the texts variable in a for loop, I was getting an error because of the UTF-8 encoding.
I hope someone will find this useful and will not spend a plethora of time trying to fix such a simple bug.
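For reference, a likely explanation (my assumption, not something the site's developers confirmed) is that Selenium's .text only returns text that is currently rendered and visible, while get_attribute("textContent") and get_attribute("innerHTML") also return the content of elements that are not displayed. A minimal sketch of the difference, assuming element is any previously located web element:
# element is assumed to be any web element located earlier (e.g. one item from all_cards)
print(element.text)                           # only text that is rendered and visible
print(element.get_attribute("textContent"))   # text content, including hidden elements
print(element.get_attribute("innerHTML"))     # the raw HTML markup inside the element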

The find_elements method returns a list of web elements. So this
texts = window.find_elements(By.CLASS_NAME, '_b0ke8')
makes texts a list of web elements.
You cannot apply the .text method directly to a list.
In order to get each element's text you will have to iterate over the elements in the list and extract each element's text, like this:
text_elements = window.find_elements(By.CLASS_NAME, '_b0ke8')
for element in text_elements:
    print(element.text)
Also, I'm not sure about the locators you are using.
The _1hf7139, _18lzknl and _b0ke8 class names seem to be dynamic class names, i.e. they may change each browsing session.
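If those class names do turn out to be dynamic, a more stable approach is to anchor on attributes that do not change between sessions. A minimal sketch, with hypothetical selectors that would need to be verified against the actual 2gis.kz markup:
from selenium.webdriver.common.by import By

# hypothetical selectors -- check the real attributes in the page source first
cards = driver.find_elements(By.CSS_SELECTOR, "a[href*='/firm/']")   # anchor on a URL pattern instead of a class
phones = driver.find_elements(By.CSS_SELECTOR, "a[href^='tel:']")    # phone numbers are often tel: links
for phone in phones:
    print(phone.get_attribute("href"))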

Related

I am trying to use fitz (PyMuPDF) to extract data from a PDF that contains text in a very unstructured format, but it's returning None at the first step

Here's the code I have been trying with the output:
import fitz
import pandas as pd

doc = fitz.open('xyz.pdf')
page1 = doc[0]
words = page1.get_text("words")
first_annots = []
rec = page1.first_annot.rect
rec
Output:
The output I am expecting is for all text rectangles to be identified and addressed separately.
Here's where I found the code that I am implementing: https://www.analyticsvidhya.com/blog/2021/06/data-extraction-from-unstructured-pdfs/
Independent from your overall intention (to parse unstructured text):
Accessing the page's annotations via page.first_annot makes no sense at all.
Your exception is caused by the fact that the page has no annotations, and therefore page.first_annot is None of course.
Again: whether or not there are annotations has nothing to do with the text of the page. Simply do not access page.first_annot.
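If the goal is to have all text rectangles identified, PyMuPDF can return them straight from the page, no annotations involved. A minimal sketch, assuming the same xyz.pdf as in the question:
import fitz

doc = fitz.open('xyz.pdf')
page1 = doc[0]

# each block is (x0, y0, x1, y1, text, block_no, block_type)
for block in page1.get_text("blocks"):
    rect = fitz.Rect(block[0], block[1], block[2], block[3])  # bounding rectangle of the block
    text = block[4]
    print(rect, text)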

Reading and writing a text value using selenium and pandas when the html element has no definite id

I am working on creating a program that would read a list of aircraft registrations from an excel file and return the aircraft type codes.
My source of information is FlightRadar24. (example - https://www.flightradar24.com/data/aircraft/n502dn)
I tried inspecting the elements on the page to find the correct class id to invoke, and found the id to be listed as "details". When I run my code, it extracts the aircraft name with the class id/name details, instead of the type code.
See here for the example data
I then changed my approach to using XPath to locate the correct text, but with the XPath nothing prints out.
(For the XPath, I used a browser add-on to find the exact XPath for the element, so I am fairly confident that it is correct.)
It gives no output. What would you suggest in this particular instance for extracting values without a definite id?
for i in list_regs:
    driver.get('https://www.flightradar24.com/data/aircraft/' + i)
    driver.implicitly_wait(3)
    load = 0
    while load == 0:
        try:
            element = driver.find_element_by_xpath("/html/body/div[5]/div/section/section[2]/div[1]/div[1]/div[2]/div[2]/span")
            print('element')  # Printing to terminal to see if the right value is returned.
You should probably change your xpath expression to:
//label[.="TYPE CODE"]/following-sibling::span[@class="details"]
and
print('element')
to
print(element)
Edit:
This works for me:
element = driver.find_element_by_xpath('//label[.="TYPE CODE"]/following-sibling::span[@class="details"]')
print(element.text)
Output:
A359
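Putting it together with the loop from the question, a minimal sketch might look like this (assuming list_regs holds the registrations and the Selenium 3-style find_element_by_xpath used in the question):
type_codes = []
for i in list_regs:
    driver.get('https://www.flightradar24.com/data/aircraft/' + i)
    driver.implicitly_wait(3)
    element = driver.find_element_by_xpath('//label[.="TYPE CODE"]/following-sibling::span[@class="details"]')
    type_codes.append(element.text)   # e.g. 'A359'
print(type_codes)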

Selenium Webdriver element returns None even with WebDriverWait

I have a script using BeautifulSoup where I am trying to get the text within a span element.
number_of_pages = soup.find('span', attrs={'class': 'random'})
print(number_of_pages.string)
and it returns a variable like {{lastPage()}} which means it is generated by JS. So, then I changed my script to use Selenium but it returns an element that doesn't contain the text I need. I tried a random website to see if it works there:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("https://hoshiikins.com/") #navigates to hoshiikins.com
spanList= browser.find_elements_by_xpath("/html/body/div[1]/main/div/div[13]/div/div[2]/div/p")
print(spanList)
and what it returns is:
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fe20e73e-5638-420e-a8a0-a8785153c157", element="3065d5b1-f8a6-4e46-9359-87386b4d1511")>]
I then thought it was an issue related to how fast the script runs. So, I added a delay/wait:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.XPATH, "/html/body/div[1]/main/div/div[13]/div/div[2]/div/p"))
)
I even tried different parts of the page and used a class and an ID, but I am not getting any text back. Note that I had tried using spanList.get_attribute('value') and spanList.text, but they return nothing.
I had this same issue: your spanList variable holds web element objects, and the find_elements function doesn't return meaningful text by itself. You have to do one more step and add .text to get the text. You can do this in the print statement:
print(spanText.text)
If this tag is an input element then you'll need
print(spanText.get_attribute('value'))
This should print what you are looking for
It sounds like you're perhaps misunderstanding your results; the code you provided for Selenium works with one small change:
driver.get("https://hoshiikins.com/")
spanList = driver.find_elements_by_xpath("/html/body/div[1]/main/div/div[13]/div/div[2]/div/p")
for span in spanList:
    print(span.text)
Returns Indivdually Handcrafted with Love, Just for You.
You're using find_elements_by_xpath, which is different from find_element_by_xpath as the former is plural (elements). So all you have to do is either change it to element or iterate over your result set and get the text property of the element.
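For completeness, a minimal sketch of the singular variant, which returns one element whose .text can be read directly (same XPath as above):
span = driver.find_element_by_xpath("/html/body/div[1]/main/div/div[13]/div/div[2]/div/p")
print(span.text)   # a single web element, so .text works directly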

How to use a conditional statement while scraping?

I'm trying to scrape the MTA website and need a little help scraping the "Train Lines Row." (Website for reference: https://advisory.mtanyct.info/EEoutage/EEOutageReport.aspx?StationID=All)
The train line information is stored as image files (1 line subway, A line subway, etc.) describing each line that's accessible through a particular station. I've had success scraping info out of rows in which only one train passes through, but I'm having difficulty figuring out how to iterate through the columns that have multiple trains passing through them, using a conditional statement to test whether a station has one line or multiple lines.
tableElements = table.find_elements_by_tag_name('tr')
That's the table I'm iterating through.
tableElements[2].find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_element_by_tag_name('img').get_attribute('alt')
This successfully gives me the value if only one value exists in the particular column.
tableElements[8].find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img')
This gives me a list of values that I can iterate through to extract the values I need.
Now I try to combine these lines of code in a for loop to extract all the information without stopping.
for info in tableElements[1:]:
    if info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img')[1] == True:
        for images in info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img'):
            print(images.get_attribute('alt'))
    else:
        print(info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_element_by_tag_name('img').get_attribute('alt'))
I'm getting the error message "list index out of range." I don't know why, as every iteration done in isolation seems to work. My hunch is that I haven't used the boolean operation properly here. My idea was that if find_elements_by_tag_name had an index of [1], that would mean there are multiple image texts for me to iterate through. Hence why I want to use this boolean operation.
Hi all, thanks so much for your help. I've uploaded my full code to GitHub and attached a link for your reference: https://github.com/tsp2123/MTA-Scraping/blob/master/MTA.ElevatorData.ipynb
The end goal is to put this information into a dataframe, using some formulation like the following and a for loop that will extract the image information that I want.
dataframe = []
for elements in tableElements:
    row = {}
    columnName1 = find_element_by_class_name('td')
    ..
Your logic isn't off here.
"My hunch is I haven't correctly used the boolean operation properly here. My idea was that if find_elements_by_tag_name had an index of [1] that would mean multiple image text for me to iterate through."
The problem is it can't check if the statement is True if there's nothing in index position [1]. Hence the error at this point.
if info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img')[1] == True:
What you want to do is use try. So something like:
for info in tableElements[1:]:
    try:
        if info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img')[1] == True:
            for images in info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img'):
                print(images.get_attribute('alt'))
        else:
            print(info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_element_by_tag_name('img').get_attribute('alt'))
    except:
        # do something else
        print('Nothing found in index position.')
Is it also possible to go back to your question and provide the full code? When I try this, I'm getting 11 table elements, so I want to test it with the specific table you're trying to scrape.
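As a side note, a sketch of an alternative that avoids relying on the exception: grab the list of img elements once and branch on its length (assuming the same tableElements as above):
for info in tableElements[1:]:
    imgs = info.find_elements_by_tag_name('td')[1].find_element_by_tag_name('h4').find_elements_by_tag_name('img')
    if len(imgs) > 1:
        # several train lines pass through this station
        for image in imgs:
            print(image.get_attribute('alt'))
    elif len(imgs) == 1:
        # a single train line
        print(imgs[0].get_attribute('alt'))
    else:
        print('No train line images found in this row.')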

How to save the output of text from selenium chrome (Python)

I'm using Selenium for extracting comments of Youtube.
Everything went well, but when I print comment.text, the output is only the last sentence.
I don't know how to save it for further analysis (cleaning and tokenization).
path = "/mnt/c/Users/xxx/chromedriver.exe"
This is the path where I saved the chromedriver that I downloaded.
chrome = webdriver.Chrome(path)
url = "https://www.youtube.com/watch?v=WPni755-Krg"
chrome.get(url)
chrome.maximize_window()

# scroll down
sleep = 5
chrome.execute_script('window.scrollTo(0, 500);')
time.sleep(sleep)
chrome.execute_script('window.scrollTo(0, 1080);')
time.sleep(sleep)

text_comment = chrome.find_element_by_xpath('//*[@id="contents"]')
comments = text_comment.find_elements_by_xpath('//*[@id="content-text"]')
comment_ids = []
Try this approach for getting the text of all comments. (The for loop part was edited; there was no indentation in the previous code.)
for comment in comments:
    comment_ids.append(comment.get_attribute('id'))
    print(comment.text)
When I print, I can see all the texts here, but how can I access them for further study? Should I always use a for loop? I want to tokenize the texts, but the output is only the last sentence. Is there a way to save a .txt file with all the texts inside it and open it again? I googled it a lot, but it wasn't successful.
So it sounds like you're just trying to store these comments to reference later. Your current solution is to append them to a string and use a token to create substrings? I'm not familiar with Python's data structures, but this sounds like a great job for an array or a list, depending on how you plan to reference this data.
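Something like this minimal sketch might be what you're after, assuming the comments list located in the code above: collect each comment's text into a list, write it out once, and reload it later for cleaning and tokenization.
comment_texts = []
for comment in comments:
    comment_texts.append(comment.text)

# save all comments to disk, one per line
with open("comments.txt", "w", encoding="utf-8") as f:
    f.write('\n'.join(comment_texts))

# reload them later for further analysis
with open("comments.txt", "r", encoding="utf-8") as f:
    saved_comments = f.read().splitlines()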
