find_elements_by_xpath() not producing the desired output python selenium scraping - python-3.x

I'm trying to find a tr by its class, tableone. Here is my code:
browser = webdriver.Chrome(executable_path=path, options=options)
cells = browser.find_elements_by_xpath('//*[@class="tableone"]')
But the output of the cells variable is [], an empty array.
Here is the html of the page:
<tbody class="tableUpper">
<tr class="tableone">
<td><a class="studentName" href="//www.abc.com"> student one</a></td>
<td> <span class="id_one"></span> <span class="long">Place</span> <span class="short">Place</span></td>
<td class="hide-s">
<span class="state"></span> <span class="studentState">student_state</span>
</td>
</tr>
<tr class="tableone">..</tr>
<tr class="tableone">..</tr>
<tr class="tableone">..</tr>
<tr class="tableone">..</tr>
</tbody>

Please try this:
import re

cells = browser.find_elements_by_xpath("//*[contains(local-name(), 'tr') and contains(@class, 'tableone')]")
for e in cells:
    insides = e.find_elements_by_xpath("./td")
    for i in insides:
        result = re.search(r'">(.*?)</', i.get_attribute("outerHTML"))
        if result:
            print(result.group(1))
This gets all the tr elements that have class tableone, then iterates through each element and lists its tds. It then takes the outerHTML of each td and strips it with a regex to get the text value.
It's quite unrefined and may return empty strings. You might need to put some more work into the final product.
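For a quick, Selenium-free check of the regex idea, the same extraction can be run on plain outerHTML strings; a non-greedy match makes the capture stop at the first closing tag instead of swallowing trailing markup. The sample strings below are taken from the question's markup; this is only a sketch of the technique, not the full scraper.

```python
import re

# outerHTML strings such as get_attribute("outerHTML") might return
cells_html = [
    '<td><a class="studentName" href="//www.abc.com"> student one</a></td>',
    '<td><span class="long">Place</span></td>',
]

texts = []
for html in cells_html:
    m = re.search(r'">(.*?)</', html)  # non-greedy: stop at the first closing tag
    if m:  # guard against cells where the pattern finds nothing
        texts.append(m.group(1).strip())

print(texts)  # ['student one', 'Place']
```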

Colspan not working properly using Python Pandas

I have some data that I need to convert into an Excel sheet, which needs to look like this at the end of the day:
I've tried the following code:
import pandas as pd
result = pd.read_html(
"""<table>
<tr>
<th colspan="2">Status N</th>
</tr>
<tr>
<td style="font-weight: bold;">Merchant</td>
<td>Count</td>
</tr>
<tr>
<td>John Doe</td>
<td>10</td>
</tr>
</table>"""
)
writer = pd.ExcelWriter('out/test_pd.xlsx', engine='xlsxwriter')
print(result[0])
result[0].to_excel(writer, sheet_name='Sheet1', index=False)
writer.save()
The issue here is that the colspan is not handled properly. The output looks like this instead:
Can someone help me with how I can use colspan in Python Pandas?
It would be better if I didn't have to use read_html() and could do it directly in Python code, but if that's not possible, I can use read_html().
Since Pandas can't recognize the values and column titles, you should introduce them: if you convert the HTML to the standard format, pandas can handle it correctly. Use thead and tbody to split the header from the values, like this:
result = pd.read_html("""
<table>
<thead>
<tr>
<th colspan="2">Status N</th>
</tr>
<tr>
<td style="font-weight: bold;">Merchant</td>
<td>Count</td>
</tr>
</thead>
<tbody>
<tr>
<td>John Doe</td>
<td>10</td>
</tr>
</tbody>
</table>
"""
)
To write a DataFrame to an Excel file you can use the pandas to_excel method.
result[0].to_excel("out.xlsx")
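One caveat worth noting: read_html parses the colspan header into a two-level MultiIndex, and some pandas versions refuse to write MultiIndex columns with index=False. If you hit that, you can flatten the header first. A sketch, where the column tuples simply reproduce the shape of the example's header:

```python
import pandas as pd

# Reproduce the two-level header that read_html builds from the colspan row
cols = pd.MultiIndex.from_tuples([("Status N", "Merchant"), ("Status N", "Count")])
df = pd.DataFrame([["John Doe", 10]], columns=cols)

# Flatten the MultiIndex into single strings before calling to_excel
df.columns = [" - ".join(col) for col in df.columns]
print(list(df.columns))  # ['Status N - Merchant', 'Status N - Count']
```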

beautiful soup not to parse nested table data

I have a nested table structure. I am using the below code for parsing the data.
for row in table.find_all("tr")[1:][:-1]:
    for td in row.find_all("td")[1:]:
        dataset = td.get_text()
The problem here is nested tables: in my case there are tables inside <td></td>, so these are parsed again after the initial pass, since I am using find_all("tr") and find_all("td"). How can I avoid re-parsing a nested table that has already been parsed?
Input:
<table>
<tr>
<td>1</td><td>2</td>
</tr>
<tr>
<td>3</td><td>4</td>
</tr>
<tr>
<td>5
<table><tr><td>11</td><td>22</td></tr></table>
6
</td>
</tr>
</table>
Expected Output:
1 2
3 4
5
11 22
6
But what I am getting is:
1 2
3 4
5
11 22
11 22
6
That is, the inner table is parsed again.
Specs:
beautifulsoup4==4.6.3
Data order should be preserved and content could be anything including any alphanumeric characters.
Using a combination of bs4 and re, you might achieve what you want.
I am using bs4 4.6.3.
from bs4 import BeautifulSoup as bs
import re
html = '''
<table>
<tr>
<td>1</td><td>2</td>
</tr>
<tr>
<td>3</td><td>4</td>
</tr>
<tr>
<td>5
<table><tr><td>11</td><td>22</td></tr></table>
6
</td>
</tr>
</table>'''
soup = bs(html, 'lxml')
ans = []
for x in soup.findAll('td'):
    if x.findAll('td'):
        for y in re.split(r'<table>.*</table>', str(x)):
            ans += re.findall(r'\d+', y)
    else:
        ans.append(x.text)
print(ans)
For each td we test whether it contains a nested td. If so, we split on the table element, take the text around it, and match every number with a regex.
Note this only works for two levels of depth, but it is adaptable to any depth.
I have tried with the findChildren() method and somehow managed to produce the output. I am not sure if this will help you in other circumstances.
from bs4 import BeautifulSoup
data='''<table>
<tr>
<td>1</td><td>2</td>
</tr>
<tr>
<td>3</td><td>4</td>
</tr>
<tr>
<td>5
<table><tr><td>11</td><td>22</td></tr></table>
6
</td>
</tr>
</table>'''
soup = BeautifulSoup(data, 'html.parser')
for child in soup.find('table').findChildren("tr", recursive=False):
    tdlist = []
    if child.find('table'):
        for td in child.findChildren("td", recursive=False):
            print(td.next_element.strip())
            for td1 in td.findChildren("table", recursive=False):
                for child1 in td1.findChildren("tr", recursive=False):
                    for child2 in child1.findChildren("td", recursive=False):
                        tdlist.append(child2.text)
                    print(' '.join(tdlist))
                    print(child2.next_element.next_element.strip())
    else:
        for td in child.findChildren("td", recursive=False):
            tdlist.append(td.text)
        print(' '.join(tdlist))
Output:
1 2
3 4
5
11 22
6
EDITED for Explanation
Step 1:
When we use findChildren() inside the table, it first returns 3 records.
for child in soup.find('table').findChildren("tr", recursive=False):
    print(child)
Output:
<tr>
<td>1</td><td>2</td>
</tr>
<tr>
<td>3</td><td>4</td>
</tr>
<tr>
<td>5
<table><tr><td>11</td><td>22</td></tr></table>
6
</td>
</tr>
Step 2:
Check that any children has tag <table> and do some operation.
if child.find('table'):
Step 3:
Follow step 1 and use findChildren() to get the <td> tag.
Once you get the <td>, follow step 1 to get its children again.
Step 4:
for td in child.findChildren("td", recursive=False):
    print(td.next_element.strip())
next_element returns the first text node of the tag, so in this case it will return the value 5.
Step 5
for td in child.findChildren("td", recursive=False):
    print(td.next_element.strip())
    for td1 in td.findChildren("table", recursive=False):
        for child1 in td1.findChildren("tr", recursive=False):
            for child2 in child1.findChildren("td", recursive=False):
                tdlist.append(child2.text)
            print(' '.join(tdlist))
            print(child2.next_element.next_element.strip())
As you can see, I have just followed step 1 recursively. Again, I have used child2.next_element.next_element to get the value 6 after the </table> tag.
You can check if another table exists inside a td tag, if it exists then simply skip that td, otherwise use it as a regular td.
for row in table.find_all("tr")[1:][:-1]:
    for td in row.find_all("td")[1:]:
        if td.find('table'):  # check if td has a nested table
            continue
        dataset = td.get_text()
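If you only need the outer table's own cells and are happy to drop the nested data (the same trade-off as the continue above), an alternative is to detach the inner tables with extract() before iterating; this also keeps the text around the nested table, like the 5 and 6 in the question. A minimal sketch, assuming bs4 is available:

```python
from bs4 import BeautifulSoup

html = ("<table><tr><td>1</td><td>2</td></tr>"
        "<tr><td>5<table><tr><td>11</td><td>22</td></tr></table>6</td></tr></table>")
soup = BeautifulSoup(html, "html.parser")

outer = soup.find("table")
for inner in outer.find_all("table"):
    inner.extract()  # detach nested tables so their tds are not seen again

print([td.get_text(" ", strip=True) for td in outer.find_all("td")])
# ['1', '2', '5 6']
```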
In your example, with bs4 4.7.1, I use :has and :not to exclude rows with a table child from the loop:
from bs4 import BeautifulSoup as bs
html = '''
<table>
<tr>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>
<table>
<tr>
<td>11</td>
<td>22</td>
</tr>
</table>
</td>
</tr>
</table>'''
soup = bs(html, 'lxml')
for tr in soup.select('tr:not(:has(table))'):
    print([td.text for td in tr.select('td')])

How to get all td[3] tags from the tr tags with selenium Xpath in python

I have a webpage HTML like this:
<table class="table_type1" id="sailing">
<tbody>
<tr>
<td class="multi_row"></td>
<td class="multi_row"></td>
<td class="multi_row">1</td>
<td class="multi_row"></td>
</tr>
<tr>
<td class="multi_row"></td>
<td class="multi_row"></td>
<td class="multi_row">1</td>
<td class="multi_row"></td>
</tr>
</tbody>
</table>
and the tr tags are dynamic, so I don't know how many of them exist. I need all the td[3] cells of the tr tags in a list for some slicing; it would be much better to iterate with built-in tools if find_element(s)_by_xpath("") supports iteration.
Try
cells = driver.find_elements_by_xpath("//table[@id='sailing']//tr/td[3]")
to get third cell of each row
Edit
For iterating just use a for loop:
print ([i.text for i in cells])
Try following code :
tdElements = driver.find_elements_by_xpath("//table[@id='sailing']/tbody//td")
Edit: for the 3rd element
tdElements = driver.find_elements_by_xpath("//table[@id='sailing']/tbody/tr/td[3]")
To print the text, e.g. 1, from each of the third <td> elements, you can use either the get_attribute() method or the text property. Note that find_elements_* returns a list, so iterate over it. You can use either of the following solutions:
Using a CSS selector and get_attribute():
print([e.get_attribute("innerHTML") for e in driver.find_elements_by_css_selector("table.table_type1#sailing tr td:nth-child(3)")])
Using a CSS selector and the text property:
print([e.text for e in driver.find_elements_by_css_selector("table.table_type1#sailing tr td:nth-child(3)")])
Using XPath and get_attribute():
print([e.get_attribute("innerHTML") for e in driver.find_elements_by_xpath('//table[@class="table_type1" and @id="sailing"]//tr/td[3]')])
Using XPath and the text property:
print([e.text for e in driver.find_elements_by_xpath('//table[@class="table_type1" and @id="sailing"]//tr/td[3]')])
To get the 3rd td of each row, you can try either with XPath
driver.find_elements_by_xpath('//table[@id="sailing"]/tbody//td[3]')
or you can try with css selector like
driver.find_elements_by_css_selector('table#sailing td:nth-child(3)')
As it returns a list, you can iterate over it with a for-each loop:
elements = driver.find_elements_by_xpath('//table[@id="sailing"]/tbody//td[3]')
for element in elements:
    print(element.text)
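The semantics of the td[3] predicate (the third td within each matched tr, not the third match overall) can be sanity-checked without a browser: Python's stdlib ElementTree supports this XPath subset on the question's markup. A small sketch with placeholder cell values:

```python
import xml.etree.ElementTree as ET

html = """<table class="table_type1" id="sailing"><tbody>
<tr><td>a</td><td>b</td><td>1</td><td>c</td></tr>
<tr><td>d</td><td>e</td><td>2</td><td>f</td></tr>
</tbody></table>"""

table = ET.fromstring(html)
# td[3] selects the third td within each tr, as in the Selenium XPath
thirds = [td.text for td in table.findall(".//tr/td[3]")]
print(thirds)  # ['1', '2']
```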

xpath join text from multiple elements python

Hello I have some html file from this website: https://www.oddsportal.com/soccer/argentina/superliga/results/
<td class="name table-participant">
<a href="/soccer/argentina/superliga/independiente-san-martin-tIuN5Umrd/">
<span class="bold">Independiente</span>
"- San Martin T."
</a>
</td>
<td class="name table-participant">
<a href="/soccer/argentina/superliga/lanus-huracan-xIDIe0Gr/">
"Lanus - "
<span class="bold">Huracan</span>
</a>
</td>
<td class="name table-participant">
Rosario Central - Colon Santa FE
</td>
I want to select and join a/text() and span/text() in order to look like this: "Independiente - San Martin T."
As you can see, the span is not always in the same place, and sometimes it is missing (see the last td).
I used this code:
('//td[@class="name table-participant"]/a/text() | span/text()').extract()
but it returns only the a/text().
Can you help me to make this work?
Thank you
You're trying to search span/text() without a scope. Add // at the beginning of that part of the query, in full:
('//td[@class="name table-participant"]/a/text() | //span/text()').extract()
But I strongly recommend this solution instead:
('//td[@class="name table-participant"]//*[self::a/ancestor::td or self::span]/text()').extract()
to get spans only from your chosen td scope.
I'm assuming that you're using Scrapy to scrape the HTML.
From the structure of your sample HTML, it looks like you want to obtain the text of the anchor element, so you need to iterate over those.
Only then can you strip and join the text child nodes of the anchor element to obtain properly formatted strings. There is an additional complication from the inconsistent use of quotes, but the following should get you going.
from scrapy.selector import Selector
HTML="""
<td class="name table-participant">
<a href="/soccer/argentina/superliga/independiente-san-martin-tIuN5Umrd/">
<span class="bold">Independiente</span>
"- San Martin T."
</a>
</td>
<td class="name table-participant">
<a href="/soccer/argentina/superliga/lanus-huracan-xIDIe0Gr/">
"Lanus - "
<span class="bold">Huracan</span>
</a>
</td>
<td class="name table-participant">
Rosario Central - Colon Santa FE
</td>
"""
def strip_and_join(x):
    l = []
    for s in x:
        # strip whitespace and quotes
        s = s.strip().strip('"').strip()
        # drop now-empty strings
        if s:
            l.append(s)
    return " ".join(l)

for x in Selector(text=HTML).xpath('//td[@class="name table-participant"]/a'):
    print(strip_and_join(x.xpath('.//text()').extract()))
Note that for the sake of clarity I didn't squeeze the code into a single list comprehension, although this would be possible of course.
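For a quick, Scrapy-free check of the helper, the same cleanup can be run on plain strings; the input below mimics the text() nodes of the second td in the sample, so you can see the whitespace and stray quotes being removed:

```python
def strip_and_join(parts):
    cleaned = []
    for s in parts:
        s = s.strip().strip('"').strip()  # strip whitespace and stray quotes
        if s:  # drop now-empty strings
            cleaned.append(s)
    return " ".join(cleaned)

print(strip_and_join(["\n", '"Lanus - "', "Huracan", "\n"]))  # Lanus - Huracan
```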

VBA Excel get text inside HTMLObject

I know this is really easy for some of you out there, but I have been digging through the internet and I cannot find an answer. I need to get the company name that is inside
tbody tr td a (eBay-tradera.com)
and the amount that is inside
td class="bS aR" (970,80)
<tbody id="matrix1_group0">
<tr class="oR" onmouseover="onMouseOver(this, false)" onmouseout="onMouseOut(this, false)" onclick="onClick(this, false)">
<td class="bS"> </td>
<td>
<a href="aProgramInfoApplyRead.action?programId=175&affiliateId=2014848" title="http://www.tradera.com/" target="_blank">
eBay-Tradera.com
</a>
</td>
<td class="aR">
175</td>
<td class="bS aR">0</td><td class="bS aR">0</td><td class="bS aR">187</td>
<td class="aR">0,00%</td><td class="bS aR">124</td>
<td class="aR">0,00%</td>
<td class="bS aR">26</td>
<td class="aR">20,97%</td>
<td class="bS aR">32</td>
<td class="aR">60,80</td>
<td class="aR">25,81%</td>
<td class="bS aR">5 102,00</td>
<td class="bS aR">0,00</td>
<td class="aR">0,00</td>
<td class="bS aR">
970,80
</td>
</tr>
</tbody>
This is my code, where I only try to get the a tag to start off with, but I can't get that to work either:
Set TDelements = document.getElementById("matrix1_group0").document.getElementsbytagname("a").innerHTML
r = 0
C = 0
For Each TDelement In TDelements
    Blad1.Range("A1").Offset(r, C).Value = TDelement.innerText
    r = r + 1
Next
Thanks in advance. I know this might be too simple, but I hope other people have the same issue and this will be helpful for them as well. The reason for the "r = r + 1" is that there are many more companies on the list; I just wanted to make it as easy as I could. Thanks again!
You will need to specify the element location in the table. eBay seems to be obfuscating the class names, so we cannot rely on those being consistent. Nor would I usually rely on elements by their table index being consistent, but I don't see any way around this.
I am assuming that this is the HTML document you are searching
<tbody id="matrix1_group0">
<tr class="oR" onmouseover="onMouseOver(this, false)" onmouseout="onMouseOut(this, false)" onclick="onClick(this, false)">
<td class="bS"> </td>
<td>
<a href="aProgramInfoApplyRead.action?programId=175&affiliateId=2014848" title="http://www.tradera.com/" target="_blank">
eBay-Tradera.com <!-- <=== You want this? -->
</a>
</td>
<!-- ... -->
</tr>
<!-- ... -->
</tbody>
We can ignore the rest of the document as the table element has an ID. In short, we assume that
.getElementById("matrix1_group0").getElementsByTagName("TR")
will return a collection of html row objects sorted by their appearance.
Set matrix = document.getElementById("matrix1_group0")
Set firstRow = matrix.getElementsByTagName("TR")(0)              ' DOM collections are 0-based
Set firstRowSecondCell = firstRow.getElementsByTagName("TD")(1)  ' second cell holds the anchor
traderaName = firstRowSecondCell.innerText
Of course you could inline this all as
document.getElementById("matrix1_group0").getElementsByTagName("TR")(0).getElementsByTagName("TD")(1).innerText
but that would make debugging harder. Also, if the web page is ever presented to you in a different format, this won't work; eBay deliberately makes it hard for you to scrape data off it.
With only the HTML you have shown you can use CSS selectors to obtain these:
a[href*='aProgramInfoApplyRead.action?programId']
Which says a tag with attribute href that contains the string 'aProgramInfoApplyRead.action?programId'. This matches two elements but the first is the one you want.
VBA:
You can use .querySelector method of .document to retrieve the first match
Debug.Print ie.document.querySelector("a[href*='aProgramInfoApplyRead.action?programId']").innerText
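For readers working the same selector from Python rather than VBA, bs4's select_one accepts the identical CSS. This is a sketch only: the markup is trimmed from the question, and bs4 (with its soupsieve selector backend) is assumed to be available.

```python
from bs4 import BeautifulSoup

html = '''<tbody id="matrix1_group0"><tr>
<td class="bS"></td>
<td><a href="aProgramInfoApplyRead.action?programId=175&affiliateId=2014848"
       title="http://www.tradera.com/">eBay-Tradera.com</a></td>
</tr></tbody>'''

soup = BeautifulSoup(html, "html.parser")
# Same attribute-contains selector as the VBA querySelector call above
link = soup.select_one("a[href*='aProgramInfoApplyRead.action?programId']")
print(link.get_text(strip=True))  # eBay-Tradera.com
```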
