How can I click the third href link? - python-3.x

<ul id='pairSublinksLevel1' class='arial_14 bold newBigTabs'>...</ul>
<ul id='pairSublinksLevel2' class='arial_12 newBigTabs'>
  <li>...</li>
  <li>...</li>
  <li>
    <a href='/equities/...'> last data </a> <!-- HERE -->
  </li>
  <li>...</li>
</ul>
The question is: how can I click the third li tag?
In my code
xpath = "//ul[@id='pairSublinksLevel2']"
element = driver.find_element_by_xpath(xpath)
actions = element.find_element_by_css_selector('a').click()
The code works partially, but I want to click the third li tag.
It keeps clicking the second tag instead.

Try
driver.find_element_by_xpath("//ul[@id='pairSublinksLevel2']/li[3]/a").click()
EDIT:
Thanks @DebanjanB for the suggestion:
When you get the element with the XPath //ul[@id='pairSublinksLevel2'] and then search for an a tag among its child elements, it will return the first match (in your case, inside the second li tag). So you can use indexing as shown above to get the specific numbered match. Please note that such indexing starts from 1, not 0.

As per the HTML you have shared you can use either of the following solutions:
Using link_text:
driver.find_element_by_link_text("last data").click()
Using partial_link_text:
driver.find_element_by_partial_link_text("last data").click()
Using css_selector:
driver.find_element_by_css_selector("ul.newBigTabs#pairSublinksLevel2 a[href*='equities']").click()
Using xpath:
driver.find_element_by_xpath("//ul[@class='arial_12 newBigTabs' and @id='pairSublinksLevel2']//a[contains(@href,'equities') and contains(.,'last data')]").click()
Reference: Official locator strategies for the WebDriver
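A quick way to sanity-check the 1-based indexing outside a browser is with the standard library's ElementTree. The hrefs and sibling items below are made up for illustration; only the list shape mirrors the question:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup mirroring the question's <ul>/<li> structure.
markup = """
<ul id="pairSublinksLevel2">
  <li><a href="/first">first</a></li>
  <li><a href="/second">second</a></li>
  <li><a href="/equities/x"> last data </a></li>
  <li><a href="/fourth">fourth</a></li>
</ul>
"""
root = ET.fromstring(markup)
# XPath indexing starts at 1, so li[3] is the third item, not the fourth.
third_link = root.find("li[3]/a")
print(third_link.text.strip())  # last data
```

The same `li[3]` predicate is what makes the Selenium XPath above land on the third item instead of the first match in document order.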

Related

How to find all the span tags inside an element in Selenium Python?

<div id="textelem" class="random">
<span class="a">
TEXT 1
</span>
<span>
<span>TEXT 2 </span>
</span>
<span>TEXT 3</span>
</div>
Python: TargetElem = self.wait.until(EC.presence_of_element_located((By.ID, "textelem")))
I want to get all the text inside the span tags of the TargetElem element. How can I get all the span elements inside TargetElem and loop through them to build a single string of the collected text? Thank you.
Simply use .text:
TargetElem = self.wait.until(EC.presence_of_element_located((By.ID, "textelem")))
print(TargetElem.text)
I do not think you actually need a loop: since we pass the textelem id of the div, and all the span tags are inside that div, .text alone should work.
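For illustration, here is the same idea with the standard library's ElementTree, applied to the question's snippet (an offline sketch, not Selenium): itertext() walks every descendant text node, which is roughly what Selenium's .text does for rendered text.

```python
import xml.etree.ElementTree as ET

# The question's snippet, parsed offline to show what
# "collect all descendant text" means.
markup = """
<div id="textelem" class="random">
  <span class="a">TEXT 1</span>
  <span><span>TEXT 2 </span></span>
  <span>TEXT 3</span>
</div>
"""
root = ET.fromstring(markup)
# itertext() yields every descendant text node in document order.
combined = " ".join(t.strip() for t in root.itertext() if t.strip())
print(combined)  # TEXT 1 TEXT 2 TEXT 3
```

If you did want per-span strings in Selenium, TargetElem.find_elements_by_tag_name('span') would return the elements; note that joining each element's .text would then repeat the nested span's text, because .text includes descendants.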

Selenium Can't Find Element Returning None or []

I'm having trouble accessing an element. Here is my code:
driver.get(url)
desc = driver.find_elements_by_xpath('//p[#class="somethingcss xxx"]')
and I'm also trying another method, like this:
desc = driver.find_elements_by_class_name('somethingcss xxx')
The element I'm trying to find looks like this:
<div data-testid="descContainer">
<div class="abc1123">
<h2 class="xxx">The Description<span data-tid="prodTitle">The Description</span></h2>
<p data-id="paragraphxx" class="somethingcss xxx">sometext here
<br>text
<br>
<br>text
<br> and several text with
<br> tag below
</p>
</div>
<!--and another div tag below-->
I want to extract the p tag inside div class="abc1123", but it doesn't return any result; it only returns [] when I try get_attribute or extract it to text.
When I try to extract another element using this method with another class, it works perfectly.
Does anyone know why I can't access these elements?
Try the following CSS selector to locate the p tag:
print(driver.find_element_by_css_selector("p[data-id^='paragraph'][class^='somethingcss']").text)
Or use get_attribute("textContent"):
print(driver.find_element_by_css_selector("p[data-id^='paragraph'][class^='somethingcss']").get_attribute("textContent"))

How to click on the first result of a search on Metacritic with Selenium

How do I click on the first search result from Metacritic's search bar?
this is what I have so far:
self.search_element = self.driver.find_element_by_name("search_term")
self.search_element.clear()
self.search_element.send_keys(self.game_line_edit.text())
self.link_to_click = self.driver.find_element_by_name("search_results_item")
self.link_to_click.click()
# self.game.setText(self.driver.find_element("search_results_item"))
self.game_line_edit.setText("")
self.driver.close()
but I'm getting this error:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"name","selector":"search_results_item"}
I realize Selenium can't find the link, but I'm not sure which element it is.
this is the HTML I'm trying to click on:
<a class="search_results_item" href="https://www.metacritic.com/game/pc/into-the-breach">
<span class="metascore_w score_outstanding">90</span>
<span class="title" data-mctitle="Into the Breach"><b>Into the</b> Breach</span>
<span class="type secondary">PC Game</span>
<span class="separ secondary">,</span>
<span class="date secondary">2018</span>
</a>
Can someone help?
Thanks!
You are searching by name when you are referring to a class. Instead use a CSS selector, e.g.
.find_element_by_css_selector(".search_results_item")
.search_results_item indicates a class with the name 'search_results_item'.
If that doesn't work, you probably need a wait. See this answer for more info.
If you're fine with using xPath, you can select by index.
(//a[contains(@class,'search_results_item')])[1]
@JeffC is correct as well. You should not select by name because this element has no name. At the very least, select by class or tag.
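To make the mismatch concrete: an offline, standard-library parse of the question's anchor shows it carries a class attribute but no name attribute, which is exactly why By.NAME raises NoSuchElementException (this check does not use Selenium at all):

```python
from html.parser import HTMLParser

# Collect the attributes of the first <a> tag fed to the parser.
class AnchorAttrs(HTMLParser):
    attrs = None

    def handle_starttag(self, tag, attrs):
        if tag == "a" and self.attrs is None:
            self.attrs = dict(attrs)

parser = AnchorAttrs()
parser.feed('<a class="search_results_item" '
            'href="https://www.metacritic.com/game/pc/into-the-breach"></a>')
print("name" in parser.attrs)   # False: nothing for By.NAME to match
print(parser.attrs["class"])    # search_results_item
```

Since only class and href exist, a CSS selector on the class (or an XPath on it) is the right locator here.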

Incorrect number of results found by XPath

Actually, the situation is a little more complex.
I'm trying to get data from this example html:
<li itemprop="itemListElement">
<h4>
one
</h4>
</li>
<li itemprop="itemListElement">
<h4>
two
</h4>
</li>
<li itemprop="itemListElement">
<h4>
three
</h4>
</li>
<li itemprop="itemListElement">
<h4>
four
</h4>
</li>
For now, I'm using Python 3 with urllib and lxml.
For some reason, the following code doesn't work as expected (Please read the comments)
import urllib.request
from lxml import html

scan = []
example_url = "path/to/html"
page = html.fromstring(urllib.request.urlopen(example_url).read())

# Extracting the li elements from the html
for item in page.xpath("//li[@itemprop='itemListElement']"):
    scan.append(item)

# At this point, the list 'scan' length is 4 (nothing wrong)
for list_item in scan:
    # This is supposed to print '1' since there's only one match
    # Yet, this actually prints '4' (This is wrong)
    print(len(list_item.xpath("//h4/a")))
So as you can see, the first step is to extract the 4 li elements and append them to a list, then scan each li element for an a element. But the problem is that each li element in scan actually matches all four elements.
...Or so I thought.
Doing some quick debugging, I found that the scan list contains the four li elements correctly, so I came to one possible conclusion: there's something wrong with the aforementioned for loop.
for list_item in scan:
    # This is supposed to print '1' since there's only one match
    # Yet, this actually prints '4' (This is wrong)
    print(len(list_item.xpath("//h4/a")))
    # Something is wrong here...
The only real problem is that I can't pinpoint the bug. What causes that?
PS: I know, there's an easier way to get the a elements from the list, but this is just an example html, the real one contains many more... things.
In your example, when the XPath starts with //, it will start searching from the root of the document (which is why it was matching all four of the anchor elements). If you want to search relative to the li element, then you would omit the leading slashes:
for item in page.xpath("//li[@itemprop='itemListElement']"):
    scan.append(item)

for list_item in scan:
    print(len(list_item.xpath("h4/a")))
Of course you can also replace // with .// so that the search is relative as well:
for item in page.xpath("//li[@itemprop='itemListElement']"):
    scan.append(item)

for list_item in scan:
    print(len(list_item.xpath(".//h4/a")))
Here is a relevant quote taken from the specification:
2.5 Abbreviated Syntax
// is short for /descendant-or-self::node()/. For example, //para is short for /descendant-or-self::node()/child::para and so will select any para element in the document (even a para element that is a document element will be selected by //para since the document element node is a child of the root node); div//para is short for div/descendant-or-self::node()/child::para and so will select all para descendants of div children.
print(len(list_item.xpath(".//h4/a")))
// means /descendant-or-self::node()/; it starts with /, so it searches from the root node of the document.
Use . to make the current context node (list_item) the starting point, not the whole document.
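The difference is easy to verify outside the browser. Below is a sketch with the standard library's ElementTree (which only accepts the relative .// form) on markup like the question's; the <a> inside each <h4> is an assumption, since the question says the real page carries more markup than the simplified example:

```python
import xml.etree.ElementTree as ET

# Markup modeled on the question, with an assumed <a> inside each <h4>.
markup = """
<ul>
  <li itemprop="itemListElement"><h4><a>one</a></h4></li>
  <li itemprop="itemListElement"><h4><a>two</a></h4></li>
  <li itemprop="itemListElement"><h4><a>three</a></h4></li>
  <li itemprop="itemListElement"><h4><a>four</a></h4></li>
</ul>
"""
root = ET.fromstring(markup)
items = root.findall(".//li[@itemprop='itemListElement']")
print(len(items))  # 4
# .//h4/a is evaluated relative to each <li>, so each call finds one.
counts = [len(li.findall(".//h4/a")) for li in items]
print(counts)  # [1, 1, 1, 1]
```

With lxml, the same per-item query written as //h4/a would instead return 4 every time, because the leading // restarts the search at the document root.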

How I can access elements via a non-standard html property?

I'm trying to implement test automation with watir-webdriver. By the way, I am a newcomer to watir-webdriver, Ruby and co.
All our HTML elements have a unique HTML attribute named "wicketpath". It is possible to access an element by "name", "id" and so on, but not by the attribute "wicketpath". So I tried XPath, but without success.
Can anybody help me with a code snippet showing how I can access an element via the attribute "wicketpath"?
Thanks in advance.
R.
You should be able to use xpath.
For example, consider the following HTML
<ul class="ui-autocomplete" role="listbox">
<li class="ui-menu-item" role="menuitem" wicketpath="false">Value 1</li>
<li class="ui-menu-item" role="menuitem" wicketpath="false">Value 2</li>
<li class="ui-menu-item" role="menuitem" wicketpath="true">Value 3</li>
</ul>
The following xpath will give the text of the li that has wicketpath = true:
puts browser.li(:xpath, "//li[@wicketpath='true']").text
#=>Value 3
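The attribute-based XPath idea is not Watir-specific; any XPath engine can match a non-standard attribute with [@attr='value']. As a quick offline check (Python standard library, run against the example list above):

```python
import xml.etree.ElementTree as ET

# The example list from the answer above.
markup = """
<ul class="ui-autocomplete" role="listbox">
  <li class="ui-menu-item" role="menuitem" wicketpath="false">Value 1</li>
  <li class="ui-menu-item" role="menuitem" wicketpath="false">Value 2</li>
  <li class="ui-menu-item" role="menuitem" wicketpath="true">Value 3</li>
</ul>
"""
root = ET.fromstring(markup)
# Non-standard attributes work exactly like standard ones in predicates.
match = root.find(".//li[@wicketpath='true']")
print(match.text)  # Value 3
```

The Watir line above does the same thing in the live browser, with the XPath evaluated against the real DOM.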
Update - Alternative solution - Adding To Locators:
If you use a lot of wicketpath, you could add it to the locators.
After you require watir-webdriver, add this:
# This allows using :wicketpath in locators
Watir::HTMLElement.attributes << :wicketpath
# This allows accessing the wicketpath attribute
class Watir::Element
  attribute(String, :wicketpath, 'wicketpath')
end
This will let you use 'wicketpath' as a locator:
p browser.li(:wicketpath, 'true').text
#=> "Value 3"
p browser.li(:text, 'Value 3').wicketpath
#=> true
Try this:
puts browser.li(:css, ".ui-autocomplete > .ui-menu-item[wicketpath='true']").text
Please let me know whether the above script works or not.
