Selenium Python Iterate over li elements inside ol - python-3.x

I need to extract, using Python and Selenium, the text of multiple li elements (all of them) inside an ol element.
The list is like this:
/html/body/main/div/div/section/ol/li[3]/div/div/div[2]/div[1]/a[1]/h2
/html/body/main/div/div/section/ol/li[4]/div/div/div[2]/div[1]/a[1]/h2
/html/body/main/div/div/section/ol/li[5]/div/div/div[2]/div[1]/a[1]/h2
/html/body/main/div/div/section/ol/li[6]/div/div/div[2]/div[1]/a[1]/h2
...
So far, I'm using the following code, but it shows only one element, not the whole list.
What is wrong with it?
Thanks!
articles_list = r'/html/body/main/div/div/section/ol'
articles_elements = driver.find_elements_by_xpath(articles_list)
for article in articles_elements:
    title = article.find_element_by_xpath('.//div/div/div[2]/div[1]/a[1]/h2').text
    print('Title:' + title)

I see that the li tags have incrementing indices like [3], then [4], and so on. You can visit each web element by starting a counter at 3 and incrementing it programmatically:
size_of_list = len(driver.find_elements_by_xpath("/html/body/main/div/div/section/ol/li"))
for a in range(3, size_of_list + 1):
    title = driver.find_element_by_xpath(f"/html/body/main/div/div/section/ol/li[{a}]/div/div/div[2]/div[1]/a[1]/h2").text
    print('Title:', title)
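As for what is wrong with the original snippet: its XPath matches the single ol container, so the loop body runs only once and find_element returns just the first matching h2 beneath it. A minimal sketch of looping over the li elements instead, assuming the same page structure and the legacy find_elements_by_xpath API used above:

# grab every li in the ol, then read each title relative to its own li
items = driver.find_elements_by_xpath('/html/body/main/div/div/section/ol/li')
for item in items:
    # some list items may not contain a title, so use find_elements and check
    headings = item.find_elements_by_xpath('./div/div/div[2]/div[1]/a[1]/h2')
    if headings:
        print('Title:', headings[0].text)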

Related

How to iterate over WebElements and get a new WebElement in Robot Framework

I am trying to get the href attribute from an HTML list using Robot Framework keywords. For example, suppose the HTML code is
<ul class="my-list">
  <li class="my-listitem"><a href="...">...</a></li>
  ...
  <li class="my-listitem"><a href="...">...</a></li>
</ul>
I have tried to use the Get WebElement and Get WebElements keywords and a FOR loop without success. How can I do it?
This is my MWE
*** Test Cases ***
    @{a tags} =    Create List
    @{href attr} =    Create List
    @{li items} =    Get WebElements    class:my-listitem
    FOR    ${li}    IN    @{li items}
        ${a tag} =    Get WebElement    tag:a
        Append To List    @{a tags}    ${a tag}
    END
    FOR    ${a tag}    IN    @{a tags}
        ${attr} =    Get Element Attribute    css:my-listitem    href
        Append To List    @{href attr}    ${attr}
    END
Thanks in advance.
The href is an attribute of the a elements, not the li, so you need to target those. Get a reference to all such elements, and then get their href in the loop:
${the a-s}=    Get WebElements    xpath=//li[@class='my-listitem']/a    # by targeting the correct element, the list is a reference to all such "a" elements
${all href}=    Create List
FOR    ${el}    IN    @{the a-s}    # loop over each of them
    ${value}=    Get Element Attribute    ${el}    href    # get the individual href
    Append To List    ${all href}    ${value}    # and store it in a result list
END
Log To Console    ${all href}
Here is a possible solution (not tested):
@{my_list}=    Get WebElements    xpath=//li[@class='my-listitem']
FOR    ${element}    IN    @{my_list}
    ${attr}=    Get Element Attribute    ${element}    href
    Log    ${attr}    html=True
END
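For comparison, the same idea in the Selenium Python API used elsewhere on this page would look roughly like the sketch below, assuming a driver already on the page and the my-listitem class from the question:

# target the <a> inside each li.my-listitem and collect the href values
anchors = driver.find_elements_by_xpath("//li[@class='my-listitem']/a")
hrefs = [a.get_attribute('href') for a in anchors]
print(hrefs)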

How to use Selenium Python to open a new chat in WhatsApp (I need to target the second New Chat icon)

I need to target the second New Chat icon, but the icons have the same class name.
from selenium import webdriver
driver = webdriver.Chrome('C:/Users/ka-my/AppData/Local/Programs/Python/Python37-32/chromedriver')
driver.get('https://web.whatsapp.com/')
input('Enter anything after scanning QR code')
user1 = driver.find_element_by_class_name('_3j8Pd')
user1.click()
1. I need to target the second New Chat icon.
Just like Facebook and Google, the class names are dynamically generated, so the best way around that is to look for something constant, such as the icon's title string:
new_chat = driver.find_elements_by_xpath('//div[@title="New chat"]')  # return a list
if new_chat:
    new_chat[0].click()
To get the 2nd New Chat icon, you can use this:
# get the 2nd element in the list
second_icon = driver.find_elements_by_xpath("//div[@class='_3j8Pd']")[1]
Or:
# get the 2nd element in the list
second_icon = driver.find_element_by_xpath("(//div[@class='_3j8Pd'])[2]")
In the first example, we get a list of all the matching div elements and pick the 2nd item using the [1] list index. In the second example, we use the XPath element index [2] to get the second element directly. Python list indices are 0-based and XPath element indices are 1-based, which is why we see 1 and 2 here.
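Combining the two answers, a sketch that avoids the dynamic class name and still picks the second match by list index, assuming the title attribute really is "New chat" as in the first answer:

# all elements titled "New chat"; [1] is the second one (0-based list index)
new_chat_icons = driver.find_elements_by_xpath('//div[@title="New chat"]')
if len(new_chat_icons) > 1:
    new_chat_icons[1].click()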

Python lxml XPath 1.0: unique values for an element's attribute

There is a known way to get unique values, but it doesn't work when I want to get unique attributes.
For example:
<a href = '11111'>sometext</a>
<a href = '11121'>sometext2</a>
<a href = '11111'>sometext3</a>
I want to get the unique hrefs, and I am restricted to XPath 1.0. Both
page_src.xpath('(//a[not(.=preceding::a)])')
page_src.xpath('//a/@href[not(.=preceding::a/@href)]')
return duplicates.
Is it possible to work around the absence of unique-values()?
UPD: It's not the function-like solution I wanted, but I wrote a Python function that iterates over parent elements and checks whether adding a parent tag filters the links down to the needed count.
Here is my example:
_x_item = (
    '//a[starts-with(@href, "%s")'
    ' and (not(@href="%s"))'
    ' and (not (starts-with(@href, "%s")))]'
    % (param1, param1, param2))
# remove duplicate links
neededLinks = list(map(lambda vasa: vasa.get('href'), page_src.xpath(_x_item)))
if len(neededLinks) != len(list(set(neededLinks))):
    uniqLength = len(list(set(neededLinks)))
    breakFlag = False
    for linkk in neededLinks:
        if neededLinks.count(linkk) > 1:
            # all links that share this duplicated href, and their parent elements
            dupLinks = page_src.xpath('//a[@href="%s"]' % (linkk))
            dupLinkParents = list(map(lambda vasa: vasa.getparent(), dupLinks))
            for dupParent in dupLinkParents:
                # prefix the expression with the parent tag and check whether
                # that filters the links down to the unique count
                tempLinks = page_src.xpath(_x_item.replace('//', '//%s/' % (dupParent.tag)))
                tempLinks = list(map(lambda vasa: vasa.get('href'), tempLinks))
                if len(tempLinks) == uniqLength:
                    breakFlag = True
                    _x_item = _x_item.replace('//', '//%s/' % (dupParent.tag))
                    break
            if breakFlag:
                break
This WILL work if the duplicate links have different parents but the same @href value.
As a result, I will add a parent-tag prefix, like //div/my_prev_x_item.
Plus, using Python, I can refine the result to //div[@key1="val1" and @key2="val2"]/my_prev_x_item by iterating over dupParent.items(). But this only works if the items are not located in the same parent element.
In the end I need only an XPath expression, so I can't just use list(set(myItems)).
I want an easier solution (like unique-values()), if it exists, and my workaround does not work if the links share the same parent.
You can extract all the hrefs and then find the unique ones:
all_hrefs = page_src.xpath('//a/@href')
unique_hrefs = list(set(all_hrefs))
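Note that set() does not preserve document order; if that matters, an order-preserving variant of the same idea (on Python 3.7+, where dicts keep insertion order) is:

all_hrefs = page_src.xpath('//a/@href')
# dict.fromkeys drops duplicates while keeping first-seen order
unique_hrefs = list(dict.fromkeys(all_hrefs))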

How to create a list of all visible elements in a class python

I am using Python 3.x with Selenium WebDriver. I am writing a for loop that goes through all the elements of a class on the page, bounded by the number of elements in that class, and then prints the iteration count. The problem is that it picks up both visible and hidden elements. How do I get only the visible elements on the page?
To get all elements from the class I am using
showMore = driver.find_elements_by_class_name('getPhotos')
You can take the list of all elements (visible and invisible) and filter it down to only those that are visible. There are several ways to do this... here is one.
showMore = driver.find_elements_by_class_name('getPhotos')
onlyVisible = list(filter(lambda x: x.is_displayed(), showMore))  # keep only elements that are currently displayed
A better way to cater to your requirement is to build the list using WebDriverWait with the expected_conditions visibility_of_all_elements_located, as follows:
showMore = WebDriverWait(driver, 20).until(expected_conditions.visibility_of_all_elements_located((By.CLASS_NAME, "getPhotos")))
Note: visibility_of_all_elements_located is an expectation for checking that all elements are present in the HTML DOM of the page and are visible. Visibility means the elements are not only displayed but also have a height and width greater than 0.
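The snippet above assumes the usual support imports; a minimal self-contained version looks like this:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

# wait up to 20 seconds until all elements with the class are present and visible
showMore = WebDriverWait(driver, 20).until(
    expected_conditions.visibility_of_all_elements_located((By.CLASS_NAME, "getPhotos"))
)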

Using composed xpath to locate an element and click on it

I am trying to retrieve a list of elements using XPath, and from this list I want to retrieve a child element based on class name and click it.
var rowList = XPATH1 + className;
var titleList = className + innerHTMLofChildElement;
for (var i = 0; i < titleList.length; i++) {
    if (titleList[i][0] === title) {
        browser.click(titleList[i][0]); // I don't know what to do inside the click function
    }
}
I had an implementation similar to what you are trying to do, although mine is perhaps more complex because it uses CSS selectors rather than XPath. I'm certain this is not optimized and can most likely be improved upon.
It uses the WebdriverIO methods elementIdText() and elementIdClick() to work with the text values of the JSON web elements and then click the intended element once a match is found.
http://webdriver.io/api/protocol/elementIdText.html
http://webdriver.io/api/protocol/elementIdClick.html
Step 1 - Find all your potential elements you want to work with:
// Elements Query to return Elements matching Selector (titles) as JSON Web Elements.
// Also `browser.elements('<selector>')`
titles = browser.$$('<XPath or CSS selector>')
Step 2 - Cycle through the elements, extracting the innerHTML or text values and pushing them into a separate array:
// Create an Array of Titles
var titlesTextArray = [];
titles.forEach(function(elem) {
    // Push all found element's Text values (titles) to the titlesTextArray
    titlesTextArray.push(browser.elementIdText(elem.value.ELEMENT))
})
Step 3 - Cycle through the array of title text values to find the one you're looking for, then use the elementIdClick() function to click the matching element:
// Loop through the titlesTextArray looking for text matching the desired title.
for (var i = 0; i < titlesTextArray.length; i++) {
    if (titlesTextArray[i].value === title) {
        // Found a match - click the corresponding element
        // that was found above in titles
        browser.elementIdClick(titles[i].value.ELEMENT)
    }
}
I wrapped all of this into a function to which I passed the intended text (in your case, a particular title) I wanted to search for. Hope this helps!
I don't know Node.js, but in Java you would achieve your goal with:
titleList[i].findElement(By.className("classToFind"))
assuming titleList[i] is an element of the list you want to get child elements from.
