selenium - clicking a button - python-3.x

I am trying to pull out the names of all courses offered by Lynda.com together with the subject, so that an entry on my list looks like '2D Drawing -- Project Soane: Recover a Lost Monument with BIM with Paul F. Aubin'. So I am trying to write a script that will go to each subject on http://www.lynda.com/sitemap/categories and pull out the list of courses. I already managed to get Selenium to go from one subject to another and pull the courses. My only problem is that there is a 'See X more courses' button for revealing the rest of the courses. Sometimes you have to click it a couple of times, which is why I used a while loop. But Selenium doesn't seem to execute this click. Does anyone know why?
This is my code:
from selenium import webdriver

url = 'http://www.lynda.com/sitemap/categories'
mydriver = webdriver.Chrome()
mydriver.get(url)
course_list = []
for a in [1, 2, 3]:
    for b in range(1, 73):
        mydriver.find_element_by_xpath('//*[@id="main-content"]/div[2]/div[3]/div[%d]/ul/li[%d]/a' % (a, b)).click()
        # click the button 'See more results' as long as it's available
        while True:
            try:
                mydriver.find_element_by_xpath('//*[@id="main-content"]/div[1]/div[3]/button').click()
            except:
                break
        subject = mydriver.find_element_by_tag_name('h1')   # pull out the subject
        courses = mydriver.find_elements_by_tag_name('h3')  # pull out the courses
        for course in courses:
            course_list.append(str(subject.text) + " -- " + str(course.text))
        # go back to the initial site
        mydriver.get(url)

Scroll to element before clicking:
see_more_results = browser.find_element_by_css_selector('button[class*=see-more-results]')
browser.execute_script('return arguments[0].scrollIntoView()', see_more_results)
see_more_results.click()
One way to repeat these actions could be:
def get_number_of_courses():
    return len(browser.find_elements_by_css_selector('.course-list > li'))

number_of_courses = get_number_of_courses()
while True:
    try:
        button = browser.find_element_by_css_selector(CSS_SELECTOR)
        browser.execute_script('return arguments[0].scrollIntoView()', button)
        button.click()
        while True:
            new_number_of_courses = get_number_of_courses()
            if new_number_of_courses > number_of_courses:
                number_of_courses = new_number_of_courses
                break
    except:
        break
Caveat: it's always better to use the built-in explicit waits than while True:
http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp#explicit-waits
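For illustration, here is a minimal sketch of that explicit-wait approach, assuming the same 'see-more-results' button selector used above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the button to become clickable, then click it.
# WebDriverWait raises TimeoutException if the condition is never met.
button = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, 'button[class*=see-more-results]'))
)
button.click()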

The problem is that you're calling a method that finds an element by class name, but you're passing an XPath. If you're sure this is the correct XPath, you'll simply need to change the method to find_element_by_xpath.
A recommendation, if you allow: try to stay away from these long XPaths and work through some tutorials on how to write efficient XPath expressions.
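As a hypothetical illustration (reusing the selectors already seen on this page), compare a positional path with an attribute-based one:
# Brittle: breaks as soon as any ancestor div moves or a sibling is added.
mydriver.find_element_by_xpath('//*[@id="main-content"]/div[1]/div[3]/button')

# More robust: anchors on a stable attribute of the target element itself.
mydriver.find_element_by_xpath('//button[contains(@class, "see-more-results")]')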

Related

Not able to click 'Add Friend' button on every iteration using pytest_bdd

I'm trying to build a test automation framework based on pytest_bdd. I'm able to perform many actions on the social networking site I'm automating.
My use case is: after logging in, I need to search for a user and click the 'Add Friend' button. If I manually give a specific XPath, I'm able to do that. But when I search for multiple users having the same name, say 'Nitin kumar', there are two users with that name. I want to add both of them as friends, but I'm not able to click on them.
My steps are the following:
Logging in
Search for a user (Nitin kumar)
Click on 'Add Friend' button - This is where I'm having problem
Necessary snippet from test_nitsanon.py
@when('the user clicks on add as friend besides a user')
def sendFrndReq(browser):
    res, xp = clickBtn_multiple(browser, sendReq_addFrndBtn)
    print('\n\n\n Sent Friend Request to all users with mentioned name >>> DEBUG\n', res, xp)
Snippet from functions.py
# Clicks a button on every iteration
def clickBtn_multiple(browser, xpath):
    wait(3)
    res = []
    xp = []
    s = browser.find_elements(By.XPATH, xpath)
    for i in s:
        res.append(i.text)
        if xpath.is_displayed():
            a = clickBy_Xpath(browser, xpath)
            xp.append(a)
        else:
            continue
    return res, xp
Snippet from xpaths.py
# Send friend request
sendReq_addFrndBtn = "//div[text()='" + sendReq_name + "']/ancestor::div/div/div/div/div/div[2]/child::div[2]/div/form/div/button"
Snippet from variables.py
nitsanonURL = "http://nitsanon.epizy.com/"
userValue = 'nitin'
passValue = 'kumar'
sendReq_name = "Nitin kumar"
What is wrong with this code? I guess there is something wrong with the function I've made! The return statement is returning an empty list. Check the output image here.
I've improved my code in the functions.py file, but now I'm getting a StaleElementReferenceException when exercising different functionality.
Following is the updated code:
def clickBtn_multiple(browser, xpath):
    wait(3)
    res = []
    xp = []
    s = browser.find_elements(By.XPATH, xpath)
    print(s)
    for i in s:
        if i.is_displayed():  # Changed xpath.is_displayed() to i.is_displayed()
            res.append(i.text)
            a = clickBy_Xpath(browser, xpath)
            xp.append(a)
    return res, xp
I'm now trying to implement the 'Unfriend' functionality: navigate to the FRIENDS tab and unfriend users with a specific name.
Code snippet for test_nitsanon.py
@when('the user clicks on unfriend besides a user')
def sendFrndReq(browser):
    res, xp = clickBtn_multiple(browser, unfrnd_Btn)
    print('\n\n\n Unfriended all users with mentioned name >>> DEBUG\n', res, xp)
Code snippet for xpaths.py
navOption_friends = "//a[contains(text(),'Friends')]"
unfrnd_Btn = "//div[text()='" + sendReq_name + "']/ancestor::div/div[2]/child::div[1]/a"
The output I'm getting along with the console exception is here.
This is happening because after clicking on the 'UnFriend' button one time, the page refreshes and navigates to a different page.
My idea is to keep a counter (say, starting at 1) from the first iteration and carry its value over to the next one: after each click we navigate back to the FRIENDS tab and re-enter this function for the remaining iterations. But I'm not able to implement this.
I need answers; I'm not able to sleep. I'm very curious to build a test automation framework without using any OOP concepts.
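As a hedged sketch of that idea (untested against the site, reusing the wait helper, By import, and navOption_friends locator from the snippets above, and assuming each successful click removes that button from the list): re-find the buttons after every click instead of iterating over a collection captured before the page changed.
def unfriend_all(browser, xpath, max_clicks=50):
    clicks = 0
    while clicks < max_clicks:
        wait(3)
        buttons = browser.find_elements(By.XPATH, xpath)  # fresh lookup each pass
        if not buttons:
            break  # nothing left to unfriend
        buttons[0].click()  # the page refreshes/navigates after this click
        clicks += 1
        # go back to the FRIENDS tab before querying again
        browser.find_element(By.XPATH, navOption_friends).click()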

Can't find element Selenium

I am trying to fill in the forms on the Gmail account creation page, but for some reason my driver can't find the element. I've tried every locator option (XPath, ID, name, class name), but nothing works.
chrome_driver = './chromedriver'
driver = webdriver.Chrome(chrome_options=chrome_options, executable_path=chrome_driver)
driver.get('https://www.google.com/intl/nl/gmail/about/#')
try:
    print('locating create account button')
    create_account_button = driver.find_element_by_class_name('h-c-button')
except:
    print("error, couldn't find create account button")
try:
    create_account_button.click()
    print('navigating to creation page')
except:
    print('error navigating to creation page')
time.sleep(15)
first_name_form = driver.find_element_by_class_name('whsOnd zHQkBf')
(the sleep is just temporary, to make sure it loads completely, I know it's not efficient)
here's the link to the Gmail page:
https://accounts.google.com/signup/v2/webcreateaccount?service=mail&continue=https%3A%2F%2Fmail.google.com%2Fmail%2F%3Fpc%3Dtopnav-about-n-en&flowName=GlifWebSignIn&flowEntry=SignUp
This is the error I'm getting:
Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css
selector","selector":".whsOnd zHQkBf"}
(Session info: chrome=81.0.4044.129)
Thank you for your help
I found your error and I have the solution for you. Let me first describe the problem. When you click 'Create New Account', a new window opens, but your bot still thinks you are located in the first window (the one where you clicked the button to create the account). So the bot looks there for a First Name input, and that's why it fails. The solution is to switch to the window you actually want to work in; the way to do it is shown inside the code block below.
CODE
from selenium import webdriver
import time

path = '/home/avionerman/Documents/stack'
driver = webdriver.Firefox(path)
driver.get('https://www.google.com/intl/nl/gmail/about/#')

try:
    print('locating create account button')
    create_account_button = driver.find_element_by_class_name('h-c-button')
except:
    print("error, couldn't find create account button")

try:
    create_account_button.click()
    print('navigating to creation page')
except:
    print('error navigating to creation page')

time.sleep(15)

# Keep all the windows in the array named handles
handles = driver.window_handles
# Keep the size of the array in order to know how many windows are open
size = len(handles)
# Switch to the second opened window (id: 1)
driver.switch_to.window(handles[1])
# Print the title of the current page in order to validate it's the proper one
print(driver.title)

time.sleep(10)
first_name_input = driver.find_element_by_id('firstName')
first_name_input.click()
first_name_input.send_keys("WhateverYouWant")
last_name_input = driver.find_element_by_id('lastName')
last_name_input.click()
last_name_input.send_keys("WhateverYouWant2")
username_input = driver.find_element_by_id('username')
username_input.click()
username_input.send_keys('somethingAsAUsername')
pswd_input = driver.find_element_by_name('Passwd')
pswd_input.click()
pswd_input.send_keys('whateveryouwant')
pswd_conf_input = driver.find_element_by_name('ConfirmPasswd')
pswd_conf_input.click()
pswd_conf_input.send_keys('whateveryouwant')
time.sleep(20)
The commented lines in the middle of the script (collecting the window handles and switching to the second window) are the part that fixes your problem.
I also inserted all the remaining code for you (first name, last name, etc.). You only have to locate the creation button (the last one).
Note: try to use IDs in such cases and not class names (when the IDs are clear and unique), as I already did for you.
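One more side note on the original locator: find_element_by_class_name can't take a compound value. Selenium translates 'whsOnd zHQkBf' into the CSS selector '.whsOnd zHQkBf' (visible in the error message above), which looks for a zHQkBf element inside a whsOnd element and matches nothing. If you did need both classes, a sketch of the fix would be:
# Match an element carrying BOTH classes: chain them with no space.
first_name_form = driver.find_element_by_css_selector('.whsOnd.zHQkBf')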

Selecting values from drop down of website through selenium Python

I am trying web scraping with Python Selenium and I am getting the following error:
Message: element not interactable (Session info: chrome=78.0.3904.108)
I am trying to access the option element by id, value, or text, and it gives me this error. I am using Python 3. Can someone explain where I am going wrong? I selected the select tag using XPath and also tried css_selector; the select tag itself is found and I can print it. Here is my code for a better understanding:
Code 1:
path = r'D:\chromedriver_win32\chromedriver.exe'
browser = webdriver.Chrome(executable_path=path)
website = browser.get("https://publicplansdata.org/resources/download-avs-cafrs/")
el = browser.find_element_by_xpath('//*[@id="ppd-download-state"]/select')
for option in el.find_elements_by_tag_name('option'):
    if option.text != None:
        option.click()
        break
Code 2:
select_element = Select(browser.find_element_by_xpath('//*[@id="ppd-download-state"]/select'))
# this will print out the strings available for selection on select_element
print([o.get_attribute('value') for o in select_element.options])
select_element.select_by_value('AK')
Both snippets give the same error. How can I select values from the drop-down on this website?
This is the same situation as the question Python selenium select element from drop down. Element Not Visible Exception, but the error is different. I have also tried the methods in its comments.
State, Plan, and Year:
browser.find_element_by_xpath("//span[text()='State']").click()
browser.find_element_by_xpath("//a[text()='West Virginia']").click()
time.sleep(2) # wait for Plan list to be populated
browser.find_element_by_xpath("//span[text()='Plan']").click()
browser.find_element_by_xpath("//a[text()='West Virginia PERS']").click()
time.sleep(2) # wait for Year list to be populated
browser.find_element_by_xpath("//span[text()='Year']").click()
browser.find_element_by_xpath("//a[text()='2007']").click()
Don't forget to import time
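If you'd rather not rely on fixed sleeps, here is a hedged variant using explicit waits (same text-based locators as above, untested against the live page):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(browser, 10)
browser.find_element_by_xpath("//span[text()='State']").click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='West Virginia']"))).click()
# Wait until the dependent Plan list is populated instead of sleeping.
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Plan']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='West Virginia PERS']"))).click()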

python3 More button clickable in the 1st page but NOT clickable in the 2nd page

This is an extended question on how to click a 'More' button on a webpage.
Below is my previous question, which one person kindly answered:
Since I'm not that familiar with the find_element_by_class_name function, I just added that person's revised code to my existing code, so my revised code may not be efficient (my apologies).
Python click 'More' button is not working
The situation: there are two types of 'More' buttons. The first is in the property description part and the second is in the text reviews part. If you click just one 'More' button in any of the reviews, all reviews expand so you can see their full text.
The issue I run into is that I can click the 'More' button for the reviews on the 1st page, but it is not clickable for the reviews on the 2nd page.
Below is the error message I get; my code keeps running anyway (it doesn't stop when it hits the error).
Message:
no such element: Unable to locate element: {"method":"tag name","selector":"span"}
Based on my understanding, there is an 'entry' class and a corresponding span for every review, so I don't understand why it says Python can't find it.
from selenium import webdriver
from selenium.webdriver import ActionChains
from bs4 import BeautifulSoup
import time  # needed for the time.sleep calls below

review_list = []
review_appended_list = []
review_list_v2 = []
review_appended_list_v2 = []
listed_reviews = []
listed_reviews_v2 = []
listed_reviews_total = []
listed_reviews_total_v2 = []
final_list = []

# Incognito mode
option = webdriver.ChromeOptions()
option.add_argument("--incognito")

# Open Chrome
driver = webdriver.Chrome(executable_path="C:/Users/chromedriver.exe", options=option)

# urls I want to visit (I'm going to loop over multiple listings but for simplicity, I just added one listing url).
lists = ['https://www.tripadvisor.com/VacationRentalReview-g30196-d6386734-Hot_51st_St_Walk_to_Mueller_2BDR_Modern_sleeps_7-Austin_Texas.html']

for k in lists:
    driver.get(k)
    time.sleep(3)

    # click 'More' in the description part.
    link = driver.find_element_by_link_text('More')
    try:
        ActionChains(driver).move_to_element(link)
        time.sleep(1)  # time to move to link
        link.click()
        time.sleep(1)  # time to update HTML
    except Exception as ex:
        print(ex)

    time.sleep(3)

    # first "More" shows text in all reviews - there is no need to search other "More"
    try:
        first_entry = driver.find_element_by_class_name('entry')
        more = first_entry.find_element_by_tag_name('span')
    except Exception as ex:
        print(ex)

    try:
        ActionChains(driver).move_to_element(more)
        time.sleep(1)  # time to move to link
        more.click()
        time.sleep(1)  # time to update HTML
    except Exception as ex:
        print(ex)

    # begin parsing html and scraping data.
    html = driver.page_source
    soup = BeautifulSoup(html, "html.parser")
    listing = soup.find_all("div", class_="review-container")

    all_reviews = driver.find_elements_by_class_name('wrap')
    for review in all_reviews:
        all_entries = review.find_elements_by_class_name('partial_entry')
        if all_entries:
            review_list = [all_entries[0].text]
            review_appended_list.extend([review_list])

    for i in range(len(listing)):
        review_id = listing[i]["data-reviewid"]
        listing_v1 = soup.find_all("div", class_="rating reviewItemInline")
        rating = listing_v1[i].span["class"][1]
        review_date = listing_v1[i].find("span", class_="ratingDate relativeDate")
        review_date_detail = review_date["title"]
        listed_reviews = [review_id, review_date_detail, rating[7:8]]
        listed_reviews.extend([k])
        listed_reviews_total.append(listed_reviews)

    for a, b in zip(listed_reviews_total, review_appended_list):
        final_list.append(a + b)

    # loop over the 2nd and later pages of reviews for the same listing.
    for j in range(5, 20, 5):
        url_1 = '-'.join(k.split('-', 3)[:3])
        url_2 = '-'.join(k.split('-', 3)[3:4])
        middle = "-or%d-" % j
        final_k = url_1 + middle + url_2
        driver.get(final_k)
        time.sleep(3)

        link = driver.find_element_by_link_text('More')
        try:
            ActionChains(driver).move_to_element(link)
            time.sleep(1)  # time to move to link
            link.click()
            time.sleep(1)  # time to update HTML
        except Exception as ex:
            print(ex)

        # first "More" shows text in all reviews - there is no need to search other "More"
        try:
            first_entry = driver.find_element_by_class_name('entry')
            more = first_entry.find_element_by_tag_name('span')
        except Exception as ex:
            print(ex)

        try:
            ActionChains(driver).move_to_element(more)
            time.sleep(2)  # time to move to link
            more.click()
            time.sleep(2)  # time to update HTML
        except Exception as ex:
            print(ex)

        html = driver.page_source
        soup = BeautifulSoup(html, "html.parser")
        listing = soup.find_all("div", class_="review-container")

        all_reviews = driver.find_elements_by_class_name('wrap')
        for review in all_reviews:
            all_entries = review.find_elements_by_class_name('partial_entry')
            if all_entries:
                review_list_v2 = [all_entries[0].text]
                review_appended_list_v2.extend([review_list_v2])

        for i in range(len(listing)):
            review_id = listing[i]["data-reviewid"]
            listing_v1 = soup.find_all("div", class_="rating reviewItemInline")
            rating = listing_v1[i].span["class"][1]
            review_date = listing_v1[i].find("span", class_="ratingDate relativeDate")
            review_date_detail = review_date["title"]
            listed_reviews_v2 = [review_id, review_date_detail, rating[7:8]]
            listed_reviews_v2.extend([k])
            listed_reviews_total_v2.append(listed_reviews_v2)

        for a, b in zip(listed_reviews_total_v2, review_appended_list_v2):
            final_list.append(a + b)

        print(final_list)

        if len(listing) != 5:
            break
How can I make the 'More' button clickable on the 2nd and subsequent pages, so that I can scrape the full text reviews?
Edited Below:
The error messages I get are these two lines.
Message: no such element: Unable to locate element: {"method":"tag name","selector":"span"}
Message: stale element reference: element is not attached to the page document
I guess my whole script keeps running because I used try/except? Usually, when Python runs into an error, it stops.
Try clicking the element through JavaScript instead; a JS click is dispatched straight to the element, so it isn't subject to the visibility and scrolling checks a native Selenium click performs:
driver.execute_script("""
arguments[0].click()
""", link)

Unable to figure out where to add wait statement in python selenium

I am searching for the elements in my list (one by one) by typing them into the search bar of a website, then collecting the Apple product names that appear in the search results and printing them. However, I am getting the following exception:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
I know it's because the elements change very quickly, so I need to add a wait, like
wait(driver, 10).until(EC.visibility_of_element_located((By.ID, "submitbutton")))
or an explicit wait.
Q1. But I don't understand where I should add it. Here is my code, please help!
Q2. I want to go to all the next pages using the following, but that's not working:
driver.find_element_by_xpath('//div[@class="no-hover"]/a').click()
Earlier the exception was raised on the submit button; now it's raised at the if statement.
That's not what an implicit wait is for. Since the page changes regularly, you can't be sure the object currently held in the variable is still valid.
My suggestion is to run the above code in a loop using try/except, something like the following:
for element in mylist:
    while True:
        try:
            do_something_useful_while_the_page_can_change(element)
        except StaleElementReferenceException:
            # the element went stale mid-operation: retry
            continue
        else:
            # success: go to the next element
            break
Where:
def do_something_useful_while_the_page_can_change(element):
    searchElement = driver.find_element_by_id("searchbar")
    searchElement.send_keys(element)
    driver.find_element_by_id("searchbutton").click()
    items_count = 0
    items = driver.find_elements_by_class_name('searchresult')
    for i, item in enumerate(items):
        if 'apple' in item.text:
            print(item.text)  # was print('item.text'), which printed the literal string
    items_count += len(items)
I think what you had was doing too much and can be simplified. You basically need to loop through the list of search terms, mylist. Inside that loop you send the search term to the search box and click search. Still inside that loop, you grab all the result elements on the page that have class='search-result-product-url' and whose text contains 'apple'. The XPath locator I provided does both, so everything in the returned collection is something you want to print; print each, end the loop, and move on to the next search term.
for element in mylist:
    driver.find_element_by_id("search-input").send_keys(element)
    driver.find_element_by_id("button-search").click()
    # may need a wait here?
    for item in driver.find_elements_by_xpath("//a[@class='search-result-product-url'][contains(., 'apple')]"):
        print(item.text)
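For the "may need a wait here?" comment, a hedged option (assuming the same locator, untested against the actual site) is to block until at least one result is present before reading them:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for at least one matching result to appear.
results_locator = (By.XPATH, "//a[@class='search-result-product-url'][contains(., 'apple')]")
WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located(results_locator))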
