Python Selenium DOM Click event does not work as intended - python-3.x

I am trying to click the pagination links (the next button). However, the click goes to the site homepage. I am targeting the element by class. What could be wrong?
driver.get('https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&current_port_in|begins|FUJAIRAH%20ANCH|current_port_in=20585')

# Wait 30 seconds for page to load
timeout = 30
try:
    WebDriverWait(driver, timeout).until(EC.presence_of_element_located((By.CLASS_NAME, 'MuiButtonBase-root-60')))
    element = driver.find_element_by_class_name('MuiButtonBase-root-60')
    driver.execute_script("arguments[0].click();", element)
except TimeoutException:
    print("Timed out waiting for page to load")
    driver.quit()

Use the following code:

driver.get('https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&current_port_in|begins|FUJAIRAH%20ANCH|current_port_in=20585')

# Wait 30 seconds for page to load
timeout = 30
try:
    element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//p[text()='Page']//..//following-sibling::button")))
    driver.execute_script("arguments[0].click();", element)
except TimeoutException:
    print("Timed out waiting for page to load")
    driver.quit()

There are 33 elements with this class; find_element_by_class_name returns the first one, which is located in the header. You can use the footer as a starting point to narrow down the options and select the second button by index (there is no difference between the next and previous buttons when both of them are available):
element = WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.r-mtGridFooter-302 button:nth-of-type(2)')))
element.click()

Related

Stale Element Reference Exception occurred even though I explicitly ignored the exception using ignored_exceptions

Why is the StaleElementReferenceException still raised even though I ignored the exception? The code will eventually run successfully after a few tries, but how can I make it wait until the element is clickable, to avoid any exception?
import sys
from selenium.common.exceptions import StaleElementReferenceException

while True:
    try:
        WebDriverWait(driver, 10, ignored_exceptions=[StaleElementReferenceException]).until(ec.element_to_be_clickable((By.XPATH, "//button[contains(text(), 'Performance Summary')]"))).click()
        break
    except Exception as e:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        print(e, exc_type, exc_tb.tb_lineno)
        print('Retrying...')
From the source code of WebDriverWait:
class WebDriverWait:
    def __init__(
        self,
        driver,
        timeout: float,
        poll_frequency: float = POLL_FREQUENCY,
        ignored_exceptions: typing.Optional[WaitExcTypes] = None,
    ):
        """Constructor, takes a WebDriver instance and timeout in seconds.

        :Args:
         - driver - Instance of WebDriver (Ie, Firefox, Chrome or Remote)
         - timeout - Number of seconds before timing out
         - poll_frequency - sleep interval between calls
           By default, it is 0.5 second.
         - ignored_exceptions - iterable structure of exception classes ignored during calls.
           By default, it contains NoSuchElementException only.

        Example::

            from selenium.webdriver.support.wait import WebDriverWait

            element = WebDriverWait(driver, 10).until(lambda x: x.find_element(By.ID, "someId"))

            is_disappeared = WebDriverWait(driver, 30, 1, (ElementNotVisibleException)).\
                until_not(lambda x: x.find_element(By.ID, "someId").is_displayed())
        """
        self._driver = driver
        self._timeout = float(timeout)
        self._poll = poll_frequency
        # avoid the divide by zero
        if self._poll == 0:
            self._poll = POLL_FREQUENCY
        exceptions = list(IGNORED_EXCEPTIONS)
        if ignored_exceptions:
            try:
                exceptions.extend(iter(ignored_exceptions))
            except TypeError:  # ignored_exceptions is not iterable
                exceptions.append(ignored_exceptions)
        self._ignored_exceptions = tuple(exceptions)
It is worth noticing the fallback when ignored_exceptions is not iterable: a single exception class is simply appended. So NoSuchElementException (the default) and the StaleElementReferenceException you purposely added are ignored only while the wait itself is polling. Once until() returns the element, the chained .click() runs outside the wait, so a StaleElementReferenceException raised at that point is no longer handled.
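The merging logic in that constructor can be checked without a browser. The sketch below is a hypothetical stand-in: IGNORED_EXCEPTIONS is replaced by a dummy tuple (the real one, in selenium.webdriver.support.wait, contains NoSuchElementException), and only the extend/append fallback is copied from the source above.

```python
# Stand-in for selenium's module-level default tuple.
IGNORED_EXCEPTIONS = (ValueError,)

def build_ignored(ignored_exceptions=None):
    # Mirrors the constructor body shown above.
    exceptions = list(IGNORED_EXCEPTIONS)
    if ignored_exceptions:
        try:
            exceptions.extend(iter(ignored_exceptions))
        except TypeError:  # a single exception class, not an iterable
            exceptions.append(ignored_exceptions)
    return tuple(exceptions)

# An iterable of classes is merged with the default...
print(build_ignored([KeyError, IndexError]))  # (ValueError, KeyError, IndexError)
# ...and a bare class is appended as-is, thanks to the TypeError fallback.
print(build_ignored(KeyError))                # (ValueError, KeyError)
```

So passing either a list or a single exception class to ignored_exceptions works; the default is always kept.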
The problem is that once you get a StaleElementReferenceException, you can't just wait it out. Here's how stale elements work.
element = driver.find_element(By.ID, "someId")
driver.refresh() # or anything that changes the portion of the page that 'element' is on
element.click() # throws StaleElementReferenceException
At this point a StaleElementReferenceException is thrown because you had a reference to the element and then lost it (the element reference pointer points to nothing). No amount of waiting is going to restore that reference.
The way to "fix" this is to grab the reference again after the page has changed:
element = driver.find_element(By.ID, "someId")
driver.refresh()
element = driver.find_element(By.ID, "someId") # refetch the reference after the page refreshes
element.click()
Now the .click() will work without error.
Most people that run into this issue are looping through a collection of elements and in the middle of the loop they click a link that navigates to a new page or something else that reloads or changes the page. They later return to the original page and click on the next link but they get a StaleElementReferenceException.
elements = driver.find_elements(locator)
for element in elements:
    element.click()  # navigates to a new page
    # do other stuff and return to the first page
The first loop iteration works fine, but in the second the element reference is dead because of the page change. You can rewrite the loop so that the elements collection is refetched at the start of each iteration:

for i in range(len(driver.find_elements(locator))):
    element = driver.find_elements(locator)[i]  # refetch the collection each iteration
    element.click()  # navigates to a new page
    # do other stuff and return to the first page
Now the loop will work. This is just an example but hopefully it will point you in the right direction to fix your code.
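The difference between iterating a cached collection and refetching each iteration can be demonstrated without a browser. The classes below are hypothetical stand-ins, not Selenium APIs: FakeDriver bumps a generation counter on every "navigation", so any element fetched before a click goes stale, mimicking Selenium's behavior.

```python
class StaleElementReferenceException(Exception):
    """Stand-in for selenium.common.exceptions.StaleElementReferenceException."""

class FakeElement:
    def __init__(self, driver, generation):
        self._driver = driver
        self._generation = generation

    def click(self):
        # Clicking "navigates": every element fetched before this point goes stale.
        if self._generation != self._driver.generation:
            raise StaleElementReferenceException("element is not attached to the page document")
        self._driver.generation += 1

class FakeDriver:
    def __init__(self, count):
        self.count = count
        self.generation = 0  # bumped on every "navigation"

    def find_elements(self):
        return [FakeElement(self, self.generation) for _ in range(self.count)]

# Iterating a cached collection fails on the second click, as described above.
driver = FakeDriver(3)
try:
    for element in driver.find_elements():
        element.click()
except StaleElementReferenceException as e:
    print("cached list:", e)

# Refetching the collection on every iteration succeeds.
driver = FakeDriver(3)
for i in range(len(driver.find_elements())):
    driver.find_elements()[i].click()
print("refetch-by-index: clicked", driver.generation, "elements")
```

In real Selenium code the refetch is driver.find_elements(locator); the pattern is the same.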

Why is selenium not applying "click" to a popup?

I know the XPath of this popup tabs element is correct, however when I do filters_language_dropdown.click() and then .send_keys(Keys.ENTER), it doesn't do anything.
However, the same popup (press the 'Filters' button on this page to view it) works with the XPath of the initial button-press element instead (see code + images below), i.e. with filters_button.send_keys(...). What's going on?
# Initialize the browser and navigate to the page
browser = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
browser.get("https://www.facebook.com/ads/library/?active_status=all&ad_type=all&country=ALL&q=%22%20%22&sort_data[direction]=desc&sort_data[mode]=relevancy_monthly_grouped&search_type=keyword_exact_phrase&media_type=all")

# (In working order): Look for keyword, make it clickable, clear existing data in box, enter new info, keep page open for 10 seconds
search_box = WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//input[@placeholder='Search by keyword or advertiser']")))
search_box.click()
search_box.clear()
search_box.send_keys('" "' + Keys.ENTER)
time.sleep(3)

# Activating the filters (English, active ads, date from (last 2 days) to today)
filters_button = WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//body[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[5]/div[2]/div[1]/div[1]/div[3]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]")))
filters_button.click()
filters_button.send_keys(Keys.ENTER)
time.sleep(3)

filters_language_dropdown = WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[@id='js_knd']//div[@class='x6s0dn4 x78zum5 x13fuv20 xu3j5b3 x1q0q8m5 x26u7qi x178xt8z xm81vs4 xso031l xy80clv xb9moi8 xfth1om x21b0me xmls85d xhk9q7s x1otrzb0 x1i1ezom x1o6z2jb x1gzqxud x108nfp6 xm7lytj x1ykpatu xlu9dua x1w2lkzu']")))
filters_language_dropdown.click()
filters_language_dropdown.send_keys(Keys.ENTER)
time.sleep(3)
Use the following XPaths to click on the filter, then click on 'All languages', then click on 'English'. If you want a different language, say 'French', pass that text instead of 'English'.
Code:
browser.get("https://www.facebook.com/ads/library/?active_status=all&ad_type=all&country=ALL&q=%22%20%22&sort_data[direction]=desc&sort_data[mode]=relevancy_monthly_grouped&search_type=keyword_exact_phrase&media_type=all")
search = WebDriverWait(browser, 10).until(EC.visibility_of_element_located((By.XPATH, "//input[@placeholder='Search by keyword or advertiser']")))
search.click()
search.clear()
search.send_keys('" "' + Keys.ENTER)
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Filters']"))).click()
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='All languages']"))).click()
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[text()='English']"))).click()
time.sleep(10) # to check the operation
Browser snapshot:

Refresh a page if an element is not visible and click, else click python selenium

I'm dealing with a very unresponsive scroll bar in a window on a Tableau dashboard. I cannot scroll to find the element using any code (Python or JavaScript execution), so I need a condition inside my for loop that refreshes the page and clicks a location (the scroll bar) if the element isn't present, then checks again for the element's presence, but I can't seem to get it.
for key, value in series_dict.items():
    xpath = driver.find_element_by_xpath('//*[@id="' + value + '"]')  # how do I make this a boolean?
    while True:
        if xpath == False:
            driver.refresh()
            # click the scroll bar here, then break and start the whole for loop over
        else:
            try:
                series_click = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="' + value + '"]')))
                series_click.click()
                time.sleep(5)
                download_click = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="download"]/span[3]')))
                download_click.click()
                time.sleep(1)
            except TimeoutError as e:
                print('Error: {}'.format(str(e)))
                continue
            break
I've never nested so many conditionals before and I think I've gotten my continues, breaks, and order mixed up. I also don't know how to make my xpath object a boolean so that I can test it.
Any help with this would be appreciated.
Thank you!
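One way to get the boolean the question asks for: find_element raises NoSuchElementException when nothing matches (it never returns False), while find_elements returns a plain list, and an empty list is falsy. The sketch below demonstrates this browserlessly with a hypothetical stub driver; in real Selenium code the check would be something like `if not driver.find_elements_by_xpath(xpath): driver.refresh()`.

```python
class NoSuchElementException(Exception):
    """Stand-in for selenium.common.exceptions.NoSuchElementException."""

class StubDriver:
    """Hypothetical driver exposing the two lookup styles."""
    def __init__(self, present_ids):
        self.present_ids = set(present_ids)

    def find_element_by_xpath(self, xpath):
        # Selenium raises here -- it never returns False.
        if not any(pid in xpath for pid in self.present_ids):
            raise NoSuchElementException(xpath)
        return object()

    def find_elements_by_xpath(self, xpath):
        # Plural form returns a list; empty means "not present".
        if any(pid in xpath for pid in self.present_ids):
            return [object()]
        return []

driver = StubDriver({"series-A"})

# The truthiness of the returned list is the boolean the loop needs:
present = bool(driver.find_elements_by_xpath('//*[@id="series-A"]'))
missing = bool(driver.find_elements_by_xpath('//*[@id="series-Z"]'))
print(present, missing)  # True False
```

So the original `if xpath == False` branch can never fire: replace the find_element call with find_elements and test the list's truthiness instead.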

How do I make selenium wait for a specific tag show up

I am uploading an image, and I used time.sleep(3) to let the upload finish. But sometimes 3 seconds isn't enough. I am not sure what the code should look like to make Selenium check that a specific tag is shown, then continue to do something.
# input image path
try:
    element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "/html/body/table[2]/tbody/tr[2]/td/h5/ul/li[1]/div[2]/ul/li/form[1]/input[5]")))
    element.send_keys('C:\\1.png')
    print("Add actionImageURL 1: " + str(element.get_attribute('value')))
except Exception:
    print("actionImageURL 1 Path Failed")

# click upload button
try:
    element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "/html/body/table[2]/tbody/tr[2]/td/h5/ul/li[1]/div[2]/ul/li/form[1]/input[6]")))
    element.click()
    print("actionImageURL 1 Uploading...")
    # maybe here is where I can find the tag and do a loop wait?
    images = driver.find_elements_by_class_name('img-thumbnail ng-hide')
    previewlink = images.get_attribute('src')
    print(previewlink)
    if preview == 't.notInPlaceActionImageURL_preview':
        print('Link: ' + print(previewlink))
    time.sleep(3)
    print("actionImageURL 1 Uploaded")
except Exception:
    print("actionImageURL 1 Upload Button NOT FOUND")
I do not know how to deal with html tag like this, see before and after when image is done uploading:
Before:
After:
An implicit wait might solve my problem, but I am looking for a workaround using the tag, and how to deal with tags like those. Thanks
If you write a wait method, you can reuse it across your project:

private static WebElement waitForElement(By locator, int timeout)
{
    WebElement element = new WebDriverWait(driver, timeout).until(ExpectedConditions.presenceOfElementLocated(locator));
    return element;
}

The above is the wait method. To use it:

waitForElement(By.xpath("//button[@type='submit']"), 50);

This is just an example; you can pass your own locator here.
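Since the question's code is Python, the same idea reduces to "poll a lookup until it succeeds or the timeout expires". A stdlib-only sketch of that polling loop follows; the names are illustrative, and the real Selenium version replaces `lookup` with a find_element call and `ignored` with NoSuchElementException (which is essentially what WebDriverWait does internally).

```python
import time

def wait_for(lookup, timeout, poll=0.5, ignored=(LookupError,)):
    """Poll lookup() until it returns, or raise TimeoutError after `timeout` seconds."""
    end = time.monotonic() + timeout
    while True:
        try:
            return lookup()
        except ignored as exc:
            if time.monotonic() >= end:
                raise TimeoutError("timed out waiting for element") from exc
        time.sleep(poll)

# Usage: a lookup that fails twice, then succeeds on the third poll.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LookupError("not there yet")
    return "element"

print(wait_for(flaky_lookup, timeout=2, poll=0.01))  # element
```

In practice, preferring WebDriverWait over a hand-rolled loop is the idiomatic choice; the sketch just shows what the helper is doing for you.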

python3 More button clickable in the 1st page but NOT clickable in the 2nd page

This is the extended question on how to click 'More' button on a webpage.
Below is my previous question and one person kindly answered for it.
Since I'm not that familiar with the 'find element by class name' function, I just added that person's revised code to my existing code, so my revised code may not be efficient (my apologies).
Python click 'More' button is not working
The situation is, there are two types of 'More' button: the 1st is in the property description part and the 2nd is in the text reviews part. If you click just one 'More' button in any of the reviews, all reviews are expanded so you can see their full text.
The issue I run into is that I can click the 'More' button for the reviews on the 1st page, but not for the reviews on the 2nd page.
Below is the error message I get, but my code keeps running (it does not stop when it hits an error).
Message:
no such element: Unable to locate element: {"method":"tag name","selector":"span"}
Based on my understanding, there is an entry class and a corresponding span for every review. I don't understand why it says Python can't find it.
from selenium import webdriver
from selenium.webdriver import ActionChains
from bs4 import BeautifulSoup
import time

review_list = []
review_appended_list = []
review_list_v2 = []
review_appended_list_v2 = []
listed_reviews = []
listed_reviews_v2 = []
listed_reviews_total = []
listed_reviews_total_v2 = []
final_list = []

# Incognito Mode
option = webdriver.ChromeOptions()
option.add_argument("--incognito")

# Open Chrome
driver = webdriver.Chrome(executable_path="C:/Users/chromedriver.exe", options=option)

# url I want to visit (I'm going to loop over multiple listings but for simplicity, I just added one listing url).
lists = ['https://www.tripadvisor.com/VacationRentalReview-g30196-d6386734-Hot_51st_St_Walk_to_Mueller_2BDR_Modern_sleeps_7-Austin_Texas.html']

for k in lists:
    driver.get(k)
    time.sleep(3)

    # click 'More' on description part.
    link = driver.find_element_by_link_text('More')
    try:
        ActionChains(driver).move_to_element(link)
        time.sleep(1)  # time to move to link
        link.click()
        time.sleep(1)  # time to update HTML
    except Exception as ex:
        print(ex)

    time.sleep(3)

    # first "More" shows text in all reviews - there is no need to search other "More"
    try:
        first_entry = driver.find_element_by_class_name('entry')
        more = first_entry.find_element_by_tag_name('span')
        #more = first_entry.find_element_by_link_text('More')
    except Exception as ex:
        print(ex)

    try:
        ActionChains(driver).move_to_element(more)
        time.sleep(1)  # time to move to link
        more.click()
        time.sleep(1)  # time to update HTML
    except Exception as ex:
        print(ex)

    # begin parsing html and scraping data.
    html = driver.page_source
    soup = BeautifulSoup(html, "html.parser")
    listing = soup.find_all("div", class_="review-container")
    all_reviews = driver.find_elements_by_class_name('wrap')
    for review in all_reviews:
        all_entries = review.find_elements_by_class_name('partial_entry')
        if all_entries:
            review_list = [all_entries[0].text]
            review_appended_list.extend([review_list])

    for i in range(len(listing)):
        review_id = listing[i]["data-reviewid"]
        listing_v1 = soup.find_all("div", class_="rating reviewItemInline")
        rating = listing_v1[i].span["class"][1]
        review_date = listing_v1[i].find("span", class_="ratingDate relativeDate")
        review_date_detail = review_date["title"]
        listed_reviews = [review_id, review_date_detail, rating[7:8]]
        listed_reviews.extend([k])
        listed_reviews_total.append(listed_reviews)

    for a, b in zip(listed_reviews_total, review_appended_list):
        final_list.append(a + b)

    # loop over from the 2nd page of the reviews for the same listing.
    for j in range(5, 20, 5):
        url_1 = '-'.join(k.split('-', 3)[:3])
        url_2 = '-'.join(k.split('-', 3)[3:4])
        middle = "-or%d-" % j
        final_k = url_1 + middle + url_2
        driver.get(final_k)
        time.sleep(3)

        link = driver.find_element_by_link_text('More')
        try:
            ActionChains(driver).move_to_element(link)
            time.sleep(1)  # time to move to link
            link.click()
            time.sleep(1)  # time to update HTML
        except Exception as ex:
            print(ex)

        # first "More" shows text in all reviews - there is no need to search other "More"
        try:
            first_entry = driver.find_element_by_class_name('entry')
            more = first_entry.find_element_by_tag_name('span')
        except Exception as ex:
            print(ex)

        try:
            ActionChains(driver).move_to_element(more)
            time.sleep(2)  # time to move to link
            more.click()
            time.sleep(2)  # time to update HTML
        except Exception as ex:
            print(ex)

        html = driver.page_source
        soup = BeautifulSoup(html, "html.parser")
        listing = soup.find_all("div", class_="review-container")
        all_reviews = driver.find_elements_by_class_name('wrap')
        for review in all_reviews:
            all_entries = review.find_elements_by_class_name('partial_entry')
            if all_entries:
                #print('--- review ---')
                #print(all_entries[0].text)
                #print('--- end ---')
                review_list_v2 = [all_entries[0].text]
                #print(review_list)
                review_appended_list_v2.extend([review_list_v2])
                #print(review_appended_list)

        for i in range(len(listing)):
            review_id = listing[i]["data-reviewid"]
            #print review_id
            listing_v1 = soup.find_all("div", class_="rating reviewItemInline")
            rating = listing_v1[i].span["class"][1]
            review_date = listing_v1[i].find("span", class_="ratingDate relativeDate")
            review_date_detail = review_date["title"]
            listed_reviews_v2 = [review_id, review_date_detail, rating[7:8]]
            listed_reviews_v2.extend([k])
            listed_reviews_total_v2.append(listed_reviews_v2)

        for a, b in zip(listed_reviews_total_v2, review_appended_list_v2):
            final_list.append(a + b)

        print(final_list)

        if len(listing) != 5:
            break
How can I enable clicking the 'More' button on the 2nd and subsequent pages, so that I can scrape the full text of the reviews?
Edited Below:
The error messages I get are these two lines.
Message: no such element: Unable to locate element: {"method":"tag name","selector":"span"}
Message: stale element reference: element is not attached to the page document
I guess my whole code still runs because I used try/except? Usually when Python runs into an error, it stops running.
Try it like:
driver.execute_script("""
arguments[0].click()
""", link)
