I need to scrape a website, but first I need to remove the pop-up window for cookies. Following other questions, I have written the following code (with the appropriate link for url):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
PATH="C:\\Program Files (x86)\\chromedriver.exe"
driver=webdriver.Chrome(PATH)
driver.maximize_window()
url='https://dait.interno.gov.it/elezioni/open-data'
driver.get(url)
sel='#popup-buttons > button.agree-button.eu-cookie-compliance-default-button'
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR,sel))).click()
What confuses me is that it seemed to work once, and only once, even though I didn't change anything in the code. Now it doesn't do anything anymore and I get:
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point
I have obtained the CSS locator by "copying selector" when inspecting the page.
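A workaround that is often suggested for this kind of intercepted click is to wait for the button, scroll it into view, and fall back to a JavaScript click if the native click is still blocked. Below is a minimal sketch along those lines, reusing the driver and the selector from the code above; the JavaScript fallback is a common workaround, not a guaranteed fix for this particular banner:
from selenium.common.exceptions import ElementClickInterceptedException
button = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, sel))
)
# Scroll the button into view in case another element overlaps it
driver.execute_script("arguments[0].scrollIntoView(true);", button)
try:
    button.click()
except ElementClickInterceptedException:
    # If the native click is still intercepted, click via JavaScript instead
    driver.execute_script("arguments[0].click();", button)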
Related
I would like to create a code snippet for opening YouTube, accepting the cookies, finding the search bar, typing some string into it and finally clicking on the search button. It's not too hard, but something is not working. I have tried using WebDriverWait as well, but it is still not working.
If I open Google (from code, of course) and do the same procedure, everything works well. I have tried finding elements not only by XPATH but also by ID and CSS_SELECTOR.
After the send_keys() function the error message is:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
My code is:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_experimental_option('detach', True)
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options)
driver.get("https://www.youtube.com/")
driver.maximize_window()
cookie_accept = driver.find_element(By.XPATH, '//*[@id="content"]/div[2]/div[6]/div[1]/ytd-button-renderer[2]/yt-button-shape/button/yt-touch-feedback-shape/div/div[2]')
cookie_accept.click()
yt_searchbox = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="search"]')))
yt_searchbox.send_keys('Python Selenium')
I've tried without the WebDriverWait as well:
yt_search = driver.find_element(By.XPATH, '//*[@id="search"]')
yt_search.send_keys("Python Selenium")
And it is not working either. I don't know what's going on.
The search box on the YouTube homepage is a dynamic element, so to send a character sequence to it you need to induce WebDriverWait for element_to_be_clickable(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
driver.get('https://www.youtube.com/')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input#search"))).send_keys("Python Selenium")
Using XPATH:
driver.get('https://www.youtube.com/')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@id='search']"))).send_keys("Python Selenium")
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
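As a follow-up, once the text has been typed, the search can be submitted without locating the search button at all, for example by appending an ENTER keypress to the same send_keys() call. A minimal sketch, assuming the same input#search element as above:
from selenium.webdriver.common.keys import Keys
search_box = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "input#search"))
)
# Type the query and submit it with ENTER instead of clicking the search button
search_box.send_keys("Python Selenium" + Keys.RETURN)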
I am trying to download the data from https://projects.propublica.org/nonprofits for my research. When the page opens, a notification window pops up. I tried to use Python Selenium to close it. My code is as follows:
from selenium.webdriver import Chrome
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = Chrome()
driver.get('https://projects.propublica.org/nonprofits')
driver.find_element(By.XPATH, "/html/body/div[1]/div/div[2]/p[2]/a").click()
I got the error message: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div[1]/div/div[2]/p[2]/a"}
(Session info: chrome=99.0.4844.51)
I revised my code as
driver.get('https://projects.propublica.org/nonprofits')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/div/div[2]/button"))).click()
The error message is TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
Backtrace:
Ordinal0 [0x005E9943+2595139]
...
Any suggestion on how to bypass the notification window is highly appreciated. Thank you.
The element you are trying to click is inside an iframe, so you have to switch to that iframe first in order to access the element.
You also have to add waits so that the elements are loaded before you access them.
This will work better:
from selenium.webdriver import Chrome
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
driver = Chrome()
driver.get('https://projects.propublica.org/nonprofits')
wait = WebDriverWait(driver, 20)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe.syndicated-modal")))
wait.until(EC.visibility_of_element_located((By.XPATH, "//a[contains(@href,'newsletter-roadblock')]"))).click()
When you are finished working inside the iframe, you will need to switch back to the default content with:
driver.switch_to.default_content()
I am trying to automate saving a website's description and URL. I loop the program and come to the function get_info(). Basically, it needs to add the first website on the Google page I load and then scroll down, so that when it executes again it can add other websites. The problem is that the program refreshes the page every time it executes get_info() and brings you back to the top.
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time
browser=webdriver.Firefox()
def get_info():
    browser.switch_to.window(browser.window_handles[2])
    description = WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "h3"))
    ).text
    site = WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "cite"))
    )
    site.click()
    url = browser.current_url
    browser.back()
    browser.execute_script("window.scrollBy(0,400)", "")
To stop Chrome from auto-reloading, you can do this:
driver.execute_cdp_cmd('Emulation.setScriptExecutionDisabled', {'value': True})
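For context, a minimal sketch of where this call could go, assuming a Chromium-based driver (execute_cdp_cmd is a Chrome DevTools Protocol command and is not available on the Firefox driver used in the question); the search URL is only a placeholder:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.google.com/search?q=python+selenium')  # placeholder results page
# Disable further script execution so the loaded page cannot refresh itself
driver.execute_cdp_cmd('Emulation.setScriptExecutionDisabled', {'value': True})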
You can read more about this command in the Chrome DevTools Protocol documentation.
This question has already been asked before.
Here is my code:
from selenium import webdriver
from time import sleep
browser=webdriver.Chrome(r"C:\Users\Desktop\chromedriver.exe")
browser.maximize_window()
browser.get("https://www.youtube.com/watch?v=3yjO6yfHLcU&ab_channel=TRT%C4%B0zleTRT%C4%B0zleDo%C4%9Fruland%C4%B1")
browser.find_element_by_class_name("ytp-play-button ytp-button").click()
sleep(2)
I am not able to play videos on YouTube with Selenium and Python. How can I do that?
This is the Error :
NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".ytp-play-button ytp-button"}
You are missing a wait / delay there.
Immediately after browser.get("the_url") the page is not yet loaded; it takes some time, so the element you are trying to click is not there yet.
You have to add some delay.
The correct way to do this is to add an explicit wait, like this:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(browser, 20).until(
    # the play button carries both classes, so they are chained in one CSS selector
    EC.element_to_be_clickable((By.CSS_SELECTOR, ".ytp-play-button.ytp-button")))
element.click()
I am trying to access the sign in button on the url as shown in the code below. I have verified the content of the url as well as the href. They are both consistent with what appears using inspect element dev tool.
But on clicking the extracted element I get the error:
Message: element not interactable
I have no idea why this is occurring.
Kindly help me solve this issue.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
# setup the browser
browser = webdriver.Chrome('./chromedriver')
browser.get('https://libraries.usc.edu/')
browser.maximize_window()
# access the relevant a tag after inspecting it in dev tool inspect element
a_tag_elt = WebDriverWait(browser, 10).until(lambda browser:
    browser.find_element_by_css_selector('div.site-header__signin a'))
# sanity check by printing out the details
print(type(a_tag_elt))
print(a_tag_elt.get_attribute('href'), a_tag_elt.get_attribute('innerHTML'))
# produces Message: element not interactable error
a_tag_elt.click()
# quit the browser
browser.quit()
To click on the Sign In link, induce WebDriverWait() with element_to_be_clickable() and the following CSS selector:
WebDriverWait(browser,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"a.main-navigation__navbar>.main-navigation__navbar-text"))).click()
Or the following XPath:
WebDriverWait(browser,5).until(EC.element_to_be_clickable((By.XPATH,"//span[text()='Sign In']"))).click()
You need to import the libraries below:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
To click on the element with the text Sign In, you need to induce WebDriverWait for element_to_be_clickable(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.main-navigation__navbar span"))).click()
Using XPATH:
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='main-navigation__navbar ']//span"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC