Selenium - Element not found - iframe issue - python-3.x

I'm running a Selenium script to do some automated testing on ServiceNow.
I'm getting an "element not found" error when trying to populate a field on the webpage. The login page has an iframe, but I don't think the page after login has one. I also tried driver.switch_to.default_content(), but that didn't seem to help.
I know the element is there and has that ID because I've looked at the HTML. I also tried populating a couple of other fields but had the same issue.
Any suggestions? Thanks.
Originally the URL it tries to go to is https://dev85222.service-now.com/incident.do, but before that the browser goes to the login page at
https://dev85222.service-now.com/navpage.do, and after logging in you are redirected to incident.do. Once the script gets to the second URL it produces the error "Element not found".
I think it might be to do with switching iframes.
The code -
from selenium import webdriver
import time
import unittest
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.ui import Select
from datetime import date
from selenium.webdriver.common.action_chains import ActionChains

class IncidentCreate(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()

    def test_selenium(self):
        #
        identifier = "AUTOMATED TESTING"
        module = "INCIDENT"
        action = "CREATE"
        job_name = ""
        today = str(date.today())
        driver = self.driver
        base_url = "https://dev85999882.service-now.com/incident.do"
        driver.get(base_url)
        driver.implicitly_wait(5)
        driver.switch_to.frame("gsft_main")
        username = driver.find_element_by_id("user_name")
        username.send_keys("username")
        password = driver.find_element_by_id("user_password")
        password.send_keys("password")
        password.send_keys(Keys.RETURN)
        identifier_inc = ("AUTOMATED TESTING - INCIDENT - %s" % today)
        driver.switch_to.default_content()
        time.sleep(10)
        try:
            element = WebDriverWait(driver, 10).until(
                EC.presence_of_element_located((By.ID, "incident.category"))
            )
        except:
            print("Element not found")
The error - Element not found
E
======================================================================
ERROR: test_selenium (main.IncidentCreate)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:/Users/user/Documents/Automation/FFOX_CLOUD_INC_CREATEv1.py", line 66, in test_selenium
category = driver.find_element_by_id("incident.category")
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 359, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 966, in find_element
'value': value})['value']
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 320, in execute
self.error_handler.check_response(response)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="incident.category"]
----------------------------------------------------------------------
Ran 1 test in 41.024s

I switched to the default content right after pressing Enter to log in on the first page, and that resolved the issue: driver.switch_to.default_content()
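For reference, a minimal sketch of the working order of operations, assuming the frame name and element IDs from the code above:

# Log in inside the gsft_main iframe, then switch back out before waiting
# for the incident form on the next page.
driver.switch_to.frame("gsft_main")
driver.find_element_by_id("user_name").send_keys("username")
password = driver.find_element_by_id("user_password")
password.send_keys("password")
password.send_keys(Keys.RETURN)

driver.switch_to.default_content()   # leave the login iframe right after submitting

element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "incident.category")))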

Related

Running this code gives me a timeout exception error, why?

I'm trying to build a script to automate filling in a textbox with Selenium, but I can't seem to get it to work.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
import time

browser = webdriver.Chrome('C:/Users/xiang/PycharmProjects/testo/chromedriver.exe')
browser.get('https://zbib.org/')
wait = WebDriverWait(browser, 10)
name = "form-control form-control form-control-lg id-input"
try:
    input = wait.until(ec.presence_of_element_located((By.CLASS_NAME, name)))
finally:
    browser.quit()
I expected there to be no error and the browser/driver not to quit, but I get this error in the terminal and the browser/driver quits:
Traceback (most recent call last):
File "C:/Users/xiang/PycharmProjects/testo/bib.py", line 14, in <module>
input = wait.until(ec.presence_of_element_located((By.CLASS_NAME, name)))
File "C:\Users\xiang\PycharmProjects\testo\venv\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Please help, thanks!
Your code can definitely throw an exception:
WebDriverWait raises a TimeoutException if the condition is not satisfied within the timeout you set.
You can catch and ignore the exception by adding this:
from selenium.common.exceptions import TimeoutException

try:
    input = wait.until(ec.presence_of_element_located((By.CLASS_NAME, name)))
except TimeoutException:
    pass
finally:
    browser.quit()
Since your class name contains spaces (it is really several classes), you should use a CSS selector instead. Your name variable would be:
name = ".form-control.form-control.form-control-lg.id-input"
Your code would then be:
from selenium.common.exceptions import TimeoutException

try:
    input = wait.until(ec.presence_of_element_located((By.CSS_SELECTOR, name)))
except TimeoutException:
    pass
finally:
    browser.quit()
To ensure the browser is always closed, you can also use the driver as a context manager, like this:
name = ".form-control.form-control.form-control-lg.id-input"
chromedriver = 'C:/Users/xiang/PycharmProjects/testo/chromedriver.exe'
with webdriver.Chrome(chromedriver) as browser:
browser.get('https://zbib.org/')
wait = WebDriverWait(browser, 10)
try:
input = wait.until(ec.presence_of_element_located((By.CSS_SELECTOR, name)))
except TimeoutException:
pass

How to increase the request page time in python 3 while scraping web pages?

I have started scraping reviews from an e-commerce platform, performing sentiment analysis on them, and sharing the results on my blog so that people can understand everything about a product in just one article.
I am using Python packages like selenium and bs4. Here is my code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from contextlib import closing
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver import Firefox
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time
import requests
import re
from bs4 import BeautifulSoup

def remove_non_ascii_1(text):
    return ''.join([i if ord(i) < 128 else ' ' for i in text])

with closing(Firefox()) as browser:
    site = "https://www.flipkart.com/honor-8-pro-midnight-black-128-gb/product-reviews/itmeymafrghbjcpf?page=1&pid=MOBEWXHMVYBBMZGJ"
    browser.get(site)
    file = open("review.txt", "w")
    for count in range(1, 100):
        nav_btns = browser.find_elements_by_class_name('_33m_Yg')
        button = ""
        for btn in nav_btns:
            number = int(btn.text)
            if(number == count):
                button = btn
                break
        button.send_keys(Keys.RETURN)
        WebDriverWait(browser, timeout=10).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "_2xg6Ul")))
        read_more_btns = browser.find_elements_by_class_name('_1EPkIx')
        for rm in read_more_btns:
            browser.execute_script("return arguments[0].scrollIntoView();", rm)
            browser.execute_script("window.scrollBy(0, -150);")
            rm.click()
        page_source = browser.page_source
        soup = BeautifulSoup(page_source, "lxml")
        ans = soup.find_all("div", class_="_3DCdKt")
        for tag in ans:
            title = str(tag.find("p", class_="_2xg6Ul").string).replace(u"\u2018", "'").replace(u"\u2019", "'")
            title = remove_non_ascii_1(title)
            title.encode('ascii', 'ignore')
            content = tag.find("div", class_="qwjRop").div.prettify().replace(u"\u2018", "'").replace(u"\u2019", "'")
            content = remove_non_ascii_1(content)
            content.encode('ascii', 'ignore')
            content = content[15:-7]
            votes = tag.find_all("span", class_="_1_BQL8")
            upvotes = int(votes[0].string)
            downvotes = int(votes[1].string)
            file.write("Review Title : %s\n\n" % title)
            file.write("Upvotes : " + str(upvotes) + "\n\nDownvotes : " + str(downvotes) + "\n\n")
            file.write("Review Content :\n%s\n\n\n\n" % content)
    file.close()
The code works fine on platforms like Amazon, but on Flipkart, after crawling 14 pages I get an error saying "Someting is Wrong!!!" and the crawling stops.
On the command line I get this error:
C:\Users\prate\Desktop\Crawler\Git_Crawler\New>python scrape.py
Traceback (most recent call last):
File "scrape.py", line 37, in
WebDriverWait(browser, timeout=10).until(EC.presence_of_all_elements_located((By.CLASS_NAME, "_2xg6Ul")))
File "C:\Users\prate\AppData\Local\Programs\Python\Python36\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
No message is printed either. I think that if I increase the interval between requests to the platform, it might let me crawl.
What should I do?
The error says it all:
selenium.common.exceptions.TimeoutException: Message:
If you look at the API docs for the expected_conditions clause presence_of_all_elements_located(locator), it is defined as:
An expectation for checking that there is at least one element present on a web page. locator is used to find the element; returns the list of WebElements once they are located.
Now, if you browse to the intended webpage:
https://www.flipkart.com/honor-8-pro-midnight-black-128-gb/product-reviews/itmeymafrghbjcpf?page=1&pid=MOBEWXHMVYBBMZGJ
you will find that the page shows no products or reviews, so the locator strategy you have adopted, (By.CLASS_NAME, "_2xg6Ul"), doesn't identify any element on the page.
Hence, even though the synchronization time elapses, no WebElements are added to the list and selenium.common.exceptions.TimeoutException is raised.
As you mentioned, the code works fine on platforms like Amazon; it is worth mentioning that https://www.flipkart.com is ReactJS-based, and the page structure may differ from website to website.
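If you want the crawl to stop cleanly (rather than crash) when a page shows no reviews, one option is to catch the TimeoutException around the wait and break out of the loop. A minimal sketch, reusing the class name and loop structure from the question:

from selenium.common.exceptions import TimeoutException

for count in range(1, 100):
    # ... click the pagination button for this page, as in the question ...
    try:
        WebDriverWait(browser, timeout=10).until(
            EC.presence_of_all_elements_located((By.CLASS_NAME, "_2xg6Ul")))
    except TimeoutException:
        # No review titles appeared within 10 seconds; assume there are no
        # more review pages (or the site served an error page) and stop.
        print("No reviews found on page %d, stopping." % count)
        break
    # ... parse browser.page_source with BeautifulSoup, as in the question ...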

Unable to click HREF under headers (invisible elements)

I want to click all the href links under the main headers and navigate to those pages to scrape them. For speed, I want to click the hrefs without having to click the headers first. My question is: is there a way to click these links even though they are not visible on the page? It does not seem to be working for me. It gives me:
Traceback (most recent call last):
File "C:/Users/Bain3/PycharmProjects/untitled4/Centrebet2.py", line 58, in <module>
EC.element_to_be_clickable((By.XPATH, '(//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a)[%s]' % str(index + 1)))).click()
File "C:\Users\Bain3\Anaconda3\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
I have replaced
EC.element_to_be_clickable((By.XPATH, '(//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a)[%s]' % str(index + 1)))).click()
with
driver.find_element_by_xpath('(//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a)[%s]' % str(index + 1)).click()
However, this does not remedy it, as it only clicks visible elements.
My code below is:
from random import shuffle
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium import webdriver as web
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import TimeoutException
from random import randint
from time import sleep
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import csv
import requests
import time
from selenium import webdriver

success = False
while not success:
    try:
        driver = webdriver.Chrome()
        driver.set_window_size(1024, 600)
        driver.maximize_window()
        driver.get('http://centrebet.com/')
        success = True
    except:
        driver.quit()
        sleep(5)

sports = driver.find_element_by_id("accordionMenu1_ulSports")
if sports.get_attribute("style") == "display: none;":
    driver.find_element_by_xpath('//ul[@id="menu_acc"]/li[3]/a').click()

driver.find_element_by_xpath(".//*[@data-type ='sports_l1'][contains(text(), 'Soccer')]").click()
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
options = driver.find_elements_by_xpath('//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a')

# Get list of integers [1, 2, ... n]
indexes = [index for index in range(len(options))]
# Shuffle them
shuffle(indexes)

for index in indexes:
    # Click on random option
    wait(driver, 10).until(
        EC.element_to_be_clickable((By.XPATH, '(//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a)[%s]' % str(index + 1)))).click()
I have also tried:
driver.execute_script('document.getElementByxpath("//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a").style.visibility = "visible";')
to remedy this, though it simply gives an error. Any ideas on how to resolve this issue of invisible elements?
driver.execute_script('document.getElementByxpath("//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a").style.visibility = "visible";')
gives you an error because that is not the correct way to use XPath in JavaScript. You can find the correct way here.
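For reference, evaluating an XPath from JavaScript is done with document.evaluate; there is no document.getElementByXpath function. A minimal sketch of the same visibility change done that way (the XPath is taken from the question; untested against the live site):

# Evaluate the XPath in the page and make the first matching element visible.
js = """
var result = document.evaluate(
    '//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a',
    document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null);
var element = result.singleNodeValue;
if (element) { element.style.visibility = 'visible'; }
"""
driver.execute_script(js)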
To scrape the required data you can use the code below:
import requests
import time
from selenium import webdriver

url = "http://centrebet.com/"
success = False
while not success:
    try:
        driver = webdriver.Chrome()
        driver.set_window_size(1024, 600)
        driver.maximize_window()
        driver.get(url)
        success = True
    except:
        driver.quit()
        time.sleep(5)

sports = driver.find_element_by_id("accordionMenu1_ulSports")
links = [url + link.get_attribute("onclick").replace("menulink('", "").replace("')", "")
         for link in sports.find_elements_by_xpath('.//a[starts-with(@onclick, "menulink")]')]
for link in links:
    print(requests.get(link).text)
Instead of clicking on each link, you can request content of each page with HTTP-GET
You can even try using JavascriptExecutor.
Use the code below to set the element's style attribute to display: block;:
driver.execute_script("arguments[0].style.display = 'block'", driver.find_element_by_xpath("//*[@id='accordionMenu1_ulSports']/li/ul/li/ul"))
Note: make sure you are using the correct XPath. Your <ul> element is the one that is hidden, not the <a>, so take the XPath of that <ul> tag only and try.
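Once the <ul> is displayed, the links from the question should become clickable. A minimal sketch combining the two steps (the XPaths and the wait alias are taken from the question's code; untested against the live site):

# Force the hidden <ul> to display, then click the first link inside it.
menu_ul = driver.find_element_by_xpath("//*[@id='accordionMenu1_ulSports']/li/ul/li/ul")
driver.execute_script("arguments[0].style.display = 'block';", menu_ul)
wait(driver, 10).until(
    EC.element_to_be_clickable(
        (By.XPATH, '(//*[@id="accordionMenu1_ulSports"]/li/ul/li/ul/li/a)[1]'))).click()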

How to login to gmail using chrome in windows using python

I am using a Python script to log in to Gmail in Chrome, but after clicking Next the code returns an error. Can anyone please help me with what's wrong? I am new to Python. Below is the code I used.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
import threading
import os,time,csv,datetime
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
a = webdriver.Chrome()
a.get("https://accounts.google.com/")
user = a.find_element_by_id("Email")
user.send_keys('username@gmail.com')
login = a.find_element_by_id("next")
login.click()
pwd = WebDriverWait(a, 10).until(EC.presence_of_element_located((By.ID, "Passwd")))
pwd.send_keys('p######')
login = a.find_element_by_id("signIn")
login.click()
clk = WebDriverWait(a, 10).until(EC.presence_of_element_located((By.LINK_TEXT, 'myaccount')))
clk.click()
logout = WebDriverWait(a, 10).until(EC.presence_of_element_located((By.ID, "gb_71")))
logout.click()
Below is the error:
Traceback (most recent call last):
File "C:\Users\mapraveenkumar\Documents\Python\gmail.py", line 20, in
clk = WebDriverWait(a, 10).until(EC.presence_of_element_located((By.LINK_TEXT, 'myaccount')))
File "C:\Users\mapraveenkumar\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
That's most likely because after clicking on the "Next" button, the page takes some time to load and present the Password text field. Try using a wait statement to make sure your script waits until the element can be located.
Example:
...
login = a.find_element_by_id("next")
login.click()
pwd = WebDriverWait(a, 10).until(EC.presence_of_element_located((By.ID, "Passwd")))
pwd.send_keys('password')
...
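The same explicit-wait pattern can be applied to the later steps as well; for example, the lookup that actually raised the TimeoutException in your traceback could be guarded like this (the LINK_TEXT value 'myaccount' is taken from your code; if no link has exactly that text, a different locator such as PARTIAL_LINK_TEXT may be needed):

# Sketch only: wait until the link is clickable before clicking it.
clk = WebDriverWait(a, 10).until(
    EC.element_to_be_clickable((By.LINK_TEXT, 'myaccount')))
clk.click()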

Weibo login in selenium in python?

I'm doing a Weibo login in Selenium, but I can't handle the popup window.
This is my code. What is the problem?
from selenium import webdriver
username = 'your id'
password = 'your password'
driver = webdriver.Firefox()
driver.get("http://overseas.weibo.com/")
driver.implicitly_wait(10)
handles = driver.window_handles
driver.find_elements_by_link_text('登入微博')[0].click()
driver.implicitly_wait(10)
driver.switch_to_alert()
driver.find_element_by_name('memberid').send_keys(username)
driver.find_element_by_name('passwd').send_keys(password)
driver.find_elements_by_link_text('登入')[0].click()
Traceback (most recent call last):
File "D:/python34/weibo_login.py", line 35, in
driver.find_element_by_name('memberid').send_keys(username)
File "C:\Python34\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 362, in find_element_by_name
return self.find_element(by=By.NAME, value=name)
File "C:\Python34\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 744, in find_element
{'using': by, 'value': value})['value']
File "C:\Python34\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 233, in execute
self.error_handler.check_response(response)
File "C:\Python34\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"name","selector":"memberid"}
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///C:/Users/hena/AppData/Local/Temp/tmpwk788t0k/extensions/fxdriver@googlecode.com/components/driver-component.js:10770)
at fxdriver.Timer.prototype.setTimeout/<.notify (file:///C:/Users/hena/AppData/Local/Temp/tmpwk788t0k/extensions/fxdriver@googlecode.com/components/driver-component.js:625)
The opened login form is actually inside an iframe; it's not an alert. You need to switch to that particular iframe first, before finding the elements and sending keys, as below:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
username = 'your id'
password = 'your password'
driver = webdriver.Firefox()
driver.get("http://overseas.weibo.com/")
wait = WebDriverWait(driver, 10)
link = wait.until(EC.visibility_of_element_located((By.LINK_TEXT, "登入微博")))
link.click()
frame = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "cboxIframe")))
driver.switch_to_frame(frame)
user = wait.until(EC.visibility_of_element_located((By.ID, "memberid")))
user.send_keys(username)
passwd = wait.until(EC.visibility_of_element_located((By.ID, "passwd")))
passwd.send_keys(password)
button = wait.until(EC.visibility_of_element_located((By.ID, "login")))
button.click()
Hope it helps...:)
