Switching Between Windows of JS Page Selenium Python [duplicate] - python-3.x

This question already has answers here:
Switching into new window using Selenium after button click
(1 answer)
Windows Handling Closes the whole browser if i try to close the current window in python
(2 answers)
Closed 3 years ago.
Does the driver.switch_to.window function work with JavaScript pages in Selenium Python?
The question is whether driver.switch_to.window works with JS pages. From my recent findings it doesn't.
I recently discovered in my project that it doesn't work, so I used the workaround of driving the browser's back button instead.
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
import pdb
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
chrome_path=r'C:\webdrivers\chromedriver.exe'
driver=webdriver.Chrome(chrome_path)
url='https://www.google.com/'
URL_List=['1line williams']
for i in range(0, 2):
    driver.get(url)
    SearchPage = driver.window_handles[0]
    searchbar = driver.find_element_by_name('q')
    time.sleep(2)
    searchbar.send_keys(URL_List[i])
    searchbar.send_keys(Keys.ENTER)
    if i == 2:
        PipelineID = 1
        SelectLink = driver.find_element_by_xpath('//*[@id="rso"]/div[1]/div/div/div/div/div[1]/a/h3').click()
        time.sleep(2)
        Links = driver.find_elements_by_class_name("links")
        aElements = driver.find_elements_by_tag_name("a")
        for name in aElements:
            window_after = driver.window_handles[0]
            time.sleep(5)
            SelectPipeline = driver.find_element_by_xpath('//*[@id="splashContent_pipelines"]/div[' + str(PipelineID) + ']/p[2]/a[1]').click()
            time.sleep(2)
            Location = driver.find_element_by_xpath('//*[@id="navmenu-v"]/li[4]/a').click()
            time.sleep(2)
            File = driver.find_element_by_xpath('//*[@id="navmenu-v"]/li[4]/ul/li/a').click()
            time.sleep(15)
            document = driver.find_element_by_tag_name('iframe')
            driver.switch_to.frame(document)
            time.sleep(2)
            DownloadFile = driver.find_element_by_xpath('//*[@id="list:downloadBtn"]').click()
            time.sleep(2)
            driver.execute_script("window.history.go(-2)")
            PipelineID = PipelineID + 1
            time.sleep(3)
            if PipelineID > 3:
                driver.quit()
                break
This is the code I'm using. Selenium goes to Google, searches for the keyword, and downloads the data in the Location column. At the end I tried driver.switch_to.window instead of the execute_script call (the browser back button), but it didn't work.
So once again I would like to know whether driver.switch_to.window works on JS pages or not.
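For what it's worth, driver.switch_to.window is page-agnostic: it operates on window handles at the browser level, not on page content, so JS-heavy pages are not the problem. The usual failure mode is switching before the JS-opened window actually exists. A minimal sketch (the helper function is mine, not Selenium API; the commented calls assume a live driver session):

```python
def newly_opened_handle(before, after):
    """Return the one handle present in `after` but not in `before`, else None."""
    new = set(after) - set(before)
    return new.pop() if new else None

# With a live driver (sketch):
# before = driver.window_handles
# link.click()  # the page's JS opens a new tab/window
# WebDriverWait(driver, 10).until(lambda d: len(d.window_handles) > len(before))
# driver.switch_to.window(newly_opened_handle(before, driver.window_handles))
```

The wait is the important part: without it, window_handles may not yet contain the new window when you try to switch.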

Related

How to mouse over with Python Selenium 4.3.0?

ENV
Python 3.10
Selenium 4
Hi,
I use :
def findAllByXPath(p_driver, xpath, time=5):
    try:
        return WebDriverWait(p_driver, time).until(
            EC.presence_of_all_elements_located((By.XPATH, xpath)))
    except Exception as ex:
        print(f"ERROR findAllByXPath : {ex}")

posts = findAllByXPath(p_driver, "//article//a", 2)
for index in range(0, len(posts)):
    p_driver.execute_script("arguments[0].scrollIntoView()", posts[index])
    time.sleep(random.uniform(0.5, 0.8))
    action.move_to_element(posts[index]).perform()
to perform a mouse-over and try to get elements.
I am trying to get the number of comments on an Instagram post from a profile page.
If you manually hover the mouse over a post, the number of comments and likes is displayed.
But when I try to do it in Python, it doesn't find the element.
I tried this XPath from the console:
$x("//article//li//span[string-length(text())>0]")
It gives results when I freeze the browser with F8.
Doesn't action.move_to_element(post).perform() work? I suspect version 4.3.0 removed many functions, such as ActionChains, that existed in Selenium 3. Did it?
How can I extract this element?
Or am I doing something wrong?
ActionChains works nicely in Selenium v4!
I don't have an Instagram account so I cannot try there, but by running this code you can see that move_to_element() works, and how it works.
The About button (top left corner of the Stack Overflow homepage) gets highlighted.
Remember not to move your mouse while it runs, of course!
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
actions = ActionChains(driver)
driver.get("https://stackoverflow.com/")
about_button = WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.XPATH, "//a[@href='https://stackoverflow.co/']")))
actions.move_to_element(about_button)
actions.perform()
EDIT: I am using Selenium 4.4.3 and Python 3.10 but it should work with Selenium 4.3.0 as well.

Cannot scroll all the way down the page, because it keeps refreshing Selenium Python

I am trying to automate saving a website's description and URL. The program loops and calls the function get_info(). Basically it needs to add the first website on the loaded Google page and then scroll down, so that the next call can add the following websites. The problem is that the program refreshes the page every time get_info() executes, which brings you back to the top.
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time
browser=webdriver.Firefox()
def get_info():
    browser.switch_to.window(browser.window_handles[2])
    description = WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "h3"))
    ).text
    site = WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "cite"))
    )
    site.click()
    url = browser.current_url
    browser.back()
    browser.execute_script("window.scrollBy(0,400)", "")
To stop Chrome from auto-reloading, you can do this:
driver.execute_cdp_cmd('Emulation.setScriptExecutionDisabled', {'value': True})
You can read about this Chrome DevTools flag here.
This question has already been asked here.
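Another way to avoid the reload entirely is to never navigate: read each result's URL from its href attribute instead of clicking it, then scroll. The bookkeeping fits in a small pure helper; the selectors in the commented sketch are hypothetical and depend on Google's current markup:

```python
def collect_results(pairs, seen=None):
    """Merge (description, url) pairs into a list, skipping URLs already seen."""
    seen = set(seen or ())
    fresh = []
    for description, url in pairs:
        if url not in seen:
            seen.add(url)
            fresh.append((description, url))
    return fresh, seen

# With a live driver (sketch; "div.g" is a hypothetical result-container selector):
# blocks = browser.find_elements(By.CSS_SELECTOR, "div.g")
# pairs = [(b.find_element(By.TAG_NAME, "h3").text,
#           b.find_element(By.TAG_NAME, "a").get_attribute("href")) for b in blocks]
# fresh, seen = collect_results(pairs, seen)
# browser.execute_script("window.scrollBy(0, 400)")
```

Because nothing is clicked, the page never leaves the results list and the scroll position survives each pass.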

How to find a specific element of a page in Selenium Python, because I can't find them

I have tried every way I know, but I can't find the "login" and "password" elements. Could someone help me, please? I am tired...
link: https://trading.finam.ru/
from selenium import webdriver
import traceback
from time import sleep
url = 'https://trading.finam.ru/'
driver = webdriver.Chrome(executable_path=r"C:\Users\Idensas\PycharmProjects\pythonProject\selenium\chromedriver.exe")
driver.get(url)
sleep(2)
try:
    x = driver.find_element_by_xpath('//label[contains(text(),"Логин")]')
    x.send_keys('123')
except Exception:
    traceback.print_exc()
finally:
    driver.close()
    driver.quit()
The picture of that site
The following XPaths will work:
//input[#name='login']
//input[#name='password']
I see the site loads slowly, so you must add some delay, preferably an explicit wait, like:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.XPATH, "//input[@name='login']")))
before trying to insert text there.
Any website takes some time to load completely before all the elements are visible to the user.
To check the time taken by your website in different regions, you can run a check on https://gtmetrix.com/
You can use a simple sleep for a few seconds until the page is loaded:
from selenium import webdriver
import time

driver = webdriver.Chrome("chromedriver_path_in_your_local")  # path to chromedriver on your machine
driver.get("https://trading.finam.ru/")
time.sleep(5)
login = driver.find_element_by_xpath("//input[@name='login']")
password = driver.find_element_by_xpath("//input[@name='password']")

How to zoom in a webpage using Python Selenium in FireFox

I am working on Ubuntu, writing a web automation script that can open a page and zoom in on it. I am using Python Selenium. Below is the code:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.wikipedia.org/")
driver.maximize_window()
time.sleep(2)
This opens the browser and loads the Wikipedia page. Now I have to implement the zoom-in functionality. For this I tried the line of code below:
driver.execute_script("document.body.style.zoom='150%'")
It didn't work, so I tried the lines of code below:
driver.execute_script('document.body.style.MozTransform = "scale(1.50)";')
driver.execute_script('document.body.style.MozTransformOrigin = "0 0";')
It kind of worked but the webpage height and width were disturbed and it looks like below:
It should look like below:
Can anyone please suggest a good working solution to zoom in on a webpage using Python Selenium in Firefox?
I didn't find any working solution, so what I did is use pyautogui to simulate the keys:
pyautogui.keyDown('ctrl')
pyautogui.press('+')
pyautogui.keyUp('ctrl')
This zooms the webpage to 110%. You can run it again to reach 120%, and so on.
Thanks
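If simulating OS-level keystrokes feels too fragile, Firefox also exposes page scaling as a profile preference that Selenium can set before the browser starts. This is a sketch based on the layout.css.devPixelsPerPx preference; whether it behaves exactly like Ctrl-+ zoom may vary by Firefox version, so treat it as an assumption to verify:

```python
def zoom_prefs(scale):
    """Firefox preference dict that scales all pages by the given factor."""
    return {"layout.css.devPixelsPerPx": str(scale)}

# With Selenium (sketch, assumes Firefox and geckodriver are installed):
# from selenium import webdriver
# options = webdriver.FirefoxOptions()
# for key, value in zoom_prefs(1.5).items():
#     options.set_preference(key, value)
# driver = webdriver.Firefox(options=options)
# driver.get("https://www.wikipedia.org/")
```

Because the preference applies before any page loads, the layout reflows at the scaled size instead of being distorted the way the CSS transform approach was.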

Selenium WebDriverWait returns an error on Python while web scraping

I am creating a script for Facebook scraping.
I am using Selenium 2.53.6, GeckoDriver 0.11.1 with Firefox 50 under Ubuntu 14.04.5 LTS. I have also tried it under Windows 10 using GeckoDriver as well as ChromeDriver, with the same results (which I will describe below) :(
The following code fragment, which I copied from the Explicit Waits section of the official documentation, is:
import datetime, time, sys, argparse
from time import strftime
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
...
...
def main(usrEmail, pwd, numberOfScrolls, secondsWait, searchItem, outFilename):
    # Open Firefox Browser
    binary = FirefoxBinary('/usr/bin/firefox')
    browser = webdriver.Firefox(firefox_binary=binary)
    # put browser in specific position
    browser.set_window_position(400, 0)
    # goto facebook
    browser.get("http://www.facebook.com")
    # waiting( 5 )
    try:
        element = WebDriverWait(browser, 10).until(
            EC.presence_of_element_located((By.ID, "u_0_p")))
    finally:
        logging("logging in...")
The problem is that I get this error:
File "fb.py", line 86, in main
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))
NameError: name 'By' is not defined
As a workaround I am just using delays in the form time.sleep(delay), and the program works pretty well, but it would be better if I could use the WebDriverWait feature.
Am I missing something obvious, or is this a Selenium bug or a browser quirk?
Any help / clarification will be greatly appreciated!
Thanks for your time!!
It could be that you're missing the import that defines By:
from selenium.webdriver.common.by import By