How to zoom in a webpage using Python Selenium in Firefox

I am working on Ubuntu and writing a web automation script that opens a page and zooms in on it. I am using Python Selenium. Below is the code:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("https://www.wikipedia.org/")
driver.maximize_window()
time.sleep(2)
This opens the browser and loads the Wikipedia page. Now I have to implement the zoom-in functionality. For this I tried the line below:
driver.execute_script("document.body.style.zoom='150%'")
It didn't work, so I tried the following lines:
driver.execute_script('document.body.style.MozTransform = "scale(1.50)";')
driver.execute_script('document.body.style.MozTransformOrigin = "0 0";')
It kind of worked, but the page height and width were distorted rather than cleanly zoomed.
Can anyone please suggest a good working solution to zoom in a webpage using Python Selenium in Firefox?

I didn't find any working solution, so I used pyautogui to simulate the keystrokes.
pyautogui.keyDown('ctrl')
pyautogui.press('+')
pyautogui.keyUp('ctrl')
This zooms the webpage to 110%. You can run it again to reach 120%, and so on.
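For reference, a minimal self-contained sketch of that approach, assuming pyautogui is installed and the Firefox window has focus (the step count is only an example; Firefox typically steps through 110%, 120%, 133%, 150%):
import time

import pyautogui
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.wikipedia.org/")
driver.maximize_window()
time.sleep(2)  # give the window time to take focus

# Each Ctrl + '+' press bumps the browser zoom up one step.
for _ in range(4):
    pyautogui.keyDown('ctrl')
    pyautogui.press('+')
    pyautogui.keyUp('ctrl')
    time.sleep(0.2)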
Thanks

Related

Using Selenium to select a menu item from a ribbon

I am attempting to use Selenium to automate tasks in the office. One such task involves jumping onto a website and navigating its menus to a specific page. In this case, I am trying to get Selenium to select the Applications menu, then the Warranty Tools sub-menu, then the Warranty Navigator link.
Below is the code I have worked out so far. It loads the website correctly and navigates to the opening page. From there I am attempting to get it to select the 'Applications' menu; instead, the program times out without selecting the intended element. I have attempted to use css_selector, xpath (included in the example), and class, to no avail.
import selenium
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support.ui import Select
from time import sleep
driver = webdriver.Chrome()
driver.get('https://www.solutionnavigator.com/s/login/?language=en_US')
driver.maximize_window()
email = WebDriverWait(driver,10).until(EC.element_to_be_clickable(('xpath','element'))).send_keys('email')
password = WebDriverWait(driver,10).until(EC.element_to_be_clickable(('xpath','element'))).send_keys('password')
login = WebDriverWait(driver,10).until(EC.element_to_be_clickable(('xpath','element'))).click()
menu_content = driver.find_element('xpath', 'element').click()
time.sleep(20)
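No answer is shown here, but the usual pattern for this kind of ribbon navigation is to wait for the top-level menu, hover it with ActionChains so the sub-menu renders, and only then click the revealed link. Below is a hedged sketch of that pattern; the text-based XPaths are hypothetical placeholders, not the site's real locators:
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

driver = webdriver.Chrome()
driver.get('https://www.solutionnavigator.com/s/login/?language=en_US')
driver.maximize_window()
wait = WebDriverWait(driver, 20)

# Hypothetical locators -- replace with the real ones from the page source.
applications = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[text()='Applications']")))
ActionChains(driver).move_to_element(applications).perform()

warranty_tools = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[text()='Warranty Tools']")))
ActionChains(driver).move_to_element(warranty_tools).perform()

wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()='Warranty Navigator']"))).click()
If the waits still time out, check whether the menu lives inside an iframe and switch to it first.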

How to mouse over with Python Selenium 4.3.0?

ENV
Python 3.10
Selenium 4
Hi,
I use:
def findAllByXPath(p_driver, xpath, time=5):
    try:
        # return WebDriverWait(p_driver, time).until(EC.presence_of_all_elements_located((By.XPATH, (xpath))))
        return WebDriverWait(p_driver, time).until(EC.presence_of_all_elements_located((By.XPATH, xpath)))
    except Exception as ex:
        print(f"ERROR findAllByXPath : {ex}")

posts = findAllByXPath(p_driver, "//article//a", 2)
for index in range(0, len(posts)):
    p_driver.execute_script("arguments[0].scrollIntoView()", posts[index])
    time.sleep(random.uniform(0.5, 0.8))
    action.move_to_element(posts[index]).perform()
This is meant to perform a mouse-over and fetch the elements.
I am trying to get the number of comments on an Instagram post from a profile page.
If you manually move the mouse over a post, the number of comments and likes is displayed.
But when I try to do it in Python, it doesn't find the element.
I tried this XPath from the console:
$x("//article//li//span[string-length(text())>0]")
It gives results when I freeze the browser with F8.
Does action.move_to_element(post).perform() not work? I suspect that version 4.3.0 removed many Selenium 3 functions such as ActionChains, didn't it?
How can I extract this element?
Or am I doing something wrong?
ActionChains works nicely in Selenium v4!
I don't have an Instagram account so I cannot try it there, but you can see by running this code that move_to_element() works and how it works.
The About button (top-left corner of the Stack Overflow homepage) gets highlighted.
Remember not to move your mouse while it runs, of course!
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
actions = ActionChains(driver)
driver.get("https://stackoverflow.com/")
about_button = WebDriverWait(driver, 30).until(EC.visibility_of_element_located((By.XPATH, "//a[@href='https://stackoverflow.co/']")))
actions.move_to_element(about_button)
actions.perform()
EDIT: I am using Selenium 4.4.3 and Python 3.10 but it should work with Selenium 4.3.0 as well.
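Applied back to the Instagram case, a hedged sketch of the same idea (untested against Instagram, whose markup changes often; the profile URL and XPaths are assumptions taken from the question):
import random
import time

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
actions = ActionChains(driver)
driver.get("https://www.instagram.com/instagram/")  # placeholder profile

wait = WebDriverWait(driver, 10)
posts = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//article//a")))
for post in posts:
    driver.execute_script("arguments[0].scrollIntoView()", post)
    time.sleep(random.uniform(0.5, 0.8))
    actions.move_to_element(post).perform()
    # The like/comment overlay only exists while the mouse stays over the post,
    # so read it immediately after the hover.
    counts = post.find_elements(By.XPATH, ".//li//span[string-length(text())>0]")
    print([c.text for c in counts])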

How to click the Continue button using Selenium and Python

I'm trying to automate some tedious copy / paste I do monthly from my bank's online service via Selenium and Python 3. Unfortunately, I can't get Selenium to click the log-in link.
It's the blue continue button at https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5.
Strangely, when I try to click that link manually in the browser launched by Selenium, it doesn't work either - whereas it does work in a browser I launch manually.
I suspect the issue is that the bank's website is smart enough to detect that I'm automating the browser activity. Is there any way to get around that?
If not, could it be something else?
I've tried using Chrome and Firefox - to no avail. I'm using a 64 bit Windows 10 machine with Chrome 73.0.3683.103 and Firefox 66.0.
Relevant code is below.
#websites and log in information
bmo_login_path = 'https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5'
bmo_un = 'fake_user_name'
bmo_pw = 'fake_password'
#Selenium setup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
chrome_driver_path = 'C:\\Path\\To\\Driver\\chromedriver.exe'
gecko_driver_path = 'C:\\Path\\To\\Driver\\geckodriver.exe'
browswer_bmo = webdriver.Firefox(executable_path = gecko_driver_path)
#browswer_bmo = webdriver.Chrome(executable_path = chrome_driver_path)
#log into BMO
browswer_bmo.get(bmo_login_path)
time.sleep(5)
browswer_bmo.find_element_by_id('siBankCard').send_keys(bmo_un)
browswer_bmo.find_element_by_id('regSignInPassword').send_keys(bmo_pw)
browswer_bmo.find_element_by_id('btnBankCardContinueNoCache1').click()
Sending the keys works perfectly. I may actually have the wrong element ID (I was trying to test that in Chrome when I realized I couldn't click the link manually) - but I think the bigger issue is that I can't manually click the link in the browser launched by Selenium. Thank you for any ideas.
EDIT
This is a screenshot of all I get when I try to click the Continue button.
Ultimately the error message I get in my IDE (Jupyter Notebook) is:
TimeoutException: Message: timeout
(Session info: chrome=74.0.3729.108)
(Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729#{#29}),platform=Windows NT 10.0.17134 x86_64)
To click the button with the text Continue, you can fill in the Card Number and Password fields, inducing WebDriverWait for element_to_be_clickable(), and use the following solution:
Code Block:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('--disable-extensions')
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5')
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input.dijitReset.dijitInputInner#siBankCard[name='FBC_Number']"))).send_keys("1234567890112233")
driver.find_element_by_css_selector("input.dijitReset.dijitInputInner#regSignInPassword[name='FBC_Password']").send_keys("fake_password")
driver.find_element_by_css_selector("span.dijitReset.dijitInline.dijitIcon.dijitNoIcon").click()
# driver.quit()
I was able to solve the problem by adding the following line below the options variables. This disables Chrome's automation check. I used the answer's code wholesale and added the following line in the correct location, before starting the driver.
options.add_experimental_option("excludeSwitches", ['enable-automation'])
ref: https://help.applitools.com/hc/en-us/articles/360007189411--Chrome-is-being-controlled-by-automated-test-software-notification
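For clarity, a short sketch of where that line sits relative to the options in the code block above (same assumptions as that answer; only the extra flag is new):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('--disable-extensions')
# Drop the 'enable-automation' switch so Chrome does not announce
# that it is being controlled by automated test software.
options.add_experimental_option("excludeSwitches", ['enable-automation'])
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5')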

Retrieving elements with custom HTML attributes

I have the following website: https://www.kvk.nl/handelsregister/publicaties/, where I would like to retrieve the login link with Selenium, Scrapy and Python. So for the relevant function, I have the following code:
def start_requests(self):
    self.driver = webdriver.Chrome(executable_path=os.path.join(os.getcwd(), "Drivers", "chromedriver.exe"))
    self.driver.get(self.initial_url)
    test = access_page_wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, 'a[data-ui-test-class="linkCard_toegangscode"]')))
    if test.is_displayed():
        print("+1")
    else:
        print("-1")
However, this does not seem to work: it just waits 15 seconds and then stops, never reaching +1 or -1.
Now my question is: how can we point Selenium to the correct element? It also does not work using XPath: find_elements_by_xpath("//a[@data-ui-test-class='linkCard_toegangscode']").
Should I use another selection approach, and if so, which one?
There is a frame stopping you from accessing the element. Switch to the iframe and then access the element.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
import os
driver = webdriver.Chrome(executable_path=os.path.join(os.getcwd(), "Drivers", "chromedriver.exe"))
driver.get("https://www.kvk.nl/handelsregister/publicaties/")
driver.switch_to.frame(0)
test=WebDriverWait(driver,10).until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, 'a[data-ui-test-class="linkCard_toegangscode"]')))
if test.is_displayed():
    print("+1")
else:
    print("-1")
Try the above code. It should print what you are looking for.
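One follow-up worth noting: after working inside the iframe, switch back to the top-level document before touching elements outside it, for example:
# Return to the main document once done inside the iframe.
driver.switch_to.default_content()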

Selenium WebDriverWait returns an error on Python while web scraping

I am creating a script for Facebook scraping.
I am using Selenium 2.53.6, Gecko driver 0.11.1 with Firefox 50 under Ubuntu 14.04.5 LTS. I have also tried it under Windows 10 using Gecko as well as Chromedriver too, with the same results (that I will describe below) :(
The following code fragment, which I copied from the Explicit Waits section of the original documentation, is what I use:
import datetime, time, sys, argparse
from time import strftime
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
...
...
def main(usrEmail, pwd, numberOfScrolls, secondsWait, searchItem, outFilename):
    # Open Firefox Browser
    binary = FirefoxBinary('/usr/bin/firefox')
    browser = webdriver.Firefox(firefox_binary=binary)
    # put browser in specific position
    browser.set_window_position(400, 0)
    # goto facebook
    browser.get("http://www.facebook.com")
    # waiting( 5 )
    try:
        element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))
    finally:
        logging("logging in...")
The problem is that I get this error:
File "fb.py", line 86, in main
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))
NameError: name 'By' is not defined
As a workaround I am just using delays of the form time.sleep(delay), and the program works pretty well, but it would be better if I could use the WebDriverWait feature.
Am I missing something obvious, or is this some kind of Selenium bug or browser quirk?
Any help / clarification will be greatly appreciated!
Thanks for your time!!
Could be you're missing the import of By:
from selenium.webdriver.common.by import By
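With that import in place, the wait from the question should resolve; a minimal self-contained sketch (the element id "u_0_p" is taken from the question and may no longer exist on the live page):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

browser = webdriver.Firefox()
browser.get("http://www.facebook.com")
try:
    # Wait up to 10 seconds for the element to be present in the DOM.
    element = WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.ID, "u_0_p")))
finally:
    print("logging in...")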
