I looked up the Selenium Python documentation, and it allows one to take screenshots of an element. I tried the following code, and it worked for small pages (around 3-4 actual A4 pages when you print them):
from selenium import webdriver
from selenium.webdriver import FirefoxOptions

firefox_profile = webdriver.FirefoxProfile()
firefox_profile.set_preference("browser.privatebrowsing.autostart", True)

# Configure options for the Firefox webdriver
options = FirefoxOptions()
options.add_argument('--headless')

# Initialise the Firefox webdriver
driver = webdriver.Firefox(firefox_profile=firefox_profile, options=options)
driver.maximize_window()
driver.get(url)
driver.find_element_by_tag_name("body").screenshot("career.png")
driver.close()
When I try it with url="https://waitbutwhy.com/2020/03/my-morning.html", it gives the screenshot of the entire page, as expected. But when I try it with url="https://waitbutwhy.com/2018/04/picking-career.html", almost half of the page is not rendered in the screenshot (the image is too large to upload here), even though the "body" tag does extend all the way down in the original HTML.
I have tried using both implicit and explicit waits (set to 10s, which is more than enough for a browser to load all contents, comments and discussion section included), but that has not improved the screenshot capability. Just to be sure that selenium was in fact loading the web page properly, I tried loading without the headless flag, and once the webpage was completely loaded, I ran driver.find_element_by_tag_name("body").screenshot("career.png"). The screenshot was again half-blank.
It seems that there might be some memory constraints put on the screenshot method (although I couldn't find any), or the logic behind the screenshot method itself is flawed. I can't figure it out though. I simply want to take the screenshot of the entire "body" element (preferably in a headless environment).
You can try the code below; you just need to install a package first, from the command prompt: pip install Selenium-Screenshot
import time
from selenium import webdriver
from Screenshot import Screenshot_Clipping

driver = webdriver.Chrome()
driver.maximize_window()
driver.implicitly_wait(10)
driver.get("https://waitbutwhy.com/2020/03/my-morning.html")

obj = Screenshot_Clipping.Screenshot()
img_loc = obj.full_Screenshot(driver, save_path=r'.', image_name='capture.png')
print(img_loc)

time.sleep(5)
driver.close()
The resulting screenshot covers the whole page; you just need to zoom in on the saved image.
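Separately, if upgrading to Selenium 4 is an option, geckodriver exposes a Firefox-only full-page screenshot API that stitches the whole document rather than just the viewport. A minimal sketch, assuming Selenium 4 and Firefox:

import time
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument('--headless')
driver = webdriver.Firefox(options=options)
driver.get("https://waitbutwhy.com/2018/04/picking-career.html")
time.sleep(10)  # crude wait for the page to finish loading
# Firefox-only API in Selenium 4: captures the full page, not just the viewport
driver.get_full_page_screenshot_as_file("career.png")
driver.quit()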
Hope this works for you!
I want to scrape the comments off this page using BeautifulSoup - https://www.x....s.com/video_id/the-suburl
The comments are loaded on click via JavaScript, and they are paginated; each page loads more comments on click as well. I wish to fetch all comments; for each comment I want to get the poster's profile URL, the comment text, the number of likes, the number of dislikes, and the time posted (as stated on the page).
The comments can be a list of dictionaries.
How do I go about this?
This script will print all comments found on the page:
import requests
from bs4 import BeautifulSoup

url = 'https://www.x......com/video_id/gggjggjj/'
video_id = url.rsplit('/', maxsplit=2)[-2].replace('video', '')
u = 'https://www.x......com/threads/video/ggggjggl/{video_id}/0/0'.format(video_id=video_id)

comments = requests.post(u, data={'load_all': 1}).json()
for id_ in comments['posts']['ids']:
    print(comments['posts']['posts'][id_]['date'])
    print(comments['posts']['posts'][id_]['name'])
    print(comments['posts']['posts'][id_]['url'])
    print(BeautifulSoup(comments['posts']['posts'][id_]['message'], 'html.parser').get_text())
    # ...etc.
    print('-' * 80)
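Since you asked for the comments as a list of dictionaries, the same loop can collect the fields instead of printing them. A minimal sketch built on the JSON structure above (the like/dislike counts would need the actual key names from the payload, which I don't know):

all_comments = []
for id_ in comments['posts']['ids']:
    post = comments['posts']['posts'][id_]
    all_comments.append({
        'profile_url': post['url'],
        'comment': BeautifulSoup(post['message'], 'html.parser').get_text(),
        'time_posted': post['date'],
        'name': post['name'],
        # likes/dislikes: inspect the JSON payload for the right keys
    })
print(len(all_comments), 'comments collected')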
This would be done with Selenium. Selenium emulates a browser; depending on your preference you can use the Chrome driver (chromedriver) or the Firefox driver, which is the geckodriver.
Here is a link on how to install the chrome webdriver:
http://jonathansoma.com/lede/foundations-2018/classes/selenium/selenium-windows-install/
Then in your code here is how you would set it up:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
# this part may change depending on where you installed the webdriver.
# You may have to define the path to the driver.
# For me my driver is in C:/bin so I do not need to define the path
chrome_options = Options()
# use '--start-maximized' instead if you want a visible browser window
chrome_options.add_argument('--headless')
driver = webdriver.Chrome(options=chrome_options)
driver.get(your_url)
html = driver.page_source  # the rendered HTML, after JavaScript has run
Selenium has several functions that you can use to perform actions such as clicking on elements on the page. Once you find an element with Selenium, you can use the .click() method to interact with it.
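For example, to click a hypothetical "load more comments" button before grabbing the page source (the selector below is a placeholder; inspect the page to find the real one):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# placeholder selector -- replace with the real button on the page
load_more = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, 'button.load-more'))
)
load_more.click()
html = driver.page_source  # grab the HTML again after the new comments render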
Let me know if this helps
The target of this project is to automate checking sites with the Microsoft Edge browser using selenium-python. I downloaded the webdriver for Edge Legacy from this link, went for the latest release 17134, and extracted it without any problems. Now, let's say I want to visit Facebook in an automated way. With Firefox, using the geckodriver, this works:
Firefox code sample with Selenium:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
# setting up headless option for faster execution
options = Options()
options.headless = True
browser = webdriver.Firefox(options=options)
browser.get('https://www.facebook.com/')
but when I try to use the Microsoft Edge browser that is built into Windows 10, I get an attribute error:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.edge.options import Options
options = Options()
options.headless = True
#browser = webdriver.edge(options=options)
browser = webdriver.edge()
PS: when I uncomment this part (browser = webdriver.edge(options=options)), I get a module not found error.
What is the right way to call the Microsoft Edge browser, and what am I doing wrong?
When I use Edge and try to make it headless, I also find it hard to do with only slight changes from the Chrome setup. I referred to the official documentation and got an official solution. Besides selenium, you need to install msedge-selenium-tools: pip install msedge-selenium-tools. Then use the Edge class from the msedge tools, like this:
from msedge.selenium_tools import Edge
driver = Edge(executable_path='where')  # 'where' is the path to your Edge webdriver binary
And if we want to make Edge headless, we need to use the EdgeOptions class, which selenium.webdriver doesn't offer: selenium.webdriver only provides ChromeOptions, FirefoxOptions and Ie's. EdgeOptions lives in the separate msedge.selenium_tools package. Then we add arguments just as we do for Firefox or Chrome; before that, we need to set the attribute use_chromium to True. The whole code:
from msedge.selenium_tools import EdgeOptions
from msedge.selenium_tools import Edge
# make Edge headless
edge_options = EdgeOptions()
edge_options.use_chromium = True # if we miss this line, we can't make Edge headless
# Slightly different from Chrome: the '--' prefix is not needed before 'headless' and 'disable-gpu'
edge_options.add_argument('headless')
edge_options.add_argument('disable-gpu')
driver = Edge(executable_path='where', options=edge_options)
Hope it helps. Sorry for my awkward explanation.
I am using this WebDriver package, and it works perfectly. The package automatically downloads and runs a driver matched to your system's browser. If you want to install and run a specific version, that is also possible; for the instructions, click here.
This code is for Selenium 4 [Python 3.10.*]
class MyEdge:
    def get_browser(self):
        options = webdriver.EdgeOptions()
        # If you want to avoid a popup browser window, use '--headless'
        options.add_argument('--headless')
        # Ref: https://learn.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=python#using-chromium-specific-options
        self.driver = webdriver.Edge(options=options, service=Service(EdgeChromiumDriverManager().install()))
        return self.driver
More optional arguments:
--headless, --no-sandbox, --disable-gpu, --window-size=1280x1696, --user-data-dir=/tmp/user-data, --hide-scrollbars, --enable-logging, --log-level=0, --single-process, --data-path=/tmp/data-path, --ignore-certificate-errors, --homedir=/tmp, --disk-cache-dir=/tmp/cache-dir
Make sure to import:
# Import
from selenium import webdriver
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager
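Usage would then look something like this (the URL is just an example):

driver = MyEdge().get_browser()
driver.get('https://www.example.com')
print(driver.title)
driver.quit()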
I tried to refer to the official documentation for WebDriver for Microsoft Edge (EdgeHTML), but I did not find any information about headless mode in it.
WebDriver (EdgeHTML)
I also tried to refer to some old threads to find any information on this topic. It looks like we cannot use headless mode with the MS Edge Legacy browser.
Headless Edge driven through Selenium by C#
I found one article that also said that 'User cannot use IE10, IE11, Edge, Opera & Safari for headless testing.'
Headless Browsers Testing using Selenium Webdriver
From the above references, it looks like you cannot use Headless mode with the MS Edge legacy browser.
As a workaround, I suggest you test with the MS Edge Chromium browser; I found that it supports headless mode.
Using Chromium-Specific Options
I am writing some tests using Selenium with Python. So far my suite works perfectly with Chrome and Firefox. However, the same code is not working when I try it with Edge (EdgeHTML). I am using the latest version at the time of writing, which is release 17134, version 6.17134. My tests are running on Windows 10.
The problem is that Edge is not waiting for the page to load. As part of every test, a login is first performed. The credentials are entered and the form submitted. Firefox and Chrome will then wait for the page we are redirected to, to load. However, with Edge, the next code is executed as soon as the login submit button is clicked, which of course results in a failed test.
Is this a bug with Edge? It seems a bit too fundamental to be the case. Does the browser need to be configured in a certain manner? I cannot see anything in the documentation.
This is the code run with the last statement resulting in a redirect as we have logged in:
self.driver.find_element_by_id("login-email").send_keys(username)
self.driver.find_element_by_id("login-password").send_keys(password)
self.driver.find_element_by_id("login-messenger").click()
Edge decides it does not need to wait and will then execute the next code which is to navigate to a protected page. The code is:
send_page = SendPage(driver)
send_page.load_page()
More concisely:
self.driver.find_element_by_id("login-messenger").click()
# should wait now for the login redirect before executing the line below, but it does not!
self.driver.get(BasePage.base_url + self.uri)
I can probably perform a workaround by waiting for an element on the next page to be present, thus making Edge wait. This does not feel like the right thing to do; I certainly don't want to keep making invasive changes just for Edge.
Any advice on what I should do?
Is this a bug with Edge? It seems a bit too fundamental to be the case. Does the browser need to be configured in a certain manner? I cannot see anything in the documentation.
No, I don't think it is a bug in the Edge browser. Because of differences in browser performance, Edge may simply spend more time loading the page.
Generally, we can use the time.sleep(secs) method, the WebDriverWait() method, or the implicitly_wait() method to wait for a page to load.
The code block below shows you how to wait for a page load to complete. It waits, up to a timeout, for an element to appear on the page (you need an element id).
If the page loads in time, it prints "Page loaded"; if the timeout period (in seconds) passes first, it shows the timeout error.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get('https://pythonbasics.org')
timeout = 3
try:
    element_present = EC.presence_of_element_located((By.ID, 'main'))
    WebDriverWait(driver, timeout).until(element_present)
except TimeoutException:
    print("Timed out waiting for page to load")
else:
    print("Page loaded")
For more detailed information, please check the following articles:
Wait until page is loaded with Selenium WebDriver for Python
How to wait for elements in Python Selenium WebDriver
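In your particular case you could also wait for the redirect itself rather than for an element on the destination page. A minimal sketch using EC.url_changes, assuming the login actually changes the URL:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

login_url = self.driver.current_url
self.driver.find_element_by_id("login-messenger").click()
# block until the browser has navigated away from the login page
WebDriverWait(self.driver, 10).until(EC.url_changes(login_url))
self.driver.get(BasePage.base_url + self.uri)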
I'm trying to automate some tedious copy / paste I do monthly from my bank's online service via Selenium and Python 3. Unfortunately, I can't get Selenium to click the log-in link.
It's the blue continue button at https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5.
Strangely, when I try to click that link manually in the browser launched by Selenium, it doesn't work either - whereas it does work in a browser I launch manually.
I suspect the issue is that the bank's website is smart enough to detect that I'm automating the browser activity. Is there any way to get around that?
If not, could it be something else?
I've tried using Chrome and Firefox - to no avail. I'm using a 64 bit Windows 10 machine with Chrome 73.0.3683.103 and Firefox 66.0.
Relevant code is below.
#websites and log in information
bmo_login_path = 'https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5'
bmo_un = 'fake_user_name'
bmo_pw = 'fake_password'
#Selenium setup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
chrome_driver_path = 'C:\\Path\\To\\Driver\\chromedriver.exe'
gecko_driver_path = 'C:\\Path\\To\\Driver\\geckodriver.exe'
browser_bmo = webdriver.Firefox(executable_path = gecko_driver_path)
#browser_bmo = webdriver.Chrome(executable_path = chrome_driver_path)
#log into BMO
browser_bmo.get(bmo_login_path)
time.sleep(5)
browser_bmo.find_element_by_id('siBankCard').send_keys(bmo_un)
browser_bmo.find_element_by_id('regSignInPassword').send_keys(bmo_pw)
browser_bmo.find_element_by_id('btnBankCardContinueNoCache1').click()
Sending the keys works perfectly. I may actually have the wrong element ID (I was trying to test that in Chrome when I realized I couldn't click the link manually) - but I think the bigger issue is that I can't manually click the link in the browser launched by Selenium. Thank you for any ideas.
EDIT
This is a screenshot of all I get when I try to click the continue button.
Ultimately the error message I get in my IDE (Jupyter Notebook) is:
TimeoutException: Message: timeout
(Session info: chrome=74.0.3729.108)
(Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729#{#29}),platform=Windows NT 10.0.17134 x86_64)
To click on the button with the text Continue, you can fill in the Card Number and Password fields, inducing WebDriverWait for element_to_be_clickable(), and use the following solution:
Code Block:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('--disable-extensions')
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www1.bmo.com/onlinebanking/cgi-bin/netbnx/NBmain?product=5')
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input.dijitReset.dijitInputInner#siBankCard[name='FBC_Number']"))).send_keys("1234567890112233")
driver.find_element_by_css_selector("input.dijitReset.dijitInputInner#regSignInPassword[name='FBC_Password']").send_keys("fake_password")
driver.find_element_by_css_selector("span.dijitReset.dijitInline.dijitIcon.dijitNoIcon").click()
# driver.quit()
I was able to fix this issue and solve the problem by adding the following line below the options variables; it disables Chrome's automation check. I used the above code wholesale and then added the following line in the correct location, before starting the driver.
options.add_experimental_option("excludeSwitches", ['enable-automation'])
ref: https://help.applitools.com/hc/en-us/articles/360007189411--Chrome-is-being-controlled-by-automated-test-software-notification
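For context, the line goes together with the other option settings, before the driver is constructed; a minimal sketch based on the code block above:

options = webdriver.ChromeOptions()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('--disable-extensions')
options.add_experimental_option("excludeSwitches", ['enable-automation'])
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')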
Below is a script that opens a URL, saves the image as a JPEG file, and uses an HTML attribute (the Accession Number) as the file name. The script runs but saves corrupted images: 210 bytes in size, with no preview. When I try to open them, the error message suggests the file is damaged.
The reason I am saving the images instead of doing a direct request is to get around the site's security measures; it doesn't seem to allow web scraping. My colleague who tested the script below on Windows got a robot check request (just once, at the beginning of the loop) before the images successfully downloaded. I do not get this check from the site, so I believe my script is actually pulling the robot check instead of the webpage, as it hasn't allowed me to manually bypass the check. I'd appreciate help addressing this issue, perhaps by forcing the robot check when the script opens the first URL.
Dependencies
I am using Python 3.6 on MacOS. If anyone testing this for me is also using Mac and is using Selenium for the first time, please note that a file called "Install Certificates.command" first needs to be executed before you can access anything. Otherwise, it will throw a "Certificate_Verify_Failed" error. Easy to search in Finder.
Download for Selenium ChromeDriver utilized below: https://chromedriver.storage.googleapis.com/index.html?path=2.41/
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
import urllib.request
import time

urls = ['https://www.metmuseum.org/art/collection/search/483452',
        'https://www.metmuseum.org/art/collection/search/460833',
        'https://www.metmuseum.org/art/collection/search/551844']

#Set up Selenium Webdriver
options = webdriver.ChromeOptions()
#options.add_argument('--ignore-certificate-errors')
options.add_argument("--test-type")
driver = webdriver.Chrome(executable_path="/Users/user/Desktop/chromedriver", chrome_options=options)

for link in urls:
    #Load page and pull HTML file
    driver.get(link)
    time.sleep(2)
    soup = BeautifulSoup(driver.page_source, 'lxml')

    #Find details (e.g. Accession Number)
    details = soup.find_all('dl', attrs={'class':'artwork__tombstone--row'})
    for d in details:
        if 'Accession Number' in d.find('dt').text:
            acc_no = d.find('dd').text

    pic_link = soup.find('img', attrs={'id':'artwork__image', 'class':'artwork__image'})['src']
    urllib.request.urlretrieve(pic_link, '/Users/user/Desktop/images/{}.jpg'.format(acc_no))
    time.sleep(2)
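If urlretrieve keeps returning those 210-byte placeholder files, one thing worth trying is to download the image with requests while reusing the cookies from the Selenium session plus a browser-like User-Agent, so the image request looks like it comes from the same browser that passed the robot check. A sketch, not guaranteed to get past the site's bot detection:

session = requests.Session()
# copy the cookies Selenium has accumulated into the requests session
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])
headers = {'User-Agent': 'Mozilla/5.0'}  # minimal browser-like UA string
resp = session.get(pic_link, headers=headers)
if resp.ok:
    with open('/Users/user/Desktop/images/{}.jpg'.format(acc_no), 'wb') as f:
        f.write(resp.content)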