I want to video-record Selenium test cases in IE, Firefox, Safari, ...
When using Selenium Grid, a node can be processing multiple tests at the same time. I need to record a video of each browser currently executing a test case.
I was thinking of writing a wrapper script to which I pass the browser name and other arguments; it would then invoke the browser, get the window handle, and record the session in C#.
Is this a good idea? Are there other solutions already available?
Thanks
Have a look at Test case recording.
I realized today that you can merge Chrome DevTools Protocol with Selenium in order to automate some very specific parts of a process within a website.
For instance: after some initial conditions have been met, automate the process of uploading some files to an account, etc.
According to the official repository, you run a command like the following from cmd to create a new Chrome session with your user data:
chrome.exe --remote-debugging-port=9222 --user-data-dir="C:\Users\ResetStoreX\AppData\Local\Google\Chrome\User Data"
In my case, the command above launches a new Chrome session loaded with my user data.
The thing is, in my original session I had some Chrome extensions added, and I know that if I were to work only with Selenium using its chromedriver.exe, I could easily add an extension (which must be packed as a .crx file) with the following code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opt = Options()  # the object that will hold the Chrome options
opt.add_extension(fr'{extension_path}')  # path to the packed .crx extension
driver = webdriver.Chrome(options=opt)
But it seems that Chrome DevTools Protocol can't take options the way Selenium does, so I would have to install all my extensions in this pseudo-session of mine again, which is no problem.
But after installing those extensions, will they remain installed and ready for use the next time I execute chrome.exe --remote-debugging-port=9222 --user-data-dir="C:\Users\ResetStoreX\AppData\Local\Google\Chrome\User Data", and if so, where are they stored?
Or if not, does that mean I would have to reinstall them every single time I need to run tests with Chrome DevTools Protocol and my Chrome extensions? Thanks in advance.
Can confirm: a session opened with Chrome DevTools Protocol permanently stores the extensions you reinstalled. It also remembers the credentials you used to log in to some sites.
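The reason they persist is that the extensions (and the saved logins) live inside the profile folder passed via --user-data-dir, typically under Default\Extensions, so they survive new launches of that command. As a minimal sketch, assuming Chrome is already running with --remote-debugging-port=9222 as shown above, Selenium can attach to that same session (and therefore the same profile and extensions) through the debuggerAddress option:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opt = Options()
# attach to the Chrome instance already running with --remote-debugging-port=9222
opt.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(options=opt)
print(driver.title)  # drives the already-open session, extensions and logins included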
I have a URL to scrape and I'm wondering what the best method is.
With Selenium, for example:
from selenium import webdriver

executable_path = "....\\chromedriver"
browser = webdriver.Chrome(executable_path=executable_path)
url = "xxxxxxxxxx"
browser.get(url)
timeout = 20
# find_elements_by_css_selector returns a list of Selenium elements.
titles_element = browser.find_elements_by_css_selector('[data-test-id="xxxx"]')
This method launches the Chrome browser. On Windows I have to install both the Chrome browser and a ChromeDriver of the matching version. But what about a Linux server: installing ChromeDriver is no problem, but is it a problem to install the Chrome browser on a server without a graphical interface?
Would you suggest using the requests module rather than Selenium, given that my URL is already built?
Is the risk of being detected by the website higher with Selenium or with requests?
If you have just one URL to scrape, Selenium is better because it's easier to code with than requests.
For example: if you need to scroll down to make your data appear, that will be harder to do without a browser.
If you want to do intensive scraping, you should maybe try requests with BeautifulSoup; it will use far fewer resources on your server.
You can also use Scrapy; it's very easy to spoof the user agent with it, which makes your bot harder to detect.
If you scrape responsibly, with a delay between two requests, you should not be detected with either method. You can check the robots.txt file to be safe.
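As a rough sketch of the requests + BeautifulSoup approach mentioned above, reusing the placeholder URL and the data-test-id selector from the question (and assuming the data is present in the static HTML rather than rendered by JavaScript):

import requests
from bs4 import BeautifulSoup

url = "xxxxxxxxxx"  # the already-built URL from the question
headers = {"User-Agent": "Mozilla/5.0"}  # spoof a browser user agent

response = requests.get(url, headers=headers, timeout=20)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# same selector as in the Selenium example; only works if the content is in the static HTML
titles = soup.select('[data-test-id="xxxx"]')
print([title.get_text(strip=True) for title in titles])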
I am working on a webpage that depends on browser extensions to perform certain tasks.
From the webpage I can communicate with my Chrome extension using 'externally_connectable' and:
chrome.runtime.sendMessage(string extensionId, any message, object options, function responseCallback)
The good thing here is that from the point of view of my website I am sure I am communicating with my extension and not with a 'reverse engineered' version of my extension.
In Firefox extensions, however, there is no support for externally_connectable, and I have to use events or window.postMessage:
const event = new CustomEvent('msg-for-content-script');
document.querySelector('body').dispatchEvent(event);
This works OK, but the problem is that if somebody manages to reverse-engineer my extension, I am not able to check whether the extension I am communicating with is really mine.
Is there anybody who can give advice on how to make sure that I am communicating with the right extension?
I use simple uptime monitors like StatusCake, UptimeRobot, etc. to verify that my sites are up. The problem is that some of the sites are ASP.NET applications with complex __doPostBack interactions -- basically, the user fills out a form, clicks submit, and then ASP.NET-generated JavaScript takes them to the next page.
My idea was to write a CasperJS (basically an easier API for PhantomJS) script to simulate this user interaction and test to make sure it works.
I have the test running in CasperJS and now I'd like to expose the test as its own REST API so I can have my uptime monitor hit it every few minutes. The REST API would return 200 if the test is successful; some error code if not.
I would normally throw restify or Express around the logic, but you have to run CasperJS via casperjs file.js instead of via node, which means I can't run restify within it. I've looked at PhantomJS, Nightmare, and Zombie. If you know for sure one of those would work for this, let me know; otherwise, I had issues with their APIs that led me back to CasperJS.
This feels a bit like exposing a test suite as an API if that gives any ideas.
Any suggestions?
PhantomJS has a built-in web server that you can use with CasperJS, as shown in this answer: CasperJS passing data back to PHP
I'm trying to set up some automated testing using BrowserStack's Selenium and their Node.js driver. I want to check whether the page shows any insecure content warnings when accessing the URL via HTTPS.
Is there a way to detect that in Selenium? If one browser does it easier than another that's fine.
Here are a few different ways to detect this using Selenium and other tools:
iterate through all links and ensure they all start with https:// (though via Selenium, this won't detect complex loaded content, XHR, JSONP, and interframe RPC requests); a rough sketch of this approach appears after this list
automate running the tool on Why No Padlock?, which may not do more than the above method
utilize Sikuli to take a screenshot of the region of the browser address bar showing the green padlock (in the case of Chrome) and fail if it is not present (a caveat of using this in parallel testing is mentioned here)
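As a rough sketch of the first approach in the list above (the URL here is just a placeholder), you could collect the src/href attributes of resources in the DOM and flag anything served over plain http; note that dynamically loaded content still won't show up:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

insecure = []
# only checks resources referenced directly in the DOM; XHR/JSONP loads are not covered
for tag, attr in (("img", "src"), ("script", "src"), ("link", "href"), ("iframe", "src")):
    for element in driver.find_elements_by_tag_name(tag):
        value = element.get_attribute(attr) or ""
        if value.startswith("http://"):
            insecure.append(value)

assert not insecure, "Insecure resources found: %s" % insecure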
There is also mention here of Content Security Policy in browsers, which can prevent the loading of any non-secure objects and report to an external URL when one is encountered.
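For reference, a report-only CSP header along these lines (the report endpoint is just a placeholder) would surface any non-HTTPS loads without blocking the page:

Content-Security-Policy-Report-Only: default-src https:; report-uri https://example.com/csp-reports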
UPDATE:
These proposed solutions aim to detect any non-secure objects being loaded into the page. That should be the best practice for asserting that the content is secure. However, if you literally need to detect whether the specific browser's insecure content warning message is being displayed (i.e., testing the browser rather than your website), then using Sikuli to match either the visible existence of the warning messages or the non-existence of your page's content could do the job.
Firefox creates a log entry each time it runs into mixed content, so you can check the logs in Selenium. Example:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://googlesamples.github.io/web-fundamentals/fundamentals/security/prevent-mixed-content/simple-example.html")
browser_logs = driver.get_log("browser")
and, in browser_logs, look for entries like:
{u'timestamp': 1483366797638, u'message': u'Blocked loading mixed active content "http://googlesamples.github.io/web-fundamentals/samples/discovery-and-distribution/avoid-mixed-content/simple-example.js"', u'type': u'', u'level': u'INFO'}
{u'timestamp': 1483366797644, u'message': u'Blocked loading mixed active content "http://googlesamples.github.io/web-fundamentals/samples/discovery-and-distribution/avoid-mixed-content/simple-example.js"', u'type': u'', u'level': u'INFO'}
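Building on the snippet above, a small check along these lines (the message substring is taken from the log entries shown) could fail the test whenever such entries appear:

# browser_logs comes from driver.get_log("browser") in the snippet above
mixed_content = [entry for entry in browser_logs
                 if "Blocked loading mixed" in entry.get("message", "")]
assert not mixed_content, "Mixed content blocked: %s" % mixed_content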