I am looking to write a test where I can switch between offline mode and back to online mode midway through a Cucumber test. I can achieve this manually via DevTools in Chrome, but is there a way to automate it using Poltergeist or headless Chrome?
I know that page.driver is accessible; in fact, I use it for setting cookie values in another test:
Given(/^I set the "([^"]*)" cookie value to "([^"]*)" for the domain "([^"]*)"$/) do |cookieName, cookieValue, cookieDomain|
  if "#{DRIVER}" == "headless_chrome"
    page.driver.browser.manage.add_cookie name: cookieName, value: cookieValue, domain: cookieDomain
  else
    page.driver.set_cookie(cookieName, cookieValue, { :domain => cookieDomain })
  end
  sleep 1
end
Unless I'm missing something, I can't see how to switch between offline and online modes. Has anyone done this in their test setup?
When using Selenium with Chrome as the driver, you can use network_conditions=:
page.driver.browser.network_conditions = { offline: true }
I don't believe Poltergeist had similar functionality.
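For example, here is a minimal sketch of a pair of Cucumber steps built on that setter, in the spirit of the cookie step above. The step wording and the latency/throughput values are my own illustration; depending on your chromedriver version you may need to supply latency and throughput alongside the offline flag.
# Sketch only: toggle offline/online mid-scenario via the Selenium Chrome driver.
# Under Capybara, page.driver.browser is the underlying Selenium::WebDriver instance.
Given(/^the browser goes offline$/) do
  page.driver.browser.network_conditions = { offline: true, latency: 0, throughput: 500 * 1024 }
end

Given(/^the browser comes back online$/) do
  page.driver.browser.network_conditions = { offline: false, latency: 0, throughput: 500 * 1024 }
end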
Related
I wish to connect to a website and download some PDF files. The website allows us to view the content only after logging in. It asks us to log in using an OTP and can't be logged in on more than 3 devices simultaneously.
I wish to download all the PDFs listed. So I previously tried
python -m playwright open --save-storage websitename.json
to save the login, but it doesn't work for that specific website.
The websitename.json file was empty, whereas it worked for other websites.
Therefore the only solution I can think of now is to connect to the current browser, open that website and then download those PDFs.
If you have a solution for this, or even some other approach, please do let me know.
I was also thinking about switching over to Puppeteer for this.
But I don't know HTML parsing using Node.js, and since I feel more comfortable using CSS selectors, I can't easily switch.
Playwright is basically the same as Puppeteer, so it wouldn't be a problem if you switched between the two.
You can use puppeteer-core or Playwright to control your existing browser installation (for example Chrome) and then use the existing user data (profile) folder to load the specified website's login info (cookies, web storage, etc.):
const puppeteer = require('puppeteer-core')

const launchOptions = {
  headless: false,
  executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome', // For macOS
  // executablePath: 'C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe', // For Windows
  // executablePath: '/usr/bin/google-chrome', // For Linux
  args: [
    '--user-data-dir=/Users/username/Library/Application Support/Google/Chrome/', // For macOS
    // '--user-data-dir=%userprofile%\\AppData\\Local\\Chrome\\User Data', // For Windows
    // '--profile-directory=Profile 1' // Select the default or a specific profile
  ]
}

;(async () => {
  const browser = await puppeteer.launch(launchOptions)
})()
For more details about Playwright's method, you can check this workaround:
https://github.com/microsoft/playwright/issues/1985
To connect to an already running browser (Chrome) session, you can use the connect_over_cdp method (added in v1.9 of Playwright).
For this, you need to start Chrome in debug mode. Create a desktop shortcut for Chrome and edit the Target field of the shortcut properties to start it in debug mode. Add --remote-debugging-port=9222 to the Target field so that the target path becomes:
"C:\Program Files\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222
Now start Chrome and check that it is in debug mode. To do so, open a new tab and paste this URL in the address bar: http://localhost:9222/json/version. If you are in debug mode, you should now see a page with a JSON response; otherwise, if you are in "normal" mode, it will say "Page not found" or something similar.
Now, in your Python script, write the following code to connect to the Chrome instance:
browser = playwright.chromium.connect_over_cdp("http://localhost:9222")
default_context = browser.contexts[0]
page = default_context.pages[0]
Here is the full script code:
# Import the sync_playwright function from the sync_api module of Playwright.
from playwright.sync_api import sync_playwright

# Start a new session with Playwright using the sync_playwright function.
with sync_playwright() as playwright:
    # Connect to an existing instance of Chrome using the connect_over_cdp method.
    browser = playwright.chromium.connect_over_cdp("http://localhost:9222")
    # Retrieve the first context of the browser.
    default_context = browser.contexts[0]
    # Retrieve the first page in the context.
    page = default_context.pages[0]
    # Print the title of the page (title is a method in the sync API).
    print(page.title())
    # Print the URL of the page.
    print(page.url)
I'm working on a CLI with OCLIF. In one of the commands, I need to simulate a couple of clicks on a web page (using the WebdriverIO framework for that). Before you're able to reach the desired page, there is a redirect to a page with a login prompt. When I use WebdriverIO methods related to alerts such as browser.getAlertText(), browser.sendAlertText() or browser.acceptAlert(), I always get the error "no such alert".
As an alternative, I tried to get the URL when I am on the page that shows the login prompt. With the URL, I wanted to do something like browser.url(https://<username>:<password>@<url>) to circumvent the prompt. However, browser.url() returns chrome-error://chromewebdata/ as the URL when I'm on that page, I guess because the focus is on the prompt and that doesn't have a URL. I also don't know the URL before I land on that page: when being redirected, a query string parameter containing a token that I need is added to the URL.
A screenshot of the prompt:
Is it possible to handle this scenario with WebdriverIO? And if so, how?
You are on the right track; there are probably just some fine-tunings you need to address to get it working.
First off, regarding the chrome-error://chromewebdata errors, quoting the Chrome docs:
If you see errors with a location like chrome-error://chromewebdata/ in the error stack, these errors are not from the extension or from your app - they are usually a sign that Chrome was not able to load your app.
When you see these errors, first check whether Chrome was able to load your app. Does Chrome say "This site can't be reached" or something similar? You must start your own server to run your app. Double-check that your server is running, and that the url and port are configured correctly.
A lot of words that sum up to: Chrome couldn't load the URL you used inside the browser.url() command.
I tried it myself on The Internet's Basic Auth page. It worked like a charm.
Screenshots (not reproduced here) compared the URL without basic auth credentials and the URL with basic auth credentials.
Code used:
it('Bypass HTTP basic auth', () => {
  browser.url('https://admin:admin@the-internet.herokuapp.com/basic_auth');
  browser.waitForReadyState('complete');
  const banner = $('div.example p').getText().trim();
  expect(banner).to.equal('Congratulations! You must have the proper credentials.');
});
What I'd do is manually go through each step, trying to emulate the same flow in the script you're using. From experience I can tell you that I've dealt with some HTTP web apps that required a refresh after issuing the basic auth browser.url() call.
Another way to tackle this is to make use of custom browser profiles (Firefox | Chrome). I know I wrote a tutorial on it somewhere on SO, but I'm too lazy to find it. I reference a similar post here.
Short story: manually complete the basic auth flow (logging in with credentials) in an incognito window (so as to isolate the configuration). Open chrome://version/ in another tab of that session and store the contents of the Profile Path. That folder is going to keep all your sessions and preserve cookies and other browser data.
Lastly, in your currentCapabilities, update the browser-specific options to start the sessions with a custom profile, via the '--user-data-dir=/path/to/your/custom/profile' argument. It should look something like this:
'goog:chromeOptions': {
  args: [
    '--user-data-dir=/Users/iamdanchiv/Desktop/scoped_dir18256_17319',
  ],
}
Good luck!
We have a requirement to open the Google Chrome browser from Internet Explorer 8. To do this, we are using a JavaScript ActiveXObject with the following code.
Code Snippet:
var URL = "http://www.google.com";
var chromeCommand = "Chrome --app=" + URL + " --allow-outdated-plugins";
var shell = new ActiveXObject("WScript.Shell");
shell.run(chromeCommand);
For this, we need to select Enable for the "Initialize and script ActiveX controls not marked as safe for scripting" option under Tools -> Internet Options -> Security tab -> Trusted sites -> Custom level.
Will enabling this setting cause any harm from a security standpoint?
Please let me know whether we can open the Chrome browser this way or not, or else suggest an alternative way to do this.
Will this code work with Internet Explorer on a Linux OS?
For some reason, after logging into a site like Gmail, HtmlUnit is not working. It is not able to find HTML elements.
The following is a very simple Ruby script that shows the problem. Note that it assumes the WebDriver server is running on the same machine:
require 'rubygems'
require 'watir-webdriver'
require 'rspec/expectations'

##
## THE FOLLOWING TWO WAYS WORK
##
# browser = Watir::Browser.new(:remote, :url => "http://127.0.0.1:4444/wd/hub", :desired_capabilities => :firefox)
# browser = Watir::Browser.new(:remote, :url => "http://127.0.0.1:4444/wd/hub", :desired_capabilities => :internet_explorer)

##
## THIS WAY FAILS
##
capabilities = Selenium::WebDriver::Remote::Capabilities.htmlunit(:javascript_enabled => true)
browser = Watir::Browser.new(:remote, :url => "http://127.0.0.1:4444/wd/hub", :desired_capabilities => capabilities)

# Log in to Gmail
browser.goto "http://gmail.com"
browser.text_field(:id, 'Email').set 'roberttestingstuff041'
browser.text_field(:id, 'Passwd').set 'k4238chsj55983w'
browser.button(:id, 'signIn').click

sleep 5.0 # sleep shouldn't be needed, but just to be sure we are waiting long enough for the login to complete

frame = browser.frame(:id, 'canvas_frame')

# It fails on the next line when using HtmlUnit
frame.link(:text, 'Sign out').exist?.should == true
frame.link(:text, 'Sign out').visible?.should == true
frame.div(:id, 'guser').exist?.should == true
frame.div(:text, 'Compose mail').exist?.should == true
Note that if I create the browser object using Firefox or IE, this simple test works.
It seems to get hung up on the redirects that happen during the login process. The site I am really trying to test follows a very similar pattern, so I set up this simplified example with Gmail, which seems to show the same problem.
Can anyone help turn this into a passing test? Note that I can get a similar test to work using Celerity, which is also based on HtmlUnit, so I believe there should be some way to make this work.
This is the error that shows in the WebDriver server log, clearly showing it failing to find the element:
12:31:16.321 INFO - WebDriver remote server: INFO: Executing: [find element: By.xpath: .//a[normalize-space()='Sign out'] at URL: /session/1297704604365/element)
12:31:17.996 WARN - WebDriver remote server: WARN:
org.openqa.selenium.NoSuchElementException: Unable to locate a node using .//a[normalize-space()='Sign out']
System info: os.name: 'Windows 7', os.arch: 'x86', os.version: '6.1', java.version: '1.6.0_21'
Driver info: driver.version: EventFiringWebDriver
at org.openqa.selenium.htmlunit.HtmlUnitDriver.findElementByXPath(HtmlUnitDriver.java:699)
at org.openqa.selenium.By$6.findElement(By.java:205)
at org.openqa.selenium.htmlunit.HtmlUnitDriver$4.call(HtmlUnitDriver.java:1133)
I'm thinking that Gmail is detecting that our headless browser (in this case HtmlUnit with Rhino) does not support JavaScript.
If you look at the return from Gmail after
browser.button(:id,'signIn').click
You will see that we are on a "JavaScript must be enabled" page
p browser.text
"<style> #loading {display:none} </style> <font face=arial>JavaScript must be enabled in order for you to use Gmail in standard view. However, it seems JavaScript is either disabled or not supported by your browser. To use standard view, enable JavaScript by changing your browser options, then try again. <p>To use Gmail's basic HTML view, which does not require JavaScript, click here.</p></font><p><font face=arial>If you want to view Gmail on a mobile phone or similar device click here.</font></p> \n Loading tim.koops#gmail.com\342\200\246 \n\n\n\n Loading standard view | Load basic HTML (for slow connections)"
In this case we could go to the HTML only version of Gmail to get you through, but unfortunately I think we're stuck for now. I will pass on this failing test case to the webdriver developers for review.
Also, hope those aren't your real Gmail credentials!
You need to enable JavaScript for the HtmlUnit driver; it's disabled by default.
Use the capability HTMLUNITWITHJS as opposed to the default HTMLUNIT.
I'm using the names from the Python bindings, but I'm sure Ruby has something similar, along the lines of the sketch below.
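For reference, a minimal sketch of what that looks like from Ruby with watir-webdriver, assuming a remote WebDriver server on the default port. This mirrors the capabilities line in the question above; the HTMLUNITWITHJS capability in the Python bindings corresponds to an HtmlUnit capability with JavaScript enabled.
# Sketch only: request HtmlUnit with JavaScript enabled (the Ruby counterpart
# of the Python HTMLUNITWITHJS capability).
require 'watir-webdriver'

capabilities = Selenium::WebDriver::Remote::Capabilities.htmlunit(:javascript_enabled => true)
browser = Watir::Browser.new(:remote,
                             :url => "http://127.0.0.1:4444/wd/hub",
                             :desired_capabilities => capabilities)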
My problem is that I have a user who is having a problem displaying a portion of a website I am creating, but I am unable to reproduce it on any of my browsers, even with the same version of the browser.
What I'm looking for is probably a website that I can send the user to which will tell me what version of the browser they are running, along with the plugins installed and any other information that might affect the display of a page.
Anyone know of anything like this?
Edit: The problem is related to CSS. They want a special image around all the text inputs, but on the user's computer the text input displays partially outside of the image, which is set up as a background.
I need more user-specific information than Google Analytics provides, as you can't separate out a specific user. I also suspect that it's more complicated than just the user agent.
I also can't put the website out there publicly, because they want to keep their idea private until it's released... grr.
I find that sending users to the Support Details site (http://supportdetails.com/) is a great way to get system and browser specifics. At that site, all they have to do is enter your email address and the site will send details such as:
Operating System
Screen Resolution
Browser Name and version
Browser size (view port)
IP Address
Color Depth
Javascript enabled (Y/N)
Flash version installed
Cookies enabled (Y/N).
Those pieces of info can also be exported as CSV or PDF. Pretty sweet.
The site is made by an agency called Imulus.
Unfortunately, I don't know of any site that will log every detail about the users browser, as you request.
But perhaps browsershots.org could help with your debugging? It allows you to test your design in a lot of different browsers very easily.
EDIT: ... though it is unfortunately restricted to the initial design on page load, since it simply takes a screenshot for you.
The classic approach is to use the user agent to determine the browser and OS.
Looks like this site will display it for you.
As for plugins, there are various ways to test in JavaScript for the plugins you are looking for.
You have to test for these on the client side, as there is (to my knowledge) no way of detecting these on the server side.
The following crude example shows how to test for Acrobat Reader in IE and Mozilla browsers; it returns an object indicating whether it is installed and, if so, which version.
function TestAcro()
{
  var acrobat = new Object();
  acrobat.installed = false;
  acrobat.version = '0.0';

  if (navigator.plugins && navigator.plugins.length)
  {
    for (var x = 0, l = navigator.plugins.length; x < l; ++x)
    {
      // Note: Adobe changed the name of Acrobat to Adobe Reader
      if ((navigator.plugins[x].name.indexOf('Acrobat') != -1) |
          (navigator.plugins[x].description.indexOf('Acrobat') != -1) |
          (navigator.plugins[x].name.indexOf('Adobe Reader') != -1) |
          (navigator.plugins[x].description.indexOf('Adobe Reader') != -1))
      {
        acrobat.version = parseFloat(navigator.plugins[x].description.split('Version ')[1]);
        if (acrobat.version.toString().length == 1) acrobat.version += '.0';
        acrobat.installed = true;
        break;
      }
    }
  }
  else if (window.ActiveXObject)
  {
    for (var x = 2; x < 10; x++)
    {
      try
      {
        var oAcro = new ActiveXObject('PDF.pdfCtrl.' + x);
        if (oAcro)
        {
          acrobat.installed = true;
          acrobat.version = x + '.0';
        }
      }
      catch (e) {}
    }
    try
    {
      var oAcro4 = new ActiveXObject('PDF.pdfCtrl.1');
      if (oAcro4)
      {
        acrobat.installed = true;
        acrobat.version = '4.0';
      }
    }
    catch (e) {}
    try
    {
      var oAcro7 = new ActiveXObject('AcroPDF.PDF.1');
      if (oAcro7)
      {
        acrobat.installed = true;
        acrobat.version = '7.0';
      }
    }
    catch (e) {}
  }
  return acrobat;
}
Google Analytics? If you have any sort of web analytics program installed on your web server, it generally also gives info such as the operating system, web browser, etc. You could use the user's IP address to find their info in your logs.
Also, what issue are they having? We might be able to help.
I did find this program, but unfortunately it's not a free service, nor is there really any way for me to get the information on that page (unless I pay for it): http://www.cyscape.com/showbrow.aspx
The user agent and related HTTP headers that are sent with all requests can give you some information (browser and version), but for detail about the client-side installation, you may be out of luck for an automated capture mechanism that obtains a list of arbitrary plugins installed in the client browser. That would be a security violation, so unless a browser intentionally exposes them, you wouldn't get access to this without installing a client-side binary.
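If it helps, here is a minimal sketch (Ruby/Rack, purely illustrative and not part of the original suggestion) of capturing that header server-side, so you could ask the affected user to visit a throwaway URL and read the result from the log:
# config.ru - sketch of a tiny Rack app that logs and echoes the visitor's
# User-Agent header; run with `rackup` and send the affected user the URL.
require 'rack'

run lambda { |env|
  request = Rack::Request.new(env)
  user_agent = request.user_agent || 'unknown'
  # Log the IP and User-Agent so the visit can be matched to the user's report.
  puts "#{request.ip} #{user_agent}"
  [200, { 'Content-Type' => 'text/plain' }, ["Your browser reports: #{user_agent}\n"]]
}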
Depending on your relationship with the user, you could try something like GoToMeeting or CoPilot so that you can see the bug in action yourself. This would also allow you to peruse the browser settings and plugins.
If it is a CSS issue and the issue is with IE (as it most often is), you may want to consider using the IE7 library.
When it comes to CSS, I get it working properly in Mozilla browsers, then I see what I need to conditionally hack to make it work in IE. This library comes in handy.
Also if possible I would try to limit support to the major modern browsers out there.
And if possible try to include the mobile browsers (iPhone, etc).
Hope this helps.
I've been using Ocean's Browser Capabilities in my ASP.NET websites. It is really easy to get many properties. Specifically, I'm using the Ocean2.Web.HttpCapabilities library.
To get the browser type and capabilities:
string browserSettings = Ocean2.Web.HttpCapabilities.BrowserCaps.Build.ProcessDefault(HttpContext.Current.Request);
Here is a sample of the results:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506; Media Center PC 5.0; InfoPath.2)
os - Windows Vista
platform - WinNT
win16 - false
win32 - true
win64 - true
type - IE7
browser - IE
version - 7.0
BrowserBuild - aol - false
cookies - true
javascript - true
ecmascriptversion - 1.2
vbscript - true
activexcontrols - true
javaapplets - true
screenBitDepth - 1
mobileDeviceManufacturer - Unknown
mobileDeviceModel - Unknown
You could also try this:
BROWSER PROBE finds details about your browser, plugins, system, screen and much more.
A great tool for support staff and casual users alike.
Browser Probe
Most of these answers are outdated and have dead links.
I found http://www.mybrowserinfo.com, which suits my needs. Hope it helps someone else.
A more user-friendly service: https://aboutmybrowser.com/?nr