How to do automation testing of your website on Firefox? (webdriverio) - node.js

I am new to automation testing. I followed the steps given at "http://webdriver.io/guide.html" and everything went great.
I installed
node.js
selenium-server-standalone-3.5.3
geckodriver
chromedriver
My script goes like this:
var webdriverio = require('webdriverio');

var options = {
    desiredCapabilities: {
        browserName: 'C:\Program Files\Mozilla Firefox\firefox'
    }
};

webdriverio
    .remote(options)
    .init()
    .url('http://www.google.com')
    .getTitle().then(function(title) {
        console.log('Title was: ' + title);
    })
    .end()
    .catch(function(err) {
        console.log(err);
    });
This works well, but it opens the Chrome browser whereas I want it to open Firefox.

Switch browserName: 'C:\Program Files\Mozilla Firefox\firefox'
to just browserName: 'firefox'
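For reference, here is the same script with only that change applied (a minimal sketch; it still assumes the Selenium standalone server and geckodriver are running as described in the question):

var webdriverio = require('webdriverio');

var options = {
    desiredCapabilities: {
        // Use the browser's name, not the path to its executable;
        // Selenium locates Firefox through geckodriver.
        browserName: 'firefox'
    }
};

webdriverio
    .remote(options)
    .init()
    .url('http://www.google.com')
    .getTitle().then(function(title) {
        console.log('Title was: ' + title);
    })
    .end()
    .catch(function(err) {
        console.log(err);
    });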

Related

Protractor Azure pipeline "No element found" error

My Protractor tests work correctly on my machine, but when I start them on the Azure pipeline all tests fail with "No element found".
Do you have an idea what the problem is?
Maybe I am missing something here. This is my conf.js:
browser.ignoreSynchronization = false;
exports.config = {
    allScriptsTimeout: 500000,
    // getPageTimeout: 15000,
    specs: ['specDAC.js'],
    rootElement: 'html',
    capabilities: {
        'browserName': 'chrome',
        chromeOptions: {
            args: ["--headless", "--disable-gpu", "--window-size=1200,900"],
            binary: process.env.CHROME_BIN
        }
    },
    directConnect: true,
    baseUrl: 'http://localhost:4200/',
    framework: 'jasmine',
    jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 1000000
    }
};
Usually when I see 'element not found', it typically signals that the page/AUT is not even loaded. It's hard to say without seeing the actual code, but I assume your test starts by navigating to some page. Try adding some logging, or wrap this part in a condition (e.g. if the 'login' button is present => click; else => console.log("something wrong")).
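For example, a minimal sketch of that kind of guard (the login-button locator here is hypothetical, just to illustrate the idea):

// Hypothetical locator for the login button; replace with the real one.
var loginButton = element(by.buttonText('Login'));

loginButton.isPresent().then(function (present) {
    if (present) {
        loginButton.click();
    } else {
        console.log('something wrong: login button not found, page may not have loaded');
    }
});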
The problem is not in the code. The tests work on my machine. The problem is something in the pipeline or in conf.js. The pipeline cannot find any elements. The page is loaded; I put in an average waiting time.
OK, maybe you are right. This is my code, you can check it:
it('first test', async function() {
    await sleep(2000);
    await browser.driver.manage().window().maximize();
    await browser.waitForAngularEnabled(false);
    await sleep(8000);
    // login user
    await loginPage.get(testConf.loginUrl);
    await sleep(4000);
    await loginPage.setLoginCredentials(testConf.mmmClientUser, testConf.password);
});
The error is that it cannot find the element where I enter my email, but locally it works.

Protractor generates an error when disabling flow control to test an Angular app

I've been struggling with this error for a while and I'm running out of mana. I'm currently trying to test an Angular app with Protractor and async/await. According to the docs, I have to disable the control flow by adding the following to my config file:
SELENIUM_PROMISE_MANAGER: false, but doing so produces the following error:
UnhandledPromiseRejectionWarning: Error: Error while waiting for Protractor to sync with the page: "both angularJS testability and angular testability are undefined. This could be either because this is a non-angular page or because your test involves client-side navigation, which can interfere with Protractor's bootstrapping. See https://github.com/angular/protractor/issues/2643 for details" I visited the url (https://github.com/angular/protractor/issues/2643) but it didn't turn out very helpful.
At this point I'm not sure if I'm doing something wrong or if it's a bug with protractor itself. For this reason I also opened an issue on GitHub.
Here is my test:
import {
    browser,
    ExpectedConditions,
    $
} from 'protractor';

describe('When user click \"Test\" button', async () => {
    beforeAll(async () => {
        expect(browser.getCurrentUrl()).toContain('myawesomewebsite');
    });
    it("should click the button", async () => {
        var button = $(".button");
        button.click();
    });
});
And here is my full configuration:
exports.config = {
    capabilities: {
        'browserName': 'chrome'
    },
    seleniumAddress: 'http://localhost:4444/wd/hub',
    framework: 'jasmine',
    specs: ['test.spec.ts'],
    SELENIUM_PROMISE_MANAGER: false,
    jasmineNodeOpts: {
        defaultTimeoutInterval: 30000
    },
    beforeLaunch: function () {
        require('ts-node/register')
    }
};
You missed await before each Protractor API invocation.
describe('When user click \"Test\" button', async () => {
    beforeAll(async () => {
        expect(await browser.getCurrentUrl()).toContain('myawesomewebsite');
    });
    it("should click the button", async () => {
        var button = $(".button");
        await button.click();
    });
});
So, thanks to #CrispusDH on GitHub, I figured out that I could use waitForAngularEnabled in the configuration file and not just in the spec file. Using it in the spec file was not working, but if it is used in the onPrepare hook of the configuration file, the error goes away.
A lot of resources online were saying to set it to false, but this wasn't working for me because Protractor couldn't find elements without waiting for Angular, so I set it to false in the configuration file but called browser.waitForAngularEnabled(true); in my spec file (beforeAll hook). Now the error is gone, allowing me to use async/await.
Here is the proper configuration to use:
SELENIUM_PROMISE_MANAGER: false,
onPrepare: async () => {
    await browser.waitForAngularEnabled(false);
}
And here is the code to call in spec file:
beforeAll(async () => {
    await browser.waitForAngularEnabled(true);
});

Unable to access Browser-sync External IP on Ubuntu 16.04

I'm trying to follow along with a WordPress guide on Lynda.com that instructs me to use npm and Browser-Sync. Everything was working properly when I was working on a Windows machine, but I have recently set up a Linux server (Ubuntu 16.04) and cannot seem to access the external URL Browser-Sync gives me. I am on the same network, and it does not work on any of my devices. I have the site set up at http://custom.local. Below is the Gulpfile.js I am using to initiate Browser-Sync.
var themename = 'custom';

var gulp = require('gulp'),
    // Prepare and optimize code etc
    autoprefixer = require('autoprefixer'),
    browserSync = require('browser-sync').create(),
    image = require('gulp-image'),
    jshint = require('gulp-jshint'),
    postcss = require('gulp-postcss'),
    sass = require('gulp-sass'),
    sourcemaps = require('gulp-sourcemaps'),
    // Only work with new or updated files
    newer = require('gulp-newer'),
    // Name of working theme folder
    root = '../' + themename + '/',
    scss = root + 'sass/',
    js = root + 'js/',
    img = root + 'images/',
    languages = root + 'languages/';

// CSS via Sass and Autoprefixer
gulp.task('css', function() {
    return gulp.src(scss + '{style.scss,rtl.scss}')
        .pipe(sourcemaps.init())
        .pipe(sass({
            outputStyle: 'expanded',
            indentType: 'tab',
            indentWidth: '1'
        }).on('error', sass.logError))
        .pipe(postcss([
            autoprefixer('last 2 versions', '> 1%')
        ]))
        .pipe(sourcemaps.write(scss + 'maps'))
        .pipe(gulp.dest(root));
});

// Optimize images through gulp-image
gulp.task('images', function() {
    return gulp.src(img + 'RAW/**/*.{jpg,JPG,png}')
        .pipe(newer(img))
        .pipe(image())
        .pipe(gulp.dest(img));
});

// JavaScript
gulp.task('javascript', function() {
    return gulp.src([js + '*.js'])
        .pipe(jshint())
        .pipe(jshint.reporter('default'))
        .pipe(gulp.dest(js));
});

// Watch everything
gulp.task('watch', function() {
    browserSync.init({
        open: false,
        proxy: 'custom.local',
        port: 8080
    });
    gulp.watch([root + '**/*.css', root + '**/*.scss'], ['css']);
    gulp.watch(js + '**/*.js', ['javascript']);
    gulp.watch(img + 'RAW/**/*.{jpg,JPG,png}', ['images']);
    gulp.watch(root + '**/*').on('change', browserSync.reload);
});

// Default task (runs at initiation: gulp --verbose)
gulp.task('default', ['watch']);
I've tried using different ports, and setting tunnel: true, and anything else I can find, but I'm getting a whole lot of nothing. Any assistance would be amazing.
Thank you,
If you come across this and are having a similar issue, I just fixed it. I am using a headless environment so I didn't think to set up an entry in the hosts file.
/etc/hosts
127.0.0.1 custom.local

Start Chrome & Firefox with a proxy using webdriverio in node.js

How can I open a browser session with a proxy set through the options (desired capabilities) using WebdriverIO in Node.js? I use this code for setting the proxy, but it stopped working. The browser opens without the proxy and will not perform any further actions.
options = {
    desiredCapabilities: {
        browserName: 'firefox',
        proxy: {
            proxyType: 'manual',
            httpProxy: '127.0.0.11:80'
        }
    }
};
client = webdriverio.remote(options).init();
I believe this code should work:
var webdriverio = require('webdriverio');
var options = {
    desiredCapabilities: {
        browserName: 'firefox',
        proxy: {
            proxyType: 'MANUAL',
            httpProxy: '127.0.0.11:80'
        }
    }
};
client = webdriverio.remote(options).init();
Reference: https://github.com/webdriverio/webdriverio/issues/324
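The proxy settings are part of the standard WebDriver capabilities rather than anything Firefox-specific, so the same shape should also work for Chrome; a sketch, reusing the proxy address from the question:

var webdriverio = require('webdriverio');

var chromeOptions = {
    desiredCapabilities: {
        browserName: 'chrome',
        proxy: {
            proxyType: 'MANUAL',
            httpProxy: '127.0.0.11:80' // same proxy address as above
        }
    }
};

var chromeClient = webdriverio.remote(chromeOptions).init();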

Error in OpenShift: "phantomjs-node: You don't have 'phantomjs' installed"

I successfully created a script using phantomjs-node locally and I would like to host it on OpenShift.
The thing is, when I run the hosted script, I get this strange error:
phantom stderr: execvp(): No such file or directory phantomjs-node:
You don't have 'phantomjs' installed
But as you can see, I put the dependencies in the package.json file:
"dependencies": {
"express": "~3.4.4",
"phantom": "*",
"phantomjs": "*"
},
Any suggestions?
Edit:
This is how I initialize the phantomjs script:
var options = {
    port: 16000,
    hostname: "127.2.149.1",
    path: "/phantom_path/"
}

phantom.create(function(ph) {
    visitUrl(ph, 0, 0);
}, options);
The error message You don't have 'phantomjs' installed is an internal error from the phantomjs-node module. I ran into this error myself, and I managed to fix it like this:
var phantom = require('phantom');

var options = {
    path: '/usr/local/bin/'
};

phantom.create(function (ph) {
    ph.createPage(function (page) {
        page.open("http://www.google.com", function (status) {
            console.log("opened google? ", status);
            page.evaluate(function () { return document.title; }, function (result) {
                console.log('Page title is ' + result);
                ph.exit();
            });
        });
    });
}, options);
Notice the options being passed to the phantom.create() method. The path option should be the full path to the directory that contains your phantomjs binary.
phantomjs-node is looking for PhantomJS on the PATH of your OpenShift environment and cannot find it. Look for a way to add PhantomJS to that PATH.
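One way to avoid depending on the environment's PATH at all is to point phantom at the binary bundled by the phantomjs npm package already listed in package.json; a sketch, assuming that package exposes the binary location via its .path property:

var path = require('path');
var phantom = require('phantom');
var phantomjs = require('phantomjs'); // npm package that bundles the phantomjs binary

var options = {
    // phantom.create() expects the directory containing the phantomjs binary
    path: path.dirname(phantomjs.path) + '/'
};

phantom.create(function (ph) {
    // ... use ph exactly as before ...
    ph.exit();
}, options);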
