Sauce Labs - maximize screen resolution - browser

I am running automated test cases using PHPUnit. I have to maximize the window first so that some of the elements are visible. In my code, the setUp() function contains:
function setUp() {
    $session = $this->prepareSession();
    $session->currentWindow()->maximize();
}
But when I run it on Sauce Labs, the command doesn't seem to work (the browser does not maximize) and the test cases fail with an "Element not found" error. How do I open the browser at maximum resolution using PHPUnit on Sauce Labs?

I believe this can be specified in the desired capabilities, e.g. a screenResolution capability of "1280x1024" (Sauce Labs uses screenResolution rather than screen-resolution) - a sketch follows below. You might also want to check out https://docs.saucelabs.com/reference/platforms-configurator/#/
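For PHPUnit's Selenium2TestCase, a minimal sketch of what that could look like - the platform, resolution, and connection details here are illustrative placeholders, not verified settings:

<?php
class MyTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        // Hypothetical Sauce Labs connection - your username/access key
        // normally go into the host string or a Sauce-specific config.
        $this->setHost('ondemand.saucelabs.com');
        $this->setPort(80);
        $this->setBrowser('chrome');
        $this->setBrowserUrl('http://example.com/');
        // Ask Sauce Labs for a VM screen large enough that
        // maximize() actually has room to take effect.
        $this->setDesiredCapabilities(array(
            'platform'         => 'Windows 10',
            'screenResolution' => '1280x1024',
        ));
    }
}

With the VM's screen set this way, the maximize() call from the question has a large desktop to expand into.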

Related

Is there a good way to print the time after each run with `mocha -w`?

I like letting mocha -w run in a terminal while I work on tests so I get immediate feedback, but when the status doesn't change I can't always tell at a glance whether it re-ran - did it run, or did it get stuck (it's happened)?
I'd like to have a way to append a timestamp to the end of each test run, but ideally only when run in 'watch' mode - if I'm running it manually, of course I know if it ran or not.
For now, I'm appending an asynchronous console log to the last test that runs:
it('description', function () {
    // real test: parts.should.test.things();
    // Trick: schedule the time to be printed to the log, so I can see when it last ran
    setTimeout(() => console.log(new Date().toDateString() + " # " + new Date().toTimeString()), 5);
});
Obviously this is ugly and bad for several reasons:
It's manually added to the last test - I have to know which one that is
It is added every time that test is run, but never others - so if I run a different file or test -> no log; if I run only that test manually -> log
It's just kind of an affront to the purpose of the tests - subverting it to serve my will
I have seen some references to mocha adding a global.it object with the command line args, which could be searched for the '-w' flag, but that is even uglier, and still doesn't solve most of the problems.
Is there some other mocha add-in module which provides this? Or perhaps I've overlooked something in the options? Or perhaps I really shouldn't need this and I'm doing it all wrong to begin with?
Mocha supports root-level hooks. If you place an after hook (for example) outside any describe block, it will run at the end of all tests, as in the sketch below. It won't run only in watch mode, of course, but should otherwise be fit for purpose.
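A minimal sketch - the message format is just an illustration:

// In any file mocha loads, OUTSIDE every describe block:
// a root-level after hook runs once, after the whole suite finishes.
after(function () {
    console.log('Run finished @ ' + new Date().toISOString());
});

Because it lives outside the tests themselves, it fires no matter which file or test you run, which addresses the first two complaints above.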

Random failure of selenium test on test server

I'm working on a project which uses Node.js and Nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything to make them stable and am still getting errors. I went through some blogs like https://bocoup.com/blog/a-day-at-the-races and did some code refactoring. Does anyone have suggestions for solving this issue? At this moment I have two options: either I rewrite the code in Java (removing Node.js and Nightwatch from the solution, as I'm far more comfortable in Java than JavaScript and mostly struggle with the non-blocking nature of JavaScript), or I take snapshots, review app logs, and run one test at a time.
Test environment:
Server - Linux
Display - framebuffer
Total VMs - 9, with Selenium nodes running the tests in parallel
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page is loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel, but on separate VMs, so I don't know whether that can be an issue or not.
Edit 1:
I was working on this to find the root cause. I did the following things to eliminate the random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshot and DOM source. Everything seems fine.
c. Replaced browser.pause with explicit waits
Also, while debugging I observed one problem that may be the cause of the random failures. Here's the code snippet:
for (var i = 0; i < apiResponse.data.length; i++) {
    var name = apiResponse.data[i];
    browser.useXpath().waitForElementVisible(pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
    browser.useCss().assert.containsText(
        pageObject.getDynamicElement("#topicText", i + 1),
        name.trim(),
        util.format(issueCats.WRONG_DATA)
    );
}
I added the XPath check to validate whether I'm waiting long enough for that text to appear. I observed that the visibility assertion passes, but in the next assertion #topicText comes back as the previous value or null. This is an intermittent issue, but on the test server it happens frequently.
There is no magic bullet for brittle end-to-end UI tests. In an ideal world there would be an option, avoid_random_failures=true, that would quickly and easily solve the problem, but for now it's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel more at home in Java, then I would definitely go in that direction.
As you already know from the article Avoiding random failures in Selenium UI tests, there are 3 commonly used avoidance techniques for race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned in WebDriver: Advanced Usage; you can also read about them in Tips to Avoid Brittle UI Tests.
Methods 1 and 2 are generally not recommended. They have drawbacks: they can work well on simple HTML pages, but they are not 100% reliable on AJAX pages, and they slow down the tests. The best one is #3, explicit waits.
In order to use technique #3 (explicit waits) you need to familiarize yourself and be comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases (a short sketch follows this list)
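FluentWait does not appear in the main example further below, so here is a minimal sketch, assuming a recent Selenium Java binding (the locator and timings are invented for illustration):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;

// Poll every 500 ms for up to 30 s, ignoring NoSuchElementException
// between polls instead of failing on the first miss.
Wait<WebDriver> fluentWait = new FluentWait<>(driver)
        .withTimeout(Duration.ofSeconds(30))
        .pollingEvery(Duration.ofMillis(500))
        .ignoring(NoSuchElementException.class);

WebElement slowElement = fluentWait.until(
        d -> d.findElement(By.id("some-slow-widget")) // hypothetical id
);

This is the tool to reach for when an element flickers in and out of the DOM and the standard WebDriverWait conditions are not flexible enough.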
ExpectedConditions has many predefined wait states; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled such that you can click it.
How to use it - an example: say you open a page with a form which contains several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and becomes editable (clickable):
By field1   = By.xpath("//div//input[.......]");
By field2   = By.id("some_id");
By field3   = By.name("some_name");
By buttonOk = By.xpath("//input[ text() = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait( driver, 60 ); // wait max 60 seconds
// wait max 60 seconds until the element is visible and enabled such that you can click it
// if you can click it, that means it is editable
wait.until( ExpectedConditions.elementToBeClickable( field1 ) ).sendKeys( "some data" );
driver.findElement( field2 ).sendKeys( "other data" );
driver.findElement( field3 ).sendKeys( "name" );
....
wait.until( ExpectedConditions.elementToBeClickable( buttonOk ) ).click();
The above code waits until field1 becomes editable after the page is loaded and rendered - but no longer than necessary. If the element is not visible and editable within 60 seconds, the test fails with a TimeoutException.
Usually it's only necessary to wait for the first field on the page; if it becomes active, then the others will be too.

Cannot get QWindow::fromWinId to work properly

My Qt 5.9 program (on X11 Linux) launches other applications, using QProcess.
I would like to have control over the windows these applications spawn, so I obtain their winId value and use QWindow::fromWinId to get a QWindow instance.
The problem is these instances are invalid and do not represent the window they are supposed to.
If I check the winId values using xwininfo, the correct information is returned, so I know they are good.
What am I doing wrong?
Edit: An example won't help much, but here goes:
QProcess *process = new QProcess(this);
...
process->open();
... // wait until the window appears
WId winId = PidToWid(process->processId()); // this function returns the Window ID in decimal format; I test this with xwininfo and it's always correct
...
QWindow *appWindow = QWindow::fromWinId(winId);
... And that's basically it. appWindow is a valid QWindow instance, but it does not relate to the actual window in any way. For example, if I close() it, it returns true but the window does not close.
Even if I provide a wrong WId on purpose, the end result is the same.
This is not a proper solution with an explanation of why it should work; however, it may be helpful for somebody...
I had the same issue with my application when I switched from Qt4's QX11EmbedContainer to the Qt5 implementation using QWindow. What I did to resolve this issue was the following:
Client application:
widget->show(); // the widget had to be shown
widget->createWinId();
sendWinId(widget->winId()); // post the window handle to the master app, where the container is constructed
Master application:
QWindow *window = QWindow::fromWinId(clientWinId);
window->show(); // this show/hide toggle did the trick, in combination with show() in the client app
window->hide();
QWidget *container = QWidget::createWindowContainer(window, parentWindowWidget);
After this I was able to control the window properly through the QWidget container.

Problems running parallel watir tests

Running two tests at once, how do I keep the second test from closing the first test's browser?
Pretty much as my question states: I'm running two tests (e.g. test1.rb, test2.rb) at once using basic Watir.
I'm not running rake, watir-grid, selenium-grid, parallel_tests, or RSpec. Whichever test finishes first invokes browser.close, causing the remaining test to fail. The message returned by the failed test is browser window was closed. /var/lib/gems/2.3.0/gems/watir-6.1.0/lib/watir/browser.rb:312:in 'assert_exists'.
What am I doing wrong? I've tried giving different variable names to the browser assignment such as browser1, browser2, etc. I've even tried installing rake under Jenkins to use two different workspaces. Below are examples of my tests (actual code removed to protect company identity).
test1.rb
#!/usr/bin/ruby
require 'watir'
require 'headless'

def runTests
  # tests go here
end

begin
  puts "Running headless."
  headless = Headless.new
  headless.start
  puts "Running browser."
  browser = Watir::Browser.new(:chrome)
  browser.window.resize_to(1200, 1000)
  browser.driver.manage.timeouts.implicit_wait = 5
  runTests()
rescue => e
  puts("#{e}. " + e.backtrace.join("\n"))
ensure
  browser.close
  headless.destroy
end
test2.rb
#!/usr/bin/ruby
require 'watir'
require 'headless'
require 'CoreClass'

def runSecondFileTests()
  # second set of tests go here
  # might use #coreClass if needed
end

begin
  puts "Running headless."
  headless = Headless.new
  headless.start
  puts "Running browser."
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.read_timeout = 600
  browser = Watir::Browser.new(:chrome, :http_client => client)
  browser.window.resize_to(1200, 1000)
  #coreClass = CoreClass.new(browser)
  runSecondFileTests()
rescue => e
  puts("#{e} " + e.backtrace.join("\n"))
ensure
  browser.close
  headless.destroy
end
Posts I've already read:
Is it possible to run Watir test in parallel?
Watir webdriver; window.close is closing entire browser?
Suppress auto-closing window in Watir
https://markoh.co.uk/droplets
https://watirmelon.blog/tag/automated-testing/
http://watirautomation.blogspot.com/
https://github.com/grosser/parallel_tests#setup-for-non-rails
https://github.com/watir/watir-rspec
The problem is actually in the headless setup. The first headless.destroy will force-close all open browsers on the default display. You need to specify the display or reuse parameters, for example:
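A minimal sketch, assuming the headless gem's display: and reuse: options (the display numbers are arbitrary - just give each script its own):

# test1.rb - run on a dedicated X display instead of the default one
require 'watir'
require 'headless'

# reuse: false means this script never attaches to a display another
# script started, so destroying it cannot tear down the other browser.
headless = Headless.new(display: 100, reuse: false)
headless.start

browser = Watir::Browser.new(:chrome)
# ... tests ...
browser.close
headless.destroy # only kills display 100

test2.rb would do the same with, say, display: 101.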

Inconsistent results from karma e2e test runner. How can I debug?

I have a simple angular / requirejs / node project that loads correctly when viewed from a browser. I'm trying to get e2e tests with karma set up.
I've copied all of the e2e configurations and directory structures from the angular-require-js seed into my own project. Unfortunately, the tests in my own project give bizarre (and ever-changing!) results. Here's the stripped-down test I'm trying to run:
describe('My Application', function() {
  beforeEach(function() {
    browser().navigateTo('/');
    sleep(0.5);
  });

  it('shows an "Ask a Question" button on the index page', function() {
    expect(element('a').text()).toBe('Ask a Question');
  });
});
Sometimes the test fails
Executed 1 of 1 (1 FAILED) (0.785 secs / 0.614 secs)
Firefox 22.0 (Mac) My Application shows an "Ask a Question" button on the index page FAILED
element 'a' text
http://localhost:9876/base/test/lib/angular/angular-scenario.js?1375035800000:25397: Selector a did not match any elements.
(but there ARE a elements on the page!)
Sometimes the test hangs
Executed 0 of 0! In these cases the test-runner browser does show that it's trying to run a test, but it never completes:
It just stays like this forever. My app IS displayed in the browser during this hang.
Without element('a') it always passes
The only way to get consistent results is to avoid element(). If I expect(true).toBe(true) then 1 out of 1 tests always pass.
How can I debug this?
I'm at a loss for how to move forward. The test browser is correctly displaying my app, with the relevant 'a' element and everything. The test runner itself seems to only sometimes recognize that it should be running something and NEVER finds the a element. Is there a way to step through the test running process? Is this a common problem that happens when [x] is misconfigured?
Thanks for any suggestions!
karma-e2e.conf.js
basePath = '../';
files = [
  'test/lib/angular/angular-scenario.js',
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'
];
autoWatch = false;
browsers = ['Firefox'];
singleRun = true;
proxies = {
  '/': 'http://localhost:3000/'
};
urlRoot = "__karma__";
junitReporter = {
  outputFile: 'test_out/e2e.xml',
  suite: 'e2e'
};
How many anchor tags do you have on the page?
You may not be referencing the anchor you expect. Add an id attribute to the anchor and test again. If it is the only anchor tag on the page, try matching the text rather than expecting exact equality, i.e.:
expect((element('#anchor-tag-id').text()).toMatch(/Ask a question/);
If you use Chrome, open the developer tools on your a element to check the actual values; this may help a lot.
EDIT:
should be
expect(element('#anchor-tag-id').text()).toMatch(/Ask a question/);
sorry, I added an extra ( in the first example
