Inconsistent results from karma e2e test runner. How can I debug?

I have a simple angular / requirejs / node project that loads correctly when viewed from a browser. I'm trying to get e2e tests with karma set up.
I've copied all of the e2e configurations and directory structures from the angular-require-js seed into my own project. Unfortunately, the tests in my own project give bizarre (and ever-changing!) results. Here's the stripped-down test I'm trying to run:
describe('My Application', function() {
  beforeEach(function() {
    browser().navigateTo('/');
    sleep(0.5);
  });

  it('shows an "Ask a Question" button on the index page', function() {
    expect(element('a').text()).toBe('Ask a Question');
  });
});
Sometimes the test fails
Executed 1 of 1 (1 FAILED) (0.785 secs / 0.614 secs)
Firefox 22.0 (Mac) My Application shows an "Ask a Question" button on the index page FAILED
element 'a' text
http://localhost:9876/base/test/lib/angular/angular-scenario.js?1375035800000:25397: Selector a did not match any elements.
(but there ARE a elements on the page!)
Sometimes the test hangs
Executed 0 of 0! In these cases the test-runner browser does show that it's trying to run a test, but it never completes; it just stays that way forever. My app IS displayed in the browser during this hang.
Without element('a') it always passes
The only way to get consistent results is to avoid element(). If I expect(true).toBe(true), then 1 of 1 tests always passes.
How can I debug this?
I'm at a loss for how to move forward. The test browser is correctly displaying my app, with the relevant 'a' element and everything. The test runner itself seems to only sometimes recognize that it should be running something and NEVER finds the a element. Is there a way to step through the test running process? Is this a common problem that happens when [x] is misconfigured?
Thanks for any suggestions!
karma-e2e.conf.js
basePath = '../';
files = [
  'test/lib/angular/angular-scenario.js',
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'
];
autoWatch = false;
browsers = ['Firefox'];
singleRun = true;
proxies = {
  '/': 'http://localhost:3000/'
};
urlRoot = "__karma__";
junitReporter = {
  outputFile: 'test_out/e2e.xml',
  suite: 'e2e'
};

How many anchor tags do you have on the page?
You may not be referencing the anchor you'd expect. Add an id attribute to the anchor and test again. If it is the only anchor tag on the page, try matching the text rather than asserting exact equality, i.e.:
expect(element('#anchor-tag-id').text()).toMatch(/Ask a Question/);
If you use Chrome, open the developer tools on your a element to check the actual values; this may help a lot.
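For completeness, here is a minimal sketch of that approach in the scenario DSL; the ask-question id is hypothetical and would need to be added to the anchor in your template:

it('shows an "Ask a Question" button on the index page', function() {
  // count() shows how many elements the selector actually matches in the app frame
  expect(element('#ask-question').count()).toEqual(1);
  // toMatch() tolerates surrounding whitespace that toBe() would reject
  expect(element('#ask-question').text()).toMatch(/Ask a Question/);
});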

Related

BeforeEach step is repeated with cy.session using cypress-cucumber-preprocessor

I have a Cypress project where I use the Cypress session API to maintain a session throughout features.
Now I am trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I am running into an issue: the beforeEach() step where my authentication takes place gets repeated several times before the tests start. Eventually, Cypress "snaps out of it" and starts running the actual tests - but obviously this is very resource- and time-intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"#badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  // cy.waitUntil comes from the cypress-wait-until plugin
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command has only application specific steps, it does include a cy.visit('/') - after which my application is redirected to a login service (different domain) and then continues.
The problem
cy.session works OK; it creates a session on the first try, then each subsequent time it logs a successful restore of a valid session. But this happens a number of times; it seems to get stuck in a loop.
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps it's clearing some session data (localStorage?) that causes my authentication redirect to happen again - or it somehow makes Cypress think the test is starting fresh. But of course beforeEach() should only happen once per feature.
I am looking at a diff of my code changes, and the only difference except the preprocessor change is:
my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
typing changes, this preprocessor is stricter about typings
plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + Cucumber with preprocessor that make this potentially confusing
Cypress <10 "Run all specs" behaviour
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, "run all specs" runs every loaded hook before each test. His proposed solution is to not use the "run all specs" button, which I find excessive, because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "run all specs" is no longer supported (for reasons related to this unclarity).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, but instead configuring specific paths, like so (source):
"stepDefinitions": [
"cypress/integration/[filepath]/**/*.{js,ts}",
"cypress/integration/[filepath].{js,ts}",
"cypress/support/step_definitions/**/*.{js,ts}",
]
}
What it doesn't explicitly state, though, is that the file which includes your hooks (in my case index.ts) should be excluded from these paths if you don't want the hooks to run for each test! I could see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config.
TLDR: If I exclude my index.ts file, which includes my hooks, from my stepDefinitions config, I can use "run all specs" as intended - with beforeEach() running only once per test.
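A hedged sketch of the resulting config, assuming the hooks live in cypress/support/index.ts (the standard support file, which Cypress loads on its own): note that no stepDefinitions glob matches that file:

{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.{js,ts}",
    "cypress/integration/[filepath].{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}

The support/step_definitions glob still picks up step definition files, but support/index.ts itself sits outside every listed path, so the hooks it registers are only loaded once via the support file rather than once per matched spec.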

How can I attach screenshots to the cucumber report in Cypress when a step fails?

Recently I started working with the cucumber html reporter using Cypress, but I didn't manage to attach a screenshot of the failed step to the report. Does anybody have any idea how I could do that?
Now my report looks like the first image below, and I would like to achieve the format of the second (screenshots omitted).
It looks like you are using cypress-cucumber-preprocessor.
I was looking at using hooks to do this, but as far as I can tell you don't have any access to the scenario objects (like in cucumber.js) to attach screenshots to.
However, I did find this script https://github.com/jcundill/cypress-cucumber-preprocessor/blob/master/fixJson.js which goes through the cucumber.json files that cypress-cucumber-preprocessor generates and attaches/embeds the screenshots (and videos) that Cypress takes.
Then when you generate the report, you will see screenshots and videos for failed tests.
Note: I had to fiddle with it a bit to get it working for me.
Firstly, the regex for determining the scenario names from the screenshot didn't work for me, so I replaced it with a function:
function getScenarioNameFromScreenshot(screenshot) {
  const index1 = screenshot.indexOf('--');
  let index2;
  if (screenshot.indexOf('(example') === -1) {
    // Normal end index of scenario in screenshot filename
    index2 = screenshot.indexOf('(failed)');
  } else {
    // End index of scenario if it's from a failed BDD example
    index2 = screenshot.indexOf('(example');
  }
  return screenshot
    .substring(index1 + 2, index2)
    .trim();
}
Secondly, I had to create the .embeddings array in the cucumber.json (otherwise it fails when you try to push to an array that doesn't exist, which it didn't for me):
myStep.embeddings = [];
before
myStep.embeddings.push({ data: base64Image, mime_type: "image/png" });
Although, looking at it, it would probably be better to check for its existence first and create it only if needed - but hey, it works for me.
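In other words, a minimal variant of that check, reusing the same names as above:

// create the embeddings array only when the step doesn't already have one
myStep.embeddings = myStep.embeddings || [];
myStep.embeddings.push({ data: base64Image, mime_type: "image/png" });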
But otherwise it worked like a charm

Random failure of selenium test on test server

I'm working on a project which uses nodejs and nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything I could to make them stable and am still getting errors. I went through some blogs like https://bocoup.com/blog/a-day-at-the-races and did some code refactoring. Does anyone have suggestions to solve this issue? At this moment I have two options: either I rewrite the code in Java (removing nodejs and nightwatch from the solution, as I'm far more comfortable in Java than JavaScript and struggle most of the time with the non-blocking nature of JavaScript), or I start taking snapshots, reviewing app logs, and running one test at a time.
Test environment:
Server - Linux
Display - Framebuffer
Total VMs - 9, with Selenium nodes running the tests in parallel
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page is loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel, but on separate VMs, so I don't know whether that could be an issue or not.
Edit 1:
I was working on this to find the root cause. I did the following things to eliminate random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshot and DOM source. Everything seems fine.
c. Replaced the browser.pause with explicit waits
Also, while debugging I observed one problem that may be what's causing the random failures. Here's the code snippet:
for (var i = 0; i < apiResponse.data.length; i++) {
  var name = apiResponse.data[i];
  browser.useXpath().waitForElementVisible(pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
  browser.useCss().assert.containsText(
    pageObject.getDynamicElement("#topicText", i + 1),
    name.trim(),
    util.format(issueCats.WRONG_DATA)
  );
}
I added the XPath check to validate whether I'm waiting long enough for that text to appear. I observed that the visibility assertion passes, but in the next assertion #topicText comes back as the previous value or null. This is an intermittent issue, but it happens frequently on the test server.
There is no magic bullet for brittle UI end-to-end tests. In an ideal world there would be an option avoid_random_failures=true that would quickly and easily solve the problem, but for now it's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel more at home in Java, then I would definitely go in that direction.
As you already know from the article Avoiding random failures in Selenium UI tests, there are 3 commonly used avoidance techniques for race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned in WebDriver: Advanced Usage; you can also read about them here: Tips to Avoid Brittle UI Tests.
Methods 1 and 2 are generally not recommended; they have drawbacks. They can work well on simple HTML pages, but they are not 100% reliable on AJAX pages, and they slow down the tests. The best one is #3 - explicit waits.
In order to use technique #3 (explicit waits), you need to familiarize yourself and become comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases
ExpectedConditions has many predefined wait states; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled such that you can click it.
How to use it - an example: say you open a page with a form which contains several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and becomes editable (clickable):
By field1 = By.xpath("//div//input[.......]");
By field2 = By.id("some_id");
By field3 = By.name("some_name");
By buttonOk = By.xpath("//input[ @value = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait( driver, 60 ); // wait max 60 seconds

// wait max 60 seconds until element is visible and enabled such that you can click it
// if you can click it, that means it is editable
wait.until( ExpectedConditions.elementToBeClickable( field1 ) ).sendKeys( "some data" );
driver.findElement( field2 ).sendKeys( "other data" );
driver.findElement( field3 ).sendKeys( "name" );
....
wait.until( ExpectedConditions.elementToBeClickable( buttonOk ) ).click();
The above code waits until field1 becomes editable after the page is loaded and rendered - but no longer than necessary. If the element is not visible and editable after 60 seconds, the test will fail with a TimeoutException.
Usually it's only necessary to wait for the first field on the page; if it becomes active, then the others will be too.
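Translating that idea back to the Nightwatch snippet from the question (a sketch only - the selectors and the 5-second timeout come from the question, and expect is Nightwatch's BDD-style assertion interface): instead of asserting containsText immediately after the visibility wait, wait on the text itself:

for (var i = 0; i < apiResponse.data.length; i++) {
  var name = apiResponse.data[i].trim();
  // retries until the element's text contains the expected value or 5s elapse,
  // instead of asserting once against a possibly stale value
  browser.expect.element(pageObject.getDynamicElement("#topicText", i + 1))
    .text.to.contain(name).before(5000);
}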

Problems running parallel watir tests

Running two tests at once, how do I keep the second test from closing the first test's browser?
Pretty much as my question states: I'm running two tests (e.g. test1.rb, test2.rb) at once using basic Watir.
I'm not running rake, watir-grid, selenium-grid, parallel_test, or rspec. Whichever test finishes first invokes browser.close, causing the remaining test to fail. The message returned by the failed test is browser window was closed. /var/lib/gems/2.3.0/gems/watir-6.1.0/lib/watir/browser.rb:312:in 'assert_exists'.
What am I doing wrong? I've tried giving different variable names to the browser assignment, such as browser1, browser2, etc. I've even tried installing rake under Jenkins to use two different workspaces. Below are examples of my tests (actual code removed to protect company identity).
test1.rb
#!/usr/bin/ruby
require 'watir'
require 'headless'

def runTests
  # tests go here
end

begin
  puts "Running headless."
  headless = Headless.new
  headless.start
  puts "Running browser."
  browser = Watir::Browser.new(:chrome)
  browser.window.resize_to(1200, 1000)
  browser.driver.manage.timeouts.implicit_wait = 5
  runTests()
rescue => e
  puts("#{e}. " + e.backtrace.join("\n"))
ensure
  browser.close
  headless.destroy
end
test2.rb
#!/usr/bin/ruby
require 'watir'
require 'headless'
require 'CoreClass'

def runSecondFileTests()
  # second set of tests go here
  # might use #coreClass if needed
end

begin
  puts "Running headless."
  headless = Headless.new
  headless.start
  puts "Running browser."
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.read_timeout = 600
  browser = Watir::Browser.new(:chrome, :http_client => client)
  browser.window.resize_to(1200, 1000)
  #coreClass = CoreClass.new(browser)
  runSecondFileTests()
rescue => e
  puts("#{e} " + e.backtrace.join("\n"))
ensure
  browser.close
  headless.destroy
end
Posts I've already read:
Is it possible to run Watir test in parallel?
Watir webdriver; window.close is closing entire browser?
Suppress auto-closing window in Watir
https://markoh.co.uk/droplets
https://watirmelon.blog/tag/automated-testing/
http://watirautomation.blogspot.com/
https://github.com/grosser/parallel_tests#setup-for-non-rails
https://github.com/watir/watir-rspec
The problem is actually in the headless setup: the first headless.destroy force-closes all open browsers on the default display. You need to give each test its own display, or set the reuse/destroy options, when constructing Headless - e.g. Headless.new(display: 100) in one of the scripts (the display number is arbitrary, as long as the two tests differ).

Sammy.js with Knockout.js Not Running Route With Every URL Change

I have a single Sammy route that recognizes an arbitrary number of parameters. The route looks like this:
get(/^\/(?:\?[^#]*)?#page\/?((?:[^\:\/]+\:[^\:\/]+\/?)*)$/g, function() {
  var params = {};
  var splat = this.params.splat[0];
  var re = /([^\:\/]+)\:([^\:\/]+)/g;
  var match = true;
  while (match = re.exec(splat)) {
    params[match[1]] = match[2];
  }
  self.loadData(params);
});
This code works. It recognizes routes of the pattern #page/param1:value1/param2:value2/ for an arbitrary number of parameters. My loadData function has default values for many of these parameters. I'm confident there isn't a problem with the actual loading of the pages, since it works 100% of the time on many computers in many browsers. However, it behaves strangely on my Android's browser and on my friend's Mac's Safari and Chrome (it works on my PC's Chrome). I've noticed that these are all WebKit browsers.
The behavior is that the route runs correctly for the first URL change, then won't for the next one (although the URL in the browser bar does indeed always change), then it'll work again for the third, and won't for the fourth. That is, it works every other time. This seems like very strange behavior to me, and I'm at a loss as to how to debug it. For certain links, I was able to hack around it: on click I set the window location to the URL and forcefully run the Sammy code with runRoute('get', url);. It's impractical to have to add this for every click event on the page, and it doesn't really account for all URL changes anyway. Is there something I can do to debug why my route isn't being run every time the URL changes?
For those of you who encounter similar behavior: on every other click in the above-mentioned browsers, this.params.splat was undefined. It's supposed to be set to the matched part of the URL (e.g. /#page/param1:value1/).
The hack I came up with to deal with this is to add this to the top of the get route:
if (this.params.splat === undefined) {
  app.unload().run();
  return;
}
This doesn't get to the root of the problem; it's just a hack that re-runs the routes so that params.splat isn't undefined the next time through. If anyone has more information on what is going on, I'd be interested.
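One way to get more visibility into the every-other-time failures (a hedged sketch - run-route is a documented Sammy application event, and the logging here is hypothetical) is to log Sammy's routing lifecycle and compare it against the browser's raw hashchange events:

var app = $.sammy(function() {
  // fires each time Sammy actually runs a matched route
  this.bind('run-route', function(e, data) {
    console.log('Sammy ran route:', data.path);
  });
});

// fires on every hash change, whether or not Sammy reacted to it
window.addEventListener('hashchange', function() {
  console.log('hashchange:', window.location.hash);
});

If hashchange fires on every click but run-route only fires on every other one, the problem lies in Sammy's location handling rather than in the route regex.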
