Converting existing Cypress tests to Cucumber-style BDD using cypress-cucumber-preprocessor: second scenario is not picked up

We have an existing application whose tests are written in Cypress. We now want to integrate Cucumber-style feature files that run internally through Cypress, and we used cypress-cucumber-preprocessor for this. I followed the steps given here on the GitHub page. The problem I'm facing now is that, while running tests, the runner shows both scenarios but runs only one. It shows a green tick mark next to the first, but never starts the second one, and the clock keeps on ticking. Clicking the second scenario in the Cypress launcher says: no commands were issued in this test.
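For context, the wiring described in the preprocessor's README boils down to roughly the following; treat it as a sketch of the documented steps, not this project's exact config:

    // cypress/plugins/index.js — register cypress-cucumber-preprocessor so that
    // .feature files are compiled and executed through Cypress.
    const cucumber = require('cypress-cucumber-preprocessor').default;

    module.exports = (on, config) => {
      on('file:preprocessor', cucumber());
    };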
What I have tried:
I duplicated the same scenario twice in the same feature file. It still runs only the first one and does not move on to the next.
I moved the two scenarios into two different feature files. It then runs both of them successfully.
I ran the example repo (cypress-cucumber-example) locally with any number of scenarios. That works seamlessly.
Some observations:
While the first test ran I opened the Chrome console and saw some errors from failing network calls. But the same calls were made (with the same errors) back when I was using plain Cypress, before integrating Cucumber, and all tests passed, so is it some magic that Cucumber brings along with it? I read somewhere that Cucumber's default wait for a test is 60 seconds; I waited up to 170 seconds and then stopped the suite. At the end, all I get is one scenario green and the other not even started.

It took me quite a long time, but I finally figured out what the issue was: I had an Enter key (a newline) right after Feature: in my feature file. The IDE didn't flag it as a problem, so everything looked fine. While comparing successful runs against this failing one I noticed that the feature name was not appearing in the UI, so I took the \n away. It works like a charm now. Wondering what a small Enter key can do.
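For illustration, here is a hypothetical feature file in the broken layout next to the fixed one (scenario names and steps are made up):

    # Broken: a stray newline after "Feature:" pushes the name onto its own line,
    # matching the symptom of the feature name not showing up in the runner.
    Feature:
    Shopping cart

      Scenario: Add an item to the cart
        Given I am on the catalogue page
        When I add an item to the cart
        Then the cart shows one item

    # Fixed: the feature name stays on the same line as "Feature:".
    Feature: Shopping cart

      Scenario: Add an item to the cart
        Given I am on the catalogue page
        When I add an item to the cart
        Then the cart shows one item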

Related

How to make a Jest test fail on the first attempt even when using jest.retryTimes?

In my Jest test suite I use jest.retryTimes(4) because of the particular and unstable architecture of the software, and this works as expected.
There are some tests that must pass on the first attempt, so for these particular tests I need to set jest.retryTimes(1) at the beginning of the test and restore jest.retryTimes(4) at the end.
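In code, the attempted pattern looks roughly like this (test names are hypothetical; jest.retryTimes only works with the jest-circus runner):

    jest.retryTimes(4); // suite-wide default for the unstable architecture

    test('critical flow that must pass on the first attempt', async () => {
      jest.retryTimes(1); // attempted override at the beginning of the test
      // ...assertions that should not be retried...
      jest.retryTimes(4); // attempted restore at the end of the test
    });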
There are two problems:
1. This configuration is global, and tests are executed in parallel, so when this test starts it sets the retry count to 1 for all the tests running at that moment. I would like only this particular test to fail on the first attempt.
2. Jest Circus ignores the update of jest.retryTimes at the beginning and at the end of the test; it keeps allowing 4 attempts before raising the failure.
I read the documentation, but I don't think I can obtain this result.
Any suggestions?
Thanks

Fatal error: stale element reference: element is not attached to the page document in Selenium with Node.js

I am facing this issue while running the UI regression tests. I used TypeScript for the Selenium automation and have around 80 test cases. When executing them all at once, some test cases run smoothly, but after a while this error comes up in one of them and stops the execution. I am using only one thread to run these tests:
"noOfThreads": "1",
There is no issue with the individual test cases; they work fine when run individually.
A StaleElementReferenceException occurs when the driver tries to perform an action on an element that was once present in the DOM but no longer exists. For example, if you store an element in a variable, then perform some action that changes the DOM structure so that the original element is lost, any attempt by the driver to access that stored reference will fail.
In your case, when you run the tests individually, one scenario has no effect on the next, which is why the tests pass. When you run all of them together, however, one test can introduce a change in the DOM that breaks another.
Try to debug the failing test by running a few of the tests before it.
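One standard remedy is to re-locate the element after the DOM changes rather than reusing a stored reference. A minimal selenium-webdriver sketch in TypeScript (the URL and selectors are hypothetical):

    import { Builder, By, until } from 'selenium-webdriver';

    async function example() {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://example.com/list');

        // Reference to an element that is currently in the DOM.
        const row = await driver.findElement(By.css('.result-row'));

        // Anything that re-renders the list detaches the old node...
        await driver.findElement(By.css('#refresh')).click();

        // ...so reusing `row` here would throw a stale element reference error.
        // Re-locate the element after the DOM change instead:
        const freshRow = await driver.wait(
          until.elementLocated(By.css('.result-row')),
          5000
        );
        await freshRow.click();
      } finally {
        await driver.quit();
      }
    }

    example().catch(console.error);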

Speed up Meteor test refresh

When testing a Meteor app, I notice that Meteor repeats a lot of work, such as:
Downloading missing packages (several seconds)
It seems it would be more efficient to check for an updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).
Building web.cordova (which I do not use)
I am pretty sure that specifying the target for testing is possible, so the project is only built for web.browser, for example.
If there are errors, everything is built and executed three times (and fails three times).
When a test fails, why does it have to try again with exactly the same code? Is there any use case where this makes sense?
Right now, every time a test module changes, it takes several seconds before the tests are run again because of all these tasks. Is there any way to optimize this and make it more efficient and faster?

Tests run fine one by one but fail a lot when run in parallel

I have the following problem:
I've created a bunch of tests using WebDriver + Java + TestNG + Maven which work just fine when I run them one by one, i.e. when thread-count is 1 in my testng.xml file (see the snippet below), but when I increase the number of threads the tests start failing. The most common issues I am getting are:
1. stale element reference exceptions
2. timeouts
3. keyboard actions do not work in most cases
For now I am running the tests either just through testng.xml or Selenium Grid (2.3.1 server) on my local machine, using only the Firefox 23.0.1 browser.
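The relevant part of the testng.xml looks roughly like this (suite, test and class names are placeholders):

    <suite name="RegressionSuite" parallel="tests" thread-count="4">
      <!-- everything passes with thread-count="1"; raising it triggers the failures -->
      <test name="Regression">
        <classes>
          <class name="com.example.tests.LoginTest"/>
        </classes>
      </test>
    </suite>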
Any answers or ideas will be helpful, thanks!

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three Cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying an individual test by line number
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd, because only that one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails whether it's the first test or buried way down somewhere.
I don't think it can be the test itself, because it passes when run individually, and even when the whole feature file is run on its own. It seems like it must be some effect of running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes the system is in some state; when scenarios run individually the system is in that state, so the scenario passes. But when you run all the scenarios, the ones that ran previously change this state, and so it fails.
You should solve it by making your scenarios completely independent: the work of any scenario should not influence the results of the others. This is highly encouraged in The Cucumber Book and Specification by Example.
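One common way to enforce this in a Rails + Cucumber setup is to reset the database around every scenario, for example with the database_cleaner gem (this assumes the gem is in your Gemfile; the file name is just a convention):

    # features/support/database_cleaner.rb
    require 'database_cleaner'

    DatabaseCleaner.strategy = :truncation

    Before do
      DatabaseCleaner.start
    end

    After do
      DatabaseCleaner.clean
    end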
I had a similar problem, and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the Ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
