Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying an individual scenario by line number
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?

It looks like there is coupling between your scenarios. Your failing scenario assumes the system is in some state. When the scenario is run individually, the system is in that state, so it passes. But when you run all the scenarios together, the scenarios that ran before it change that state, so it fails.
You should solve this by making your scenarios completely independent: what one scenario does shouldn't influence the results of any other. This is highly encouraged in The Cucumber Book and Specification by Example.
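For a Rails app, the usual way to get that independence is to reset shared state in a hook that runs before every scenario. Here is a minimal sketch, assuming the leaking state lives in the database and that the database_cleaner gem is available; the truncation strategy is just an example, pick whatever suits your suite:

# features/support/hooks.rb
require 'database_cleaner'

# Wipe the tables before each scenario so nothing a previous
# feature file wrote can leak into this one.
DatabaseCleaner.strategy = :truncation

Before do
  DatabaseCleaner.clean
end

If the shared state is something other than the database (a class variable, a file on disk, an external service), the same pattern applies: restore the baseline in a Before hook instead of relying on scenario order.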

I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that sent a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), so Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
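If you suspect the same kind of carry-over, one way to surface it early is to check the error slot at the end of every scenario, instead of letting Capybara's reset raise it one feature later. A rough sketch, assuming a Capybara version that exposes server.error on the session as described above:

# features/support/hooks.rb
After do |scenario|
  server = Capybara.current_session.server
  if server && server.error
    # Report the leaked error against the scenario that caused it,
    # then clear it so it can't poison the next feature.
    warn "Server error leaked by '#{scenario.name}': #{server.error}"
    server.reset_error!
  end
end

That points the finger at the offending scenario directly, rather than at whichever feature happens to run next.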

Related

Handling Multithreading in XML files for running testcases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml), which dictates the steps of the test case; it's TS in the above example.
TestStep.xml
The execution always starts from TestCase.xml, based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. run at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD or Maven Surefire, etc. After a lot of R&D we have ended up with multithreading. I would need assistance on how to implement this.
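The usual shape of a solution here is a fixed-size worker pool draining a queue of test-case ids, so only five or six cases execute at once. Below is a language-agnostic sketch of that pattern in Ruby (in a Java stack, a fixed thread pool from java.util.concurrent plays the same role); run_test_case is a hypothetical stand-in for "parse TestCase.xml for this id and execute its TestStep.xml steps":

require 'thread'

queue = Queue.new
(1..100).each { |id| queue << id }   # one entry per test case in the suite

workers = Array.new(6) do            # pool size = how many cases run at once
  Thread.new do
    loop do
      begin
        id = queue.pop(true)         # non-blocking pop; raises when empty
      rescue ThreadError
        break                        # queue drained, this worker is done
      end
      run_test_case(id)              # hypothetical: run one test case end to end
    end
  end
end
workers.each(&:join)

The one thing to watch is that whatever run_test_case touches (report files, shared parsers) must itself be thread-safe, or each worker needs its own copy.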

Tracing Node.js program execution

I'd like to trace the execution path of an arbitrary Node.js program.
Specifically, I'd like to run a program (server or script), and have some sort of block-level (function call, loop, if statement) trace of the execution.
Constraints
Output must contain files / lines / hit count for all lines during execution
No code minification. Istanbul is great but I want to keep the code that is executed in the end as readable as possible.
For long-running processes (servers, for example), I want to be able to see "current" line coverage (or as up-to-date as possible)
I don't want to lose any coverage data, so while Profiling would give me some hints as to lines hit, it's not really code coverage.
Things I don't care about
Exactly how the coverage is read. For example, it could be output to a file, it could be read via the code, etc.
Coverage format
Things I've investigated so far:
Using NODE_V8_COVERAGE
I found that if I set the NODE_V8_COVERAGE environment variable to a directory, coverage data will be output to that directory when the program exits (here's a blog post on the creation of this feature).
The problem that I'm facing here is that I'm not sure there's a way to trigger the generation of these reports before the program terminates.
Using inspector
I have also been experimenting with Node.js inspector. I found a useful CPU profiler here. This could end up being helpful, but this profiler works by sampling, not as a hook into the language. As a result, I only get line numbers / counts for parts of the code that were slow.
I also tried using Profiler.startPreciseCoverage, thinking that somehow this might give me every line that was executed (I didn't find the documentation clear on what it really does). It didn't seem to be any more useful.
Using Istanbul
I would like to avoid instrumenting code if possible.
Question
It seems like my options are limited, but at the same time this is only a result of my Googling for an hour or two.
Is there a better way to capture line coverage with the constraints listed above?
There's a pull request pending for Node.js to add functionality to programmatically start/stop/write V8 coverage information. If you are adventurous, you could use git to get the version of Node.js you want to use, apply the commits from the patch, and compile a Node.js binary.
If you clone the Node.js repository, the various versions of Node.js are tagged. So you can get the code for Node.js 12.19.0 by checking out the v12.19.0 tag.
You can cherry-pick the commits from the pull request normally, or you could use curl -L https://github.com/nodejs/node/pull/33807.patch | git am to apply the commits as patches.
Instructions for compiling/building the Node.js binary can be found at https://github.com/nodejs/node/blob/master/BUILDING.md#building-nodejs-on-supported-platforms.
More long term, you could chime in on the pull request on whether it meets your needs or not and hopefully get it going again. It seems to have stalled.

Converting existing Cypress tests to Cucumber-style BDD using cypress-cucumber-preprocessor. Second scenario is not picked up

We have an existing application whose tests are written in Cypress. We now want to integrate a Cucumber-style feature suite that will internally run using Cypress, so we used cypress-cucumber-preprocessor, following the steps given here on the GitHub page. The problem I'm facing now is that while running tests, it shows both scenarios but runs only one: a green tick appears next to the first, the second never starts, and the clock keeps ticking. Clicking the second scenario in the Cypress launcher shows "no commands were issued in this test".
What I have tried:
I duplicated the same scenario twice in the same feature file. It still runs only the first one and does not move on to the next.
I moved the two scenarios into two different feature files. It runs both of them successfully.
I tried to run the example repo (cypress-cucumber-example) locally with n number of scenarios. That works seamlessly.
Some observations:
While the first test was running I opened the Chrome console and saw some errors from failing network calls. But those calls were made (with the same errors) even when I was using plain Cypress, before integrating Cucumber, and all tests were passing. Is it because of some magic Cucumber brings along with it? I read somewhere that Cucumber's default wait for a test is 60 seconds; I waited up to 170 seconds, then stopped the suite. At the end all I get is one scenario green and the other not even started.
It took me quite a long time, but I actually figured out what the issue was: I had pressed Enter after Feature: in my feature file, so the feature's name sat on the next line. The IDE didn't flag it as a problem and all looked good. I was comparing successful runs against this broken one, noticed the feature name was not appearing in the UI, and took away the \n. It works like a charm now. Wondering what a small Enter key can do.
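To make the fix concrete, this is the difference (the feature name is illustrative). The broken file had the name on the line after the keyword:

Feature:
Converted suite

and removing that newline, so the name sits on the same line as the keyword, is what made the preprocessor pick up both scenarios:

Feature: Converted suite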

Execute SoapUI test on multi-threads

I have a SoapUI test which uses an input file to read lines as input for requests, so there is a loop which reads data, executes the request and writes the output to a file. Response times are too long, so this file should be processed asynchronously, but I am not sure how SoapUI can handle this. There is a file attachment in the SOAP requests, which is not handled by the current version of JMeter.
As per the SoapUI documentation below, both test cases and test suites can be executed in parallel mode.
In the case of TestSuites and TestCases these can be executed either in sequence or parallel, as configured with the corresponding toolbar buttons.
In the image referenced there, the first marked button stands for sequential execution and the second one (with multiple parallel arrows) stands for parallel execution mode.
The user can select either of them before executing the tests.
Hope this helps.
Note that SoapUI does not allow test steps to be executed in parallel. If you need any custom execution, i.e. the same test case and steps executed in parallel, here is a sample project done for that. It can be used as a reference; apply it to your case.
I understood this question as requiring the ability to call a service asynchronously because of the time it takes to process. By this I mean SoapUI makes a request to a web service and, instead of waiting for it, carries on; at some point later, SoapUI receives the response.
SoapUI can handle this. I haven't tried it myself, but while reading some guides recently I noticed it can be done.
See....
Blog Guide
SoapUI Forum
In short, it involves setting up a mock service to receive the response, which can then be validated.

How to find the time when a Puppet manifest is executed

I'm wondering if anyone knows a good way to get the date and time when a portion of code in a Puppet manifest is actually executed. Sometimes my manifests take a long time to run, and I need to schedule a task to occur soon after the end of the run, no matter when that occurs.
I have tried the time() function, setting a variable using generate() (using the date function on the Puppet master), and even creating a custom fact, but everything I've tried gets evaluated when the manifests are parsed on the server, rather than when they actually execute on the client.
Any ideas? The clients are all Windows, FWIW.
Thanks in advance!
I am not sure I understand what you mean, but you can't get this information during catalog compilation (obviously), so you can't use it to change the way the catalog will be applied.
If you need to trigger another process on the same host, then you should use whatever IPC mechanism you have available. You can exec anything and have it happen just after any other resource is applied, so it is just a matter of finding the proper command.
