Can Gatling execute more than one Karate scenario from a feature file? [duplicate] - performance-testing

This question already has an answer here:
karate-gatling: how to simulate a karate feature except those scenarios tagged with #ignore
(1 answer)
Closed 1 year ago.
I am using the Karate-Gatling combo to test a backend. I have one test where I would like to:
1. Update some info about the account
2. Upload multiple files (one by one)
3. Save the changes
The simplest way to simulate this would be to have a Scenario for steps 1 and 3, and a Scenario Outline for step 2 with all the different files listed in the Examples: table, all in the same .feature file.
However, when I run this with Gatling, only the first Scenario in the file gets executed. Is there a way to make Gatling run the others as well? I suppose there could be a trick with dynamic scenario outlines, but I'm asking in case I'm missing something obvious.

Do you want to execute them in sequence or in parallel? Remember scenarios are supposed to run in parallel.
Could you provide extracts of the source code?
Also, it would be good to know the Karate version, considering the recent 1.0 release.

All the Scenario-s in the feature-file should be executed. Please check if maybe the first Scenario is exiting with an error.
Otherwise this is a bug. Please then follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
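To help isolate this, you can run the same feature standalone, outside Gatling, and check whether any Scenario fails. A minimal Groovy sketch against Karate's Runner API (the feature path is a placeholder for yours):

import com.intuit.karate.Runner

// run the feature single-threaded and inspect the outcome
def results = Runner.path('classpath:account.feature').parallel(1)
assert results.failCount == 0 : results.errorMessages

If this passes but Gatling still executes only the first Scenario, that points at the Gatling integration itself and is worth submitting as an issue.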

How do I find the duration of a user in Karate-Gatling? [duplicate]

This question already has answers here:
Is there an option for us to customize and group the test scenarios in the statistics section of the Karate Gatling Report?
(2 answers)
Closed 1 year ago.
I am using the Karate-Gatling combo for performance testing. In 0.9.6, the Gatling logs included a thread index, from which I could determine how long each user took to finish the scenario. In 1.0.1 the logs no longer contain this information.
Is there a way to get the time it took to process a single user in 1.0.1? Or am I stuck with some sort of estimate like Duration * ConcurrentUsers / TotalUsers?
This is news to me. There have been some code contributions, and no one else has reported this. The Gatling version has also been upgraded, and perhaps this is no longer supported. The best thing is for you to help us understand what should be changed and do some investigation. We have a nice developer guide, so you should be able to easily build from source.
I suspect it is this commit: https://github.com/intuit/karate/pull/1335/files
If we don't get help, it is unlikely we will resolve this, as no one else has reported it.

Get test result in Spock "cleanup" method

Is it possible, in Spock's cleanup() method, to check whether the feature (or even better, the current iteration of the feature) passed or failed? In Java's JUnit/TestNG/Cucumber this can be done in one line. But what about Spock?
I've found similar questions here:
Find the outcome/status of a test in Specification.cleanup()
Execute some action when Spock test fails
But both seem overcomplicated, and they date from years ago. Is there a better solution now?
Thanks in advance
Update: the main goal is to save screenshots and perform some additional actions for failed tests only in my Geb/Spock project
It is not over-complicated IMO; it is a flexible approach to hooking into events via listeners and extensions. The cleanup: block is there to clean up test fixtures, as the name implies. Reporting or other actions based on the test result are to be done in a different way.
Having said that, the simple and short answer to your question is: this is still the canonical way to do it. By the way, you originally didn't tell us what you wanted to do with the test result in the clean-up block. Explaining how you want to do something without explaining why (i.e. which problem you are trying to solve) is called the XY problem.
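For reference, the listener approach from those links boils down to something like the following Groovy sketch against Spock's extension API. The class names are illustrative, and a real version should track the error per thread if specs run in parallel:

import org.spockframework.runtime.AbstractRunListener
import org.spockframework.runtime.extension.AbstractGlobalExtension
import org.spockframework.runtime.model.ErrorInfo
import org.spockframework.runtime.model.IterationInfo
import org.spockframework.runtime.model.SpecInfo

// records the failure (if any) of the currently running iteration
class TestResultExtension extends AbstractGlobalExtension {
    static class ErrorListener extends AbstractRunListener {
        ErrorInfo errorInfo
        void beforeIteration(IterationInfo iteration) { errorInfo = null }
        void error(ErrorInfo error) { errorInfo = error }
    }
    static final ErrorListener LISTENER = new ErrorListener()
    void visitSpec(SpecInfo spec) { spec.addListener(LISTENER) }
}

Register the extension by listing its fully qualified name in META-INF/services/org.spockframework.runtime.extension.IGlobalExtension. Then in cleanup() check TestResultExtension.LISTENER.errorInfo: if it is non-null, the iteration failed and you can take your Geb screenshot there.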

How to write feature files, and when to convert them to step definitions, to adapt to changing business requirements?

I am working on a BDD web development and testing project with other team members.
At the top we write feature files in Gherkin and run Cucumber to generate step function stubs. At the bottom we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium calls and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, in terms of removing old ones and adding new ones.
Problem 2: On top of that, for some features, detailed steps keep changing too.
The problem gets really bad if we try to regenerate step functions from the updated feature files every day. There is quite a bit of housekeeping to do to keep the step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business-domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we would not need to modify the feature files/step functions as often. But she hesitates, because the client's requirement document includes those details and she tries to follow it.
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less impacted by the client's requirement changes? Can we write them vaguely, omitting details that may change (that way at least we can stabilize the step function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content? From the beginning, or should we wait until the features stabilize a little? How often should we regenerate if the features keep changing? And is there a convenient way to clean out outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are going to ultimately fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom, with you having limited or no visibility into the process, then you are in a no-win situation.
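If you do stay with Cucumber, one concrete way to reduce the churn is to keep the feature text at the level of the business action and let the step definition and page model absorb the UI detail. A sketch using the Cucumber-JVM Groovy DSL; the step text and the RegistrationPage page object are illustrative:

import static cucumber.api.groovy.EN.*

// the feature file stays declarative ("When the customer registers an account");
// field names, buttons and screen flow live behind the page object, so a changed
// screen touches this step or the page model, not the feature file
When(~/^the customer registers an account$/) { ->
    new RegistrationPage().registerDefaultUser()
}

With this split, weekly UI changes mostly rewrite page models, while the feature files only change when the business rules themselves change.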

How can we add a copy of an existing test case to another test suite using Groovy

Requirement:
I have 2 test cases, and the number will grow in the future. I need a way to run these 2 test cases in multiple environments in parallel at runtime.
So I could either make multiple copies of these test cases for the different environments, add them to an empty test suite, and set that suite to run them in parallel, all of this from a Groovy script.
Or find a way to run each test case in parallel through code.
I tried tcase.run(properties, async) but it did not work.
Need help.
Thank you.
You are mixing together unrelated things.
If you have a non-Pro installation, you can parameterize the endpoints: replace every endpoint with a SoapUI property expansion (e.g. ${#Project#endpoint}) and pass a value for that property to your test run. This is explained in the official documentation.
If you have a -Pro license, then you have access to the Environments feature, which essentially wraps the above for you in a convenient manner. Again: consult official documentation.
Then a separate question is how to run these in parallel. That will very much depend on what you have available. In the simplest case, you can create a shell script that calls testrunner the appropriate number of times with appropriate arguments. Official documentation is available. There are also options to run from Maven - official documentation - in which case you can use any kind of CI to run these.
I do not understand how Groovy would play into all this, unless you would like to get really fancy and run all this from junit, which also has official documentation available.
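If you do go the JUnit route, SoapUI's bundled runner class can be driven from code. A Groovy sketch, with the caveat that the project file name, the endpoint property, and the URLs are placeholders, and that SoapUI is not guaranteed to be thread-safe within one JVM, so separate testrunner processes remain the safer way to parallelize:

import com.eviware.soapui.tools.SoapUITestCaseRunner

// one run per environment, each in its own thread
['https://dev.example.com', 'https://test.example.com'].collect { url ->
    Thread.start {
        def runner = new SoapUITestCaseRunner()
        runner.projectFile = 'soapui-project.xml'
        runner.projectProperties = ["endpoint=${url}"] as String[]
        runner.run()
    }
}*.join()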
If you need additional information, you could read through the official documentation linked above and perhaps clarify your question.

How to iterate over a cucumber feature

I'm writing a feature in Cucumber that could be applied to a number of objects that can be programmatically determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with Cucumber, not the cloud tools, hence Stack Overflow).
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
The Given step searches for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given step already finds all of them.
Looping over the nodes in each step doesn't really work. If any one failed in the When, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario. I'm trying to integrate into a larger cluster definition language to define repeatedly call the feature by passing the info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.
