How can we add a copy of an existing test case to another test suite using Groovy?

Requirement:
I have 2 test cases, and the number will grow in the future. I need a way to run these 2 test cases in multiple environments in parallel at runtime.
So I can either make multiple copies of these test cases for the different environments, add them to an empty test suite, and set them to run in parallel, all using a Groovy script.
Or find a way to run each test case in parallel through some code.
I tried tcase.run(properties, async), but it did not work.
Need help.
Thank you.

You are mixing together unrelated things.
If you have a non-Pro installation, then you can parameterize the endpoints. This is accomplished by replacing all your endpoints with a SoapUI property and passing its value to your test run. This is explained in the official documentation.
If you have a Pro license, then you have access to the Environments feature, which essentially wraps the above for you in a convenient manner. Again: consult the official documentation.
Then a separate question is how to run these in parallel. That will very much depend on what you have available. In the simplest case, you can create a shell script that calls testrunner the appropriate number of times with appropriate arguments. Official documentation is available. There are also options to run from Maven - official documentation - in which case you can use any kind of CI to run these.
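For illustration only, here is a rough sketch (not from the answer above) of a small wrapper that launches testrunner once per environment in parallel. The install path, project file, endpoint URLs and the -e/-c flag choices are assumptions for this example, so verify them against the testrunner documentation for your version:

import subprocess

TESTRUNNER = "/opt/SoapUI/bin/testrunner.sh"   # hypothetical install path
PROJECT = "my-soapui-project.xml"              # hypothetical project file
ENDPOINTS = {
    "dev": "https://dev.example.com/service",
    "test": "https://test.example.com/service",
}

# Start one testrunner process per environment without waiting in between.
procs = []
for env_name, endpoint in ENDPOINTS.items():
    cmd = [TESTRUNNER,
           "-e", endpoint,       # override the endpoint for this run (check the docs)
           "-c", "MyTestCase",   # run a single test case (check the docs)
           PROJECT]
    procs.append((env_name, subprocess.Popen(cmd)))

# Wait for all runs to finish and report per-environment exit codes.
for env_name, proc in procs:
    proc.wait()
    print(env_name, "exit code:", proc.returncode)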
I do not understand how Groovy would play into all this, unless you would like to get really fancy and run all this from JUnit, which also has official documentation available.
If you need additional information, you could read through the official documentation and perhaps clarify your question.

Related

Ordering testcase execution - Pytest

I went through some answers for the same issue on Stack Overflow. Most of the answers suggest using markers. I want to see whether any other alternative is available. Here is my requirement.
test_file1.py:
    test_class1
        test_case1
        test_case2
        test_case3 ...and so on
test_file2.py:
    test_class1
        test_case1
        test_case2
        test_case3 ...and so on
Like this I have around 30-40 different .py files with different tests that are specific to testing different functionalities. Now I would like to know how to order the test execution. Is there any way I can trigger the execution from a single file where I have defined my order of execution? Please help to solve this.
You've reached the point where you need to let the unit testing framework do the work for you, without writing the setup and teardown plumbing yourself.
I will assume you're using Python's unittest module; if you're not, whatever framework you are using almost certainly has an equivalent of what I'm about to show you (and if it doesn't, it's probably not worth using and you should switch to something better).
As per the docs, you can add a method called setUp(), whereby:
... the testing framework will automatically call for every single test we run.
import unittest

class YourTestClass(unittest.TestCase):
    def setUp(self):
        # Code here runs automatically before every single test.
        # Save the things into fields, e.g. self.whatever = calculate(...)
        self.whatever = "shared fixture"

    def test_something(self):
        # Make use of self.whatever: the unit testing framework has already
        # called setUp() before this test is run.
        self.assertTrue(self.whatever)
Then there's also tearDown() if you need it, which runs code after every unit test.
This means when you implement setUp (and optionally tearDown), it would run like:
test_class1.setUp()
test_class1.test_case1
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case2
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case3
test_class1.tearDown()
...
There's a lot more you can do as well; for that, I recommend checking out the docs. However, this should do what you want based on the comment you provided.
Now if you have code that is shared among them (like maybe some small parameters change), you should investigate the Distinguishing test iterations using subtests section at the link provided to see how to use subTest() to handle this for you. This supposedly requires Python 3.4+ (which I hope you are using and not something archaic like 2.7).
Finally, if you want to just run something once before all the tests, you can use setUpClass() which will do the same thing as setUp() except it's run once. See the docs for more details.
In short:
Use setUp() if you need to do something before every test
Use setUpClass() if you need to do something once before all the tests
Look into subTest() if you have a lot of tests that are similar and only the data you feed into them changes, but this is only to make your code cleaner; see the sketch below
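To make the last two points concrete, here is a minimal sketch (my own illustration, not from the answer above) showing setUpClass() running once per class and subTest() reporting each data variation separately; the class name and data are made up for the example:

import unittest

class SharedFixtureTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before all tests in this class (put expensive setup here).
        cls.connection = "pretend this is an expensive shared resource"

    def test_doubling(self):
        # Each data variation is reported individually if it fails,
        # instead of stopping at the first bad value.
        for value, expected in [(1, 2), (2, 4), (3, 6)]:
            with self.subTest(value=value):
                self.assertEqual(value * 2, expected)

if __name__ == "__main__":
    unittest.main()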

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't yet been published in a release but are in master. They work with the callback tag mechanism; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Creating Test suites in Spring IDE for the Spock Test specs

I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using Junit Test Suite and it was very straight forward in Eclipse. But since all my tests are groovy tests there is no easy way to create a Test Suite in Spring IDE for the spock tests (written in Groovy).
Can someone please share some ideas as to how we can create a Test suite and run some specific tests and also define the order of tests.
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well; that's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.
On the other hand, it doesn't sound like good practice for your tests to depend on execution order.
If certain tasks need to be performed before test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least, it should allow you to run all tests in a package.
Although I think that it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth having a look at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, then you can use the @Stepwise annotation to direct that the tests (feature methods) are executed in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to prevent the intra-test dependencies with setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.

Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both these projects have build tools so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions and then check them into my repository.
Much rather I'd like to integrate their grunt process into my own and create custom builds on the go, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated cli and even wraps it with a grunt task. That's very nice indeed, but I want a generic solution for this problem, as it shouldn't be necessary to have every package author replicate this.
I tried to achieve this somehow with grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even more ugly to execute the required grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?
What you ask assumes that the package has:
1. A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or the npm run command for general minification/compression.
2. Some way of generating a custom build in the first place, with a dedicated tool like the ones Modernizr or Lo-Dash have.
You could perhaps substitute number 2 with a generic one that parses both your source code and the library code and uses code coverage to eliminate unnecessary functions from the library. This is already being developed (see goldmine), however I can't make any claims about how good that is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul); you would then have to make sure that the filtered script passed all your unit/integration tests. Not sure how that would end up looking, but it'd be pretty cool if that could happen. :-)
However, there is a task especially for running Grunt tasks from 'sub-gruntfiles' that you might like to have a look at: grunt-subgrunt.

How to iterate over a cucumber feature

I'm writing a feature in Cucumber that could be applied to a number of objects that can be programmatically determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with Cucumber, not the cloud tools, thus Stack Overflow).
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
The Given does a search for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given already finds all of them.
Looping over the nodes in each step doesn't really work. If any one failed in the When, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario. I'm trying to integrate into a larger cluster definition language to define repeatedly call the feature by passing the info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.
