Get test result in Spock "cleanup" method - groovy

Is it possible, in Spock's cleanup method, to check whether the feature (or even better, the current iteration of the feature) passed or failed? In Java's JUnit/TestNG/Cucumber it can be done in one line. But what about Spock?
I've found similar questions here:
Find the outcome/status of a test in Specification.cleanup()
Execute some action when Spock test fails
But both seem to be overcomplicated, and they are years old. Is there any better solution?
Thanks in advance
Update: the main goal is to save screenshots and perform some additional actions only for failed tests in my Geb/Spock project

It is not over-complicated IMO, it is a flexible approach to hooking into events via listeners and extensions. The cleanup: block is there to clean up test fixtures, as the name implies. Reporting or other things based on the test result are to be done in a different way.
Having said that, the simple and short answer to your question is: this still is the canonical way to do it. By the way, you didn't tell us at first what you want to do with the test result in the clean-up block. This kind of thing - explaining how you want to do something but not why (i.e. which problem you are trying to solve) - is called the XY problem.
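For the screenshots-on-failure goal from the update, that canonical approach is only a few lines. Below is a minimal sketch against the Spock 1.x extension API; ScreenshotTaker.capture() is a hypothetical stand-in for your own Geb screenshot code, and the extension still needs to be registered as a global extension (on recent Spock versions via a META-INF/services/org.spockframework.runtime.extension.IGlobalExtension file):

import org.spockframework.runtime.AbstractRunListener
import org.spockframework.runtime.extension.AbstractGlobalExtension
import org.spockframework.runtime.model.ErrorInfo
import org.spockframework.runtime.model.SpecInfo

class FailureScreenshotExtension extends AbstractGlobalExtension {
    void visitSpec(SpecInfo spec) {
        spec.addListener(new AbstractRunListener() {
            void error(ErrorInfo error) {
                // called whenever a feature (or fixture) method fails; passing iterations never get here
                ScreenshotTaker.capture(error.method.name)  // hypothetical helper around Geb's screenshot support
            }
        })
    }
}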

Related

Can Gatling execute more than one Karate scenario from a feature file? [duplicate]

This question already has an answer here:
karate-gatling: how to simulate a karate feature except those scenarios tagged with #ignore
(1 answer)
Closed 1 year ago.
I am using the Karate-Gatling combo to test a backend. I have one test where I would like to:
1. Update some info about an account
2. Upload multiple files (one by one)
3. Save the changes
The simplest way to simulate this would be to have a Scenario for steps 1 and 3, and a Scenario Outline for step 2 with all the different files in Examples:, all in the same .feature file.
However, when I run this with Gatling, only the first scenario in the list gets executed. Is there a way to make Gatling run the others as well? I suppose there could be a trick with dynamic outlines, but I'm asking in case I'm missing something obvious.
Do you want to execute them in sequence or in parallel? Remember scenarios are supposed to run in parallel.
Could you provide extracts of the source code?
Also, it would be good to know the Karate version, considering the recent 1.0 release.
All the Scenario-s in the feature file should be executed. Please check whether the first Scenario is exiting with an error.
Otherwise this is a bug. In that case, please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
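For reference, a minimal karate-gatling simulation that runs a whole feature file (and therefore all of its scenarios and outline examples) looks roughly like the sketch below; the class name, feature path and injection profile are placeholders, not code from the question:

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class AccountSimulation extends Simulation {
  val protocol = karateProtocol()

  // runs every scenario/outline example in the referenced feature file
  val accountUpload = scenario("account upload")
    .exec(karateFeature("classpath:account/upload.feature"))

  setUp(
    accountUpload.inject(rampUsers(5) during (10.seconds)).protocols(protocol)
  )
}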

Python test method for additional method call

I have a situation and I could not find anything online that would help.
My understanding is that Python testing should be rigorous, to ensure that if someone changes a method, the test fails and alerts the developers to go rectify the difference.
I have a method that calls 4 other methods from other classes. Patching made it easy for me to determine whether a method has been called. However, let's say someone on my team decides to add a 5th method call - the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to make sure no other calls are made? Refer to example.py below:
example.py:
def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this call here; the test should fail if it detects a 5th (or more) method call
Is there any way to cause the test case to fail if any additional method calls are added?
You can easily test (with mock, or by doing the mocking manually) that example() does not specifically call classE.method5, but that's about all you can expect - it won't work (unless explicitly tested too) for e.g. classF.method6(). Catching any extra call would require either parsing the example function's source code or analysing its bytecode representation.
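For the narrow case of guarding against classE.method5 specifically, a test could look roughly like this (assuming classA through classE are module-level objects in example.py, mirroring the snippet above):

from unittest import TestCase, mock

import example


class TestExample(TestCase):
    # decorators are applied bottom-up, so classA arrives as the first mock argument
    @mock.patch("example.classE")
    @mock.patch("example.classD")
    @mock.patch("example.classC")
    @mock.patch("example.classB")
    @mock.patch("example.classA")
    def test_example_calls(self, mock_a, mock_b, mock_c, mock_d, mock_e):
        example.example()
        mock_a.method1.assert_called_once()
        mock_b.method2.assert_called_once()
        mock_c.method3.assert_called_once()
        mock_d.method4.assert_called_once()
        # guards only against classE; an unpatched classF.method6() would still slip through
        mock_e.method5.assert_not_called()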
This being said:
my understanding is that python testing is rigorous to ensure that if someone changes a method, the test would fail
I'm afraid your understanding is a bit off - it's not about "changing the method", it's about "unexpectedly changing behaviour". In other words, you should first test for behaviour (black-box testing), not for implementation (white-box testing). Now the distinction between "implementation" and "behaviour" can be a bit blurry depending on the context (you can consider that "calling X.y()" is part of the expected behaviour, and it sometimes indeed makes sense), but the distinction is still important.
With respect to your current use case (and without more context - i.e. why shouldn't the function call anything else?), I personally wouldn't bother trying to be that defensive; I'd just clearly document this requirement as a comment in the example() function itself, so anyone editing this code immediately knows what they should not do.

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin, using Elixir. One way to do this is with an ExUnit addon called Cabbage.
Now, ExUnit provides a setup hook, which runs before each single test, and a setup_all hook, which runs once per test module.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, maybe background steps are something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't been published in a release yet but are in master. They work with a callback tag; you can take a closer look at the example in the tests.
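In plain ExUnit terms, a tag-driven setup callback looks roughly like the sketch below; Persistence.reset!/0 is a hypothetical stand-in for whatever purges your storage, and how Cabbage forwards tags to the generated tests may differ between versions:

defmodule AccountFeatureTest do
  use ExUnit.Case

  # setup runs before every test; tags set on a test are available in the
  # context, so the reset can be limited to tagged tests only
  setup context do
    if context[:reset_persistence], do: Persistence.reset!()
    :ok
  end

  @tag :reset_persistence
  test "a scenario that needs a clean slate" do
    assert true
  end
end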
There is currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How to iterate over a cucumber feature

I'm writing a feature in Cucumber that could be applied to a number of objects that can be programmatically determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with Cucumber, not the cloud tools, hence Stack Overflow).
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
The Given step does a search for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given step already finds all of them.
Looping over the nodes in each step doesn't really work: if any one node failed in the When step, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario. I'm trying to integrate into a larger cluster definition language to define repeatedly call the feature by passing the info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.

Spock vs FitNesse

I've been looking into Spock, and I've had experience with FitNesse. I'm wondering how people would choose one over the other, since they appear to address the same or a similar problem space.
Also for the folks who have been using Spock or other groovy code for tests, do you see any noticeable performance degradation? Tests are supposed to give immediate feedback - as we know that if the tests take longer to run, the developer tends to run them less frequently - so I'm wondering if the reduction in speed of test execution has had any impact in the real world.
Thanks
I am no FitNesse guy, so please take what I say with a grain of salt. To me it seems that what FitNesse is trying to do is provide a programming-language-independent environment to specify tests, and to give the programmer a more visual interface. In Spock, a Groovy AST transform is used to turn the data table into a Groovy program.
Since you basically stay in a programming language, it is easier in Spock to realize more complicated test setups. In FitNesse, as a result, you often seem to end up writing fixture code.
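To illustrate what that in-language table looks like, here is a minimal Spock feature with a where: block (Math.max is just a stand-in for the code under test):

import spock.lang.Specification

class MaxSpec extends Specification {
    def "maximum of two numbers"() {
        expect:
        Math.max(a, b) == result

        where:
        // each row becomes one iteration of the feature
        a | b || result
        1 | 3 || 3
        7 | 4 || 7
        0 | 0 || 0
    }
}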
I personally don't need a test execution button; I like the direct approach. I like not having to take care of even more classes only to enable testing, and I like looking at the code directly. For example, I want to execute my tests from the command line, not from a web interface. That is surely possible in FitNesse too, but then the whole visual thing FitNesse is trying to give the user is just ballast for me. That's why I would choose Spock over FitNesse.
The advantage of the language-agnostic approach is, of course, that a lot of test specifications can be used both for Java and for .NET. So if that is a requirement for you, you may judge differently. It usually is not for me.
As for performance, I would not worry too much about that part.

Resources