spockframework: check expected result after every feature - groovy

I am using the Spock framework and Geb for test automation. I would like to run a simple check after every feature to make sure that no error dialogs are shown, so I have added the following cleanup() method:
def cleanup() {
    expect:
    $('.myErrrorDialogClass').isEmpty()
}
The code is executed after every feature but it does not throw any error when the dialog is shown.

Spock uses AST transformations to wire in the assertion behaviour for each block label (when, expect, etc.), and it most likely does not apply those transformations to the cleanup() method. Assertions in cleanup are either not expected or not encouraged, so that code runs but does not actually assert anything.
You can get around this by using a standard Groovy assert call without the expect block.
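For example, a minimal sketch of what the cleanup method could look like with a plain assert (using the same selector as in the question):
def cleanup() {
    // A plain Groovy assert is evaluated as usual, so it fails the feature's
    // cleanup when an error dialog is still present on the page.
    assert $('.myErrrorDialogClass').isEmpty()
}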
Summarized from our comment discussion above - in case you want to accept it as an answer ;-)


Catching and recovering from error in C++ function called from Duktape

I have created a plugin for the OpenCPN marine navigation program that incorporates Duktape to provide a scripting capability. OpenCPN uses wxWidgets.
Basically, the plugin presents the user with a console comprising a script window, an output window and various buttons. The user enters their script (or loads it from a .js file) and clicks on Run. The script is run using duk_peval. On return I display the result, destroy the context and wait for the user to run again, perhaps after modifying the script. All this works well. However, consider the following test script:
add(2, 3);
function add(a, b){
    if (a == b) throw("args match");
    return(a + b);
}
If the two arguments in the call to add are equal, the script throws an error and the user can try again. This all works.
Now I can implement add as a C++ function thus:
static duk_ret_t add(duk_context *ctx){
    int a, b;
    a = duk_get_int(ctx, 0);
    b = duk_get_int(ctx, 1);
    if (a == b){
        duk_error(ctx, DUK_ERR_TYPE_ERROR, "args match");
    }
    duk_pop_2(ctx);
    duk_push_int(ctx, a+b);
    return (1);
}
As written, this passes the error to the fatal error handler. I know I must not try to use Duktape further, but I can display the error OK. However, I have no way back to the plugin.
The prescribed action is to exit or abort, but these both terminate the hosting application, which is absolutely unacceptable. Ideally, I need to be able to return from the duk_peval call with the error.
I have tried running the add function using duk_pcall from an outer C++ function. This catches the error and I can display it from that outer function. But when I return from that outer function, the script carries on when it should not and the eventual return from the duk_peval call has no knowledge of the error.
I know I could use try/catch in the script but with maybe dozens of calls to the OpenCPN APIs this is unrealistic. Percolating an error return code all the way back, maybe through several C++ functions and then to the top-level script would also be very cumbersome as the scripts and functions can be quite complex.
Can anyone please suggest a way of passing control back to my invoking plugin - preferably by returning from the duk_peval?
I have cracked this at last.
Firstly, I use the following in error situations:
if (a == b){
    // Push an error object and throw it; duk_throw() unwinds back to the
    // protected call (duk_peval/duk_pcall) instead of the fatal error handler.
    duk_push_error_object(ctx, DUK_ERR_ERROR, "args match");
    duk_throw(ctx);
}
If an error has been thrown, the return values from duk_peval and duk_pcall are non-zero and the error object is on the stack, as documented.
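On the calling side, the plugin can then check the return value of the protected call and pull the error (or the result) off the stack. A rough sketch, where displayOutput() is just a placeholder for the plugin's own output window:
#include "duktape.h"
#include <iostream>

// Sketch only: displayOutput() stands in for the plugin's output window.
static void displayOutput(const char *text) { std::cout << text << std::endl; }

void runUserScript(const char *script) {
    duk_context *ctx = duk_create_heap_default();
    // ... register native functions such as add() here ...
    if (duk_peval_string(ctx, script) != 0) {
        // Non-zero return value: the thrown error object is on top of the stack.
        displayOutput(duk_safe_to_string(ctx, -1));
    } else {
        // Zero return value: the script's result is on top of the stack.
        displayOutput(duk_safe_to_string(ctx, -1));
    }
    duk_pop(ctx);
    duk_destroy_heap(ctx);
}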
It is all working nicely for me now.

NUnit Attribute to simulate condition-based Assert.Inconclusive with custom message text

I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which tests the (parameterless) condition and calls Assert.Inconclusive("Explanatory Message") when it isn't met, and then call that at the start of each Test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my Attribute implement and which allows me to hook into the TestRunner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method in case I need to "attach" that result object back to the test? But nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!
If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be inconclusive.
That would be the simplest answer to your question.
However, reading through the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state, because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
If I had this situation, I believe I would use a run parameter to indicate whether the internet should be available. Then the tests could call
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.

Execute Spock Test in Different Environments

I have a Selenium test, which is executed with the help of Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {
    URL remoteAddress  // Address of SE grid
    Capabilities caps  // Desired capabilities
    WebDriver driver   // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'); but another thing I would like to verify is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an ideal world, I'd like to have a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas to implement this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members, so each call to 'driver' is replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to be used.
I'd appreciate it if anybody who has met this case could share ideas on how to properly organize the testing process. Other approaches, or the pros and cons of those above, are also welcome. Considering another test tool could be an option as well.
You could create an abstract BaseSpec which contains all your features but does not set up the driver itself. Then create a sub-spec for each browser you want to test, e.g.
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        super.driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all the browsers.
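For completeness, the abstract base spec could look roughly like this (a sketch; since the driver is created once per spec in setupSpec(), it is held in a @Shared field, and the feature and selector are taken from the question):
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import spock.lang.Shared
import spock.lang.Specification

abstract class BaseSeleniumSpec extends Specification {

    @Shared
    WebDriver driver   // assigned by each browser-specific sub-spec in setupSpec()

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    def cleanupSpec() {
        driver?.quit()   // shut the browser down once all features have run
    }
}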

Return a value from a SoapUI TestCase

I'm trying to modularize my test cases, so I'm running a shared test case (as a procedure) that does something useful and returns a result value. As I need to pass in non-string input properties, I have to run the test case from Groovy:
def findLoopEndTC = testRunner.testCase.testSuite.testCases["TestCase - Find Loop End"]
assert findLoopEndTC != null, "Referred TC not found"
def runContext = new com.eviware.soapui.support.types.StringToObjectMap()
runContext.put("TestStepContext", context)
def runner = findLoopEndTC.run( runContext, false )
assert runner.status != com.eviware.soapui.model.testsuite.TestRunner.Status.FAILED : runner.reason
I've learned that the test case is run using the SINGLETON_AND_WAIT mode which ensures that the TestCase itself is run in a thread-safe way.
My question is how to return a value from the run test case in a thread-safe way?
I tried runner.getRunContext().getProperty("Result"), but it seems that the context properties are no longer there. So there seems to be only the "classical" way, findLoopEndTC.getPropertyValue("Result"), but this is apparently not thread-safe.
Are there other possibilities?
I use the free version of SoapUI.
I had the same problem. If I understand you correctly, this is what you want:
You’ve put the ‘calling’ context into a new context ‘runContext’:
runContext.put("TestStepContext", context)
which has been passed in as the context for the test case to be run (synchronously). I’ll call the test case to be run ‘B’:
def runner = findLoopEndTC.run( runContext, false ) //in calling test case
To get something useful back from ‘B’, somewhere in it you need to put a value back into TestStepContext, e.g.:
context.get("TestStepContext").put("Results",resultList) //My results happened to be a list
In the calling test case, the line you need after the call to run the test case is:
def testResults = runContext.get("TestStepContext").get("Results")
Hope this makes sense.
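Putting the pieces together, the round trip looks roughly like this (a sketch; ‘B’ is the shared test case and the property names follow the question):
// In the calling test case (Groovy script step):
def findLoopEndTC = testRunner.testCase.testSuite.testCases["TestCase - Find Loop End"]
def runContext = new com.eviware.soapui.support.types.StringToObjectMap()
runContext.put("TestStepContext", context)           // hand the calling context to 'B'
def runner = findLoopEndTC.run(runContext, false)    // run 'B' synchronously

// Somewhere inside 'B' (Groovy script step), write the result back:
//   context.get("TestStepContext").put("Results", resultList)

// Back in the calling test case, read it out again:
def testResults = runContext.get("TestStepContext").get("Results")
log.info "Results from shared test case: ${testResults}"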
I've been trying to work this out for the last few days too. I haven't been able to work out how to make it thread safe but I have an alternative approach which I think works pretty well.
I've based it on this suggestion from the SoapUI team: http://forum.soapui.org/viewtopic.php?f=2&t=4681#p15731. I found that the above solution was still not thread-safe: 99% of the time it worked, but sometimes two test cases could break out of the loop at the same time.
To deal with this, I set runningDeleteCar to the hash code of the current testRunner when it breaks out of the loop. I then double-check this to make sure that some other test case hasn't gone in and changed it; if it doesn't match, I just go back to the while loop. This prevents two test cases from breaking out at the same time.
This approach basically means only one test case can go through the shared test case at a time.
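A rough sketch of that claim-and-double-check idea, using a project-level property as the lock (the property name and sleep timings here are only illustrative):
// Groovy script step in the calling test case
def project = testRunner.testCase.testSuite.project
def myToken = testRunner.hashCode().toString()

while (true) {
    // Wait until no other runner holds the shared test case
    while (project.getPropertyValue("sharedTestCaseLock")) {
        sleep(200)
    }
    // Claim it, then double-check that another runner didn't claim it at the same moment
    project.setPropertyValue("sharedTestCaseLock", myToken)
    sleep(100)
    if (project.getPropertyValue("sharedTestCaseLock") == myToken) {
        break
    }
}

// ... run the shared test case here ...

// Release the lock for the next caller
project.setPropertyValue("sharedTestCaseLock", "")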

Wanted but not invoked: However, there were other interactions with this mock:

This is a Mockito error you get when you verify that a specific method was invoked on a mock object, but what actually happened is that you interacted with a different method of that object, not the one you are verifying.
For example, if you have an object named CustomerService with two methods named saveCustomer() and verifyExistingCustomer(),
and your verification looks something like verify(customerService, atLeast(1)).verifyExistingCustomer(customer), but your actual service only ever called saveCustomer().
Any idea how to resolve this?
From what you are describing, it looks like you are telling your mocks that you are expecting verifyExistingCustomer() to be called but you are not actually calling it.
You should probably look at your test design, specifically ensuring that you can (via mocking) isolate your tests to test each method individually.
If there is something in your code that decides whether to call saveCustomer() or verifyExistingCustomer() then you should try to mock the data that the code inspects so that you can test each individually.
For example if your code looked like this:
if (customer.getId() == 0) {
    saveCustomer(customer);
} else {
    verifyExistingCustomer(customer);
}
Then you could have two separate tests that you could isolate by setting a zero value and non-zero value for the id in customer.
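For instance, a sketch of those two isolated tests, assuming the dispatching code lives in a hypothetical CustomerProcessor that delegates to the mocked CustomerService (the class and method shapes here are assumptions for illustration only):
import static org.mockito.Mockito.*;

import org.junit.Test;

public class CustomerProcessorTest {

    private final CustomerService customerService = mock(CustomerService.class);
    private final CustomerProcessor processor = new CustomerProcessor(customerService);

    @Test
    public void newCustomerIsSaved() {
        Customer customer = mock(Customer.class);
        when(customer.getId()).thenReturn(0L);   // zero id -> treated as a new customer

        processor.process(customer);

        verify(customerService, atLeast(1)).saveCustomer(customer);
        verify(customerService, never()).verifyExistingCustomer(customer);
    }

    @Test
    public void existingCustomerIsVerified() {
        Customer customer = mock(Customer.class);
        when(customer.getId()).thenReturn(42L);  // non-zero id -> existing customer

        processor.process(customer);

        verify(customerService, atLeast(1)).verifyExistingCustomer(customer);
        verify(customerService, never()).saveCustomer(customer);
    }
}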
If you'd like to share your code I could probably give you a better example.
