NUnit Attribute to simulate condition-based Assert.Inconclusive with custom message text

I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which will test the (parameterless) condition and call Assert.Inconclusive("Explanatory Message") if the condition doesn't hold, and then call that helper at the start of each Test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my Attribute implement, and will allow me to hook into the TestRunner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method, in case I need to "attach" that result object back to the test, but nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!

If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be inconclusive.
That would be the simplest answer to your question.
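For example, a minimal sketch (InternetIsAvailable() is a hypothetical stand-in for your own condition check):
[Test]
public void DownloadsLatestRates()
{
    Assume.That(InternetIsAvailable(), "No internet connection available");
    // the rest of the test runs only if the assumption held
}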
However, reading the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
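For example, sketches of those in-test alternatives (again, InternetIsAvailable() is a hypothetical helper):
[Test]
public void ChecksRemoteEndpoint()
{
    // Skip with a message:
    if (!InternetIsAvailable())
        Assert.Ignore("No internet connection; skipping");

    // Or record a warning without failing the test:
    Warn.If(!InternetIsAvailable(), "No internet connection");
    // ...
}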
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
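To make that concrete, here is a rough, untested sketch of such a wrapper, using IWrapTestMethod (a specialization of the ICommandWrapper interface mentioned above) and NUnit 3's internal DelegatingTestCommand; InternetIsAvailable() is a hypothetical condition check:
using System;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresInternetAttribute : Attribute, IWrapTestMethod
{
    public TestCommand Wrap(TestCommand command) =>
        new RequiresInternetCommand(command);

    private sealed class RequiresInternetCommand : DelegatingTestCommand
    {
        public RequiresInternetCommand(TestCommand innerCommand)
            : base(innerCommand) { }

        public override TestResult Execute(TestExecutionContext context)
        {
            if (!InternetIsAvailable())
            {
                // Skip execution and report Ignored (or ResultState.Inconclusive)
                // together with a custom message.
                context.CurrentResult.SetResult(
                    ResultState.Ignored, "No internet connection available");
                return context.CurrentResult;
            }
            return innerCommand.Execute(context);
        }
    }

    private static bool InternetIsAvailable() => true; // stand-in for the real check
}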
If I had this situation, I believe I would use a run parameter to indicate whether the internet should be available. Then the tests could call
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.
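With NUnit's test parameters, that could look roughly like this (the parameter name is an assumption; pass it with --params:InternetAvailable=true on the console runner):
[Test]
public void FetchesRemoteData()
{
    // Inconclusive unless the run was configured for internet access.
    Assume.That(
        TestContext.Parameters.Get("InternetAvailable", "false"),
        Is.EqualTo("true"),
        "This run was not configured for internet access");

    // code that needs the internet; it fails normally if the internet is down
}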

Related

How to check if ID3D12GraphicsCommandList has been closed?

I'm learning DirectX12 and writing some utility classes to encapsulate functionality. Right now I'm working on a mechanism for pooling CommandLists.
The pool assumes all command lists are closed. I wanted to validate that during inserting to the pool, but I can't manage to check it. From MSDN:
Returns S_OK if successful; otherwise, returns one of the following values: E_FAIL if the command list has already been closed, or an invalid API was called during command list recording.
Which is precisely what I'm looking for, but when I call ID3D12GraphicsCommandList::Close() to validate, it throws an exception in KernelBase.dll. This looks really bizarre to me. Is this a case of the implementation not complying with the specification?
EDIT: I cannot catch the exception, even with catch(...). That suggests something may be wrong with my setup, but everything else is working for me.

spockframework: check expected result after every feature

I am using spockframework and geb for test automation. I would like to execute a simple check after every feature to be sure that no error dialogs are shown, so I have added the following cleanup() method:
def cleanup() {
    expect:
    $('.myErrrorDialogClass').isEmpty()
}
The code is executed after every feature but it does not throw any error when the dialog is shown.
Spock uses AST transforms to wire in the functionality for each test label (when, expect, etc.); it may not run those transformations on the cleanup() method. Assertions in cleanup are either not expected or not encouraged by the framework, so that code may run without actually asserting anything.
You can get around this by using a standard Groovy assert call without the expect block.
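For example, with the selector from the question:
def cleanup() {
    // a plain Groovy assert is evaluated even without Spock's AST transformation
    assert $('.myErrrorDialogClass').isEmpty()
}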
Summarized from our comment discussion above - in case you want to accept it as an answer ;-)

Return a value from a SoapUI TestCase

I'm trying to modularize my test cases, so I'm running a shared test case (as a procedure) that does something useful and returns a result value. As I need to pass in non-string input properties, I have to run the test case from Groovy:
def findLoopEndTC = testRunner.testCase.testSuite.testCases["TestCase - Find Loop End"]
assert findLoopEndTC != null, "Referred TC not found"
def runContext = new com.eviware.soapui.support.types.StringToObjectMap()
runContext.put("TestStepContext", context)
def runner = findLoopEndTC.run( runContext, false )
assert runner.status != com.eviware.soapui.model.testsuite.TestRunner.Status.FAILED : runner.reason
I've learned that the test case is run using the SINGLETON_AND_WAIT mode which ensures that the TestCase itself is run in a thread-safe way.
My question is how to return a value from the run test case in a thread-safe way?
I tried runner.getRunContext().getProperty("Result"), but it seems that the context properties are no longer there. So there seems to be only the "classical" way, findLoopEndTC.getPropertyValue("Result"), but this is apparently not thread-safe.
Are there other possibilities?
I use the free version of SoapUI.
I had the same problem. If I understand you correctly, this is what you want:
You’ve put the ‘calling’ context into a new context ‘runContext’:
context.get("TestStepContext").put("Results",resultList)
which has been passed in as the context for the test case to be run (synchronously). I’ll call the test case to be run ‘B’:
def runner = findLoopEndTC.run( runContext, false ) //in calling test case
To get something useful back from ‘B’, somewhere in it you need to put a value back into TestStepContext, e.g.:
context.get("TestStepContext").put("Results",resultList) //My results happened to be a list
In the calling test case, the line you need after the call to run the test case is:
def testResults = runContext.get("TestStepContext").get("Results")
Hope this makes sense.
I've been trying to work this out for the last few days too. I haven't been able to work out how to make it thread safe but I have an alternative approach which I think works pretty well.
I've based it on this suggestion from the SoapUI team: http://forum.soapui.org/viewtopic.php?f=2&t=4681#p15731. I found that the above solution was still not thread-safe: 99% of the time it worked, but occasionally two test cases could both break out of the loop at the same time.
To deal with this, I set runningDeleteCar to the hashcode of the current testRunner when it breaks out of the loop. I then double-check it to make sure that some other test case hasn't gone in and changed it; if it doesn't match, I just go back to the while loop. This stops two test cases breaking out at the same time.
This approach basically means only one test case can go through the shared test case at a time.
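For illustration, a rough Groovy sketch of that double-checked flag; the property name runningDeleteCar is taken from the description above, and the exact SoapUI calls are assumptions:
def project = testRunner.testCase.testSuite.project
def myId = testRunner.hashCode().toString()

while (true) {
    if (!project.getPropertyValue('runningDeleteCar')) {           // flag is free
        project.setPropertyValue('runningDeleteCar', myId)         // claim it
        sleep 100                                                  // let a competing runner overwrite it
        if (project.getPropertyValue('runningDeleteCar') == myId)  // double-check the claim
            break
    }
    sleep 500
}
try {
    // run the shared test case here, one runner at a time
} finally {
    project.setPropertyValue('runningDeleteCar', '')               // release the flag
}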

cucumber: string to an active record model name

I'm trying to write a DRY cucumber feature, and I'm facing the problem of converting a string into an ActiveRecord model name:
Given /^the following "(.+)" exist:/ do |mod, table|
  table.hashes.each do |t|
    mod.create!(t)
  end
  assert mod.all.count == table.hashes.size
end
that gives
undefined method `create!' for "Balloon":String (NoMethodError)
More elegant solution might be to use a factory, but I'm wondering whether it is possible to use the above approach?
You could look into constantize, which turns a String into a constant. Try:
"Balloon".constantize.create!(t)
BUT: Using your app code (models in particular) in a Cucumber step is code smell. Your integration tests shouldn't rely on the code under test at all—think of your app as a black box when you implement Cucumber steps. (Also think of a refactoring of your models that require you to go back and change your Cucumber steps—that's your first clue that you're on the wrong track!)
What you could do to improve this is create the models using an API (if your app implements one).
That way, you only rely on those parts of your app that are public-facing.
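For instance, if your app exposed a JSON API, the step could go through it instead; the /api/... route and the Rack::Test helpers here are purely hypothetical:
Given /^the following "(.+)" exist:/ do |mod, table|
  table.hashes.each do |attrs|
    # POST each row through the public API rather than calling the model
    post "/api/#{mod.tableize}", attrs.to_json,
         'CONTENT_TYPE' => 'application/json'
    expect(last_response.status).to eq(201)
  end
end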
On another note: your Given shouldn't have an assertion; it's more like a before hook in RSpec, setting up a condition for a later assertion...

Wanted but not invoked: However, there were other interactions with this mock:

This is a Mockito error you get when you verify an invocation of a specific method on a mock, but what actually happened is that you interacted with a different method of that object, not the one mentioned.
Say you have an object named CustomerService with two methods named saveCustomer() and verifyExistingCustomer(),
and your verification looks something like verify(customerService, atLeast(1)).verifyExistingCustomer(customer), but in your actual service you called saveCustomer() at least once.
Any idea how to resolve this?
From what you are describing, it looks like you are telling your mocks that you are expecting verifyExistingCustomer() to be called but you are not actually calling it.
You should probably look at your test design, specifically ensuring that you can (via mocking) isolate your tests to test each method individually.
If there is something in your code that decides whether to call saveCustomer() or verifyExistingCustomer() then you should try to mock the data that the code inspects so that you can test each individually.
For example if your code looked like this:
if (customer.getId() == 0) {
    saveCustomer(customer);
} else {
    verifyExistingCustomer(customer);
}
Then you could have two separate tests that you could isolate by setting a zero value and non-zero value for the id in customer.
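For example, something along these lines (CustomerProcessor and the surrounding names are assumptions, since the original code wasn't shared):
import static org.mockito.Mockito.*;
import org.junit.Test;

public class CustomerProcessorTest {

    @Test
    public void newCustomerIsSaved() {
        CustomerService customerService = mock(CustomerService.class);
        Customer customer = new Customer();
        customer.setId(0); // zero id should take the saveCustomer() branch

        new CustomerProcessor(customerService).process(customer);

        verify(customerService, atLeast(1)).saveCustomer(customer);
        verify(customerService, never()).verifyExistingCustomer(customer);
    }

    @Test
    public void existingCustomerIsVerified() {
        CustomerService customerService = mock(CustomerService.class);
        Customer customer = new Customer();
        customer.setId(42); // non-zero id should take the verifyExistingCustomer() branch

        new CustomerProcessor(customerService).process(customer);

        verify(customerService, atLeast(1)).verifyExistingCustomer(customer);
        verify(customerService, never()).saveCustomer(customer);
    }
}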
If you'd like to share your code I could probably give you a better example.
