Is there a way to wrap one or more tests in a group within the test interface? - origen-sdk

We are dynamically creating bunches of characterization tests within our test interface. I would like to enclose them in a group for readability. I see that groups can be referenced in the flow files; is there a 'start_group' interface method to which a list of tests could be added as they are created?
thx

I think you can assign a group parameter to individual tests, which might help:
group :my_group do
  test :test1
  test :test2
end

# This is equivalent:
test :test1, group: :my_group
test :test2, group: :my_group
Obviously the group parameter could be injected within the interface if that is preferable in your case.
Also remember that everything you can call in a flow you can call in an interface, so you could also use the group :blah do ... end approach within your interface logic.
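A rough sketch of doing that from inside the interface; the method names here are hypothetical, and char_test stands in for however your interface already creates each characterization test:

def characterization_tests(names, options = {})
  # Wrap every dynamically generated char test in a single group
  group :char_tests do
    names.each do |name|
      char_test name, options
    end
  end
end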

Related

How to define the execution order of cucumber Test Cases

I want to have two different scenarios in the same feature.
The thing is that Scenario 1 needs to be executed before Scenario 2. I have seen that this can be achieved through Cucumber Hooks, but when digging into the explanations, there is no concrete Cucumber implementation in the examples I have found.
How can I get Scenario 1 executed before Scenario 2?
The feature file is like this:
@Events @InsertExhPlan @DelExhPln
Feature: Insert an Exh Plan and then delete it

  @InsertExhPlan
  Scenario: Add a new ExhPlan
    Given I login as admin
    And I go to automated test
    When I go to ExhPlan section
    And Insert a new exh plan
    Then The exh plan is listed

  @DeleteExhPlan
  Scenario: Delete an Exh Plan
    Given I login as admin
    And Open the automatized tests edition
    When I go to the exh plan section
    And The new exh plan is deleted
    Then The new exhibitor plan is deleted
The Hooks file is:
package com.barrabes.utilities;

import cucumber.api.java.After;
import cucumber.api.java.Before;

import static com.aura.steps.rest.ParentRestStep.logger;

public class Hooks {

    @Before(order = 1)
    public void beforeScenario() {
        logger.info("================This will run before every Scenario================");
    }

    @Before(order = 0)
    public void beforeScenarioStart() {
        logger.info("-----------------Start of Scenario-----------------");
    }

    @After(order = 0)
    public void afterScenarioFinish() {
        logger.info("-----------------End of Scenario-----------------");
    }

    @After(order = 1)
    public void afterScenario() {
        logger.info("================This will run after every Scenario================");
    }
}
The order is now as it should be, but I don't see how the Hooks file controls execution order.
You don't use Hooks for that purpose. Hooks are for code that you need to run before and/or after tests, and/or before and/or after test suites; they are not for controlling the order of features and/or scenarios.
Cucumber scenarios are executed top to bottom. For the example you showed, Scenario: Add a new ExhPlan will execute before Scenario: Delete an Exh Plan if you pass the tag @Events to the test runner. Also, you should not have the scenario tags at the feature level, so remove @InsertExhPlan and @DelExhPln from the Feature line. Alternatively, you could pass a comma-separated list of scenario tags to the test runner in the order you want; for example, if you need to run scenario 2 before scenario 1, you would pass the tags for the corresponding scenarios in the order you wish them to be executed. You can also do this from your CI environment, for example with Jenkins jobs that execute the tasks in a specific order by passing the scenario tags in that order. And if you wish to run in the default order, you can simply pass the feature tag.
About Hooks: they should be for code that needs to run for all features and scenarios. For specific setup you need for a particular feature, use Background in the feature file; a Background block is run before each scenario in that feature file.
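For instance, the login step shared by both of your scenarios could move into a Background. A sketch based on your feature file (the remaining steps stay inside their scenarios):

Feature: Insert an Exh Plan and then delete it

  Background:
    Given I login as admin

  Scenario: Add a new ExhPlan
    # ... the remaining steps of the insert scenario ...

  Scenario: Delete an Exh Plan
    # ... the remaining steps of the delete scenario ...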

NUnit Attribute to simulate condition-based Assert.Inconclusive with custom message text

I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which tests the (parameterless) condition and calls Assert.Inconclusive("Explanatory Message") depending on the result, and then call that helper at the start of each test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my Attribute implement, and will allow me to hook into the TestRunner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method in case I need to "attach" that result object back to the test? But nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!
If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be inconclusive.
That would be the simplest answer to your question.
However, reading through the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
If I had this situation, I believe I would use a run parameter to indicate whether the internet should be available. Then the tests could call
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.
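A minimal sketch of what the Assume.That approach looks like inside a test; InternetIsAvailable is a hypothetical helper standing in for whatever precondition check you already have:

[Test]
public void DownloadsRemoteFile()
{
    // If the assumption fails, the test is reported as Inconclusive with this message
    Assume.That(InternetIsAvailable(), "No internet connection available for this test");

    // ... the actual test body ...
}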

How to get around spock Data provider is null error

I have a SpringBootTest that reads in properties from application.properties. The setup code uses the @Value annotation to set the values accordingly. One of these properties is an array of names.
I am trying to write a data driven test using Spock. The where statement is using these names that are initialized in the setup:
expect:
retrievedName == value
where:
value << getNames()
This always fails with org.spockframework.runtime.SpockExecutionException: Data provider is null.
It appears that the getNames() call is invoked before the properties are initialized in the setup code. If I do not use the where statement (data driven testing), all works fine. Is there a workaround for this?
You cannot use data initialized in the setup section as a source for data driven tests. As per the docs:
Although it is declared last, the where block is evaluated before the feature method containing it runs.
You can try to use setupSpec() methods and @Shared fields as a workaround.
See here for an example.
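A rough sketch of that workaround; the property key, the file location, and the idea of loading the values yourself (rather than through @Value, which may not have been injected yet) are all assumptions:

@Shared
List<String> names

def setupSpec() {
    // Load the names directly from application.properties so they exist
    // by the time the where block is evaluated
    def props = new Properties()
    props.load(getClass().getResourceAsStream('/application.properties'))
    names = props.getProperty('test.names').split(',') as List
}

def "retrieved name matches the expected value"() {
    expect:
    retrievedName == value   // retrievedName comes from your existing test setup

    where:
    value << names
}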

cucumber: string to an active record model name

I'm trying to write a DRY Cucumber feature and I'm facing the problem of converting a string into an ActiveRecord model name:
Given /^the following "(.+)" exist:/ do |mod, table|
  table.hashes.each do |t|
    mod.create!(t)
  end
  assert mod.all.count == table.hashes.size
end
that gives
undefined method `create!' for "Balloon":String (NoMethodError)
More elegant solution might be to use a factory, but I'm wondering whether it is possible to use the above approach?
You could look into constantize, which turns a String into a constant. Try:
"Balloon".constantize.create!(t)
BUT: Using your app code (models in particular) in a Cucumber step is a code smell. Your integration tests shouldn't rely on the code under test at all; think of your app as a black box when you implement Cucumber steps. (Also, think of a refactoring of your models that requires you to go back and change your Cucumber steps: that's your first clue that you're on the wrong track!)
What you could do to improve this is create the models using an API (if your app implements one).
That way, you only rely on those parts of your app that are public-facing.
On another note: your Given shouldn't have an assertion; it's more like a before hook in RSpec, setting up a condition for a later assertion...
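Putting the constantize call together with that advice, the step definition might end up looking roughly like this (still driving the model directly, with the caveats above):

Given /^the following "(.+)" exist:/ do |model_name, table|
  klass = model_name.constantize   # "Balloon" => Balloon
  table.hashes.each { |attrs| klass.create!(attrs) }
end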

Why, in Mojito, does renaming controller.server.js to controller.server-foo.js have no effect?

In Mojito on top of Node.js, I followed the example on http://developer.yahoo.com/cocktails/mojito/docs/quickstart/
What I did was renaming controller.server.js to controller.server-foo.js, and created a new file controller.server.js to show "Hello World".
But when Mojito is started, the old file controller.server-foo.js is being used, and so "Hello World" is not printed. Why does Mojito use the old file?
(I also tried renaming controller.server-foo.js to foo-controller.server.js and now the "Hello World" is printed, but why is controller.server-foo.js used?)
I found out that, historically, the "affinity" of the controller can have two parts: the first part is common, server, or client, and the second part is optional and might be tests or other words. So use a name such as controller-not-used-server.js to disable it.
@Charles, there are 3 registration processes in Mojito (yes, it is confusing at first):
Affinity (server, client or common).
YUI.add when creating YUI modules (controllers, models, binders, etc.)
and the less common one, registration by name (which includes something that we call selectors).
In your case, by having two controllers, one of them with a custom selector named "foo", you are effectively putting all 3 registrations in use at once. Here is what happens internally:
A controller is always denoted by the "controller" filename in the mojit folder, which is part of the registration by name, and since you have the "foo" selector on one of the controllers, your mojit will have two modes, "default" and "foo". Which one of them will be used depends on application.json, where you can have conditions that set the value of "selector", which by default is empty. If you set the value of selector to "foo" when, for example, the device is an iPhone, then that controller will be used when the condition matches.
Then YUI.add plays an important role: it is the way we identify which controller should be used, and its only requirement is that NO OTHER MODULE in the app can have the same YUI module name, which means that your controllers can't be registered under the same name through YUI.add. I'm sure this is what is happening in your case. If they both have the same name under YUI.add(), one will always override the other, and you should probably see that in the logs as a warning; if not, feel free to open an issue through GitHub.
To summarize:
The names used when registering YUI modules have to be unique; in your case, you can use YUI.add('MyMojit', function(){}) and YUI.add('MyMojitFoo', function(){}) for each controller.
Use the selector (e.g. controller.server-mobile.js) to select which YUI module should be used for a particular request by setting selector to the proper value in application.json (see the sketch below).
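To make that concrete, a rough sketch of the two pieces; the module names are just examples, and the exact controller boilerplate depends on your Mojito version:

// controller.server.js -- the default controller
YUI.add('MyMojit', function (Y, NAME) {
    Y.namespace('mojito.controllers')[NAME] = {
        index: function (ac) {
            ac.done('Hello World');
        }
    };
}, '0.0.1', { requires: [] });

// controller.server-foo.js -- the "foo" selector variant, registered under a different module name
YUI.add('MyMojitFoo', function (Y, NAME) {
    Y.namespace('mojito.controllers')[NAME] = {
        index: function (ac) {
            ac.done('Hello from the foo controller');
        }
    };
}, '0.0.1', { requires: [] });

And in application.json, something along these lines would switch to the "foo" controller when the context matches:

[
    {
        "settings": ["device:iphone"],
        "selector": "foo"
    }
]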
