Is there any way to continue executing Cucumber steps even when one of the steps fails? In my current setup, when a step fails, Cucumber skips the remaining steps. I wonder if there is some way to tweak the Cucumber runner setup.
I could comment out failing steps, but that's not practical when you don't know which step will fail. If I could continue with the remaining steps, I would know the complete set of failing tests in one shot, rather than going cycle after cycle.
Environment: Cucumber JVM, R, Java, iBATIS, Spring Framework, Maven
It is not a good idea to continue executing steps after a step failure, because a step failure can leave the World with an invariant violation. A better strategy is to increase the granularity of your scenarios. Instead of writing a single scenario with several "Then" statements, use a list of examples to test each postcondition separately. Sometimes a scenario outline and a list of examples can consolidate similar stories. https://docs.cucumber.io/gherkin/reference/#scenario-outline
There is some discussion about adding a feature to tag certain steps to continue after failure. https://github.com/cucumber/cucumber/issues/79
One way would be to catch all the assertion errors and decide in the last step whether to fail or pass the test case. You can also tailor this: for example, check at any step whether there are more than n errors so far and fail the test if so.
Here's what I have done:
Initialize a StringBuffer for errors in your @Before for the test cases.
Catch the AssertionErrors and add them to the StringBuffer, so that they do not get thrown and terminate the test case.
Check the StringBuffer to determine whether to fail the test case.
StringBuffer verificationErrors;

// Initialize your error StringBuffer here
@Before
public void initialize() {
    verificationErrors = new StringBuffer();
}
// The following is one of the steps in the test case where I need to assert something
@When("^the value is (\\d+)$")
public void the_result_should_be(int arg1) {
    try {
        assertEquals(arg1, 0);
    } catch (AssertionError ae) {
        verificationErrors.append("Value is incorrect - " + ae.getMessage());
    }
}
Check the StringBuffer in @After or in the last step of the test case to determine whether to pass or fail it, as follows:
if (verificationErrors.length() != 0) {
    fail(verificationErrors.toString());
}
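Wired into a cucumber-java @After hook, a minimal sketch could look like this (the hook method name is illustrative; it must live in the same class that declares verificationErrors, and fail comes from JUnit's org.junit.Assert):

import static org.junit.Assert.fail;

import io.cucumber.java.After;

// ... inside the same step definitions class that declares verificationErrors ...

@After
public void reportCollectedErrors() {
    // Fail the scenario once, listing everything collected along the way.
    if (verificationErrors.length() > 0) {
        fail(verificationErrors.toString());
    }
}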
The only issue is that, in the report, all the steps will look green even though the test case shows as failed. You then have to look through the error string to know which step(s) failed; appending extra information to the string whenever there is an AssertionError helps you locate the step easily.
Use SoftAssert to accumulate all assertion failures. Then tag your step definitions class as @ScenarioScoped, and in the step definitions class add a method annotated with @After where you call mySoftAssert.assertAll();
i.e.
import io.cucumber.guice.ScenarioScoped;
import io.cucumber.java.After;
import io.cucumber.java.Scenario;
import io.cucumber.java.en.Then;
import org.testng.asserts.SoftAssert;

@ScenarioScoped
public class MyStepDefinitions {

    SoftAssert mySoftAssert = new SoftAssert();

    @Then("check something")
    public void checkSomething() {
        // actualValue and expectedMinValue would be set by earlier steps
        mySoftAssert.assertTrue(actualValue > expectedMinValue);
    }

    @After
    public void afterScenario(Scenario scenario) throws Exception {
        mySoftAssert.assertAll();
    }
}
Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel execution on the GETs.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried putting a TestMappings with my 'loading method' at the top of the class.
But that doesn't work either.
I am not aware of any way that ZeroCode would be able to do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use the standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
As per the scenario you describe, where you want to ensure data is loaded before your tests are executed (especially when they are run under load by ZeroCode), it is suggested that you create your data using
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
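For illustration, here is a minimal sketch of seeding data once per class with JUnit 4 and the JDK 11+ built-in HTTP client (the endpoint URL and payload are placeholders for your actual POST):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.BeforeClass;

public class GetUnderLoadTest {

    @BeforeClass
    public static void preloadData() throws Exception {
        // Runs exactly once, before any (possibly parallel) GET tests in this class.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/items")) // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"seed\"}"))
                .build();
        HttpResponse<String> response = client.send(post, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() / 100 != 2) {
            throw new IllegalStateException("Preload failed with HTTP " + response.statusCode());
        }
    }

    // ... parallel GET tests follow ...
}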
I know that we can use ErrorCollector or soft assertions (AssertJ or TestNG) that do not fail a unit test immediately.
How can they be used with Mockito verifications? Or, if they can't, does Mockito provide any alternatives?
Code sample
verify(mock).isMethod1();
verify(mock, times(1)).callMethod2(any(StringBuilder.class));
verify(mock, never()).callMethod3(any(StringBuilder.class));
verify(mock, never()).callMethod4(any(String.class));
Problem
In this snippet of code, if one verification fails, the test fails immediately and the remaining verify statements are never executed. It may then require multiple test runs until all the failures from this unit test are revealed, which is time-consuming.
Since Mockito 2.1.0 you can use the VerificationCollector rule to collect multiple verification failures and report them at once.
Example
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Rule;
import org.junit.Test;
import org.mockito.junit.MockitoJUnit;
import org.mockito.junit.VerificationCollector;

// ...

@Rule
public final VerificationCollector collector = MockitoJUnit.collector();
@Test
public void givenXWhenYThenZ() throws Exception {
    // ...
    verify(mock).isMethod1();
    verify(mock, times(1)).callMethod2(any(StringBuilder.class));
    verify(mock, never()).callMethod3(any(StringBuilder.class));
    verify(mock, never()).callMethod4(any(String.class));
}
Known issues
This rule cannot be used together with the ErrorCollector rule in the same test method. In separate tests it works fine.
Using AssertJ soft assertions, you could wrap each verification with assertThatCode (which, unlike assertThatThrownBy, expects no exception to be thrown):
softly.assertThatCode(() -> verify(mock).isMethod1()).doesNotThrowAnyException();
softly.assertThatCode(() -> verify(mock, times(1)).callMethod2(any(StringBuilder.class))).doesNotThrowAnyException();
softly.assertThatCode(() -> verify(mock, never()).callMethod3(any(StringBuilder.class))).doesNotThrowAnyException();
softly.assertThatCode(() -> verify(mock, never()).callMethod4(anyString())).doesNotThrowAnyException();
If one or more of your Mockito verifications fail, the verify call throws, the soft assertion records the error, and softly.assertAll() at the end reports all the collected failures together.
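For completeness, a minimal self-contained sketch of that approach (assuming a recent AssertJ and JUnit 4; the MyService interface and the exercised calls are placeholders):

import static org.mockito.Mockito.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;

public class SoftVerificationTest {

    // Placeholder collaborator standing in for whatever interface you mock.
    interface MyService {
        boolean isMethod1();
        void callMethod2(StringBuilder sb);
        void callMethod3(StringBuilder sb);
    }

    @Test
    public void collectsAllVerificationFailuresAtOnce() {
        MyService service = mock(MyService.class);

        // Stands in for exercising the real code under test.
        service.isMethod1();
        service.callMethod2(new StringBuilder());

        SoftAssertions softly = new SoftAssertions();
        softly.assertThatCode(() -> verify(service).isMethod1())
              .doesNotThrowAnyException();
        softly.assertThatCode(() -> verify(service, times(1)).callMethod2(any(StringBuilder.class)))
              .doesNotThrowAnyException();
        softly.assertThatCode(() -> verify(service, never()).callMethod3(any(StringBuilder.class)))
              .doesNotThrowAnyException();

        // Throws a single error listing every collected verification failure.
        softly.assertAll();
    }
}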
I am trying to create a cron job in CQ using a time interval.
I see on the link https://sling.apache.org/documentation/bundles/scheduler-service-commons-scheduler.html that I could make job1 run and it will work. But I have a question about the code.
In the below code
Why is job1.run() invoked in a catch block? Can we not add it to the try block?
Can I replace job1.run() in the catch block with a thread (using start()), as below, and can that go in the try block, or must it be in the catch block?
Thread newThread = new Thread(job1);
newThread.start();
This is the cron job code from the above link:
protected void activate(ComponentContext componentContext) throws Exception {
    // case 1: with addJob() method: executes the job every minute
    String schedulingExpression = "0 * * * * ?";
    String jobName1 = "case1";
    Map<String, Serializable> config1 = new HashMap<String, Serializable>();
    boolean canRunConcurrently = true;

    final Runnable job1 = new Runnable() {
        public void run() {
            log.info("Executing job1");
        }
    };

    try {
        this.scheduler.addJob(jobName1, job1, config1, schedulingExpression, canRunConcurrently);
    } catch (Exception e) {
        job1.run();
    }
}
According to the Javadoc, addJob, addPeriodicJob and fireJobAt will throw an Exception if the job cannot be added. The docs do not suggest anything regarding the cause of such failures.
The snippet on the Apache Sling Scheduler documentation page that you quoted in your question catches and ignores these exceptions.
Looking at the implementation provided, job1 is just a regular runnable so executing the run method manually in the catch block does not affect the Scheduler at all.
What it seems to be doing is attempting to add the job and, in case of failure, silently ignoring the error and running the job manually so that it prints "Executing job1".
There are at least two serious problems with this approach:
The code ignores the fact that something went wrong while adding the job and pretends it never happened (no logging, nothing).
It runs the job manually, giving you the impression that it has been scheduled and just ran for the first time.
As to why it's done this way, I have no idea. It's just silly, and it's certainly not something I'd like to see in actual, non-tutorial code.
The API using such a generic exception to signal failure is also quite unfortunate.
Coincidentally, Sling 7 deprecates addJob, addPeriodicJob and fireJobAt and replaces them all with schedule. The schedule method returns a boolean, so while it doesn't give you any more information about what exactly happened, it doesn't require you to use ugly try-catch blocks.
If you're unable to use the latest version of Sling, make sure to use a logger and log the exceptions. Running your jobs manually, whatever they are, probably won't make much sense but that's something you need to decide.
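For illustration, here is a sketch of that newer API with proper logging, under the assumption that org.apache.sling.commons.scheduler.Scheduler exposes EXPR() and the fluent ScheduleOptions shown below (check the Javadoc for your Sling version):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.apache.sling.commons.scheduler.ScheduleOptions;
import org.apache.sling.commons.scheduler.Scheduler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class JobScheduling {

    private final Logger log = LoggerFactory.getLogger(getClass());

    private Scheduler scheduler; // injected OSGi service, e.g. via @Reference

    public void scheduleJob() {
        final Runnable job1 = () -> log.info("Executing job1");

        Map<String, Serializable> config1 = new HashMap<>();
        ScheduleOptions options = scheduler.EXPR("0 * * * * ?")
                .name("case1")
                .config(config1)
                .canRunConcurrently(true);

        // schedule() reports failure with a boolean instead of an exception,
        // so at least make the failure visible in the logs.
        if (!scheduler.schedule(job1, options)) {
            log.error("Could not schedule job 'case1'");
        }
    }
}

On the older API, the bare minimum is to replace job1.run() in the catch block with something like log.error("Could not add job " + jobName1, e).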
Can anyone please tell me in detail what test cleanup is and why we use it?
Why do we use it after initialization?
What does it actually do?
Please explain in detail.
Test Cleanup is code that runs after each test.
Test cleanup is declared inside the same class where your tests are declared. Any assertions you place in TestCleanup can also fail the test, which is very useful if you have checks that should run after every test from a single location and that could potentially fail the test.
[TestCleanup]
public void CleanUp()
{
    AppManager.CheckForHandledExceptions();
}
Here are the important events to consider:
[ClassInitialize]
public static void Init(TestContext testContext)
{
    // Runs before any test is run in the class - imo not that useful.
}

[TestInitialize]
public void Init()
{
    // Runs just prior to running a test - very useful.
}
Mostly I use TestInitialize to reset the UIMap between tests; otherwise control references can go stale.
Next, what runs at the end, once all the tests in your assembly have run (very good for checking for unhandled exceptions or shutting down the application).
So if you run 100 tests via MTM, after the last one is finished, AssemblyCleanup will run. Also note this method is a bit special: it is declared once per assembly, in its own class, with the [CodedUITest] attribute on the class.
[CodedUITest]
public class TestRunCleanup
{
    [AssemblyCleanup()]
    public static void AssemblyCleanup()
    {
        AppManager.CloseApplicationUnderTest();
    }
}
Thanks in advance for the help -
I am new to Mockito but have spent the last day looking at examples and the documentation, and haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE times, once per path marked for deletion. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test), so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution, however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seem pretty straightforward... here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments, so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically you tell it what type of arguments to look for, it captures them, and then after the call you can assert the values you were looking for. Give the code below a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());
for (Path path : pathCaptor.getAllValues()) {
    assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
    private List<Log> logs = new ArrayList<>();

    public void deleteLogs() {
        logs.clear();
    }

    public int getLogCount() {
        return logs.size();
    }
}
Then this would be a valid test that doesn't use Mockito:
@Test public void deleteLogsShouldReturnZeroLogCount() {
    MonitoringController controllerUnderTest = new MonitoringController();
    controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere
                                        // that this works
    controllerUnderTest.deleteLogs();
    assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
    private final LogRepository logRepository;

    public MonitoringController(LogRepository logRepository) {
        // By passing in your dependency, you have made the creator of your class
        // responsible. This is called "Inversion-of-Control" (IoC), and is a key
        // tenet of dependency injection.
        this.logRepository = logRepository;
    }

    public void deleteLogs() {
        logRepository.delete(RecordMatcher.ALL);
    }

    public int getLogCount() {
        return logRepository.count(RecordMatcher.ALL);
    }
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test public void deleteLogsShouldCallRepositoryDelete() {
    LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
    MonitoringController controllerUnderTest =
        new MonitoringController(mockLogRepository);

    controllerUnderTest.deleteLogs();

    // Now you can check that your REAL MonitoringController calls
    // the right method on your MOCK dependency.
    Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.
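To make that last point concrete, here is a minimal sketch of such a fake, written against the hypothetical LogRepository interface implied by the code above (the addForTest helper is mine, purely for seeding state in tests):

import java.util.ArrayList;
import java.util.List;

// Keeps real state across calls, so count() reflects delete() without any stubbing.
class FakeLogRepository implements LogRepository {
    private final List<Log> logs = new ArrayList<>();

    // Test-only helper for seeding logs; not part of the production interface.
    void addForTest(Log log) {
        logs.add(log);
    }

    @Override
    public void delete(RecordMatcher matcher) {
        if (matcher == RecordMatcher.ALL) { // assuming ALL is a singleton constant
            logs.clear();
        }
    }

    @Override
    public int count(RecordMatcher matcher) {
        return matcher == RecordMatcher.ALL ? logs.size() : 0;
    }
}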