What is the purpose of Test Clean up - coded-ui-tests

Can anyone please explain in detail what test cleanup is and why we use it?
Why do we use it after initialization?
What does it actually do?
Please explain in detail.

Test Cleanup is code that runs after each test.
Test cleanup is declared inside the same class where your tests are declared. Also, any assertions you put in TestCleanup can fail the test. This is very useful if, for every test, you have values you want to check in the same place that could potentially fail the test.
[TestCleanup]
public void CleanUp()
{
    // Runs after every test method in this class; a failed assertion here fails the test.
    AppManager.CheckForHandledExceptions();
}
Here are the important events to consider:
[ClassInitialize]
public static void Init(TestContext testContext)
{
    // Runs before any test is run in the class - imo not that useful.
}

[TestInitialize]
public void Init()
{
    // Runs just prior to running a test - very useful.
}
Mostly I use TestInitialize to reset the UIMap between tests; otherwise control references can go stale.
Next, what runs at the end, once all the tests in your assembly have run (very good for checking for unhandled exceptions or shutting down the application under test). So if you run 100 tests via MTM, AssemblyCleanup will run after the last one finishes. Also note this method is a bit special: it is declared once per assembly, in its own class with the [CodedUITest] attribute on the class.
[CodedUITest]
public class TestRunCleanup
{
    [AssemblyCleanup()]
    public static void AssemblyCleanup()
    {
        AppManager.CloseApplicationUnderTest();
    }
}

Related

Integration Test Data Set Up in Spring Boot

Here is my test class
@SpringBootTest
@ActiveProfiles("test")
public class MyTest {
    ...
    @Before
    public void init() {
        System.out.println(":::: --- start init() ---");
        ...
    }
    ...
}
Strangely enough, init() won't run for some reason. If I change @Before to @BeforeAll and make the method static, init() will run. The problem is that the data setup code doesn't work inside a static method, and I can't change all of it to run in a static context. For now, I have the following code in each test method to work around the issue:
if (list.size() == 0)
    init();
I am wondering why @Before won't run. Any advice?
In JUnit 5, @BeforeEach and @BeforeAll are the equivalents of @Before and @BeforeClass in JUnit 4.
@Before is a JUnit 4 annotation, while @BeforeAll is a JUnit 5 annotation. You can also see this from the imports: org.junit.Before versus org.junit.jupiter.api.BeforeAll.
Also, code marked @BeforeEach is executed before each test, while @BeforeAll runs once before the entire test fixture.
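If running the setup before every test is acceptable, the most direct fix is to switch the hook to the Jupiter annotation. A minimal sketch, assuming Spring Boot 2.1+ so that @SpringBootTest already registers the JUnit 5 extension:
import org.junit.jupiter.api.BeforeEach;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

@SpringBootTest
@ActiveProfiles("test")
public class MyTest {

    @BeforeEach  // JUnit 5 equivalent of JUnit 4's @Before; runs before every test method
    public void init() {
        System.out.println(":::: --- start init() ---");
        // non-static data setup can stay exactly as it is
    }
}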
Alternatively, to be able to run @BeforeAll on a non-static method, you can change the lifecycle of the test instance with:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
You have to be careful though, since the test class instance is now only created once, and not once per test method. If your test methods rely on state stored in instance variables, you may now need to manually reset the state in @BeforeEach or @AfterEach lifecycle methods.
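A minimal JUnit-only sketch of that lifecycle change (the Spring annotations from the question would stay as they were):
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)  // one shared instance for the whole class
public class MyTest {

    @BeforeAll  // may be non-static because of the PER_CLASS lifecycle
    public void init() {
        System.out.println(":::: --- start init() ---");
        // one-time, non-static data setup goes here
    }

    @Test
    public void someTest() {
        // runs after init() has been executed exactly once
    }
}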

jupiter zerocode ParallelLoadExtension choose methods order

Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel executions on the GET.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried to put a TestMappings with my 'loading method' at the top of the class.
But it doesn't work either.
I am not aware of any way that ZeroCode would be able to do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use standard JUnit setup methods:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
As per the scenario you describe, where you want to ensure data is loaded before your tests are executed (especially when they are run under load by ZeroCode), it is suggested that you create your data using:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
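A rough sketch of that idea, with a hypothetical TestDataClient helper standing in for whatever performs the preload POST (it is not part of the ZeroCode API):
@BeforeClass
public static void setupClass() {
    // Runs once before any test in the class, so the POSTed data is already in
    // place before ZeroCode fires the parallel GET scenarios.
    TestDataClient.post("/api/items", "{\"name\": \"preloaded\"}"); // hypothetical helper
}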

how to not invoke a specific method in spring batch

Is there a way with Mockito to test my batch job and tell it not to invoke a specific method? The process calls MyService.class, and inside that service there is some business logic in a specific method that I don't want it to invoke. Is that possible? I was sure that doing the doReturn would stub out my method (the method is public), but it still gets called.
@Autowired
private JobLauncherTestUtils jobLauncherTestUtils;

@Autowired
private MyRepo myRepoRepository;

@Before
public void setUp() {
    MockitoAnnotations.initMocks(this);
}

@Test
public void testJob() throws Exception {
    doReturn(true).when(spy(MyService.class)).validationETPer(any(Object.class), anyChar());
    doReturn(true).when(spy(MyService.class)).validationMP(any(Object.class), anyChar());
    JobExecution execution = jobLauncherTestUtils.launchJob();
    assertEquals(execution.getStepExecutions().size(), 1);
    for (StepExecution stepExecution : execution.getStepExecutions()) {
        assertEquals(3, stepExecution.getReadCount());
        assertEquals(3, stepExecution.getWriteCount());
    }
    assertEquals(ExitStatus.COMPLETED.getExitCode(), execution.getExitStatus().getExitCode());
    assertEquals(2, myRepoRepository.count());
}
This line does not do what you believe it does:
doReturn(true).when(spy(MyService.class)).validationETPer(any(Object.class), anyChar());
spy(MyService.class) creates a new instance of MyService and spies on it. Since you never use that instance, it's completely useless: there is an instance that is spied on, but you didn't store it anywhere, you didn't inject it anywhere, and so nobody will ever use it.
That line does not spy on every MyService in existence, as you seem to believe; it creates one concrete instance of MyService and spies on that. Obviously, that's not what you want here - you want to spy on the instance that is actually used in your code.
What you probably wanted to do is something more like this:
MyService mySpy = spy(MyService.class);
doReturn(true).when(mySpy).validationETPer(any(Object.class), anyChar());
jobLauncherTestUtils.setMyService( mySpy ); // or something like that
// and NOW you can test
It's hard to go deeper into the details without knowing how MyService is used in your actual code. Personally, I would suggest starting with unit tests without Spring and using Spring Test only for higher-level tests.
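If the job is wired through a Spring context, one possible alternative (a sketch, not taken from the original answer, and assuming the stubbed methods are not final) is Spring Boot's @SpyBean, which replaces the real MyService bean in the application context with a Mockito spy, so the stubbing applies to the instance the job actually uses:
import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyChar;
import static org.mockito.Mockito.doReturn;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.test.JobLauncherTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class MyJobTest {

    @SpyBean
    private MyService myService; // the spy replaces the real bean in the context

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils; // assumed to be defined as a bean, as in the question

    @Test
    public void testJob() throws Exception {
        // stub the validation methods on the spy that the job will actually call
        doReturn(true).when(myService).validationETPer(any(Object.class), anyChar());
        doReturn(true).when(myService).validationMP(any(Object.class), anyChar());

        JobExecution execution = jobLauncherTestUtils.launchJob();
        assertEquals(ExitStatus.COMPLETED.getExitCode(), execution.getExitStatus().getExitCode());
    }
}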

continue running cucumber steps after a failure

Is there any way to continue executing Cucumber steps even when one of the steps fails? In my current setup, when a step fails, Cucumber skips the remaining steps. I wonder if there is some way to tweak the Cucumber runner setup.
I could comment out failing steps, but that's not practical when you don't know which step will fail. If I could continue with the remaining steps I would know the complete set of failing tests in one shot, rather than going cycle after cycle.
Environment: Cucumber JVM, R, Java, iBATIS, Spring Framework, Maven
It is not a good idea to continue executing steps after a step failure, because a step failure can leave the World with an invariant violation. A better strategy is to increase the granularity of your scenarios. Instead of writing a single scenario with several "Then" statements, use a list of examples to separately test each postcondition. Sometimes a scenario outline and a list of examples can consolidate similar stories. https://docs.cucumber.io/gherkin/reference/#scenario-outline
There is some discussion about adding a feature to tag certain steps to continue after failure. https://github.com/cucumber/cucumber/issues/79
One way would be to catch all the assertion errors and decide in the last step whether to fail or pass the test case. This way you can also tailor it, say, to check at any step whether there are more than n errors and fail the test if so.
Here's what I have done:
Initialize a StringBuffer for errors in your @Before for the test cases.
Catch the AssertionErrors and add them to the StringBuffer, so that they do not get thrown and terminate the test case.
Check the StringBuffer to determine whether to fail the test case.
StringBuffer verificationErrors;

// Initialize your error StringBuffer here
@Before
public void initialize() {
    verificationErrors = new StringBuffer();
}

// The following is one of the steps in the test case where I need to assert something
@When("^the value is (\\d+)$")
public void the_result_should_be(int arg1) {
    try {
        assertEquals(arg1, 0);
    }
    catch (AssertionError ae) {
        verificationErrors.append("Value is incorrect - " + ae.getMessage());
    }
}
Check the StringBuffer in @After or in the last step of the test case to determine whether to pass or fail it, as follows:
if (verificationErrors.length() > 0) {
    fail(verificationErrors.toString());
}
The only issue is that, in the report, all the steps will look green even though the test case shows as failed. You might then have to look through the error String to know which step(s) failed. You could add extra information to the String whenever there is an AssertionError to help you locate the step more easily.
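For completeness, a minimal sketch of that final check placed in a Cucumber @After hook, assuming JUnit's fail and the verificationErrors buffer from above:
@After
public void tearDown() {
    // Fail the scenario at the very end if any assertion errors were collected along the way.
    if (verificationErrors.length() > 0) {
        fail(verificationErrors.toString());
    }
}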
Use SoftAssert to accumulate all assertion failures. Then tag your step definitions class as @ScenarioScoped, and in the step definitions class add a method tagged @After where you call mySoftAssert.assertAll();
i.e.
import org.testng.asserts.SoftAssert; // assuming TestNG's SoftAssert

import io.cucumber.guice.ScenarioScoped;
import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import io.cucumber.java.en.Then;

@ScenarioScoped
public class MyStepDefinitions {

    SoftAssert mySoftAssert = new SoftAssert();

    @Then("check something")
    public void checkSomething() {
        mySoftAssert.assertTrue(actualValue > expectedMinValue);
    }

    @After
    public void afterScenario(Scenario scenario) throws Exception {
        mySoftAssert.assertAll();
    }
}

How do I mock a JSF FacesContext with a ViewMap for Unit Tests outside an actual web application?

In our application stack we work with many smaller jar modules that get combined into a final web application. One module defines JSF features, like implementing this ViewScope.
Now, apart from integration testing, we want to be able to unit test every part, and thus need a way to mock a complete FacesContext (to be accessed via a wrapper) to test classes that use it.
The important part here is complete, meaning it has to have an initialized ViewMap, as this is where our ViewScope puts its objects.
I've tried different approaches:
1) shale-test: I've come the furthest with this but unfortunately the project is retired.
So far I've wrapped the FacesContext in a Provider which allows me to replace it with a Mocked FacesContext for testing. I've also modified the shale implementation of AbstractViewControllerTestCase to include an application context.
However, calling MockedFacesContext.getViewRoot().getViewMap() will throw an UnsupportedOperationException. The reason seems to be that the MockApplication does not instantiate Application.defaultApplication (it is null), which is required for this method call. This seems to be a shale-test limitation.
2) JMock or Mockito: These seem to me to not really mock anything at all, as most members will just remain null. I don't know if JMock or Mockito can actually call the proper initialization methods.
3) Custom Faces mocker: To me this seems the only remaining option, but we don't really have the time to analyse how Faces is initialized and recreate the behaviour for mocking purposes. Maybe someone has done this before and can share major waypoints and gotchas?
Or is there any alternative way to mock a FacesContext outside a web application?
I would go with PowerMock+Mockito:
From your link:
private Map<String, Object> getViewMap() {
    return FacesContext.getCurrentInstance().getViewRoot().getViewMap();
}
In the test:
@RunWith(PowerMockRunner.class)
@PrepareForTest({ FacesContext.class })
public class TheTest {

    /*
     * fake viewMap.
     */
    private Map<String, Object> viewMap = Maps.newHashMap(); // guava

    /**
     * mock for FacesContext
     */
    @Mock
    private FacesContext faceContext;

    /**
     * mock for UIViewRoot
     */
    @Mock
    private UIViewRoot uiViewRoot;

    @Before
    public void setUp() {
        Mockito.doReturn(this.uiViewRoot).when(this.faceContext).getViewRoot();
        Mockito.doReturn(this.viewMap).when(this.uiViewRoot).getViewMap();

        // PowerMockito (org.powermock.api.mockito) handles the static mock of getCurrentInstance()
        PowerMockito.mockStatic(FacesContext.class);
        Mockito.when(FacesContext.getCurrentInstance()).thenReturn(this.faceContext);
    }

    @Test
    public void someTest() {
        /*
         * do your thing and when
         * FacesContext.getCurrentInstance().getViewRoot().getViewMap();
         * is called, this.viewMap is returned.
         */
    }
}
Some reading:
http://code.google.com/p/powermock/wiki/MockitoUsage
