JUnit Jupiter ZeroCode ParallelLoadExtension: choose method execution order - performance-testing

Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel execution on the GETs.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried putting a @TestMappings with my 'loading method' at the top of the class.
But that doesn't work either.

I am not aware of any way for ZeroCode to do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use the standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
Since your scenario requires that data is loaded before your tests are executed, especially when they are run under load by ZeroCode, it is suggested that you work out how to create your data using the class-level setup:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
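For reference, the JUnit 5 (Jupiter) equivalents of @BeforeClass and @Before are @BeforeAll and @BeforeEach. A minimal sketch of the combined approach might look like the following, assuming the zerocode-tdd Jupiter artifacts are on the classpath; GetApiTest and SeedDataClient are hypothetical stand-ins for your own GET test class and POST preloading helper:

import org.jsmart.zerocode.core.domain.LoadWith;
import org.jsmart.zerocode.core.domain.TestMapping;
import org.jsmart.zerocode.core.domain.TestMappings;
import org.jsmart.zerocode.jupiter.extension.ParallelLoadExtension;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(ParallelLoadExtension.class)
public class GetApiLoadTest {

    @BeforeAll
    public static void preloadData() {
        // Runs exactly once, before any test method in this class.
        // SeedDataClient is a hypothetical helper that POSTs the seed records.
        SeedDataClient.postSeedRecords();
    }

    @Test
    @LoadWith("load_generation.properties") // threads/ramp-up/loops apply to the GETs only
    @TestMappings({
        @TestMapping(testClass = GetApiTest.class, testMethod = "testGetRecord")
    })
    public void loadTestGets() {
        // Body stays empty; the extension fires GetApiTest.testGetRecord in parallel.
    }
}

Because @BeforeAll runs once before any test method in the class, the seed data is guaranteed to exist before the extension spins up its threads.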

Related

How not to invoke a specific method in Spring Batch

Is there a way with Mockito to test my batch job and tell it not to invoke a specific method? The process calls MyService.class, and inside that service there is some business logic in a specific method that I don't want it to invoke. Is that possible? I was sure that using doReturn would bypass my method (the method is public), but it still gets called.
@Autowired
private JobLauncherTestUtils jobLauncherTestUtils;

@Autowired
private MyRepo myRepoRepository;

@Before
public void setUp() {
    MockitoAnnotations.initMocks(this);
}

@Test
public void testJob() throws Exception {
    doReturn(true).when(spy(MyService.class)).validationETPer(any(Object.class), anyChar());
    doReturn(true).when(spy(MyService.class)).validationMP(any(Object.class), anyChar());
    JobExecution execution = jobLauncherTestUtils.launchJob();
    assertEquals(execution.getStepExecutions().size(), 1);
    for (StepExecution stepExecution : execution.getStepExecutions()) {
        assertEquals(3, stepExecution.getReadCount());
        assertEquals(3, stepExecution.getWriteCount());
    }
    assertEquals(ExitStatus.COMPLETED.getExitCode(), execution.getExitStatus().getExitCode());
    assertEquals(2, myRepoRepository.count());
}
This line does not do what you believe it does:
doReturn(true).when(spy(MyService.class)).validationETPer(any(Object.class), anyChar());
spy(MyService.class) creates a new instance of MyService and spies on it. Since you never use that instance, it's completely useless: there is an instance being spied on, but you didn't store it anywhere or inject it anywhere, so nobody will ever call it.
That line does not spy on every MyService in existence, as you seem to believe; it creates one concrete instance of MyService and spies on that. Obviously, that's not what you want here - you want to spy on the instance that is actually used in your code.
What you probably wanted to do is something more like this...
MyService mySpy = spy(MyService.class);
doReturn(true).when(mySpy).validationETPer(any(Object.class), anyChar());
jobLauncherTestUtils.setMyService(mySpy); // or something like that
// and NOW you can test
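If this is a Spring Boot test, one option (a sketch, assuming spring-boot-test is on the classpath and MyService is a Spring-managed bean) is to let Spring inject the spy everywhere via @SpyBean, so the running job actually uses the stubbed instance:

import static org.junit.Assert.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyChar;
import static org.mockito.Mockito.doReturn;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.test.JobLauncherTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.SpyBean;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class MyJobTest {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    // Replaces the real MyService bean in the context with a Mockito spy,
    // so every collaborator (including the batch step) receives this instance.
    @SpyBean
    private MyService myService;

    @Test
    public void testJob() throws Exception {
        doReturn(true).when(myService).validationETPer(any(Object.class), anyChar());
        doReturn(true).when(myService).validationMP(any(Object.class), anyChar());

        JobExecution execution = jobLauncherTestUtils.launchJob();
        assertEquals(ExitStatus.COMPLETED.getExitCode(), execution.getExitStatus().getExitCode());
    }
}

Note this only helps if the methods are instance methods and neither static nor final; otherwise refactoring toward injectable collaborators is the usual way out.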
It's hard to go deeper into the details without knowing how MyService is used in your actual code. Personally, I would suggest starting with unit tests without Spring, and using Spring Test only for higher-level tests.

Mockito isNotNull passes null

Thanks in advance for the help -
I am new to mockito but have spent the last day looking at examples and the documentation but haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE times, once per path marked for delete. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test), so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution, however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seem pretty straightforward... here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments, so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically, you tell it what type of argument to look for; it captures the arguments, and after the call you can assert the values you were looking for. Give the code below a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());
for (Path path : pathCaptor.getAllValues()) {
    assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
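As a hypothetical illustration of that failure mode: if the signature looked like the following, Mockito's generated subclass could not intercept the call, so the verify line would execute the real body with the null dummy the matcher returned:

import java.nio.file.Path;

public class MonitoringController {
    // final blocks Mockito's subclass-based interception: calling
    // deleteLog on the mock runs this real body, and the null dummy
    // returned by isNotNull(Path.class) triggers the NullPointerException.
    public final void deleteLog(Path path) {
        path.toFile().delete();
    }
}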
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
    private List<Log> logs = new ArrayList<>();

    public void deleteLogs() {
        logs.clear();
    }

    public int getLogCount() {
        return logs.size();
    }
}
Then this would be a valid test that doesn't use Mockito:
@Test public void deleteLogsShouldReturnZeroLogCount() {
    MonitoringController controllerUnderTest = new MonitoringController();
    controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere that this works
    controllerUnderTest.deleteLogs();
    assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
    private final LogRepository logRepository;

    public MonitoringController(LogRepository logRepository) {
        // By passing in your dependency, you have made the creator of your class
        // responsible. This is called "Inversion-of-Control" (IoC), and is a key
        // tenet of dependency injection.
        this.logRepository = logRepository;
    }

    public void deleteLogs() {
        logRepository.delete(RecordMatcher.ALL);
    }

    public int getLogCount() {
        return logRepository.count(RecordMatcher.ALL);
    }
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test public void deleteLogsShouldCallRepositoryDelete() {
    LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
    MonitoringController controllerUnderTest =
            new MonitoringController(mockLogRepository);
    controllerUnderTest.deleteLogs();
    // Now you can check that your REAL MonitoringController calls
    // the right method on your MOCK dependency.
    Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.
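To illustrate that last point, here is a sketch of such a fake, assuming LogRepository and RecordMatcher have roughly the shapes used above (the save method is a hypothetical helper for seeding state):

import java.util.ArrayList;
import java.util.List;

class FakeLogRepository implements LogRepository {
    private final List<Log> logs = new ArrayList<>();

    // Hypothetical helper for seeding state in tests.
    void save(Log log) {
        logs.add(log);
    }

    @Override
    public void delete(RecordMatcher matcher) {
        if (matcher == RecordMatcher.ALL) {
            logs.clear(); // real state change, no stubbing needed
        }
    }

    @Override
    public int count(RecordMatcher matcher) {
        return matcher == RecordMatcher.ALL ? logs.size() : 0;
    }
}

Because the fake keeps real state, count() naturally drops from 2 to 0 after deleteLogs(), which is exactly the consistency that is awkward to express with per-interaction Mockito stubs:

@Test public void deleteLogsShouldEmptyRepository() {
    FakeLogRepository fakeRepository = new FakeLogRepository();
    fakeRepository.save(new Log());
    fakeRepository.save(new Log());

    MonitoringController controllerUnderTest = new MonitoringController(fakeRepository);
    assertEquals(2, controllerUnderTest.getLogCount());

    controllerUnderTest.deleteLogs();
    assertEquals(0, controllerUnderTest.getLogCount());
}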

How to generate code in a second UIMap from an existing recording

I am new to Coded UI and, for a start, I am reading a lot about best practices.
I have read that if you are testing a complex application it is advisable to use multiple UIMaps. Although I cannot see the benefit at the moment, I have created a small project with two UIMaps.
In the initial setup (with the initial UIMap and CodedUITest1) I can choose whether to use the Test Builder or an existing action recording for generating code. Whatever I do, it 'goes' to the initial UIMap. When I create a new UIMap, the Test Builder is started and I can record some actions; when I save, they are added to the newly created UIMap, in my case called AdvanceSettings. But I cannot generate code from an existing recording. Why is that? I would like to create automated test cases based on manual test cases with recordings.
Below is my code. I am using the CodedUITest1 class for both UIMaps. Should I use a new class for every UIMap?
If you have any comments on the code, please do share them.
As I see it, multiple UIMaps are used if you have a complex application so you can more easily change something: every GUI element has one UIMap, so if something changes in that GUI you only edit that UIMap. But if you have one UIMap and you use proper naming, you can also easily replace or re-record a certain method. So I am missing the big picture with multiple UIMaps.
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Windows.Input;
using System.Windows.Forms;
using System.Drawing;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.VisualStudio.TestTools.UITest.Extension;
using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;
using EAEP.AdvanceSettingsClasses;

namespace EAEP
{
    /// <summary>
    /// Summary description for CodedUITest1
    /// </summary>
    [CodedUITest]
    public class CodedUITest1
    {
        public CodedUITest1()
        {
        }

        [TestInitialize]
        public void InitializationForTest()
        {
            this.UIMap.AppLaunch();
        }

        [TestMethod]
        public void MainGUIMethod()
        {
            // To generate code for this test, select "Generate Code for Coded UI Test"
            // from the shortcut menu and select one of the menu items.
            // For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
            this.UIMap.AssertMethod1();
            this.UIMap.RestoreDefaults();
            this.UIMap.AssertMethod1();
        }

        [TestMethod]
        public void AdvanceSettignsWindowMethod()
        {
            AdvanceSettings advanceSettings = new AdvanceSettings();
            advanceSettings.MoreSettingsReopenedAfterCancel();
            this.UIMap.AssertVerificationAfterCancel();
            advanceSettings.MoreSettingsReopenedAfterOK();
            this.UIMap.AssertVerificationAfterOK();
        }

        #region Additional test attributes
        // You can use the following additional attributes as you write your tests:

        ////Use TestInitialize to run code before running each test
        //[TestInitialize()]
        //public void MyTestInitialize()
        //{
        //    // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
        //    // For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
        //}

        ////Use TestCleanup to run code after each test has run
        //[TestCleanup()]
        //public void MyTestCleanup()
        //{
        //    // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
        //    // For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
        //}
        #endregion

        public TestContext TestContext
        {
            get
            {
                return testContextInstance;
            }
            set
            {
                testContextInstance = value;
            }
        }
        private TestContext testContextInstance;

        public UIMap UIMap
        {
            get
            {
                if ((this.map == null))
                {
                    this.map = new UIMap();
                }
                return this.map;
            }
        }
        private UIMap map;
    }
}
You can't use multiple UIMaps with the 'from existing recording' feature; it always generates code into a map called UIMap. I've explained a bit about these limitations in a blog post I did about integrating SpecFlow with Coded UI tests:
http://rburnham.wordpress.com/2011/05/30/record-your-coded-ui-test-methods/
If you want to use multiple UIMaps for better maintainability, you have to use this method:
Record each action individually by right-clicking the UIMap and selecting Coded UI Test Builder.
Manually wire up the test to the actions by creating a blank Coded UI Test, updating the UIMap it references, and then calling the required actions in the test methods to perform the test.
It's a limitation that makes what is good about the MTM integration pointless.
Having multiple UIMaps speeds up test execution. Additionally, it makes edits, assertions, properties, and settings a lot easier to manage.
To create tests for the second UIMap, you just right-click on it and press "Edit With Coded UI Test Builder".
Regarding "But I cannot generate code from an existing recording. Why is that?" - I have no clue; what do you mean by "cannot"?

What are fixtures in programming?

I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHP-Unit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures well:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test using fixtures:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack) - 1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test class has setUp and tearDown methods: before each test runs you set up your data, and once it finishes you restore the initial state.
JUnit has a well-explained doc on exactly that topic. Here is the link!
The related portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
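A test can then use those fields directly. For instance, assuming Money has an add method and value-based equals, as in the JUnit cookbook this example comes from:

@Test public void simpleAdd() {
    Money expected = new Money(26, "CHF");
    Money result = f12CHF.add(f14CHF);
    assertEquals(expected, result);
}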
In Xamarin.UITest it is explained as follows:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will setup conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application: entering text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link for original article of the above Excerpt
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to someone in search of a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more in the googletest documentation.
Fixtures can also be used during integration tests or during development (let's say UI development, where data comes from a development database).
Fake users for the database or for testing:
myproject/fixtures/my_fake_user.json
[
  {
    "model": "myapp.person",
    "pk": 1,
    "fields": {
      "first_name": "John",
      "last_name": "Lennon"
    }
  },
  {
    "model": "myapp.person",
    "pk": 2,
    "fields": {
      "first_name": "Paul",
      "last_name": "McCartney"
    }
  }
]
You can read more in the Django docs.

Partial Mocking Class with Multiple Static Methods with GMock

I'm using GMock to add some unit testing to our existing Java projects. We have multiple places where the methods needing to be tested are static methods, which utilize additional static methods within the method we want to test.
I would like to be able to partially mock the class, pretty much all static methods on the class other than the initial entry point for testing.
For example:
class StaticClass {
    static void method(String one) {
        method2()
    }
    static void method(String one, String two) {
        ...
    }
}
My hope is that I can mock the second static method but as soon as I do, method(String) goes MIA and executing the test fails with an expectation exception. Is there a way I can partially mock the class, maintaining the functionality of the first method but mock the static access of the second method?
I've also tried using metaClass programming to mock the method, but if I set method equal to a closure, the first method goes MIA again. Not sure how to do this with overloaded methods. Any ideas?
Gmock mocks static methods and matches expectations according to their names. That means you cannot mock one overloaded method but not another.
It is the same with Groovy's MOP.
While this doesn't involve GMock specifically, you could extend StaticClass inside your test file and override the methods there.
