What are fixtures in programming?

I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?

I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: Wikipedia)
Here are also some practical examples from the documentation of the 'Google Test' framework.

The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHPUnit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state. 
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
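To make definitions 1 and 3 concrete, here is a minimal JUnit 4 sketch (my own illustration, not taken from the sources quoted above): each test runs against the same known, fixed state, which is created before and cleaned up after every test so that results stay repeatable.

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.nio.file.Files;
import java.nio.file.Path;

import static org.junit.Assert.assertEquals;

// The fixture here is a temp directory containing one known file.
public class FileImportTest {

    private Path workDir;   // the known, fixed environment every test relies on

    @Before
    public void setUp() throws Exception {
        workDir = Files.createTempDirectory("fixture-demo");
        Files.write(workDir.resolve("input.txt"), "known content".getBytes());
    }

    @After
    public void tearDown() throws Exception {
        // Return to the original state so the next test starts clean.
        Files.deleteIfExists(workDir.resolve("input.txt"));
        Files.deleteIfExists(workDir);
    }

    @Test
    public void inputFileIsPresentAndReadable() throws Exception {
        assertEquals("known content",
                new String(Files.readAllBytes(workDir.resolve("input.txt"))));
    }
}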

I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures well:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test that uses fixtures:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack) - 1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test class defines setUp and tearDown methods: setUp prepares your data before each test runs, and tearDown restores everything to its initial state once the test finishes.

JUnit has well-explained documentation on exactly this topic.
The relevant portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before
    public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
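The JUnit documentation then uses this fixture from individual test methods. As a sketch of what such a method (added inside MoneyTest) could look like, assuming Money exposes an add method and value-based equals, neither of which is shown in the excerpt above:

@Test
public void simpleAdd() {
    Money expected = new Money(26, "CHF");   // 12 CHF + 14 CHF
    Money result = f12CHF.add(f14CHF);       // assumed Money.add, not shown above
    assertEquals(expected, result);          // assumes value-based equals
}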

In Xamarin.UITest it is explained as follows:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will setup conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application, enter text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link to the original article of the above excerpt
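To illustrate the Arrange-Act-Assert steps independently of Xamarin.UITest, here is a minimal JUnit sketch of the same pattern (my own illustration, exercising a plain java.util stack rather than a mobile app):

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

public class StackAaaTest {

    @Test
    public void pushMakesTheItemTheTopOfTheStack() {
        // Arrange: set up the object under test in a known state
        Deque<String> stack = new ArrayDeque<>();

        // Act: perform the behaviour being tested
        stack.push("foo");

        // Assert: examine the result of the action
        assertEquals("foo", stack.peek());
        assertFalse(stack.isEmpty());
    }
}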
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        // Constructor name must match the class name.
        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to anyone looking for a better understanding of fixtures in programming.

I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more in the googletest documentation.
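The same idea expressed with JUnit instead of googletest (my own sketch): two tests that operate on the same sample data, which is rebuilt in a shared fixture before each test.

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class QueueTest {

    private Deque<Integer> queue;

    @Before
    public void setUp() {
        // The shared fixture: the same sample data for every test
        queue = new ArrayDeque<>();
        queue.add(1);
        queue.add(2);
    }

    @Test
    public void removeReturnsTheOldestElement() {
        assertEquals(Integer.valueOf(1), queue.remove());
    }

    @Test
    public void sizeMatchesTheSampleData() {
        assertEquals(2, queue.size());
    }
}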
Fixtures can also be used during integration tests or during development (say, UI development where data comes from a development database).
Fake users for the database or for testing:
myproject/fixtures/my_fake_user.json
[
    {
        "model": "myapp.person",
        "pk": 1,
        "fields": {
            "first_name": "John",
            "last_name": "Lennon"
        }
    },
    {
        "model": "myapp.person",
        "pk": 2,
        "fields": {
            "first_name": "Paul",
            "last_name": "McCartney"
        }
    }
]
You can read more in the Django docs.

Related

JUnit Jupiter ZeroCode ParallelLoadExtension: choose method order

Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel execution on the GETs.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried to put a TestMappings with my 'loading method' at the top of the class.
But that doesn't work either.
I am not aware of any way that ZeroCode would be able to do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
For your described scenario, where you want to ensure data is loaded before your tests are executed (especially when they are run under load by ZeroCode), I suggest you work out how to create your data in the class-level setup:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
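For example, here is a sketch of a class-level setup that loads data once over HTTP before the load tests run (the class name, endpoint URL, and payload are hypothetical, purely for illustration):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.BeforeClass;

public class GetUnderLoadTest {

    @BeforeClass
    public static void preloadData() throws Exception {
        // Runs exactly once before all tests in this class:
        // POST the data that the GET tests will later read.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/items"))   // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"preloaded\"}"))
                .build();
        client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    // ... GET tests (run in parallel by the load extension) go here ...
}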

xUnit for testing a DAL in .NET Core 2 and DI: a little bit of confusion

I have a little bit of confusion about using xUnit to test my DAL.
My goal is to verify that my DAL correctly accesses the DB and extracts the right data.
I created an xUnit test project and tried a simple test with Moq like this:
[Fact]
public void Test1()
{
    // Arrange
    var mockMyClass = new Mock<IMyClassBLL>();

    // Set up the mock repository to return some fake data from our target method
    mockMyClass.Setup(ac => ac.GetBy(It.IsAny<MyClassVO>())).Returns(new List<MyClassVO>
    {
        new MyClassVO { HCO_ID = "1" },
        new MyClassVO { HCO_ID = "2" },
        new MyClassVO { HCO_ID = "3" },
        new MyClassVO { HCO_ID = "4" }
    });

    // Create MyTest by injecting our mock repository
    var MyTest = new MyClassBLL(mockMyClass.Object);

    // ACT - call our method under test
    var result = MyTest.GetBy();

    // ASSERT - we got the result we expected: our fake data has 4 items, so we should get 4 back from the method
    Assert.True(result.Count == 4);
}
The method above works fine.
Now I want to access the DB directly to get data.
Obviously something escapes me; I do not understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
Can someone clarify my ideas?
Are you looking for a unit test or an integration test? They're fundamentally different things and serve different purposes.
If your goal is to ensure that GetBy (the unit of functionality under test) does what it's supposed to do, then you should not be using live data. A real connection with real data would introduce variables, causing the test to potentially fail when there's actually nothing wrong with GetBy. For a true unit test, you should only use mocks and test data.
If your goal is to ensure that your application can connect to your database and actually draw data out of it, then that's an integration test. You might potentially use GetBy/your repository in the test, but generally you'd want to avoid that. Again, connecting and querying directly via something like ADO.NET serves to remove variables, so if the test fails, you'll know it was because there actually was a problem connecting/querying, rather than just some issue with your repository or a particular method thereof.
Long and short, a good test tests just one thing. If that particular thing requires external components (such as a SQL Server database), then it's an integration test, and at that point, you're testing the integration of the component. Something like a repository method should not come into play, as that would be testing two different things in one test. If you need to test GetBy then there should be no external dependencies, such as a SQL Server database.
Additionally:
I do not understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
This would be an example of testing the framework, which is another no-no. You can safely assume that DI works in ASP.NET Core. It has its own test suite covering that. There is no need for you to add tests for that as well.

Execute Spock Test in Different Environments

I have a Selenium test, which is executed with the help of Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {

    URL remoteAddress    // Address of SE grid
    Capabilities caps    // Desired capabilities
    WebDriver driver     // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'); another thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an ideal world, I'd like to have a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas to implement this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members. So each call to 'driver' would be replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to be used.
I'd appreciate it if anybody who has met this case could share ideas on how to properly organize the testing process, other approaches, or the pros and cons of those above. Considering another test tool could also be an option.
You could create an abstract BaseSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each browser you want to test, e.g.:
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        super.driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all of the browsers.

Mockito isNotNull passes null

Thanks in advance for the help -
I am new to Mockito, but I have spent the last day looking at examples and the documentation and haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE number of times, per path marked for delete. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test) so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seem pretty straightforward... here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically you tell it what type of arguments to look for, it captures them, and then after the call you can assert the values you were looking for. Give the code below a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());

for (Path path : pathCaptor.getAllValues()) {
    assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
    private List<Log> logs = new ArrayList<>();

    public void deleteLogs() {
        logs.clear();
    }

    public int getLogCount() {
        return logs.size();
    }
}
Then this would be a valid test that doesn't use Mockito:
@Test
public void deleteLogsShouldReturnZeroLogCount() {
    MonitoringController controllerUnderTest = new MonitoringController();
    controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere
                                        // that this works
    controllerUnderTest.deleteLogs();
    assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
    private final LogRepository logRepository;

    public MonitoringController(LogRepository logRepository) {
        // By passing in your dependency, you have made the creator of your class
        // responsible. This is called "Inversion-of-Control" (IoC), and is a key
        // tenet of dependency injection.
        this.logRepository = logRepository;
    }

    public void deleteLogs() {
        logRepository.delete(RecordMatcher.ALL);
    }

    public int getLogCount() {
        return logRepository.count(RecordMatcher.ALL);
    }
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test
public void deleteLogsShouldCallRepositoryDelete() {
    LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
    MonitoringController controllerUnderTest =
            new MonitoringController(mockLogRepository);
    controllerUnderTest.deleteLogs();
    // Now you can check that your REAL MonitoringController calls
    // the right method on your MOCK dependency.
    Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.
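For completeness, here is a minimal sketch of the FakeLogRepository idea mentioned above (my own illustration, assuming LogRepository exposes exactly the delete and count methods that MonitoringController uses; the save method is a hypothetical helper for seeding test data):

import java.util.ArrayList;
import java.util.List;

class FakeLogRepository implements LogRepository {

    private final List<Log> logs = new ArrayList<>();

    // Hypothetical helper so tests can seed the fake with data.
    public void save(Log log) {
        logs.add(log);
    }

    @Override
    public void delete(RecordMatcher matcher) {
        if (matcher == RecordMatcher.ALL) {
            logs.clear();
        }
    }

    @Override
    public int count(RecordMatcher matcher) {
        // Simplified: this sketch only understands the ALL matcher.
        return matcher == RecordMatcher.ALL ? logs.size() : 0;
    }
}

With a fake like this injected into MonitoringController, the state-based style of testing shown earlier in this answer would work against the repository-backed version as well.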

Code Contracts and Auto Generated Files

When I enabled code contracts on my WPF control project I ran into a problem with an auto generated file which was created at compile time (XamlNamespace.GeneratedInternalTypeHelper). Note, the generated file is called GeneratedInternalTypeHelper.g.cs and is not the same as the GeneratedInternalTypeHelper.g.i.cs which there are several obsolete blog posts about.
I'm not exactly sure what its purpose is, but I am assuming it is important for some internal reflection to resolve XAML. The problem is that it does not have code contracts, nor is the code contract system smart enough to recognize it as an auto generated file. This leads to a bunch of errors from the static checker.
I tried searching for a solution to this problem, but it seems like nobody is developing WPF controls and using code contracts. I did come across an interesting attribute, ContractVerificationAttribute, which takes a boolean value to set whether the assembly or class is to be verified. This allows you to decorate a class as not verified. Sadly the GeneratedInternalTypeHelper is regenerated with every compile, so it is not possible to exclude just this one class. The inverse scenario is possible though, decorate the assembly as not verified and then opt in for every class.
To mitigate the obvious hack, I wanted a test like the following to at least ensure that my own exposed classes are being contract-verified:
[Fact]
public void AllAssemblyTypesAreDecoratedWithContractVerificationTrue()
{
    var assembly = typeof(someType).Assembly;
    var exposedTypes = assembly.GetTypes()
        .Where(t => !string.IsNullOrWhiteSpace(t.Namespace)
                    && t.Namespace.StartsWith("MyNamespace")
                    && !t.Name.StartsWith("<>"));
    var areAnyNotContractVerified = exposedTypes.Any(t =>
    {
        var verificationAttribute = t.GetCustomAttributes(typeof(ContractVerificationAttribute), true)
                                     .OfType<ContractVerificationAttribute>();
        return verificationAttribute.Any() && verificationAttribute.First().Value;
    });
    Assert.False(areAnyNotContractVerified);
}
As you can see, it takes all classes in the controls assembly and finds the ones from the company namespace which are not auto-generated anonymous types (<>WeirdClassName).
(I also need to exclude Resources and settings, but I hope you get the idea).
I'm not loving the solution since there are ways of avoiding contract verification, but currently it's the best I can come up with. If anyone has a better solution, please let me know.
So you can treat this class exactly like you would treat any other "3rd party" class or library. I'm sure certain assumptions hold when interacting with this generated class, so at the interaction points decorate your own code with Contract.Assume(result != null) or similar.
var result = new GennedClass().GetSomeValue();
Contract.Assume(result != null);
What this does is translate into an assertion that is checked at run time, but it allows the static analyzer to reason about the rest of the code that you do control.
