I am considering using SpecFlow for a new automation project. Since SpecFlow is similar to Cucumber in the Java world, this question applies to Cucumber as well.
In real-world applications there are lists of complex objects, and tests often need to look only for specific objects in those lists, and only at specific fields of those objects.
For example, a chat application displays a list of messages, a message being a complex object comprising a date, user name, user icon image, text, and maybe other complex objects like images, tables, etc.
Now, one test may only need to check that the chat is not empty. Another may need to check that a message from a specific user is present. Yet another may check for a message with specific text. The number of verification rules can grow into many tens.
Of course, one way to deal with that is to implement a "step" for each verification rule, hence writing tens of steps just to discover that yet another one is needed... :(
I found that a better way is to use NUnit Constraints (Hamcrest Matchers in Java) to define those rules, for example:
[Test]
public void ShouldNotBeEmpty() {
...
Assert.That(chatMessages, Is.Not.Empty);
}
[Test]
public void ShouldHaveMessageFrom(string user) {
...
Assert.That(chatMessages, Contains.Item(new Message { User = user }));
// sometimes the User field may be a complex object too...
}
[Test]
public void ShouldHaveMessage(string text) {
...
Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
This way the mechanism that loads chatMessages can work with any kind of verification rule. Hence, in a BDD framework, one could make a single step work for all of them:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
Is there any way in SpecFlow/Cucumber to have these rules mapped to Gherkin syntax?
Code reuse is not the biggest concern for a behavior-driven test. Accurately describing the business use case is what a BDD test should do, so repetitive code is more acceptable. The reality is that you do end up with a large number of step definitions. This is normal and expected for BDD testing.
Within the realm of a chat application, I see three options for writing steps that correspond to the unit test assertions in your question:
Unit Test:
[Test]
public void ShouldNotBeEmpty() {
...
Assert.That(chatMessages, Is.Not.Empty);
}
Gherkin:
Then the chat messages should not be empty
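A step definition for this can keep using the NUnit constraint directly. A minimal SpecFlow binding sketch (the ChatSteps class and the way chatMessages gets loaded are assumptions, not from the question):
using System.Collections.Generic;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class ChatSteps
{
    // Assumed to be populated by an earlier Given/When step (not shown).
    private List<Message> chatMessages;

    [Then(@"the chat messages should not be empty")]
    public void ThenTheChatMessagesShouldNotBeEmpty()
    {
        Assert.That(chatMessages, Is.Not.Empty);
    }
}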
Unit Test:
[Test]
public void ShouldHaveMessageFrom(string user) {
...
Assert.That(chatMessages, Contains.Item(new Message { User = user }));
// sometimes the User field may be a complex object too...
}
Gherkin:
Then the user should have a chat message from "Greg"
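In the same binding class, the quoted user name can be captured as a step argument. A sketch, reusing the Message shape from the question:
[Then(@"the user should have a chat message from ""(.*)""")]
public void ThenTheUserShouldHaveAChatMessageFrom(string user)
{
    Assert.That(chatMessages, Has.Some.Property("User").EqualTo(user));
}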
Unit Test:
[Test]
public void ShouldHaveMessage(string text) {
...
Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
Gherkin:
Then the user should have a chat message with the following text:
"""
Hi, everyone!
How is the weather, today?
"""
Unit Test:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
This gets a little more difficult. Consider using a data table to specify a more complex object in your assertion in Gherkin:
Then the user should have the following chat messages:
| Sender | Date Sent | Message |
| Greg | 5/2/2022 9:24:18 AM | ... |
| Sarah | 5/2/2022 9:25:39 AM | ... |
SpecFlow will pass a Table object as the last parameter to this step definition. You can use the SpecFlow.Assist Table helpers to compare the data table to your expected messages.
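A binding sketch using those helpers (the column-to-property mapping is assumed to line up with your Message type):
// requires: using TechTalk.SpecFlow.Assist;
[Then(@"the user should have the following chat messages:")]
public void ThenTheUserShouldHaveTheFollowingChatMessages(Table table)
{
    // Compares each row of the data table against the actual messages
    // and throws a descriptive comparison error on mismatch.
    table.CompareToSet(chatMessages);
}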
This gives you some options to think about. Which one you choose should be determined by how well the step and scenario read in Gherkin. Without more information, this is all I can provide. Feel free to try these out and post new questions concerning more specific problems.
The io.vertx.reactivex.core.eventbus.EventBus.rxSend() method has the following signature:
public <T> Single<Message<T>> rxSend(String address,
Object message,
DeliveryOptions options)
What is the correct way to mock this so that it returns a Single containing a real object? The issue is that the Message class has no constructor apart from one which takes another Message object.
So the following will compile:
Mockito.when(eventBus.rxSend(Mockito.isA(String.class),
Mockito.isA(JsonObject.class),
Mockito.isA(DeliveryOptions.class))).thenReturn(Single.just(new Message<Object>(null)));
but of course Single.just(new Message<Object>(null)) does not contain a real object which can then be passed on to test the next handler in the verticle.
Thanks
Like I mentioned in my comment, I don't have an answer to your immediate question, but I'd instead like to recommend a different approach to getting the results you're looking for.
Mocking types that you don't own is generally discouraged for a variety of reasons. The two that resonate most with me (as I've fallen victim) are:
If the real implementation of the mocked dependency changes, the mock's behavior will not automatically reveal any forward-breaking changes.
The more mocks a test introduces, the more cognitive load the test carries, and some tests require a lot of mocks in order to work.
There are lots of articles on the topic with more detailed viewpoints and opinions. If you're interested, refer to the Mockito wiki, or just Google around.
Given all that, rather than mocking EventBus, why not use an actual instance and receive real reply Messages composed by the framework? Sure, strictly speaking this becomes more of an integration test than a unit test, but it is closer to the type of testing you want.
Here's an example snippet from a test I wrote in an existing project, with some added comments. (The code refers to some non-standard types with an "Ext" suffix, but they aren't salient to the approach.)
private EventBus eventBus;
@Before
public void setUp(@NotNull TestContext context) {
eventBus = Vertx.vertx().eventBus();
}
@Test
public void ping_pong_reply_test(@NotNull TestContext context) {
final Async async = context.async();
// the following is a MessageConsumer registered
// with the EventBus for this specific test.
// the reference is retained so that it can be
// "unregistered()" upon completion of this test
// so as not to affect other tests.
final MessageConsumer<JsonObject> consumer = eventBus.consumer(Ping.class.getName(), message -> {
// here is where you would otherwise place
// your mock Message generation.
MessageExt.replyAsJsonObject(message, new Pong());
});
final Ping message = new Ping();
final DeliveryOptions options = null;
// the following uses an un-mocked EventBus to
// send an event and receive a real Message reply.
// created by the consumer above.
EventBusExt.rxSendJsonObject(eventBus, message, options).subscribe(
result -> {
// result.body() is JSON that conforms to
// the Pong type
consumer.unregister();
async.complete();
},
error -> {
context.fail(error);
}
);
}
I hope this at least inspires some new thinking around your problem.
In a project where we're using cucumber-jvm for our webtests, I've come across a problem which so far I haven't been able to solve: we have several Scenario Outlines which should all use the same Examples. Now, of course I can copy these examples to every single one, but it would be much shorter (and probably easier to understand) if you could do something like this:
Background:
Examples:
| name |
| Alice |
| Bob |
Scenario Outline: Flying to the conference
Given I'm flying to a conference
When I try to check in at the airport
And my name is <name>
Then I should get my plane ticket
Scenario Outline: Collecting the conference ticket
Given I'm at a conference
When I ask for the ticket for <name>
And show my business card to prove my id
Then I should get my conference ticket
Scenario Outline: Collecting my personalized swag bag
Given I'm at a conference
When I go to the first booth
And show them my conference ticket with the name <name>
Then they'll give me a swag bag with the name <name> printed onto it
Is something like that possible? If so, how? Would I use some kind of factory as is suggested here? If so, any recommendations?
If you combined the three scenarios into one scenario then what you want to achieve is trivial. Breaking them out into separate scenarios allows:
Each to run (and fail) independently
Each to have a separate line in the report
If you are willing to forgo the former and combine the three into one scenario, then I can suggest a solution that supports the latter. Cucumber-jvm does support the ability to write text into a report in an @After hook via the Scenario object. In your step definition class, define as a class variable:
private List<String> stepStatusList;
You can initialize it in the step definition's class constructor.
this.stepStatusList = new ArrayList<String>();
In the last step of each of the formerly three scenarios, add text into stepStatusList with the status that you want to appear in the report.
this.stepStatusList.add("Scenario sub-part identifier, some status");
In the After hook, write the lines into the report. This sample assumes that you want to write the lines independently of the success or failure of the scenario.
@Before
public void setup_cucumber_spring_context(){
// Dummy method so cucumber will recognize this class as glue
// and use its context configuration.
}
@After
public void captureScreenshotOnFailure(Scenario scenario) {
// write scenario-part status into the report
for (String substatus : this.stepStatusList) {
scenario.write(substatus);
}
// On scenario failure, add a screenshot to the cucumber report
if (scenario.isFailed() && webDriver != null) {
try {
WebDriver augmented = new Augmenter().augment(webDriver);
byte[] screenshot = ((TakesScreenshot) augmented).getScreenshotAs(OutputType.BYTES);
scenario.embed(screenshot, "image/png");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Can you please share the structure of your CodedUI test projects?
It would be interesting to see how you separate tests, helpers, UIMaps.
This is the way I do it. By no means the best or only way.
I have a static utilities class for basic functions and for launching/closing the application under test:
public static class Utilities
{
private static ApplicationUnderTest App;
public static void Launch()
{
try
{
App = ApplicationUnderTest.Launch(pathToExe);
}
catch (Microsoft.VisualStudio.TestTools.UITest.Extension.FailedToLaunchApplicationException e) {}
}
public static void Close()
{
App.Close();
App = null;
}
}
All of my *.uimaps are separated based on "pages" or "screens" of the application, because sometimes CodedUI screws up and your *.uimaps can break. Also worth mentioning: all the UIMaps contain are single actions on a page, such as filling out a username for a login or clicking a button.
I then have a NavMap class that holds all of the higher-level "navigations" that I would do in my apps. It might be better to come up with some intricate structure, but I prefer to just list the methods in a static class:
// you will need to include the UIMap's namespace in a using statement
public static class NavMap
{
// credsUIMap is an instance of the generated login UIMap class (the type name will vary per project)
private static readonly CredsUIMap credsUIMap = new CredsUIMap();
public static void Login()
{
credsUIMap.EnterUsername();
credsUIMap.EnterPassword();
credsUIMap.ClickLoginButton();
}
public static void LogOut()
{
credsUIMap.ClickLogOutButton();
}
}
Finally, I have the CodedUI test file where the tests are formed:
[TestClass]
public class Tests
{
[TestMethod]
public void TestMethod1()
{
NavMap.Login();
}
[TestMethod]
public void TestMethod2()
{
NavMap.LogOut();
}
[ClassInitialize()]
public static void ClassInitialize(TestContext testcontext)
{
Utilities.Launch();
}
[ClassCleanup()]
public static void ClassCleanup()
{
Utilities.Close();
}
}
I also do separate test files for different types of tests (positive, negative, stress, ...). Then I combine them in an ordered test.
I use multiple projects: one General project containing common methods and common UIMaps for the other projects (with the respective dependencies on the General project).
And then I have a project for each desktop or web application that I want to automate.
In each project:
A UIMap for each window.
Each test instantiates the UIMaps it is going to use.
An ordered test group for each test.
I can add the following example (I can't post images yet).
Example of my current test solution structure: http://i.stack.imgur.com/ekniz.png
The way to call a recorded action from a test method would be:
using Application.UIMaps.Common_Application_UIClasses;
using Application.UIMaps.Window_1_UIClases;
...
Common_Application_UI app_common = new Common_Application_UI();
Window_1_UI win1 = new Window_1_UI();
app_common.goToMenuThatOpenWindow1();
win1.setSomething("hello world!");
win1.exit();
app_common.exit();
Maybe this is not the best way of working, but currently this is how I do it.
Apologies for my English. I hope it inspires you.
I would highly recommend using something like Code First or CodedUI Page Modeling (which I wrote) to create abstractions over your UI in a highly testable way.
Even without these frameworks, you can easily write abstractions over your tests so that your test solution would look very similar to your main solution code.
I wrote a blog post about how this would look.
Typically, I would create a folder for each major workflow in the application and one for shared components. This would be very similar to the MVC structure of your app. Each control in your app would become a Page Model in your testing project.
Web Project
|
|
Views
|
--- Accounts
| |
| --- Create
| --- Manage
|
|
--- Products
|
--- Search
Test Project
|
|
--- Page Models
|
--- Accounts
|
--- ICreateAccountPageModel (interface)
--- CreateAccountPageModel (coded ui implementation)
--- IManageAccountPageModel
--- ManageAccountPageModel
--- Products
|
--- ISearch
--- Search
|
--- Tests
|
--- Accounts
|
--- CreateAccountTests
--- ManageAccountTests
--- Products
|
--- SearchProductTests
The Page Models represent the page under test (or control under test if doing more modern web development). These can be written using a Test Driven approach without the UI actually being developed yet.
The create account view would contain username, password, and confirm password inputs.
The create account page model would have methods for setting the inputs, validating page state, clicking the register button, etc.
The tests would be written against the interface of your page model. The implementation would be written using Coded UI.
If you are using the MVVM pattern in your web site, your page models would end up looking very much like your view models.
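As a rough illustration of that shape (all type and member names below are hypothetical, not from any framework), the create-account page model and a test written against its interface might look like:
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical page model interface; the Coded UI implementation lives in its own class.
public interface ICreateAccountPageModel
{
    ICreateAccountPageModel SetUsername(string username);
    ICreateAccountPageModel SetPassword(string password);
    ICreateAccountPageModel SetConfirmPassword(string password);
    bool IsRegisterEnabled { get; }
}

[TestClass]
public class CreateAccountTests
{
    // Assumed to be created elsewhere (e.g. by a factory in [TestInitialize])
    // and backed by the Coded UI implementation.
    private ICreateAccountPageModel page;

    [TestMethod]
    public void RegisterIsDisabledWhenPasswordsDoNotMatch()
    {
        page.SetUsername("user@example.com")
            .SetPassword("secret1")
            .SetConfirmPassword("secret2");

        Assert.IsFalse(page.IsRegisterEnabled);
    }
}
Because the test depends only on the interface, the Coded UI details stay isolated in the implementation class.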
I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHP-Unit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state.
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures well:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test with fixtures:
<?php
use PHPUnit\Framework\TestCase;
class StackTest extends TestCase
{
protected $stack;
protected function setUp()
{
$this->stack = [];
}
protected function tearDown()
{
$this->stack = [];
}
public function testEmpty()
{
$this->assertTrue(empty($this->stack));
}
public function testPush()
{
array_push($this->stack, 'foo');
$this->assertEquals('foo', $this->stack[count($this->stack)-1]);
$this->assertFalse(empty($this->stack));
}
public function testPop()
{
array_push($this->stack, 'foo');
$this->assertEquals('foo', array_pop($this->stack));
$this->assertTrue(empty($this->stack));
}
}
?>
This PHPUnit test has functions named setUp and tearDown: before running your tests you set up your data, and once they are finished you restore them to the initial state.
Exactly on that topic, JUnit has a well-explained doc; here is the link!
The related portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
private Money f12CHF;
private Money f14CHF;
private Money f28USD;
@Before public void setUp() {
f12CHF= new Money(12, "CHF");
f14CHF= new Money(14, "CHF");
f28USD= new Money(28, "USD");
}
}
In Xamarin.UITest it is explained as following:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will setup conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application: entering text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link to the original article for the above excerpt
And within Xamarin.UITest, the code looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;
namespace xamarin_stembureau_poc_tests
{
[TestFixture(Platform.Android)]
[TestFixture(Platform.iOS)]
public class TestLaunchScreen
{
IApp app;
Platform platform;
public TestLaunchScreen(Platform platform)
{
this.platform = platform;
}
[SetUp]
public void BeforeEachTest()
{
app = AppInitializer.StartApp(platform);
}
[Test]
public void AppLaunches()
{
app.Screenshot("First screen.");
}
[Test]
public void LaunchScreenAnimationWorks()
{
app.Screenshot("Launch screen animation works.");
}
}
}
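Building on that snippet, a single test following Arrange-Act-Assert might look roughly like this (another [Test] inside the same fixture; the Marked identifiers and screen content are made up for illustration):
[Test]
public void InvalidLogin_ShowsErrorMessage()
{
    // Arrange: the app was already started for this test in BeforeEachTest.

    // Act: interact with the application.
    app.EnterText(c => c.Marked("usernameEntry"), "user@example.com");
    app.EnterText(c => c.Marked("passwordEntry"), "wrong-password");
    app.Tap(c => c.Marked("loginButton"));

    // Assert: examine the result of the actions.
    var results = app.WaitForElement(c => c.Marked("errorLabel"));
    Assert.IsTrue(results.Any());
}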
Hope this is helpful to someone in search of a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more at googletest.
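For comparison, here is the same idea in C# with NUnit, as a minimal sketch not tied to any particular project:
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    // The fixture: a known, empty stack that every test starts from.
    private Stack<string> stack;

    [SetUp]
    public void SetUp()
    {
        stack = new Stack<string>();
    }

    [Test]
    public void NewStack_IsEmpty()
    {
        Assert.That(stack, Is.Empty);
    }

    [Test]
    public void Push_ThenPop_ReturnsSameItem()
    {
        stack.Push("foo");
        Assert.That(stack.Pop(), Is.EqualTo("foo"));
    }
}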
Fixtures can also be used during integration tests or during development (let's say UI development where the data is coming from a development database).
Fake users for the database or for testing:
myproject/fixtures/my_fake_user.json
[
{
"model": "myapp.person",
"pk": 1,
"fields": {
"first_name": "John",
"last_name": "Lennon"
}
},
{
"model": "myapp.person",
"pk": 2,
"fields": {
"first_name": "Paul",
"last_name": "McCartney"
}
}
]
You can read more in the Django docs.
Given I have two Bounded Contexts:
Fleet Mgt - simple CRUD-based supporting sub-domain
Sales - which is my CQRS-based Core Domain
When a CRUD operation occurs in Fleet Mgt, an event reflecting the operation should be published:
AircraftCreated
AircraftUpdated
AircraftDeleted
etc.
These events are required a) to update various index tables that are needed in the Sales domain and b) to provide a unified audit log.
Question: Is there an easy way to store and publish these events (to the InProcessEventBus; I'm not using NSB here) without going through an AggregateRoot, which I wouldn't need in a simple CRUD context?
If you want to publish an event about something, that something probably is an aggregate root, because it is an externally identified object about a bundle of interest; otherwise, why would you want to keep track of it?
Keeping that in mind, you don't need index tables (I understand these are for querying) in the sales BC. You need the GUIDs of the Aircraft and only lookups/joins on the read side.
For auditing I would just add a generic audit event via reflection in the repositories/unit of work.
According to Pieter, the main contributor of Ncqrs, there is no way to do this out of the box.
In this scenario I don't want to go through the whole ceremony of creating and executing a command, then loading an aggregate root from the event store just to have it emit the event.
The behavior is simple CRUD, implemented using the simplest possible solution, which in this specific case is forms-over-data using Entity Framework. The only thing I need is an event being published once a transaction occurred.
My solution looks like this:
// Abstract base class that provides a Unit Of Work
public abstract class EventPublisherMappedByConvention
: AggregateRootMappedByConvention
{
public void Raise(ISourcedEvent e)
{
var context = NcqrsEnvironment.Get<IUnitOfWorkFactory>()
.CreateUnitOfWork(e.EventIdentifier);
ApplyEvent(e);
context.Accept();
}
}
// Concrete implementation for my specific domain
// Note: The events only reflect the CRUD that's happened.
// The methods themselves can stay empty, state has been persisted through
// other means anyway.
public class FleetManagementEventSource : EventPublisherMappedByConvention
{
protected void OnAircraftTypeCreated(AircraftTypeCreated e) { }
protected void OnAircraftTypeUpdated(AircraftTypeUpdated e) { }
// ...
}
// This can be called from anywhere in my application, once the
// EF-based transaction has succeeded:
new FleetManagementEventSource().Raise(new AircraftTypeUpdated { ... });