Can you please share the structure of your CodedUI test projects?
It would be interesting to see how you separate tests, helpers, UIMaps.
This is the way I do it. By no means the best or only way.
I have a static utilities class for basic functions and for launching/closing the application under test:
public static class Utilities
{
    private static ApplicationUnderTest App;

    public static void Launch()
    {
        try
        {
            // pathToExe points at the executable under test
            App = ApplicationUnderTest.Launch(pathToExe);
        }
        catch (Microsoft.VisualStudio.TestTools.UITest.Extension.FailedToLaunchApplicationException e)
        {
            // don't swallow the exception silently; fail the test run instead
            Assert.Fail("Failed to launch application: " + e.Message);
        }
    }

    public static void Close()
    {
        App.Close();
        App = null;
    }
}
All of my *.uimaps are separated based on "pages" or "screens" of the application. This helps because sometimes CodedUI screws up and a *.uimap can break; splitting by page contains the damage. Also worth mentioning: all the uimaps contain are single actions on a page, such as filling out a username for a login or clicking a button.
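To make the "single action per method" idea concrete, here is a hand-written sketch of what such a UIMap method amounts to. The names (CredsUIMap, LoginWindow, UIUsernameEdit, and so on) are hypothetical; in a real project these classes are generated by the Coded UI Test Builder as partial classes, so helpers like these can live in the UIMap's .cs partial file.

```csharp
// Sketch only: generated UIMaps are partial classes, so single-action
// helpers can be added alongside the recorded code.
public partial class CredsUIMap
{
    public void EnterUsername()
    {
        // one action only: type the user name into the username box
        Keyboard.SendKeys(this.LoginWindow.UIUsernameEdit, "testuser");
    }

    public void ClickLoginButton()
    {
        // one action only: click the login button
        Mouse.Click(this.LoginWindow.UILoginButton);
    }
}
```

Keeping each method down to a single action means a broken recording only invalidates one small method instead of a whole workflow.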
I then have a NavMap class that holds all of the higher-level "navigations" I do in my apps. It might be better to come up with some intricate structure, but I prefer to just list the methods in a static class:
//you will need to include the uimap in a using statement
public static class NavMap
{
    // a static class has no "this"; hold the UIMap in a static field
    private static CredsUIMap credsUIMap = new CredsUIMap();

    public static void Login()
    {
        credsUIMap.EnterUsername();
        credsUIMap.EnterPassword();
        credsUIMap.ClickLoginButton();
    }

    public static void LogOut()
    {
        credsUIMap.ClickLogOutButton();
    }
}
Finally I have the Coded UI test file where the tests are formed:
[TestClass]
public class Tests
{
[TestMethod]
public void TestMethod1()
{
NavMap.Login();
}
[TestMethod]
public void TestMethod2()
{
NavMap.LogOut();
}
[ClassInitialize()]
public static void ClassInitialize(TestContext testcontext)
{
Utilities.Launch();
}
[ClassCleanup()]
public static void ClassCleanup()
{
Utilities.Close();
}
}
I also do separate test files for different types of tests (positive, negative, stress, ...), then I combine them in an ordered test.
I use multiple projects: one General project containing the common methods and common UIMaps, and then a project for each desktop or web application I want to automate (each with a dependency on the General project).
Within each project:
A UIMap for each window.
Test classes that instantiate whichever UIMaps they need.
An ordered test grouping the tests.
I can add the following example. Example of my current test solution structure (I can't post images yet): http://i.stack.imgur.com/ekniz.png
The way to call a recorded action from a test method would be:
using Application.UIMaps.Common_Application_UIClasses;
using Application.UIMaps.Window_1_UIClasses;
...
Common_Application_UI app_common = new Common_Application_UI();
Window_1_UI win1 = new Window_1_UI();

app_common.goToMenuThatOpenWindow1();
win1.setSomething("hello world!");
win1.exit();
app_common.exit();
Maybe this is not the best way of working, but it is how I currently do it.
Apologies for my English. I hope this inspires you.
I would highly recommend using something like Code First or CodedUI Page Modeling (which I wrote) to create abstractions over your UI in a highly testable way.
Even without these frameworks, you can easily write abstractions over your tests so that your test solution would look very similar to your main solution code.
I wrote a blog post about how this would look.
Typically, I would create a folder for each major workflow in the application and one for shared. This would be very similar to the MVC structure of your app. Each control in your app would become a Page Model in your testing project.
Web Project
|
|
Views
|
--- Accounts
| |
| --- Create
| --- Manage
|
|
--- Products
|
--- Search
Test Project
|
|
--- Page Models
|
--- Accounts
|
--- ICreateAccountPageModel (interface)
--- CreateAccountPageModel (coded ui implementation)
--- IManageAccountPageModel
--- ManageAccountPageModel
--- Products
|
--- ISearch
--- Search
|
--- Tests
|
--- Accounts
|
--- CreateAccountTests
--- ManageAccountTests
--- Products
|
--- SearchProductTests
The Page Models represent the page under test (or control under test if doing more modern web development). These can be written using a Test Driven approach without the UI actually being developed yet.
Create account view would contain username, password, and confirm password inputs.
Create account page model would have methods for setting the inputs, validating page state, clicking register button, etc.
The tests would be written against the interface of your page model. The implementation would be written using Coded UI.
If you are using the MVVM pattern in your web site, your page models would end up looking very much like your view models.
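To make the page-model split concrete, here is a minimal sketch of an interface and a test written against it. All names here (ICreateAccountPageModel, PageModelFactory, the method names) are illustrative, not from any framework; the Coded UI implementation would live in a separate class behind the interface.

```csharp
// Hypothetical page model for the Create Account view.
public interface ICreateAccountPageModel
{
    ICreateAccountPageModel SetUsername(string username);
    ICreateAccountPageModel SetPassword(string password);
    ICreateAccountPageModel SetConfirmPassword(string password);
    bool IsRegisterButtonEnabled { get; }
    void ClickRegister();
}

[TestClass]
public class CreateAccountTests
{
    // PageModelFactory is a placeholder; in the real project this would
    // construct the Coded UI implementation of the interface.
    private ICreateAccountPageModel model = PageModelFactory.CreateAccountPage();

    [TestMethod]
    public void RegisterDisabledWhenPasswordsDoNotMatch()
    {
        model.SetUsername("user")
             .SetPassword("secret1")
             .SetConfirmPassword("secret2");
        Assert.IsFalse(model.IsRegisterButtonEnabled);
    }
}
```

Because the test only knows the interface, the same test could later run against a different UI automation driver by swapping the implementation.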
Related
I am considering using SpecFlow for a new automation project. Since SpecFlow is similar to Cucumber in the Java world this question applies to Cucumber as well.
In real-world applications there are lists of complex objects, and tests are required to look only for a specific object in those lists, and only at specific fields of it.
For example, a chat application displays a list of messages, a message being a complex object comprising of a date, user name, user icon image, text, and maybe other complex objects like images, tables, etc.
Now, one test may require just to check that the chat is not empty. Other test may require just to check that a message from a specific user is present. And another one just to check for a message with a specific text. The amount of verification rules can grow into many tens.
Of course, one way to deal with that is to implement a "step" for each verification rule, hence writing tens of steps just to discover that yet another one is needed... :(
I found that a better way is to use NUnit Constraints (Hamcrest Matchers in the Java world) to define those rules, for example:
[Test]
public void ShouldNotBeEmpty() {
    ...
    Assert.That(chatMessages, Is.Not.Empty);
}

[Test]
public void ShouldHaveMessageFrom(string user) {
    ...
    Assert.That(chatMessages, Contains.Item(new Message { User = user }));
    // sometimes the User field may be a complex object too...
}

[Test]
public void ShouldHaveMessage(string text) {
    ...
    Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
This way the mechanism that brings chatMessages can work with any kind of verification rule. Hence in a BDD framework, one could make a single step to work for all:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
Is there any way in SpecFlow/Cucumber to have these rules mapped to Gherkin syntax?
Code reuse is not the biggest concern for a behavior-driven test. Accurately describing the business use case is what a BDD test should do, so repetitive code is more acceptable. The reality is that you do end up with a large number of step definitions. This is normal and expected for BDD testing.
Within the realm of a chat application, I see three options for writing steps that correspond to the unit test assertions in your question:
Unit Test:
[Test]
public void ShouldNotBeEmpty() {
...
Assert.That(chatMessages, Is.Not.Empty);
}
Gherkin:
Then the chat messages should not be empty
Unit Test:
[Test]
public void ShouldHaveMessageFrom(string user) {
...
Assert.That(chatMessages, Contains.Item(new Message { User = user }));
// sometimes the User field may be a complex object too...
}
Gherkin:
Then the user should have a chat message from "Greg"
Unit Test:
[Test]
public void ShouldHaveMessage(string text) {
...
Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
Gherkin:
Then the user should have a chat message with the following text:
"""
Hi, everyone!
How is the weather, today?
"""
Unit Test:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
This gets a little more difficult. Consider using a data table to specify a more complex object in your assertion in Gherkin:
Then the user should have the following chat messages:
| Sender | Date Sent | Message |
| Greg | 5/2/2022 9:24:18 AM | ... |
| Sarah | 5/2/2022 9:25:39 AM | ... |
SpecFlow will pass a Table object as the last parameter to this step definition. You can use the SpecFlow.Assist Table helpers to compare the data table to your expected messages.
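A step definition for that data-table step might look like the sketch below. It assumes a `ChatMessage` class with properties matching the column headers and a placeholder `SomehowLoadChatMessages()` helper; `CompareToSet` is the SpecFlow.Assist extension method that compares table rows to a collection.

```csharp
using System.Collections.Generic;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

[Binding]
public class ChatSteps
{
    [Then(@"the user should have the following chat messages:")]
    public void ThenTheUserShouldHaveTheFollowingChatMessages(Table table)
    {
        // SomehowLoadChatMessages() stands in for however your
        // automation layer retrieves the displayed messages.
        IEnumerable<ChatMessage> actual = SomehowLoadChatMessages();

        // Fails with a readable diff when the table rows don't match.
        table.CompareToSet(actual);
    }
}
```

Column headers are matched to property names (spaces are ignored), so "Date Sent" maps to a `DateSent` property.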
This gives you some options to think about. Which one you choose should be determined by how well the step and scenario reads in Gherkin. Without more information, this is all I can provide. Feel free to try these out and post new questions concerning more specific problems.
I like the jhipster entity generator.
I often get to change my model and regen all entities.
I wish to keep the generated stuff and override for my needs.
On angular side, it is quite easy to create a new service extending the default entity service to do my stuff.
On java side, it is more complicated.
For example, I override src/main/java/xxx/web/rest/xxxResource.java with src/main/java/xxx/web/rest/xxxOverrideResource.java
I have to comment out @RestController in xxxResource.java. I tried to give it a different bean name from the overridden class, but that is not sufficient: @RestController("xxxResource")
In xxxOverrideResource.java, I have to change all @xxxMapping() annotations to different paths
In xxxOverrideResource.java, I have to change all method names
This allows me to keep the CRUD UI and API, and overload them using another mapping path.
Some code to make it more visual. Here is the generated xxxResource.java
/**
 * REST controller for managing WorldCommand.
 */
// Commented out to prevent a duplicated bean error.
// @RestController
@RequestMapping("/api")
public class WorldCommandResource {

    private final Logger log = LoggerFactory.getLogger(WorldCommandResource.class);

    private final WorldCommandService worldCommandService;

    public WorldCommandResource(WorldCommandService worldCommandService) {
        this.worldCommandService = worldCommandService;
    }

    @PutMapping("/world-commands")
    @Timed
    public ResponseEntity<WorldCommand> updateWorldCommand(@Valid @RequestBody WorldCommand worldCommand)
        throws URISyntaxException {
        log.debug("REST request to update WorldCommand : {}", worldCommand);
        ...
    }
}
Here is my overridden version, xxxOverrideResource.java:
/**
 * REST controller for managing WorldCommand.
 */
@RestController("WorldCommandOverrideResource")
@RequestMapping("/api")
public class WorldCommandOverrideResource extends WorldCommandResource {

    private final Logger log = LoggerFactory.getLogger(WorldCommandOverrideResource.class);

    private final WorldCommandOverrideService worldCommandService;

    public WorldCommandOverrideResource(WorldCommandOverrideService worldCommandService) {
        super(worldCommandService);
        log.warn("USING WorldCommandOverrideResource");
        this.worldCommandService = worldCommandService;
    }

    @PutMapping("/world-commands-override")
    @Timed
    public ResponseEntity<WorldCommand> updateWorldCommandOverride(@Valid @RequestBody WorldCommand worldCommand)
        throws URISyntaxException {
        throw new RuntimeException("WorldCommand updating not allowed");
    }
}
With xxxResource overridden, it is easy to override xxxService and xxxRepository by constructor injection.
I feel like I am overthinking it. As it is not an external component but code from a generator, maybe the aim is to use the tool to write less code and then make the changes you need directly.
Also, I fear this overriding architecture will prevent me from creating abstract controller if needed.
Do you think keeping the original generated code untouched is a good practice, or should I just make my changes in the generated class and be careful when regenerating an entity?
Do you know a better way to override a Spring controller ?
Your approach looks like the side-by-side approach described here: https://www.youtube.com/watch?v=9WVpwIUEty0
I often found that the generated REST API is only useful for managing data in a backoffice and I usually write a complete separate API with different endpoints, authorizations and DTOs that is consumed by mobile or end-users. So I don't see much value in overriding REST controllers, after all they are supposed to be quite thin with as little business logic as possible.
You must also consider how long you want to keep this compatibility with generated code. As your app grows in complexity you might want to refactor your code and organize it around feature packages rather than by technical packages (repository, rest controllers, services, ...). For many reasons, sooner or later the way the generated code is setup will get in your way, so I would not put too much effort into this compatibility goal that has no real business value especially when you know that the yearly released major version may break it because of changes in the generator itself or more likely because of changes in underlying frameworks.
Hi, I have what may be a common problem that I think cannot be entirely solved by Autofac or any IoC container. It may be a design problem that I need some fresh input on.
I have the classic MVC web solution with EF 6. It has been implemented in true DDD style with an anti-corruption layer, three bounded contexts, and cross-cutting concerns moved out to infrastructure projects. It has been a real pleasure to see all the pieces fall into place. We also added Commands for CUD operations in the Domain.
Now here is the problem. The customer wants a change log that tracks every entity property: when updates are done, we need to save the before and after values into the change log. We implemented that successfully in an ILoggerService that wraps a Microsoft test utility we use to detect changes. But I (my role is Software Architect) took the decision to decorate our generic repositories with a ChangeTrackerRepository that has a dependency on ILoggerService. This works fine. The decorator tracks the Add(…) and Modify(…) methods in our IRepository<TEntity>.
The problem is that we have custom repositories with custom queries like this:
public class CounterPartRepository : Repository<CounterPart>, ICounterPartRepository
{
public CounterPartRepository(ManagementDbContext unitOfWork)
: base(unitOfWork)
{}
public CounterPart GetAggregate(Guid id)
{
return GetSet().CompleteAggregate().SingleOrDefault(s => s.Id == id);
}
public void DeleteCounterPartAddress(CounterPartAddress address)
{
RemoveChild(address);
}
public void DeleteCounterPartContact(CounterPartContact contact)
{
RemoveChild(contact);
}
}
We also have simple repositories that just close the generic repository and get the proper EF bounded context injected (Unit of Work pattern):
public class AccrualPeriodTypeRepository : Repository<AccrualPeriodType>, IAccrualPeriodTypeRepository
{
public AccrualPeriodTypeRepository(ManagementDbContext unitOfWork)
: base(unitOfWork)
{
}
}
When decorating AccrualPeriodTypeRepository with Autofac through a generic decorator, we can easily inject that repo into a command handler like this:
public AddAccrualPeriodCommandHandler(IRepository<AccrualPeriod> accrualRepository)
This works fine.
But how do we also decorate CounterPartRepository?
I have gone through several solutions in my head and they all end up with a dead-end.
1) Manually decorating every custom repository generates so many custom decorators that it would be nearly unmaintainable.
2) Decorate the closed generic Repository with the extended custom queries. This smells bad; should those queries really be part of that repository?
3) If we consider 2… maybe skip our Services and rely only on IRepository for operating on our Aggregate Roots, plus IQueryHandler (see article https://cuttingedge.it/blogs/steven/pivot/entry.php?id=92)
I need some fresh input on what I think is a common problem: decorating your repositories when you have both custom closed repositories and simple closed repositories, both inheriting from the same generic Repository.
Have you considered decorating command handlers instead of decorating repositories?
Repos are too low level, and it is not their responsibility to know what should be logged and how.
What about the following:
1) You have your command handlers in a way:
public class DeleteCounterPartAddressHandler : IHandle<DeleteCounterPartAddressCommand>
{
    // this might be set by a DI container, or passed to a constructor
    public ICounterPartRepository Repository { get; set; }

    public void Handle(DeleteCounterPartAddressCommand command)
    {
        // in DDD you always want to read an aggregate
        // and save the aggregate as a whole
        var counterpart = Repository.GetById(command.CounterPartId);
        counterpart.DeleteAddress(command.AddressId);
        Repository.Save(counterpart);
    }
}
2) Now you can simply use Chain Of Responsibility pattern to "decorate" your handlers with logging, transactions, whatever:
public class LoggingHandler<T> : IHandler<T> {
private readonly IHandler<T> _innerHandler;
public LoggingHandler(IHandler<T> innerHandler) {
_innerHandler = innerHandler;
}
public void Handle(T command)
{
//Obviously you do it properly, but you get the idea
_log.Info("Before");
_innerHandler.Handle(command);
_log.Info("After");
}
}
Now you have just one piece of code responsible for logging and you can compose it with any command handler, so if you ever want to log a particular command then you just "wrap" it with the logging handler, and it is still your IHandle<T> so the rest of the system is not impacted.
And you can do it with other concerns too (threading, queueing, transactions, multiplexing, routing, etc.) without messing around and plumbing this stuff here and there.
Concerns are very well separated this way.
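For example, the chain can be composed by hand at the application root (a sketch; the command and handler names follow the earlier examples, and the constructor arguments are hypothetical). In practice a container does this wiring; Autofac in particular can wrap every IHandle<T> with a generic decorator registration so you never compose chains manually.

```csharp
// Manual composition sketch; a DI container would normally build this chain.
IHandle<DeleteCounterPartAddressCommand> handler =
    new LoggingHandler<DeleteCounterPartAddressCommand>(
        new DeleteCounterPartAddressHandler { Repository = counterPartRepository });

// The caller only sees IHandle<T>; it cannot tell logging is happening.
handler.Handle(new DeleteCounterPartAddressCommand(counterPartId, addressId));
```

Stacking more decorators (transactions, retries) is just more nesting around the same inner handler.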
It is also much better (to me) because you log on a real operation (business) level, rather than on low-level repository.
Hope it helps.
P.S. In DDD you really want your repositories to only expose aggregate-level methods, because Aggregates are supposed to take care of their invariants (and nothing else: no services, no repositories), and because an Aggregate represents a transaction boundary.
Really, it is up to the Repository how to get the Aggregate from persistent storage and how to persist it back; from the outside it should look like you ask someone for an object and it gives you an object you can call behaviors on.
So normally you would only get an aggregate from the repository, call its behavior(s), and then save it back. Which really means your repositories would mostly have GetById and Save methods, not internals like "UpdateThatPartOfAnAggregate".
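In code, an aggregate-level repository stays that small. A generic sketch (interface and type names are illustrative, not from the question's codebase):

```csharp
// Aggregate-level repository: fetch the whole aggregate, save the whole
// aggregate. No partial-update methods, because the aggregate itself
// enforces its invariants and defines the transaction boundary.
public interface IAggregateRepository<TAggregate>
{
    TAggregate GetById(Guid id);
    void Save(TAggregate aggregate);
}
```

With this shape, a single generic decorator covers change tracking for every aggregate, which sidesteps the original problem of decorating each custom repository.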
The OWIN AppBuilder "UseStatic" bits deliver files from a local filesystem which is handy for some situations, but I would like to instead have it deliver content from an in-memory IDictionary that I pre-populated at application startup. Can anyone point me in a good direction to investigate overriding/changing the behavior?
Thanks.
You'll want to implement your own IFileSystem and IFileInfo classes. An example of this can be seen in the dev branch on CodePlex under src/Microsoft.Owin.FileSystems/EmbeddedResourceFileSystem.cs. This was a community contribution based on this project.
Once implemented, you would use it something like this:
public class InMemoryFileSystem : IFileSystem
{
    private readonly IDictionary<string, object> _files;

    public InMemoryFileSystem(IDictionary<string, object> files)
    {
        _files = files;
    }

    // ... IFileSystem members go here ...
}

var files = LoadFilesIntoDictionary();
app.UseStaticFiles(options => {
    options.WithFileSystem(new InMemoryFileSystem(files));
});
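A fuller sketch of the two classes follows. It is based on the IFileSystem/IFileInfo shapes in Microsoft.Owin.FileSystems at the time; verify the exact member signatures against the source on CodePlex before relying on them. It also assumes the dictionary values are the raw file bytes.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public class InMemoryFileSystem : IFileSystem
{
    private readonly IDictionary<string, byte[]> _files;

    public InMemoryFileSystem(IDictionary<string, byte[]> files)
    {
        _files = files;
    }

    public bool TryGetFileInfo(string subpath, out IFileInfo fileInfo)
    {
        byte[] content;
        if (_files.TryGetValue(subpath, out content))
        {
            fileInfo = new InMemoryFileInfo(subpath, content);
            return true;
        }
        fileInfo = null;
        return false;
    }

    public bool TryGetDirectoryContents(string subpath, out IEnumerable<IFileInfo> contents)
    {
        // No directory browsing for the in-memory store.
        contents = null;
        return false;
    }
}

public class InMemoryFileInfo : IFileInfo
{
    private readonly byte[] _content;

    public InMemoryFileInfo(string path, byte[] content)
    {
        _content = content;
        Name = path.Substring(path.LastIndexOf('/') + 1);
        LastModified = DateTime.UtcNow;
    }

    public long Length { get { return _content.Length; } }
    public string PhysicalPath { get { return null; } }  // no file on disk
    public string Name { get; private set; }
    public DateTime LastModified { get; private set; }
    public bool IsDirectory { get { return false; } }

    public Stream CreateReadStream()
    {
        return new MemoryStream(_content, writable: false);
    }
}
```

Returning a fresh MemoryStream per request keeps the cached bytes immutable while letting the static-file middleware read concurrently.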
I am new to codedUI and for a start I am reading a lot about what should be a best practice.
I have read that if you are working with a complex application it is advisable to use multiple UIMaps. Although I cannot see the benefit at the moment, I have created a small project with two UIMaps.
In the initial setup (with the initial UIMap and CodedUITest1) I can choose whether to use the Test Builder or an existing action recording for generating code. Whatever I do, the code 'goes' to the initial UIMap. When I create a new UIMap, the Test Builder starts and I can record some actions; when I save them, they are added to the newly created UIMap, in my case called AdvanceSettings. But I cannot generate code from an existing recording. Why is that? I would like to create automated test cases based on manual test cases with recordings.
Below is my code. I am using the CodedUITest1 class for both UIMaps. Should I use a new class for every UIMap?
If you have some comments on code please do write some.
As I see it, multiple UIMaps are used with complex applications so you can change things more easily: every GUI area has one UIMap, so if something changes in that GUI you only edit that UIMap. But if you have one UIMap and use proper naming, you can also easily replace or re-record a particular method. So I am missing the big picture with multiple UIMaps.
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Windows.Input;
using System.Windows.Forms;
using System.Drawing;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.VisualStudio.TestTools.UITest.Extension;
using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;
using EAEP.AdvanceSettingsClasses;
namespace EAEP
{
/// <summary>
/// Summary description for CodedUITest1
/// </summary>
[CodedUITest]
public class CodedUITest1
{
public CodedUITest1()
{
}
[TestInitialize]
public void InitializationForTest()
{
this.UIMap.AppLaunch();
}
[TestMethod]
public void MainGUIMethod()
{
// To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
// For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
this.UIMap.AssertMethod1();
this.UIMap.RestoreDefaults();
this.UIMap.AssertMethod1();
}
[TestMethod]
public void AdvanceSettignsWindowMethod()
{
AdvanceSettings advanceSettings = new AdvanceSettings();
advanceSettings.MoreSettingsReopenedAfterCancel();
this.UIMap.AssertVerificationAfterCancel();
advanceSettings.MoreSettingsReopenedAfterOK();
this.UIMap.AssertVerificationAfterOK();
}
#region Additional test attributes
// You can use the following additional attributes as you write your tests:
////Use TestInitialize to run code before running each test
//[TestInitialize()]
//public void MyTestInitialize()
//{
// // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
// // For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
//}
////Use TestCleanup to run code after each test has run
//[TestCleanup()]
//public void MyTestCleanup()
//{
// // To generate code for this test, select "Generate Code for Coded UI Test" from the shortcut menu and select one of the menu items.
// // For more information on generated code, see http://go.microsoft.com/fwlink/?LinkId=179463
//}
#endregion
public TestContext TestContext
{
get
{
return testContextInstance;
}
set
{
testContextInstance = value;
}
}
private TestContext testContextInstance;
public UIMap UIMap
{
get
{
if ((this.map == null))
{
this.map = new UIMap();
}
return this.map;
}
}
private UIMap map;
}
}
You can't use multiple UIMaps with the "generate code from existing recording" feature; that feature always generates code into a map called UIMap. I've explained a bit about these limitations in a blog post I did about integrating SpecFlow with Coded UI tests:
http://rburnham.wordpress.com/2011/05/30/record-your-coded-ui-test-methods/
If you want to use multiple UIMaps for better maintainability, you have to use this method:
Record each action individually by right-clicking the UIMap and selecting Coded UI Test Builder.
Manually wire the test up to the actions by creating a blank Coded UI Test, updating the UIMap it references, and then calling the required actions from the test methods to perform the test.
It's a limitation that makes what is good about the MTM integration pointless.
Having multiple UIMaps speeds up test execution. Additionally, it makes edits, assertions, properties and settings a lot easier.
To create tests for the second UIMap you just right-click on it and press "Edit With Coded UI Test Builder".
Regarding "But I can not generate code from existing recording. Why is that?": I have no clue. What do you mean by "can not"?