We're using Jest in our product for unit and integration tests.
At the moment, we're looking for a way to measure the duration of scenarios (unit, API and E2E level) between two CI/CD builds, to see whether code changes lead to a performance increase or decrease.
There are existing solutions like JMeter and Gatling, but they don't feel like the right fit. On the one hand, we would have to rewrite tests that we've already written in Jest. On the other hand, those tools are focused on load, scalability, breakpoint, stress testing, etc., which feels over-dimensioned for our use case. (We're on a completely serverless architecture, and we only want to know whether code changes have an impact on performance.)
So I was wondering whether it might simply be possible to reuse the Jest tests we've already written to also measure performance in some way and compare it between CI/CD builds - roughly along the lines of the sketch below.
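For illustration, a minimal custom Jest reporter could dump per-test durations to a JSON file that a CI job archives and diffs against the previous build. This is only a sketch (the file name is made up, and single-run timings are noisy, so in practice you'd probably average several runs and only alert above some threshold):

```javascript
// perf-reporter.js - a minimal sketch of a custom Jest reporter.
// Enable it in jest.config.js with:
//   reporters: ['default', '<rootDir>/perf-reporter.js']
const fs = require('fs');

class PerfReporter {
  // Jest calls this once after all test files have run
  onRunComplete(contexts, results) {
    const durations = {};
    for (const suite of results.testResults) {
      for (const test of suite.testResults) {
        // duration is in milliseconds; it can be null for skipped tests
        if (test.duration != null) {
          durations[test.fullName] = test.duration;
        }
      }
    }
    // a CI job could archive this file and diff it against the previous build's copy
    fs.writeFileSync('perf-durations.json', JSON.stringify(durations, null, 2));
  }
}

module.exports = PerfReporter;
```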
Do you know of a library or tool that could help with that? Or do you perhaps have a completely different opinion on how to approach this?
GitLab provides an option to mute/quarantine flaky tests, as mentioned in the GitLab documentation.
I understand it's not a best practice, but I do want to explore it.
Some of the examples I was able to find are here - Quarantine tests that are very flaky - but they are in the Ruby language.
Please help me understand whether this can be done in languages like C# (.NET) or Java. An example would be highly appreciated.
As per the documentation:
If the test cannot be fixed in a timely fashion, there is an impact on the productivity of all the developers, so it should be placed in quarantine by assigning the :quarantine metadata.
I am not able to figure out how we can do that in tests using NUnit (.NET) or JUnit (Java).
The Flaky tests document is in the contributor section and must not be read as product documentation intended for users of GitLab.
The page linked in the question describes the guidelines and procedures that GitLab's own developers follow when they encounter a flaky test in GitLab's own codebase.
A feature to mute/quarantine flaky tests does not appear to be currently planned for implementation.
You can upvote or comment on the issue opened for this here.
As a solution with existing resources, you can make use of allow_failure: on the relevant job in the pipeline.
In this case, you will have two jobs:
Job 1: non-flaky tests + allow_failure: false
Job 2: flaky tests + allow_failure: true
example 1
example 2
This way, the tests marked as flaky are handled only by job 2, which is allowed to fail and never blocks the pipeline.
As for identifying/filtering tests at the job level: you can filter at the test level with a category attribute, or read the list from a text file/data source and filter based on that. A sketch of this setup follows below.
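For illustration, a minimal .gitlab-ci.yml sketch of that two-job split, assuming the flaky tests are tagged with NUnit's [Category("Quarantine")] attribute (in Java, JUnit 5's @Tag("quarantine") plus your build tool's tag filtering plays the same role); the job names here are made up:

```yaml
# .gitlab-ci.yml - sketch of splitting quarantined tests into their own job.
# The filter syntax is for `dotnet test` with the NUnit adapter, where
# [Category("Quarantine")] maps to the TestCategory filter property.
stable-tests:
  stage: test
  script:
    - dotnet test --filter "TestCategory!=Quarantine"
  allow_failure: false   # failures here block the pipeline

quarantined-tests:
  stage: test
  script:
    - dotnet test --filter "TestCategory=Quarantine"
  allow_failure: true    # flaky tests may fail without blocking the pipeline
```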
Karate recently released a GUI automation feature. I've always liked the Karate way of writing scripts, and I'm thinking of implementing Karate's unified framework for REST + GUI at larger scale in my org.
Problem statement: the existing teams use a purely Cucumber-based framework and have automated a significant number of tests. To migrate to the Karate framework, we would need to rewrite the automated tests to Karate's standards, which would take a huge effort given the volume of work. I am looking for the best possible way to migrate with minimum effort.
Is there a way I can add Karate to my existing Cucumber-based framework, so that I can keep the existing tests running while writing new tests following Karate guidelines?
It should be possible (in theory) to mix Karate and Cucumber in the same Maven (or Gradle) project. Unfortunately, I don't know of many people who have done this.
Please refer to this discussion for more: https://github.com/intuit/karate/issues/444#issuecomment-419852761
Sorry I can't provide a clearer answer; you may need to experiment a bit.
I was wondering if there are any tools worth trying for testing .NET Core WebAPI performance under high load.
In the past I used Apache JMeter, but configuring it alongside TeamCity and .NET Core builds is a bit of a pain.
I am looking for something that can deliver statistics, so that an automated test run gives me information on whether recent changes have decreased performance, etc.
I also did a quick Google search; Visual Studio has something on board, but it requires the Enterprise edition, and I am not convinced that tool is good enough.
Thank you
Take a look at https://github.com/dotnet/BenchmarkDotNet - it's a widely used and respected library.
If your goal is to test a WebAPI, you can create an HttpClient, use it to make calls to the API, wrap that in a benchmark, and run it on TeamCity as a simple dotnet step (see the sketch below).
The tool can also create HTML reports, which can then be added to TeamCity as a custom tab.
If that's not enough, you can extract performance metrics (e.g. from PlainExporter), integrate them with TeamCity's built-in Service Messages, and create Custom Charts from those stats, so TeamCity will help you track performance trends.
You could even act on those measures and, for example, fail the build if there is a significant performance degradation.
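For instance, a minimal sketch of that setup, assuming the BenchmarkDotNet NuGet package is installed and the API under test is reachable locally (the base address and endpoint are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ApiBenchmark
{
    private HttpClient _client;

    [GlobalSetup]
    public void Setup()
    {
        // point this at the locally hosted API under test (placeholder URL)
        _client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };
    }

    [Benchmark]
    public Task<string> GetOrders() =>
        _client.GetStringAsync("/api/orders"); // placeholder endpoint

    // run with `dotnet run -c Release` as a plain dotnet step on TeamCity;
    // BenchmarkDotNet writes its reports (including HTML) to BenchmarkDotNet.Artifacts
    public static void Main() => BenchmarkRunner.Run<ApiBenchmark>();
}
```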
I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or will I always have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract-first design to be a more useful way of thinking: what messages get passed around at my interfaces). And you can also follow the 'build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric side of these design and development approaches.
Am I right that you would like to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and makes the test pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can really only be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but you would never want to develop that code directly.
There are two tools, BizUnit and BizUnit Extensions, that give you some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the orchestration design surface will largely just do their thing as one opaque unit of execution. And orchestrations, pipelines, maps, etc. - all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you need this kind of functionality, if I may say so myself (I make heavy use of it in my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
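As a rough sketch of what that object model looks like (adapted from BizUnit samples; the exact namespaces and step names vary between BizUnit versions, so treat this as indicative rather than exact):

```csharp
// Sketch only - namespaces and step names are assumptions based on BizUnit
// samples and may differ in your BizUnit version.
using BizUnit;                 // core test runner
using BizUnit.TestSteps.Time;  // DelayStep (assumed location)
using NUnit.Framework;

[TestFixture]
public class OrderProcessTests
{
    [Test]
    public void SimpleDelayTest()
    {
        // build the test case entirely in C# instead of XML
        var testCase = new TestCase { Name = "Simple delay test" };

        // add execution steps through the object model; a real test would use
        // steps like file drops and database checks instead of a plain delay
        testCase.ExecutionSteps.Add(new DelayStep { DelayMilliSeconds = 500 });

        // execute - a failure in any step fails the NUnit test
        new BizUnit(testCase).RunTest();
    }
}
```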
The advantages of the XML-based system are that it is easier to generate test steps, and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful: my team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned a (dynamically generated) ID, we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, IntelliSense should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past; I'm hoping to get an XSD out that will aid IntelliSense. There were also some snippets for an old version of BizUnit, but those haven't been updated - maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification- or behaviour-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that point. I wish there were tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps - I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This makes the tests easier to write (with VS IntelliSense), more flexible, and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also the advantage of still using BizUnit instead of forking from it.