We set up JUnit test reporting in GitLab and can see the results in the pipeline.
Is it possible in any way (maybe via API?) to extract a statistic of how often a test fails? For example:
TestX 0/20 times successful
TestY 17/20 times successful
TestZ 19/20 times successful
...
Background: we have a large number of integration tests, and some of them have timing issues that cause them to fail intermittently. I would like to identify the tests that fail most often.
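To illustrate the kind of API-based approach I have in mind: as far as I know, GitLab exposes the parsed JUnit results of a pipeline via GET /projects/:id/pipelines/:pipeline_id/test_report, so a small script could walk recent pipelines and tally the status of each test case. A rough sketch (TypeScript, Node 18+); the instance URL, project ID and branch name are placeholders:

// Tally per-test success rates from GitLab's pipeline test report API.
const GITLAB_URL = "https://gitlab.example.com/api/v4"; // placeholder instance
const PROJECT_ID = "123";                                // placeholder project ID
const TOKEN = process.env.GITLAB_TOKEN!;                 // token with read_api scope

async function api<T>(path: string): Promise<T> {
  const res = await fetch(`${GITLAB_URL}${path}`, { headers: { "PRIVATE-TOKEN": TOKEN } });
  if (!res.ok) throw new Error(`${res.status} ${path}`);
  return res.json() as Promise<T>;
}

interface Pipeline { id: number }
interface TestReport {
  test_suites: { test_cases: { classname: string; name: string; status: string }[] }[];
}

async function main() {
  // Last 20 pipelines on the default branch (placeholder: main).
  const pipelines = await api<Pipeline[]>(`/projects/${PROJECT_ID}/pipelines?per_page=20&ref=main`);

  const stats = new Map<string, { pass: number; total: number }>();
  for (const p of pipelines) {
    const report = await api<TestReport>(`/projects/${PROJECT_ID}/pipelines/${p.id}/test_report`);
    for (const suite of report.test_suites) {
      for (const tc of suite.test_cases) {
        const key = `${tc.classname}.${tc.name}`;
        const s = stats.get(key) ?? { pass: 0, total: 0 };
        s.total += 1;
        if (tc.status === "success") s.pass += 1;
        stats.set(key, s);
      }
    }
  }

  // Least reliable tests first.
  [...stats.entries()]
    .sort((a, b) => a[1].pass / a[1].total - b[1].pass / b[1].total)
    .forEach(([name, s]) => console.log(`${name} ${s.pass}/${s.total} times successful`));
}

main().catch(console.error);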
I have some performance test results printed in the GitLab job console, like below:
Actual ResultSet Total Records [260]!
Performance Test Successfully Completed! Total time [1354108]
How can I generate a performance report from this output and show it in the GitLab pipeline?
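The only idea I have had so far is to capture that console output into a file and convert it into something GitLab can display, for example an OpenMetrics file attached as a metrics report (if that report type is available on our plan). A rough sketch in TypeScript, assuming the test command's output has been tee'd into perf.log:

// Parse the performance numbers out of the captured job output and write them
// as simple "<name> <value>" pairs that a GitLab metrics report can pick up.
import { readFileSync, writeFileSync } from "node:fs";

const log = readFileSync("perf.log", "utf8");

const records = log.match(/Actual ResultSet Total Records \[(\d+)\]/);
const totalTime = log.match(/Performance Test Successfully Completed! Total time \[(\d+)\]/);

const lines: string[] = [];
if (records) lines.push(`resultset_total_records ${records[1]}`);
if (totalTime) lines.push(`performance_test_total_time ${totalTime[1]}`);

writeFileSync("metrics.txt", lines.join("\n") + "\n");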
Let's say we have the following files:
- component.tsx
- component.test.tsx (1 behavioral test)
- component_snapshot.test.tsx (1 snapshot test)
Let's say component.test.tsx provides 50% code coverage of component.tsx, while component_snapshot.test.tsx provides 100% code coverage.
Is it possible to configure Jest to run both test files to see if both sets of tests pass, but exclude the snapshot testing from the overall test coverage report? In this scenario, the final coverage report would show component.tsx as having 50% test coverage, but with 2 tests passing.
I can achieve parts of this behavior with individual commands:
react-scripts test -> 2 tests are run, but no coverage report
react-scripts test --coverage --testMatch='**/__tests__/**/*.+(ts|tsx|js)' --testMatch='**/?(*.)+(spec|test).+(ts|tsx|js)' --testMatch='!**/*snapshot*' -> 1 test is run; snapshot tests are excluded, so they don't run at all and therefore don't contribute coverage
I'm hoping for a configuration that will run all tests (including snapshots), but prevent snapshots from having an effect on the coverage report.
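If Jest's coverage options turn out to be global-only (which is my impression so far), the closest workaround I can think of is a two-pass run: behavioral tests with coverage, snapshot tests without. A rough sketch calling jest directly rather than react-scripts; the pattern values follow the file names from the example above:

// Two-pass sketch: coverage only reflects the non-snapshot tests, but the
// snapshot tests still run and still fail the build if they break.
import { spawnSync } from "node:child_process";

function run(args: string[]) {
  const result = spawnSync("npx", ["jest", ...args], { stdio: "inherit" });
  if (result.status !== 0) process.exit(result.status ?? 1);
}

// Pass 1: everything except *snapshot* test files, with coverage.
run(["--coverage", "--testPathIgnorePatterns", "snapshot"]);

// Pass 2: only the *snapshot* test files, without coverage.
run(["--testPathPattern", "snapshot"]);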
I am running automated Cucumber tests through Jenkins and can get the results of each execution; however, I need to accumulate the totals over time. Is there any approach or tool that would allow me to aggregate the results by day, week, month, ...?
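The closest I have come up with myself is archiving each build's Cucumber JSON report and aggregating it with a small script; weekly or monthly figures would then just be a matter of grouping the daily keys. A rough sketch in TypeScript, assuming a hypothetical reports/<YYYY-MM-DD>/cucumber.json layout:

// Aggregate archived Cucumber JSON reports into per-day scenario pass/fail counts.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface Step { result?: { status: string } }
interface Scenario { type?: string; steps?: Step[] }
interface Feature { elements?: Scenario[] }

const totals = new Map<string, { passed: number; failed: number }>();

for (const day of readdirSync("reports")) {
  const features: Feature[] = JSON.parse(readFileSync(join("reports", day, "cucumber.json"), "utf8"));
  const counts = totals.get(day) ?? { passed: 0, failed: 0 };

  for (const feature of features) {
    for (const scenario of feature.elements ?? []) {
      if (scenario.type && scenario.type !== "scenario") continue; // skip backgrounds
      // A scenario counts as passed only if every step passed.
      if ((scenario.steps ?? []).every(s => s.result?.status === "passed")) counts.passed++;
      else counts.failed++;
    }
  }
  totals.set(day, counts);
}

for (const [day, c] of [...totals.entries()].sort()) {
  console.log(`${day}: ${c.passed} passed, ${c.failed} failed`);
}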
I have about 6000 SpecFlow [version 1.9.0.77] tests, split across 10 categories [tags] with roughly 600 test cases per category, and a full run takes about an hour to complete. Currently I'm using NUnit 2.6.4 to execute the tests [sequentially] and generating the SpecFlow report from the NUnit test report XML.
I'm planning to move from the sequential execution model to parallel execution to reduce the test execution time. There are no static references, no feature or scenario context, and the test data are unique to each test case.
I explored NUnit 3.5 with SpecFlow 2.0 but couldn't find a way to run the tests in parallel by category or tag; every time, they run sequentially.
I followed the page http://www.specflow.org/documentation/Parallel-Execution/ to set up parallel execution, but it didn't work for me.
Any thoughts?
Two things come to mind that might be going wrong that you don't really mention:
To run in parallel, SpecFlow runs features in parallel. So it doesn't matter how many tags (categories) you use if they are all in the same feature file.
Another possible source of error is that running tests in parallel requires at least two processors on the machine running the tests. If a lot of other things are running and consuming processor power, the number of effectively available processors can drop to one, which amounts to sequential execution of the tests.
Can we run a load test irrespective of the time duration? For example, if I am running a test with 25 users, the test should stop automatically once all the users have finished their scripts. Please help with this.
Set the Use Test Iterations property to True and Test Iterations to 25.
The first property overrides the test duration property, and the second forces the load test to execute 25 iterations in total. Since you have 25 virtual users in your test, the iterations are distributed among the users, so each one will execute one iteration.
Check here for more details:
Load Test Run Setting Properties - Test Iterations Properties
Test iteration setting in loadtest using vs 2010