In search of the mysterious skipped Jest test

We have the following output at the end of our Jest test run:
Test Suites: 273 passed, 273 total
Tests: 1 skipped, 1923 passed, 1924 total
Snapshots: 61 passed, 61 total
Time: 38.885 s, estimated 39 s
You see there is one skipped test.
When I search my test files for it.skip, test.skip, or just skip, I find nothing.
I also tried outputting the test run as JSON via:
jest --json --outputFile=testrun.json
At the top of the file I find this information:
{
  "numFailedTestSuites": 0,
  "numFailedTests": 0,
  "numPassedTestSuites": 273,
  "numPassedTests": 1923,
  "numPendingTestSuites": 0,
  "numPendingTests": 1,
  "numRuntimeErrorTestSuites": 0,
  "numTodoTests": 0,
  "numTotalTestSuites": 273,
  "numTotalTests": 1924,
  ...
}
So it looks like numPendingTests is the one pointing to the skipped test. But when I search the output file, again, there is no trace of a skipped test. In fact, I did a regex search for "status": "[a-z] and the only status to be found is passed.
Short of looking through 270+ test suites, how else could a skipped test hide from me? Is there any way to find it?

As johnrsharp mentioned in a comment, another way to skip tests in Jest is to prefix it (or test) with x, so when scanning the files for skipped tests you also need to look out for xit and xtest.
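For reference, here is a quick sketch (not from the thread) of the spellings Jest treats as skipped; note that a skipped describe block also marks every test inside it as skipped, which is another place a lone pending test can hide:

// All of these count as "skipped" in the summary:
it.skip('skipped via it.skip', () => {});
test.skip('skipped via test.skip', () => {});
xit('skipped via xit', () => {});
xtest('skipped via xtest', () => {});

// Skipping a whole block skips every test inside it:
xdescribe('skipped suite', () => {
    it('never runs', () => {});
});

So a search along the lines of skip|xit|xtest|xdescribe should cover the usual suspects.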

Related

Run only tagged testcases from testrunner

I've created two tags in ReadyAPI: "Test" and "Stage". I've tagged two testcases with "Test" and one with "Stage".
When I launch testrunner, I would like to run only the test case(s) tagged with "Stage".
I add the following parameters:
"C:\Program Files\SmartBear\ReadyAPI-3.20.2\bin\testrunner.bat" -r -a -j -f${WORKSPACE} "-RJUnit-Style HTML Report" -FXML "-EDefault environment" "-TTestCase Stage" C:\TestOfTags-readyapi-project.xml -Dreadyapi.skip.endpoints.checks=true
But my tagged test case doesn't run.
ReadyAPI 3.20.2 TestCaseRunner Summary
Time Taken: 753ms
Total TestSuites: 1
Total TestCases: 0 (0 failed)
Total TestSteps: 0
Total Request Assertions: 0
Total Failed Assertions: 0
Total Exported Results: 0
What am I missing here?
So I figured it out myself:
Create a tag, say with name = exampleTest.
Tag the test cases that you want to run with "exampleTest".
In testrunner, simply add: "-TTestCase exampleTest"
Done.
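Put together with the command from the question, the call would look roughly like this (the project path, report options, and ${WORKSPACE} output folder are simply copied from above; exampleTest is the hypothetical tag name from the steps):

"C:\Program Files\SmartBear\ReadyAPI-3.20.2\bin\testrunner.bat" -r -a -j -f${WORKSPACE} "-RJUnit-Style HTML Report" -FXML "-EDefault environment" "-TTestCase exampleTest" C:\TestOfTags-readyapi-project.xml -Dreadyapi.skip.endpoints.checks=true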

Error generating the report: org.apache.jmeter.report.core.SampleException: Could not read metadata

I'm trying to run JMeter load-testing scripts in non-GUI mode to generate an HTML report with the command below:
./jmeter.sh -n -t "/home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_plans/SERVICE_BOOKING.jmx" -l /home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_results/testresults.csv -e -o /home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_results/HTMLReports
It was working fine, but now I'm not getting results; instead I get the output below:
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Tidying up ... # Fri Apr 01 11:22:40 IST 2022 (1648792360414)
Error generating the report: org.apache.jmeter.report.core.SampleException: Could not read metadata !
... end of run
I have tried to generate the HTML report in JMeter non-GUI mode.
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
It means that JMeter didn't execute any Sampler: your testresults.csv is empty and you don't have any data to generate the dashboard from.
The reason for the test failure can normally be figured out from the jmeter.log file; the most common mistakes are:
the file referenced in the CSV Data Set Config doesn't exist
the JMeter Plugins used in the test are not installed for this particular JMeter instance
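As a rough way to narrow it down (the paths below are placeholders, adjust them to your install), grep the log for problems; and once testresults.csv actually contains samples, the dashboard can be regenerated from it without re-running the test:

# look for errors/exceptions from the failed run
grep -iE "error|exception" jmeter.log

# rebuild the HTML dashboard from an existing, non-empty results file
./jmeter.sh -g /path/to/testresults.csv -o /path/to/empty-report-folder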

How can I make C8 output total code coverage %?

I am able to generate code coverage reports using c8.
But how do I output the total coverage % every time? Currently, it outputs the total coverage % only if it is below the threshold. I want to see the % on every run.
Config:
{
  "include": "app",
  "check-coverage": true,
  "lines": 99,
  "reporter": ["text", "cobertura"]
}
I just needed to add "text-summary" as one of the reporters.
"reporter": ["text","text-summary","cobertura"]

Artillery.io: How to generate test report for each Scenario?

Artillery: How to run the scenarios sequentially and also display the results of each scenario in the same file?
I'm currently writing a Node.js test with artillery.io to compare the performance of two endpoints that I implemented. I defined two scenarios and I would like to get the results of each in the same report file.
The execution of the tests is not sequential, which means that at the end of the test I get an already-combined result, and it is impossible to know the performance of each endpoint individually.
config:
  target: "http://localhost:8080/api/v1"
  plugins:
    expect: {}
    metrics-by-endpoint: {}
  phases:
    - duration: 60
      arrivalRate: 2
  environments:
    dev:
      target: "https://backend.com/api/v1"
      phases:
        - duration: 60
          arrivalRate: 2
scenarios:
  - name: "Nashhorn"
    flow:
      - post:
          url: "/casting/nashhorn"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
  - name: "Nodejs"
    flow:
      - post:
          url: "/casting/nodejs"
          auth:
            user: user1
            pass: user1
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
              options:
                filename: "casting_dataset"
                contentType: "application/json"
          expect:
            statusCode: 200
          capture:
            regexp: '[^]*'
            as: 'result'
      - log: 'result= {{result}}'
How to run the scenarios sequentially and also display the results of each scenario in the same file?
Thank you in advance for your answers.
I think you are missing the weight parameter; it defines the probability of executing a scenario. If you give the first scenario a weight of 1 and the second the same value, both will have the same probability of being executed (50%).
If you give the first scenario a weight of 3 and the second a weight of 1, the second scenario will have a 25% probability of being executed while the first one will have 75%.
Combined with the arrivalRate parameter and a rampTo value of 2, this will cause two scenarios to be executed every second; if you give both scenarios a weight of 1, they will be executed at the same time.
Look for scenario weights in the documentation:
scenarios:
  - flow:
      - log: Scenario for GET requests
      - get:
          url: /v1/url_test_1
    name: Scenario for GET requests
    weight: 1
  - flow:
      - log: Scenario for POST requests
      - post:
          json: {}
          url: /v1/url_test_2
    name: Scenario for POST
    weight: 1
I hope this helps you.
To my knowledge, there isn't a good way to do this with the existing Artillery logic.
Using this test script:
scenarios:
  - name: "test 1"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
  - name: "test 2"
    flow:
      - post:
          url: "/postman-echo.com/get?test=123"
    weight: 1
... etc...
Started phase 0 (equal weight), duration: 1s # 13:21:54(-0500) 2021-01-06
Report # 13:21:55(-0500) 2021-01-06
Elapsed time: 1 second
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Codes:
404: 20
All virtual users finished
Summary report # 13:21:55(-0500) 2021-01-06
Scenarios launched: 20
Scenarios completed: 20
Requests completed: 20
Mean response/sec: 14.18
Response time (msec):
min: 117.2
max: 146.1
median: 128.6
p95: 144.5
p99: 146.1
Scenario counts:
test 7: 4 (20%)
test 5: 2 (10%)
test 3: 1 (5%)
test 1: 4 (20%)
test 9: 2 (10%)
test 8: 3 (15%)
test 10: 2 (10%)
test 4: 1 (5%)
test 6: 1 (5%)
Codes:
404: 20
So basically you can see that they are weighted equally but are not running equally. I think something would need to be added to Artillery's code itself. Happy to be wrong here.
You can use the per endpoint metrics plugin to give you the results per endpoint instead of aggregated.
https://artillery.io/docs/guides/plugins/plugin-metrics-by-endpoint.html
I see you already have this in your config, but it cannot be working if it is not giving you what you need. Did you install it as well as add it to the config?
npm install artillery-plugin-metrics-by-endpoint
In terms of running sequentially, I'm not sure why you would want to, but assuming you do, you just need to define each POST as part of the same scenario instead of two different scenarios. That way the second step will only execute after the first step has responded. I believe the plugin reports per endpoint, not per scenario, so it will still give you the report you want.
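A minimal sketch of that idea, reusing the two endpoints from the config in the question (auth, capture, and the environments block are left out for brevity and assumed unchanged):

scenarios:
  - name: "Nashhorn then Nodejs"
    flow:
      - post:
          url: "/casting/nashhorn"
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
          expect:
            statusCode: 200
      - post:
          url: "/casting/nodejs"
          json:
            body:
              fromFile: "./casting-dataset-01-as-input.json"
          expect:
            statusCode: 200

With metrics-by-endpoint enabled, the report still breaks the numbers down per URL, so the two endpoints remain distinguishable in the same output file.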

Do hooks in Mocha run regardless of test failures?

Just like the title says. Do before/after/beforeEach/afterEach hooks in Mocha work even if tests fail?
Yes, they do run even if a test fails.
However, if you run Mocha with the bail option, the teardown hooks will not run.
I just did a quick check with existing test suites here.
The code:
// Mocha's TDD interface: setup/teardown are the equivalents of
// beforeEach/afterEach in the default BDD interface.
setup(function() {
    console.log("1");
});

teardown(function() {
    console.log("2");
});
The executions:
D:\Dev\JS\toposort>mocha
Toposort
◦ should sort correctly: 1
√ should sort correctly
2
◦ should find cyclic dependencies: 1
1) should find cyclic dependencies
2
◦ #2 - should add the item if an empty array of dependencies is passed: 1
√ #2 - should add the item if an empty array of dependencies is passed
2
2 passing (15 ms)
1 failing
1) Toposort should find cyclic dependencies:
AssertionError: expected [Function] to not throw 'Error' but [Error: Cyclic dependency found. '3' is dependent of itself.] was thrown
D:\Dev\JS\toposort>mocha --bail
Toposort
◦ should sort correctly: 1
√ should sort correctly
2
◦ should find cyclic dependencies: 1
1) should find cyclic dependencies
1 passing (14 ms)
1 failing
1) Toposort should find cyclic dependencies:
AssertionError: expected [Function] to not throw 'Error' but [Error: Cyclic dependency found. '3' is dependent of itself.] was thrown
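The check above uses Mocha's TDD interface (setup/teardown); the same behaviour can be seen with the BDD hooks from the question. A minimal sketch, with made-up file and test names:

// hooks-demo.spec.js -- run with: mocha hooks-demo.spec.js
const assert = require('assert');

describe('hooks still run when a test fails', function () {
    before(function () { console.log('before'); });
    beforeEach(function () { console.log('beforeEach'); });
    afterEach(function () { console.log('afterEach'); });
    after(function () { console.log('after'); });

    it('passes', function () {
        assert.strictEqual(1, 1);
    });

    it('fails on purpose', function () {
        assert.strictEqual(1, 2); // afterEach and after still run afterwards
    });
});

As in the second run above, adding --bail makes Mocha stop at the first failure, and the teardown-style hook output for it disappears.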
