I'm having a weird issue where Jest deletes all snapshots when I attempt to update a single test.
If I have a directory containing:
07/03/2018 11:05 AM 131,285 p-Error.ts.snap
07/03/2018 11:05 AM 75,741 p-Lot.ts.snap
06/29/2018 03:39 PM 134,879 p-Split.ts.snap
and I run:
npm test -- -i -u -t="p-Split"
Here is the console output:
PASS src/__tests__/p-Split.ts (279.875s)
FAIL src/__tests__/p-Error.ts
● Test suite failed to run
Your test suite must contain at least one test.
at node_modules/jest-cli/build/test_scheduler.js:245:22
FAIL src/__tests__/p-Lot.ts
● Test suite failed to run
Your test suite must contain at least one test.
at node_modules/jest-cli/build/test_scheduler.js:245:22
and the directory now contains:
06/29/2018 03:40 PM 134,879 p-Split.ts.snap
Thanks, Joe
The snapshots are deleted because of the -u flag, which automatically removes unused/obsolete snapshots. Your output shows that p-Error.ts and p-Lot.ts no longer contain any tests, so Jest treats all of their snapshots as obsolete and removes them.
Why would you want to keep those snapshots if you removed everything from the test files p-Error and p-Lot? If you removed the p-Error and p-Lot test cases by accident, bring them back.
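If the goal is to update only the p-Split snapshots while leaving the other snapshot files untouched, one option is to restrict the run to that one file instead of filtering by test name across every suite. A minimal sketch, assuming npm test invokes the standard Jest CLI, where positional arguments are treated as test-file path patterns:
# run and update snapshots only for files matching "p-Split";
# p-Error and p-Lot are never loaded, so their snapshots are left alone
npm test -- -u p-Split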
I am using pytest with the --ignore and --junitxml options to generate a report of the test cases that are not ignored, but when my report is generated, it also takes the ignored tests into account.
I am using the following command
pytest --ignore=tests/test_a.py --junitxml=pytest_not_a.xml
I was able to resolve this using pytest.mark.httpapi. Rather than applying it to each test suite, I added a pytest_collection_modifyitems hook, which puts the marker on the tests at run time.
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # Mark every test collected from test_a.py so it can be
        # deselected later with -m "not httpapi".
        if 'test_a.py' in str(item.fspath):
            mark = getattr(pytest.mark, "httpapi")
            item.add_marker(mark)
            item.add_marker(pytest.mark.common)
Now the above command changes slightly to:
py.test -v -m "not httpapi" --junitxml=pytest_not_a.xml
Now the JUnit artifact in GitLab only includes the tests that were actually run and does not count the skipped tests in the success-rate calculation.
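If strict marker checking is enabled, the custom markers also need to be registered; a minimal sketch of a pytest.ini entry for the marker names used above (httpapi and common):
# pytest.ini
[pytest]
markers =
    httpapi: tests against the HTTP API (deselected when building pytest_not_a.xml)
    common: markers shared across suites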
I have some preconditions for the suite that should run once per suite, so I add them to Suite Setup.
I also have some preconditions for each test case that should run at the start of every test case.
The question is: if I use both of them and run only one of the test cases, which one starts first, Suite Setup or Test Setup?
Something like this:
*** Settings ***
Library           ...
Variables         ...
Suite Setup       suite_precondition
Test Setup        test_precondition

*** Test Cases ***
TC1
    <Some code>

TC2
    <Some code>
We can run TC1 or TC2 individually to check whether the test case passes. So what happens when I run only TC1?
Suite Setup always runs once, before any tests start. After that, Test Setup runs for each test, just before that test starts.
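A minimal sketch that makes the order visible (Suite Precondition and Test Precondition here are placeholder keywords standing in for your real setup keywords):
*** Settings ***
Suite Setup       Suite Precondition
Test Setup        Test Precondition

*** Test Cases ***
TC1
    Log    TC1 body runs last

*** Keywords ***
Suite Precondition
    Log    suite setup runs first, once for the whole suite

Test Precondition
    Log    test setup runs second, before every selected test
Running only one test, e.g. robot --test TC1 suite.robot, still logs the suite setup message first, then the test setup message, then the test body.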
I found that someone has asked a related question before: PintOS, kernel panic with -v option bochs on ubuntu.
However, I tried that and it didn't work. "pintos -- run alarm-multiple" seems fine, but when I do "make check" I get:
......
Run didn't start up properly: no "Pintos booting" message
pintos -v -k -T 480 --bochs -- -q -mlfqs run mlfqs-block < /dev/null 2> tests/threads/mlfqs-block.errors > tests/threads/mlfqs-block.output
perl -I../.. ../../tests/threads/mlfqs-block.ck tests/threads/mlfqs-block tests/threads/mlfqs-block.result
FAIL tests/threads/mlfqs-block
Run didn't start up properly: no "Pintos booting" message
FAIL tests/threads/alarm-single
FAIL tests/threads/alarm-multiple
FAIL tests/threads/alarm-simultaneous
FAIL tests/threads/alarm-priority
FAIL tests/threads/alarm-zero
FAIL tests/threads/alarm-negative
FAIL tests/threads/priority-change
FAIL tests/threads/priority-donate-one
FAIL tests/threads/priority-donate-multiple
FAIL tests/threads/priority-donate-multiple2
FAIL tests/threads/priority-donate-nest
FAIL tests/threads/priority-donate-sema
FAIL tests/threads/priority-donate-lower
FAIL tests/threads/priority-fifo
FAIL tests/threads/priority-preempt
FAIL tests/threads/priority-sema
FAIL tests/threads/priority-condvar
FAIL tests/threads/priority-donate-chain
FAIL tests/threads/mlfqs-load-1
FAIL tests/threads/mlfqs-load-60
FAIL tests/threads/mlfqs-load-avg
FAIL tests/threads/mlfqs-recent-1
FAIL tests/threads/mlfqs-fair-2
FAIL tests/threads/mlfqs-fair-20
FAIL tests/threads/mlfqs-nice-2
FAIL tests/threads/mlfqs-nice-10
FAIL tests/threads/mlfqs-block
27 of 27 tests failed.
../../tests/Make.tests:26: recipe for target 'check' failed
make: *** [check] Error 1
I had the same problem today. It was because I was trying to set QEMU as the default simulator, so I changed line 103 in utils/pintos to
$sim = "qemu" if !defined $sim;
but I forgot to change the SIMULATOR value in threads/Make.vars to
SIMULATOR = --qemu
Since I hadn't set up Bochs on my machine, make check was trying to run the tests on it, but it failed to boot.
Note that this is just one scenario in which the tests fail to run; there could be another reason. But since
pintos -- run alarm-multiple
is working fine, I think this might be the same problem you have.
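For reference, a sketch of the two changes together (line numbers and exact surrounding context may differ between Pintos distributions):
# utils/pintos, around line 103: make QEMU the default simulator
$sim = "qemu" if !defined $sim;

# threads/Make.vars: make "make check" use QEMU as well
SIMULATOR = --qemu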
I have a job in gitlab-ci like the following one:
static_test_service:
  stage: test code
  script:
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.projectName=$CI_PROJECT_NAME
      -Dsonar.branch=$CI_COMMIT_REF_NAME
      -Dsonar.projectVersion=$CI_JOB_ID
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.gitlab.project_id=$CI_PROJECT_ID
      -Dsonar.gitlab.commit_sha=$CI_COMMIT_SHA
      -Dsonar.gitlab.ref_name=$CI_COMMIT_REF_NAME
I need the GitLab job to fail when the SonarQube analysis fails. But as it is, the analysis error is reported without sending a failure status to the GitLab CI job, and the step always finishes with success.
It seems there is no way to raise an event from "docker run" that the GitLab job can act on.
Any idea on how to force to fail the job if the sonarqube analysis fails?
Thanks,
To break the CI build on a failed Quality Gate, you have to write a script based on the following steps (see the sketch after this list):
1. Search in /report-task.txt for the values of the CE Task URL (ceTaskUrl) and CE Task Id (ceTaskId).
2. Call /api/ce/task?id=XXX, where XXX is the CE Task Id retrieved in step 1, e.g. https://yourSonarURL/api/ce/task?id=yourCeTaskId
3. Poll until the status returned in step 2 is SUCCESS, CANCELED or FAILED.
4. If it is FAILED, break the build (failure here means the Sonar report could not be generated).
5. If it is successful, take the analysisId from the JSON returned by /api/ce/task?id=XXX (step 2) and immediately call /api/qualitygates/project_status?analysisId=YYY to check the status of the quality gate, e.g. https://yourSonarURL/api/qualitygates/project_status?analysisId=yourAnalysisId
6. Step 5 gives the status of the quality gate conditions (critical, major and minor error thresholds).
7. Break the build based on those thresholds.
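A minimal shell sketch of those steps, assuming curl and jq are available in the job image, SONAR_URL and SONAR_TOKEN are provided as CI variables, and the scanner wrote its report to the default .scannerwork directory:
#!/bin/sh
# Hypothetical quality-gate check; fails the job when the gate is not OK.
REPORT=".scannerwork/report-task.txt"
CE_TASK_ID=$(grep '^ceTaskId=' "$REPORT" | cut -d= -f2)

# Steps 2-3: poll the Compute Engine task until it reaches a terminal state.
STATUS="PENDING"
while [ "$STATUS" != "SUCCESS" ] && [ "$STATUS" != "FAILED" ] && [ "$STATUS" != "CANCELED" ]; do
  sleep 5
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
done

# Step 4: break the build if the analysis itself failed.
[ "$STATUS" = "SUCCESS" ] || { echo "Sonar analysis ended with status $STATUS"; exit 1; }

# Steps 5-7: look up the quality gate for this analysis and fail on anything but OK.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
GATE=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')
[ "$GATE" = "OK" ] || { echo "Quality gate status: $GATE"; exit 1; }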
I faced this problem with GitLab and Sonar, where Sonar was failing the quality analysis but the GitLab job was still passing with
INFO: ANALYSIS SUCCESSFUL, you can find the results at:
The problem turned out to be the config below missing from sonar.properties:
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=1800
So basically, the Sonar scan takes time to do the analysis; by default the scanner does not wait for the analysis to complete and may report a default SUCCESSFUL ANALYSIS result to GitLab.
With the configuration above, we explicitly ask the scanner to wait for the quality gate result, with a timeout in case the analysis takes a long time to finish.
Now the GitLab job fails with:
ERROR: QUALITY GATE STATUS: FAILED - View details
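If a properties file is not convenient, the same settings can also be passed straight to the scanner invocation, for example on the docker-based job from the question (a sketch; sonar.qualitygate.wait needs a reasonably recent SonarQube/scanner version):
docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=1800 \
  ...   # plus the existing -Dsonar.* options from the job above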
I am trying to integrate tSQLt / SQLTest with CruiseControl.NET
My tests are running and I've written XSL files to display the results, but I need to know how to mark the build as failed if any tests fail.
My CCNet exec is:
<exec executable="$(sqlCmdPath)">
  <description>Run Unit Tests</description>
  <buildArgs>-E -d MyDatabase
    -i "\CruiseControlProjects\Configuration\CI_SQL\RunTests.sql"
  </buildArgs>
  <baseDirectory>\Artifacts\MyDatabase</baseDirectory>
  <successExitCodes>0,63</successExitCodes>
</exec>
RunTests.sql:
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[tSQLt].[RunAll]')
AND TYPE IN (N'P',N'PC'))
BEGIN
EXECUTE [tSQLt].[RunAll]
END
The tests are run, and I have a subsequent task that produces the results as XML, which is then merged into the build log:
<exec executable="$(sqlCmdPath)">
  <description>Get Unit Tests</description>
  <buildArgs>-E -b -d MyDatabase -h-1 -y0 -I
    -i "\CruiseControlProjects\Configuration\CI_SQL\GetTestResults.sql"
    -o "\CruiseControlProjects\Configuration\CI_SQL\Results\TestResults.xml"
  </buildArgs>
  <baseDirectory>\Artifacts\MDatabase</baseDirectory>
  <successExitCodes>0,63</successExitCodes>
</exec>
So how do I get the overall build to fail?
If you use the -b parameter to sqlcmd, you should find that it exits with a non-zero code when the batch fails (which will happen if tSQLt fails at least one test).
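For example, the first exec block from the question might become something like this (a sketch; only -b is added, and it is worth double-checking that the exit code sqlcmd returns on a failed batch is not already listed in successExitCodes):
<exec executable="$(sqlCmdPath)">
  <description>Run Unit Tests</description>
  <!-- -b makes sqlcmd exit with a non-zero code when the batch raises an error -->
  <buildArgs>-E -b -d MyDatabase
    -i "\CruiseControlProjects\Configuration\CI_SQL\RunTests.sql"
  </buildArgs>
  <baseDirectory>\Artifacts\MyDatabase</baseDirectory>
  <successExitCodes>0,63</successExitCodes>
</exec>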
However, I have one potential suggestion to explore. If you can load the XML file within CruiseControl, then the tests can be reported individually, since the XML file is in the same format as an NUnit test output file. (Note: I've used this method with TeamCity and Jenkins, but not with CruiseControl.) This treats the tests as tests rather than an 'all-or-nothing' result, and enables you to track which tests fail repeatedly.
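For reference, a sketch of what GetTestResults.sql might contain if it uses the built-in tSQLt XML formatter to produce that file (an assumption; adjust to however your script currently extracts the results):
-- GetTestResults.sql (sketch): emit the results of the most recent tSQLt run as XML
EXEC tSQLt.XmlResultFormatter;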
Hope that helps,
Dave.