I am trying to integrate tSQLt / SQLTest with CruiseControl.NET
My tests are running and I've written XSL files to display the results, but I need to know how to mark the build as failed if any tests fail.
My CCNet exec is:
<exec executable="$(sqlCmdPath)">
<description>Run Unit Tests</description>
<buildArgs>-E -d MyDatabase
-i "\CruiseControlProjects\Configuration\CI_SQL\RunTests.sql"
</buildArgs>
<baseDirectory>\Artifacts\MyDatabase</baseDirectory>
<successExitCodes>0,63</successExitCodes>
</exec>
RunTests.sql:
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[tSQLt].[RunAll]')
AND TYPE IN (N'P',N'PC'))
BEGIN
EXECUTE [tSQLt].[RunAll]
END
The tests are run, and I have a subsequent task which produces the results in XML; these are then merged into the build log:
<exec executable="$(sqlCmdPath)">
<description>Get Unit Tests</description>
<buildArgs>-E -b -d MyDatabase -h-1 -y0 -I
-i "\CruiseControlProjects\Configuration\CI_SQL\GetTestResults.sql"
-o "\CruiseControlProjects\Configuration\CI_SQL\Results\TestResults.xml"
</buildArgs>
<baseDirectory>\Artifacts\MyDatabase</baseDirectory>
<successExitCodes>0,63</successExitCodes>
</exec>
So how do I get the overall build to fail?
If you use the -b parameter with sqlcmd, you should find that it exits with a non-zero code when the batch fails (which will happen if tSQLt fails at least one test).
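For example, a minimal sketch of your first exec block with -b added (paths and exit codes copied from your question):
<exec executable="$(sqlCmdPath)">
  <description>Run Unit Tests</description>
  <buildArgs>-E -b -d MyDatabase
    -i "\CruiseControlProjects\Configuration\CI_SQL\RunTests.sql"
  </buildArgs>
  <baseDirectory>\Artifacts\MyDatabase</baseDirectory>
  <successExitCodes>0,63</successExitCodes>
</exec>
With -b, the error tSQLt raises when at least one test fails should make sqlcmd return a non-zero exit code, which CCNet (given your successExitCodes) treats as a failed task and therefore a failed build.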
However, I have one potential suggestion to explore. If you can load the XML file within CruiseControl, the tests can be loaded in, as the XML file is in the same format as an NUnit test output file. (Note: I've used this method on TeamCity and Jenkins, but have not tried it with CruiseControl.) This treats the tests as individual tests rather than as an 'all-or-nothing' result, enabling you to track which tests fail repeatedly.
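If you try that with CruiseControl.NET, a hedged sketch of a merge publisher for the results file your second task already writes (path copied from your question) might look like this, so the dashboard's test-report XSLs (or your own) can pick it up:
<publishers>
  <merge>
    <files>
      <file>\CruiseControlProjects\Configuration\CI_SQL\Results\TestResults.xml</file>
    </files>
  </merge>
  <xmllogger />
</publishers>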
Hope that helps,
Dave.
The command gitlab-runner lets you "test" a GitLab job locally. However, the local run of a job seems to have the same problem as a job run in GitLab CI: the output is not immediate!
What I mean is: even if your code/test/whatever produces printed output, it is not shown immediately in your log or console.
Here is how you can reproduce this behavior (on Linux):
Create a new git repository
mkdir testrepo
cd testrepo
git init
Create a file .gitlab-ci.yml with the following content:
job_test:
  image: python:3.8-buster
  script:
    - python tester.py
Create a file tester.py with the following content:
import time

for index in range(10):
    print(f"{time.time()} test output")
    time.sleep(1)
Run this code locally
python tester.py
which produces the output
1648130393.143866 test output
1648130394.1441162 test output
1648130395.14529 test output
1648130396.1466148 test output
1648130397.147796 test output
1648130398.148115 test output
1648130399.148294 test output
1648130400.1494567 test output
1648130401.1506176 test output
1648130402.1508648 test output
with each line appearing on the console every second.
You commit the changes
git add tester.py
git add .gitlab-ci.yml
git commit -m "just a test"
You start the job within a gitlab runner
gitlab-runner exec docker job_test
....
1648130501.9057398 test output
1648130502.9068272 test output
1648130503.9079702 test output
1648130504.9090931 test output
1648130505.910158 test output
1648130506.9112566 test output
1648130507.9120533 test output
1648130508.9131665 test output
1648130509.9142723 test output
1648130510.9154003 test output
Job succeeded
Here you get essentially the same output, but you have to wait about 10 seconds and then you get the complete output all at once!
What I want is to see the output as it happens, i.e. roughly one line every second.
How can I achieve that for both the local gitlab-runner and GitLab CI?
In the source code, this is controlled mostly by the clientJobTrace's updateInterval and forceSendInterval properties.
These properties are not user-configurable. In order to change this functionality, you would have to patch the source code for the GitLab Runner and compile it yourself.
The parameters for the job trace are passed from the newJobTrace function and their defaults (where you would need to alter the source) are defined here.
Also note that the UI for GitLab may not necessarily get the trace in realtime, either. So, even if the runner has sent the trace to GitLab, the javascript responsible for updating the UI only polls for trace data every ~4 or 5 seconds.
You can poll the GitLab web interface for new log lines as fast as you like:
For a running job, use a URL like https://gitlab.example.sk/grpup/project/-/jobs/42006/trace. It returns a JSON structure with the lines of the log file, the offset, the size and so on. The documentation is here: https://docs.gitlab.com/ee/api/jobs.html#get-a-log-file
Sidenote: you can use the undocumented "state" parameter from the response in the subsequent request to get only the new lines (if any). This is handy.
This does not, however, affect the latency with which new lines arrive from the actual job, i.e. from the runner to the GitLab web/backend. See sytech's answer to this question for that.
This answer helps when a Redis cache and the incremental logging architecture are configured and you want to read the log of a currently running job in "realtime". Polling is still needed, though.
Some notes can also be found on the forum: https://forum.gitlab.com/t/is-there-an-api-for-getting-live-log-from-running-job/73072
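For illustration, here is a minimal polling sketch against the documented API trace endpoint (not the web endpoint with the undocumented "state" parameter); it assumes python-requests, a token with read_api scope, and placeholder instance URL, project ID and job ID:
import time
import requests

GITLAB_URL = "https://gitlab.example.sk"     # placeholder instance URL
PROJECT_ID = 123                             # placeholder project ID
JOB_ID = 42006                               # placeholder job ID
HEADERS = {"PRIVATE-TOKEN": "<your-token>"}  # token with read_api scope

seen = 0  # how many characters of the trace we have already printed
while True:
    # GET /projects/:id/jobs/:job_id/trace returns the raw log collected so far
    trace = requests.get(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/jobs/{JOB_ID}/trace",
        headers=HEADERS,
    ).text
    if len(trace) > seen:
        print(trace[seen:], end="", flush=True)  # print only the new part
        seen = len(trace)
    # stop once the job has left the created/pending/running states
    job = requests.get(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/jobs/{JOB_ID}",
        headers=HEADERS,
    ).json()
    if job["status"] not in ("created", "pending", "running"):
        break
    time.sleep(2)  # poll interval; the runner itself still batches its uploads
As noted above, this only shortens the delay between GitLab and you; the runner-to-GitLab delay discussed in the other answer remains.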
I am using pytest with the --ignore and --junitxml options to generate a report of the test cases that are not ignored, but when my report is generated, it also takes the ignored tests into account.
I am using the following command:
pytest --ignore=tests/test_a.py --junitxml=pytest_not_a.xml
I was able to resolve this using pytest.mark.httpapi. Rather than applying it to each test suite, I added a pytest_collection_modifyitems hook (in conftest.py) which puts the marker on the tests at run time.
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # mark every test collected from test_a.py so it can be
        # deselected later with -m "not httpapi"
        if 'test_a.py' in str(item.fspath):
            mark = getattr(pytest.mark, "httpapi")
            item.add_marker(mark)
            item.add_marker(pytest.mark.common)
The command above then changes slightly to py.test -v -m "not httpapi" --junitxml=pytest_not_a.xml. The JUnit GitLab artifact now only contains the tests that were actually processed and does not include the skipped tests in the success rate calculation.
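For reference, a minimal sketch of a GitLab CI job that uploads this report as a JUnit artifact (the job name and image are illustrative placeholders, not taken from the original setup):
pytest_not_httpapi:
  image: python:3.8-buster
  script:
    - py.test -v -m "not httpapi" --junitxml=pytest_not_a.xml
  artifacts:
    when: always
    reports:
      junit: pytest_not_a.xml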
I need to schedule my test so that my JMeter script runs automatically, without me having to execute it manually every day. This can be accomplished via a Windows cron job, but I do not know how to configure the JMeter script to run as one. Normally I use the command "jmeter -n -t path\filename.jmx -l path\log.csv" to execute my JMeter script via the command line, so I assume that if I can make this command run as a cron job it should solve the problem. I would sincerely appreciate it if someone could provide the steps and details to accomplish this, thanks.
You can do it using the Windows Task Scheduler:
Open Task Scheduler
Click Action -> Create Task
On the "General" tab, provide a name
On the "Triggers" tab, define when you would like the task to run
On the "Actions" tab, create a new action like:
Program: c:\windows\system32\cmd.exe
Arguments: /c c:\jmeter\bin\jmeter.bat -n -t c:\jmeter\extras\Test.jmx -l c:\jmeter\bin\Test_%date:~10,4%%date:~4,2%%date:~7,2%.jtl
Change the JMeter and .jmx script locations to match your setup.
Each time your task runs, a file with the current date should appear in the "bin" folder of your JMeter installation, e.g. Test_20180514.jtl for today.
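If you prefer the command line over the GUI, roughly the same task can be created with schtasks (a sketch assuming a daily run at 09:00 and a fixed result file name; the task name is arbitrary):
schtasks /create /tn "JMeter daily" /sc daily /st 09:00 /tr "c:\windows\system32\cmd.exe /c c:\jmeter\bin\jmeter.bat -n -t c:\jmeter\extras\Test.jmx -l c:\jmeter\bin\Test.jtl"
The %date:~...% suffix from the GUI example is left out here because cmd would expand it when the task is created rather than each time it runs.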
Just in case, here is the exported task:
<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<RegistrationInfo>
<Date>2018-05-14T07:50:02.7061254</Date>
<Author>aldan\anonymous</Author>
<URI>\JMeter</URI>
</RegistrationInfo>
<Triggers />
<Principals>
<Principal id="Author">
<UserId>S-1-5-21-2873627350-121124179-3591956082-1001</UserId>
<LogonType>InteractiveToken</LogonType>
<RunLevel>LeastPrivilege</RunLevel>
</Principal>
</Principals>
<Settings>
<MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
<DisallowStartIfOnBatteries>true</DisallowStartIfOnBatteries>
<StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
<AllowHardTerminate>true</AllowHardTerminate>
<StartWhenAvailable>false</StartWhenAvailable>
<RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
<IdleSettings>
<StopOnIdleEnd>true</StopOnIdleEnd>
<RestartOnIdle>false</RestartOnIdle>
</IdleSettings>
<AllowStartOnDemand>true</AllowStartOnDemand>
<Enabled>true</Enabled>
<Hidden>false</Hidden>
<RunOnlyIfIdle>false</RunOnlyIfIdle>
<WakeToRun>false</WakeToRun>
<ExecutionTimeLimit>PT72H</ExecutionTimeLimit>
<Priority>7</Priority>
</Settings>
<Actions Context="Author">
<Exec>
<Command>c:\windows\system32\cmd.exe</Command>
<Arguments>/c c:\jmeter\bin\jmeter.bat -n -t c:\jmeter\extras\Test.jmx -l c:\jmeter\bin\Test_%date:~10,4%%date:~4,2%%date:~7,2%.jtl</Arguments>
</Exec>
</Actions>
</Task>
Be aware that an easier option could be using Jenkins to orchestrate your builds; this way you will have history, metrics, conditional failure criteria and performance trend charts.
See the Continuous Integration 101: How to Run JMeter With Jenkins article for more information on running performance tests under Jenkins control.
IntelliJ IDEA 13 has really excellent support for Mocha tests through the Node.js plugin: https://www.jetbrains.com/idea/webhelp/running-mocha-unit-tests.html
The problem is, while I edit code on my local machine, I have a VM (vagrant) in which I run and test the code, so it's as production-like as possible.
I wrote a small bash script to run my tests remotely on this VM whenever I invoke "Run" from within IntelliJ, and the results pop up in the console well enough; however, I'd love to use the excellent interface that appears whenever the Mocha test runner is invoked.
Any ideas?
Update: There's a much better way to do this now. See https://github.com/TechnologyAdvice/fake-mocha
Success!!
Here's how I did it. This is specific to connecting back to vagrant, but can be tweaked for any remote server to which you have key-based SSH privileges.
Somewhere on your remote machine, or even within your codebase, store the NodeJS plugin's mocha reporter (6 .js files at the time of this writing). These are found in NodeJS/js/mocha under your main IntelliJ config folder, which on OSX is ~/Library/Application Support/IntelliJIdea13. Know the absolute path to where you put them.
Edit your 'Run Configurations'
Add a new one using 'Mocha'
Set 'Node interpreter' to the full path to your ssh executable. On my machine, it's /usr/bin/ssh.
Set the 'Node options' to this behemoth, tweaking as necessary for your own configuration:
-i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter /vagrant/tools/mocha_intellij/mochaIntellijReporter.js test" #
REMEMBER! The # at the end is IMPORTANT, as it will cancel out everything else the Mocha run config adds to this command. Also, remember to use an absolute path everywhere that I have one.
Set 'Working directory', 'Mocha package', and 'Test directory' to exactly what they should be if you were running mocha tests locally. These will not impact the test execution, but this interface WILL check to make sure these are valid paths.
Name it, save, and run!
Fully integrated, remote testing bliss.
1) In Webstorm, create a "Remote Debug" configuration, using port 5858.
2) Make sure that port is open on your server or VM.
3) On the remote server, execute Mocha with the --debug-brk option: mocha test --debug-brk
4) Back in Webstorm, start the remote debug configuration you created in Step 1, and execution should pause on your breakpoints.
I have this task in publishers after xmllogger:
<exec>
<executable>CheckForWarnings.cmd</executable>
<successExitCodes>0</successExitCodes>
<baseDirectory>C:\Program Files (x86)\CruiseControl.NET\server</baseDirectory>
<buildArgs>all</buildArgs>
</exec>
I've verified that this task is returning a non-0 exit code via the ccnet service logs:
2013-01-29 23:21:20,571 [Encompass.2013R1:INFO] Integration complete: Failure - 1/29/2013 11:21:20 PM
So why is the build still green?
Tasks put inside the publisher section will not change the build result, as they are part of the report (publisher) and not the build (tasks).
The publishers section is run after the build completes (whether it passes or fails). This is where you aggregate and publish the build results.
http://www.cruisecontrolnet.org/projects/ccnet/wiki/Tasks_and_Publishers
You have to put the exec task in the tasks section, not the publishers section, if you want it to fail the build.
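For example, a minimal sketch of the relevant part of ccnet.config with the exec moved into the tasks section (project name taken from your log line, paths from your question):
<project name="Encompass.2013R1">
  <tasks>
    <!-- other build steps ... -->
    <exec>
      <executable>CheckForWarnings.cmd</executable>
      <baseDirectory>C:\Program Files (x86)\CruiseControl.NET\server</baseDirectory>
      <buildArgs>all</buildArgs>
      <successExitCodes>0</successExitCodes>
    </exec>
  </tasks>
  <publishers>
    <xmllogger />
    <!-- reporting-only steps stay here -->
  </publishers>
</project>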