I have this task in my publishers section, after xmllogger:
<exec>
  <executable>CheckForWarnings.cmd</executable>
  <successExitCodes>0</successExitCodes>
  <baseDirectory>C:\Program Files (x86)\CruiseControl.NET\server</baseDirectory>
  <buildArgs>all</buildArgs>
</exec>
I've verified that this task is returning a non-0 exit code via the ccnet service logs:
2013-01-29 23:21:20,571 [Encompass.2013R1:INFO] Integration complete: Failure - 1/29/2013 11:21:20 PM
So why is the build still green?
Tasks put inside the publishers section will not change the build result; they are part of the report (publishers), not the build (tasks).
The publishers section is run after the build completes (whether it
passes or fails). This is where you aggregate and publish the build
results.
http://www.cruisecontrolnet.org/projects/ccnet/wiki/Tasks_and_Publishers
You have to put the exec task in the tasks section, not the publishers section, if you want it to fail the build.
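For example, a minimal sketch that moves the same exec block into the tasks section (surrounding tasks omitted):
<tasks>
  <!-- ... other build tasks ... -->
  <!-- run the warning check as a build task, so a non-zero exit code fails the build -->
  <exec>
    <executable>CheckForWarnings.cmd</executable>
    <successExitCodes>0</successExitCodes>
    <baseDirectory>C:\Program Files (x86)\CruiseControl.NET\server</baseDirectory>
    <buildArgs>all</buildArgs>
  </exec>
</tasks>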
Running into a total dead-end here.
I've created a publish profile for a .NET 6 application that we want to publish to IIS with Web Deploy. In the Entity Framework Migrations section, the option "Apply this migration on publish" is selected.
When manually clicking publish, everything works. However, we want to automate this in TeamCity using the .NET build runner. The publish step fails at:
Generating Entity Framework SQL Scripts...
Executing command: dotnet ef migrations script --no-build --idempotent --configuration Release --output "C:\TeamCity\buildAgent\work\cbf95cc2b4413601\MySolution.Api\obj\Release\net6.0\PubTmp\EFSQLScripts\MySolution.Data.MyContext.sql" --context MySolution.Data.MyContext
C:\Program Files\dotnet\sdk\6.0.400\Sdks\Microsoft.NET.Sdk.Publish\targets\TransformTargets\Microsoft.NET.Sdk.Publish.TransformFiles.targets(221,5): error : Entity Framework SQL Script generation failed
Internal error message details: BuildMessage1 0 Text DefaultMessage ERROR 400682522803500 tags:'tc:parseServiceMessagesInside'
Error message is logged
Build FAILED.
I cannot find any specific error messages anywhere in any log. Looking in the Microsoft.NET.Sdk.Publish.TransformFiles.targets file shows that it's failing on GenerateEFSQLScripts - an MSBuild command that executes dotnet ef under the covers.
I thought this might be a case of dotnet ef not being installed on the build agent. But when I manually run the command myself from C:\TeamCity\buildAgent\work\cbf95cc2b4413601\MySolution.Api, it succeeds, and the SQL scripts are successfully created.
I also thought it might just be a case of the command being run in the wrong directory (i.e. in the root MySolution folder rather than the MySolution.Api folder), but explicitly setting the working directory fails at the same point, with the same error.
Has anyone seen this before? Or could point me to where an actual error might be located?
I'm using TestComplete as my automation tool. Our pipeline is in Azure DevOps and is newly created for QA runs. The test VM was set up in a VMSS in Azure. I'm using TestExecute as my test runner, and it is already installed on the VM. When I run the pipeline, I get an error which says:
</RunSettings>
**************** Starting test execution *********************
C:\a\_tool\VsTest\17.4.0-preview-20220726-02\x64\tools\net462\Common7\IDE\Extensions\TestPlatform\vstest.console.exe "#C:\a\_temp\4hdijfnknda.tmp"
Microsoft (R) Test Execution Command Line Tool Version 17.4.0-preview-20220726-02 (x64)
Copyright (c) Microsoft Corporation. All rights reserved.
vstest.console.exe "C:\a\1\s\GrizzlyMatters.pjs"
/Settings:"C:\a\_temp\ro5un5cn0ip.tmp.runsettings"
/Logger:"trx"
/TestAdapterPath:"C:\a\1\s\TestCompleteAdapter"
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
##[error]Failed to get a list of tests from the "C:\a\1\s\GrizzlyMatters.pjs" file due to the following error: Unable to connect to TestExecute: it is running with different rights, or its state is incorrect. Please close it and try again.
No test is available in C:\a\1\s\GrizzlyMatters.pjs. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
Results File: C:\a\_temp\TestResults\AzDevOps_vmliaqa000008_2022-09-01_14_51_35.trx
##[error]Test Run Failed.
Vstest.console.exe exited with code 1.
**************** Completed test execution *********************
Test results files: C:\a\_temp\TestResults\AzDevOps_vmliaqa000008_2022-09-01_14_51_35.trx
No Result Found to Publish 'C:\a\_temp\TestResults\AzDevOps_vmliaqa000008_2022-09-01_14_51_35.trx'.
Created test run: 2724386
Publishing test results: 0
Publishing test results to test run '2724386'.
TestResults To Publish 0, Test run id:2724386
Published test results: 0
Publishing Attachments: 1
Execution Result Code 1 is non zero, checking for failed results
Completed TestExecution Model...
##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
##[error]Error: The process 'C:\a\_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c98420f9\2.205.0\Modules\DTAExecutionHost.exe' failed with exit code 1
##[error]Vstest failed with error. Check logs for failures. There might be failed tests.
Finishing: VsTest - testAssemblies
I've researched everything I can to find a solution, and I'm not sure what to do next. I'm attaching screenshots of the pipeline config and the error.
Screenshots (not included here): pipeline config 1-3 and the pipeline run error.
I have a job in my pipeline that has a script with three important steps:
mvn test to run JUnit tests against my code
junit2html to convert the XML result of the tests to an HTML format (the only way to see the results, as my pipelines aren't run through MRs) that is uploaded to GitLab as an artifact
docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached, meaning the test results are never uploaded in the event of failure, and docker rm is never executed either, so the container remains and messes up subsequent pipelines as a result.
What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI / CD, but its entire script should be executed. How can I configure this?
For each job whose failure should not stop the pipeline, you can add a flag to that job in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation for allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the script to keep going after a failure and still know that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ "$FAILED" ]; then
  ./do_something.sh
fi
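Applied to the job from the question, that pattern might look roughly like this (a sketch; the job name, report paths, container name and stage are placeholder assumptions):
test_job:
  stage: test
  script:
    # run the tests, but remember the failure instead of aborting the script
    - mvn test || FAILED=true
    # these steps now always run, even when the tests failed
    - junit2html target/surefire-reports/results.xml report.html
    - docker rm -f my_container
    # finally, fail the job if the tests failed
    - if [ "$FAILED" ]; then exit 1; fi
  artifacts:
    when: always        # upload the HTML report even on failure
    paths:
      - report.html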
I have a job like the following one in my gitlab-ci:
static_test_service:
  stage: test code
  script:
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.projectName=$CI_PROJECT_NAME
      -Dsonar.branch=$CI_COMMIT_REF_NAME
      -Dsonar.projectVersion=$CI_JOB_ID
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.gitlab.project_id=$CI_PROJECT_ID
      -Dsonar.gitlab.commit_sha=$CI_COMMIT_SHA
      -Dsonar.gitlab.ref_name=$CI_COMMIT_REF_NAME
I need the GitLab job to fail when the SonarQube analysis fails. At the moment the analysis error is reported, but no failure status reaches the GitLab job, so the step always finishes with success.
It seems there is no way to propagate a failure from "docker run" to the GitLab job.
Any idea on how to force the job to fail if the SonarQube analysis fails?
Thanks,
To break the CI build on a failed Quality Gate, you have to write a script based on the following steps (a sketch of such a script follows the list):
1. Search in /report-task.txt for the values of the CE Task URL (ceTaskUrl) and CE Task Id (ceTaskId).
2. Call /api/ce/task?id=XXX, where XXX is the ceTaskId retrieved in step 1, e.g. https://yourSonarURL/api/ce/task?id=yourCeTaskId
3. Wait until the status returned by step 2 is SUCCESS, CANCELED or FAILED.
4. If it is FAILED, break the build (here a failure means Sonar was unable to complete the analysis).
5. If it is SUCCESS, take the analysisId from the JSON returned by /api/ce/task?id=XXX (step 2) and immediately call /api/qualitygates/project_status?analysisId=YYY to check the status of the quality gate, e.g. https://yourSonarURL/api/qualitygates/project_status?analysisId=yourAnalysisId
6. Step 5 returns the status of the quality gate against the critical, major and minor error thresholds.
7. Break the build based on those limits.
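A minimal shell sketch of those steps, assuming curl and jq are available, SONAR_URL and SONAR_TOKEN are set, and report-task.txt sits in the scanner's default .scannerwork directory (all of these are assumptions to adapt):
#!/bin/sh
# Step 1: read the CE task id from the scanner's report file (path is an assumption)
CE_TASK_ID=$(grep '^ceTaskId=' .scannerwork/report-task.txt | cut -d= -f2)

# Steps 2-3: poll /api/ce/task until the background task leaves PENDING/IN_PROGRESS
while :; do
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
  [ "$STATUS" != "IN_PROGRESS" ] && [ "$STATUS" != "PENDING" ] && break
  sleep 5
done

# Step 4: FAILED or CANCELED means Sonar could not complete the analysis
[ "$STATUS" = "SUCCESS" ] || exit 1

# Steps 5-7: look up the quality gate status for this analysis; break the build unless it is OK
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
GATE=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')
[ "$GATE" = "OK" ] || exit 1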
I faced this problem with GitLab and Sonar, where Sonar was failing the quality analysis but the GitLab job was still passing with:
INFO: ANALYSIS SUCCESSFUL, you can find the results at:
The problem turned out to be the following configuration missing from the Sonar scanner properties (sonar-project.properties):
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=1800
So basically, the Sonar scan takes time to do its analysis, and by default the scanner won't wait for the analysis to complete; it may return a default ANALYSIS SUCCESSFUL result to GitLab.
With the configuration above, we explicitly ask the scanner to wait for the quality gate to finish, and give it a timeout as well (in case the analysis takes a long time).
Now the GitLab job fails with:
ERROR: QUALITY GATE STATUS: FAILED - View details
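The same two settings can also be passed on the scanner command line instead of a properties file. Applied to the docker-based job from the earlier question, it would look roughly like this (a sketch; the image is a placeholder, and it assumes scanner and SonarQube versions recent enough to support sonar.qualitygate.wait):
docker run --rm -v $(pwd):/data -w /data your-sonar-scanner-image sonar-scanner \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=1800 \
  ...other -D options as before...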
I have deployed a CruiseControl.NET (version 1.6.7981.1) server and it does the following tasks:
Build trigger
Labeller
VSTS Sourcecontrol block (gets the source code from a TFS 2010 server)
Build the code in Debug mode
Run NUnit tests using a NAnt task
Merge NUnit-Result.xml (Publisher task)
As I need to clear the NUnit-Result.xml file every time before running the NUnit task, I have added a delete task to the NAnt build file which deletes NUnit-Result.xml before the NUnit task runs.
Now my problem is that when my build gets triggered and the TFS server is not accessible, the build fails and only the publisher tasks run, so the old NUnit result file is merged into the failed build.
I tried running it as a "Prebuild" task, but that works only if the TFS server is accessible.
What I want is a task to delete NUnit-Result.xml that can run even if TFS is not accessible (either before the sourcecontrol block, or within/after the publishers block).
Thanks in advance
You can add an exec task to delete the file in the publishers section, just before the file merge.
Like this:
<publishers>
  <xmllogger />
  <statistics />
  <buildpublisher>
    <sourceDir>$(buildDir)\_PublishedWebsites\$(projectName)</sourceDir>
    <publishDir>$(webDir)</publishDir>
    <useLabelSubDirectory>false</useLabelSubDirectory>
    <alwaysPublish>false</alwaysPublish>
  </buildpublisher>
  <exec>
    <executable>$(workingDir)\deleteNunitResultxml.cmd</executable>
  </exec>
  ...
</publishers>
Have a publisher at the end which moves or deletes the NUnit result file. Then it won't be there for the next build.
Another option is to create a task which runs before the NUnit task and deletes the NUnit-Result.xml file, e.g. execute:
cmd /c "del NUnit-Result.xml"