CircleCI runs deploy section despite test failing - node.js

CircleCI is running the deploy section of my circle.yml despite the test section failing. I'd expect that if anything goes wrong in the test section, the deploy section wouldn't be run, but it is.
There are two commands in the test section; the second one fails with:
npm run test:coverage -- --maxWorkers=2 returned exit code 1
I do not seem to be the first one to hit this, and the oldest post about it is a few months old:
https://discuss.circleci.com/t/deployment-triggered-after-tests-fail/12356/3
Is this a bug, or am I doing something wrong?
Any ideas?
CircleCI v1.0
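For reference, a CircleCI 1.0 circle.yml with test and deployment sections looks roughly like the sketch below (the commands here are assumptions, not the asker's actual file). In 1.0, a non-zero exit code from any command in the test section is supposed to mark the build failed and skip the deployment section:

test:
  override:
    - npm run lint
    - npm run test:coverage -- --maxWorkers=2
deployment:
  production:
    branch: master
    commands:
      - npm run deploy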

Related

gitlab-ci Job failed: exit status 1 with no error

I'm trying to run njsscan to SAST my code on GitLab CI, but the job always fails even though the scan itself reports no errors.
If I run the same command manually on my server, it completes without any problems.
Is this a bug in GitLab CI, or is there something I can do about it? Thank you.
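One way to narrow this down is to capture and print the scanner's exit status in the job script; njsscan, like many SAST tools, exits non-zero when it finds issues even if nothing crashed, which is worth ruling out here. A minimal sketch (the job name and invocation are assumptions):

njsscan_debug:
  script:
    - njsscan . || EXIT=$?             # keep the shell alive and remember the status
    - echo "njsscan exited with ${EXIT:-0}"
    - exit ${EXIT:-0}                  # re-raise it so the job still reflects the result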
I have the same issue using gitlab-runner 15.3.0 with the Docker executor (Docker version 20.10.17):
The job fails with RC=1 while running the before_script part.
Restarting the job (without any changes to the code or pipeline definitions) succeeds in most cases.
We are using a dozen runners, but even when a job is restarted on the same runner, it succeeds although it had just failed there.

GitLab CI: How to continue job even when script fails

I have a job in my pipeline whose script has three very important steps:
mvn test, to run JUnit tests against my code
junit2html, to convert the XML test results to HTML (the only way for me to see the results, as my pipelines aren't run through MRs), uploaded to GitLab as an artifact
docker rm, to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached and the test results are never uploaded in the event of failure; docker rm is never executed either, so the container lingers and disrupts subsequent pipelines.
What I want is to keep the job going to the end even if the script fails at some point. The job should still count as failed in GitLab CI / CD, but its entire script should be executed. How can I configure this?
For each job that should let the pipeline continue even when it fails, you can add a flag to that job in your .gitlab-ci.yml file. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation for allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the script to keep going after a failure while still knowing that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true   # remember the failure instead of aborting
if [ "$FAILED" ]; then
  ./do_something.sh
fi
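Applied to the mvn/junit2html/docker steps from the question, the same pattern can run every step and still mark the job red at the end. A sketch, where the job name, report path and container name are placeholders:

test_job:
  script:
    - mvn test || TEST_FAILED=true                               # don't abort on test failure
    - junit2html target/surefire-reports/TEST-*.xml report.html  # placeholder report path
    - docker rm -f my-container                                  # placeholder container name
    - if [ "$TEST_FAILED" ]; then exit 1; fi                     # still mark the job as failed
  artifacts:
    when: always                                                 # upload the report even on failure
    paths:
      - report.html

GitLab's after_script is another option for cleanup steps like docker rm, since it runs even when the main script fails.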

GitLab CI variables return empty string?

Two days ago, one of my projects' builds started failing on GitLab CI. The main error was E_MISSING_APP_KEY, and when I check other variables just by echoing $HOST and $PORT from my .gitlab-ci.yml config, like this:
tests:
  script:
    - echo "${HOST} ${PORT}"
    - node -e "console.log(process.env.HOST, process.env.PORT)"
    - node_modules/.bin/nyc node ace test -t 0
I got nothing.
The build failed because it can't read the environment variables I set in the project's CI settings.
Is anyone else experiencing this issue, and how do I solve it?
Update:
I tried creating a new project containing only a .gitlab-ci.yml file, and there it seems to work just fine.
But why on earth is it still failing in my main project?
For anyone else having a similar problem:
Check your variable. If it is protected, your branch has to be protected as well, or you can remove the protected option from the variable.
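A quick way to check whether the protected flag is the culprit is a diagnostic job that tests whether the variable is injected at all on the current branch (a sketch; the job name is made up):

check_vars:
  script:
    - if [ -z "$HOST" ]; then echo "HOST is empty - protected variable on an unprotected branch?"; fi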
I solved the issue by deleting all of my variables and setting them back up in the CI settings. The build pipeline now runs without any errors (except the actual tests still fail, lol).
Honestly, I'm still wondering why this happened, and hopefully no one else runs into the same kind of issue.

Force a SonarQube job to fail in GitLab CI

I have a job like the following one in gitlab-ci:
static_test_service:
  stage: test code
  script:
    - docker run --rm -v $(pwd):/data -w /data dparra0007/sonar-scanner:20171010-1 sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAMESPACE:$CI_PROJECT_NAME
      -Dsonar.projectName=$CI_PROJECT_NAME
      -Dsonar.branch=$CI_COMMIT_REF_NAME
      -Dsonar.projectVersion=$CI_JOB_ID
      -Dsonar.sources=./greetingapi/src
      -Dsonar.java.binaries=./greetingapi/target
      -Dsonar.gitlab.project_id=$CI_PROJECT_ID
      -Dsonar.gitlab.commit_sha=$CI_COMMIT_SHA
      -Dsonar.gitlab.ref_name=$CI_COMMIT_REF_NAME
I need the GitLab job to fail when the SonarQube analysis fails. As it is, the analysis error is reported, but no failure status reaches the job in GitLab CI, and the step always finishes with success.
It seems there is no way to raise an event from "docker run" for the GitLab job to act on.
Any idea how to force the job to fail when the SonarQube analysis fails?
Thanks,
To break the CI build on a failed quality gate, you have to write a script based on the following steps:
1. Search /report-task.txt for the values of the CE task URL (ceTaskUrl) and CE task id (ceTaskId).
2. Call /api/ce/task?id=XXX, where XXX is the ceTaskId retrieved in step 1, e.g. https://yourSonarURL/api/ce/task?id=yourCeTaskId
3. Wait until the status returned by step 2 is SUCCESS, CANCELED or FAILED.
4. If it is FAILED, break the build (here, failure means Sonar was unable to generate the report).
5. If it is successful, take the analysisId from the JSON returned by /api/ce/task?id=XXX (step 2) and immediately call /api/qualitygates/project_status?analysisId=YYY to check the status of the quality gate, e.g. https://yourSonarURL/api/qualitygates/project_status?analysisId=yourAnalysisId
6. Step 5 returns the status against the critical, major and minor error thresholds.
7. Based on those thresholds, break the build.
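A minimal shell sketch of those steps, assuming curl and jq are available in the job image, that the scanner wrote its report to .scannerwork/report-task.txt (its usual location), and that SONAR_URL and SONAR_TOKEN are provided as CI variables:

# Steps 1-2: read the CE task id and query the task status
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)
STATUS=PENDING
# Step 3: poll until the background task reaches a terminal state
while [ "$STATUS" != "SUCCESS" ] && [ "$STATUS" != "FAILED" ] && [ "$STATUS" != "CANCELED" ]; do
  sleep 5
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
done
# Step 4: the analysis itself failed
if [ "$STATUS" = "FAILED" ]; then exit 1; fi
# Steps 5-7: fetch the analysisId, query the quality gate, and break the build if it is not OK
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
GATE=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')
if [ "$GATE" != "OK" ]; then exit 1; fi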
I faced this problem with GitLab and Sonar, where Sonar failed the quality analysis but the GitLab job still passed with:
INFO: ANALYSIS SUCCESSFUL, you can find the results at:
The problem turned out to be the following configuration missing from sonar.properties:
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=1800
The Sonar scan takes time to complete its analysis, and by default the scanner does not wait for it, so it may report a default ANALYSIS SUCCESSFUL result back to GitLab.
With the configuration above, we explicitly ask the scanner to wait for the quality gate to finish, with a timeout in case the analysis takes a long time.
Now the GitLab job fails with:
ERROR: QUALITY GATE STATUS: FAILED - View details
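For the docker-based job shown earlier, the same two settings can also be passed as scanner arguments instead of via sonar.properties (a sketch, appended to the existing sonar-scanner command line):

sonar-scanner ... -Dsonar.qualitygate.wait=true -Dsonar.qualitygate.timeout=1800

With the wait enabled, the scanner process itself exits non-zero when the gate fails, so "docker run" propagates the failure and the GitLab job goes red without any extra scripting.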

TeamCity ".Net Process Runner" hangs

We have started migrating one of our projects to TeamCity as part of CI. Below is how we have set up the TeamCity build. We are trying to deploy a web site.
1) Build Step 1 (Package installation)
Using the "command line" runner type, install the required packages.
2) Build Step 2 (Build)
Using the "Visual Studio (sln)" runner type (Visual Studio 2010), build the web site.
3) Build Step 3 (Deploy Web Site)
Using the ".Net Process Runner", deploy the site with deployer.exe (x86, built with .Net Framework 4).
Deployer.exe reads a config file containing the "BuildId", "Environment" and "Servers" to which we want the build pushed.
<buildType id="bt52">
  <env name="Debug">
    <server path="SERVER1" />
  </env>
  <env name="QA">
    <server path="SERVER2" />
    <server path="SERVER3" />
  </env>
  <env name="UAT">
    <server path="SERVER4" />
    <server path="SERVER5" />
  </env>
</buildType>
Deployer.exe is called with the required parameters as below; it reads the config and deploys the site to SERVER2 and SERVER3.
Deployer.exe "bt52" "QA" "siteQA" "E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" msdeploy.exe
The problem area is step #3.
When we run deployer.exe via the .Net Process Runner as part of TeamCity, it hangs and stops responding, sometimes for 45 minutes or more. When we execute the same deployer.exe from the build server's command line, it completes within a couple of seconds.
E:\TeamCity_custom_applications\deployer>Deployer.exe farm1-1 QA siteQA E:\BuildAgent\work\2483052e33e5e1e8\src\diy\ msdeploy.exe
Info : Processing batch run ...
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER2",userName="*****",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER3",userName="******",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Deploy Script Complete.
One more thing we observed: when deployer.exe runs through TeamCity, the site content gets copied, but only for one server, and the TeamCity build status stays in "Running" mode. I'd appreciate any insight into how to investigate this issue.
Update 1:
Thanks for your time looking into it! What we ended up doing: instead of launching "msdeploy.exe" through "cmd.exe", we added the "msdeploy.exe" location as an environment variable and executed "msdeploy.exe" in a loop over the servers. This resolved the hanging. Now I am just curious why it behaves this way: executing "msdeploy.exe" via "cmd.exe" hangs, while running "msdeploy.exe" directly succeeds. Any insight would be greatly appreciated.
Update 2:
Watching the behavior in Process Explorer, we found that if we kill msdeploy.exe there, all subsequent deployments to that server no longer hang.
To be honest, it sounds like you're running into issues with redirecting input/output streams. TeamCity runs your application in a totally headless environment, and you, in turn, are attempting to redirect and parse the output of msdeploy.exe.
If that's the case, I'd recommend using the MSDeploy API instead of msdeploy.exe. The latter is just a command-line wrapper for the former, so all the functionality is available to you. There's a sample deployment application available on the IIS blog if you need help getting started.
It seems you have an NUnit build step configured in TeamCity and invoke cmd.exe from your test. This looks like an issue with the test code, then; most probably it will reproduce without TeamCity if you run the test in question directly with NUnit.
As Richard noted, the root cause is most probably related to stdin/stdout processing.
If you want to fix it in your code, you can experiment by explicitly closing stdin, or the other way around, by writing something into it, etc.
The workaround we used: we observed that msdeploy takes no more than 3-5 seconds to execute and deploy (even for our biggest project, an almost 300 MB web site), so we set a timeout of 20 seconds. In the week since, we have not seen the issue again. Hopefully it stays that way, though we are still not sure what causes the behavior.
