I'm unable to test a condition in GitLab CI/CD. Here is the condition check I wanted to do:
count=`docker ps -aq | wc -l` && if [ "$count" -gt 0 ]; then echo "TESTING $count";fi
It works fine in a bash shell but doesn't work inside gitlab-runner:
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@172.10.10.10 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      count=`docker ps -aq | wc -l` && if [ "$count" -gt 0 ]; then echo "TESTING $count";fi "
I get the following error: "unary operator expected". Any idea why?
Figured it out.
I had to escape the special characters when using ssh. Without the escapes, the runner's local shell expands the backticks and $count (which is still empty at that point) before ssh ever sends the command, so the remote shell ends up running "if [ -gt 0 ]" and complains about a unary operator.
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@172.10.10.10 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      count=\`docker ps -aq | wc -l\` && if \[ "\$count" -gt 0 \]; then echo "TESTING \$count";fi "
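An alternative that should avoid most of the escaping (a sketch, untested here, assuming the registry credentials contain no whitespace) is to send the remote commands through a quoted heredoc and pass the CI variables as positional parameters, so nothing in the remote script gets expanded by the runner's shell:
  script:
    - |
      ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@172.10.10.10 bash -s "$REGISTRY_USER" "$REGISTRY_PASS" <<'EOF'
      # Runs on the remote host; $1/$2 are the credentials passed above.
      docker login -u "$1" -p "$2"
      count=$(docker ps -aq | wc -l)
      if [ "$count" -gt 0 ]; then echo "TESTING $count"; fi
      EOF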
I have a GitLab pipeline that deploys a site and needs to restart the fpm service.
stages:
  - deploy

Deploy:
  image: gotechnies/alpine-ssh
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    # other steps
    - ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
  variables:
    FORGE_PHP_FPM: php8.1-fpm
    FORGE_SUDO_PWD: $PRODUCTION_SUDO_PWD
  only:
    - master
$PRODUCTION_SUDO_PWD is added in the GitLab CI/CD variables and marked as protected.
My problem is with this line:
- ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
I want to restart php8.1-fpm service but each time I run the pipeline I get:
[sudo] password for forge: Sorry, try again.
[sudo] password for forge:
sudo: no password was provided
sudo: 1 incorrect password attempt
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
As far as I know, SendEnv should pass the value of the variable, and if I remove the bash command and just run echo $FORGE_SUDO_PWD it prints the value.
What am I missing?
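(One thing worth double-checking, as a hedged aside: SendEnv only has an effect if the remote sshd_config lists those names under AcceptEnv, and the variables must be exported in the runner's environment. A quick way to confirm they arrive at all, before involving sudo, is something like:
# Debugging sketch only; uses the same forge user and $SERVER_IP as above.
- ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'env | grep "^FORGE_" || echo "FORGE_* not received - check AcceptEnv on the server"'
If the grep prints nothing, the password never reaches the remote shell and sudo will always prompt.)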
I'm trying to run a script which prints the status code of some URLs, using an Azure pipeline.
My Azure YAML file:
pool:
  vmImage: ubuntu-16.04

steps:
  - script: echo Hello
    displayName: 'Welcome'
  - script: cat webpages.txt
    displayName: 'display file'
  - script: curl -s -w '%{http_code}\n' -o /dev/null https://www.google.co.in
    displayName: 'Checking Curl Code'
  - script: cat -v script.sh
    displayName: 'Carriage return'
  - task: ShellScript@2
    inputs:
      scriptPath: script.sh
My Script.sh file
#!/bin/sh
while read line ; do echo "$line - `curl -s -w '%{http_code}\n' -o /dev/null $line`" ;done < webpages.txt
webpages.txt file
https://www.vodafone.co.uk/good-stuff
https://www.vodafone.co.uk/help-and-information/cancel-your-account
https://www.vodafone.co.uk/help-and-information/complaints
https://www.vodafone.co.uk/help-and-information/complaints/code-of-practice
https://www.vodafone.co.uk/help-and-information/costs-and-charges
https://www.vodafone.co.uk/help-and-information/costs-and-charges/call-and-text-charges
https://www.vodafone.co.uk/help-and-information/costs-and-charges/data-charges
Problem
When I run my pipeline, the curl command does not work and the output is:
2020-11-17T09:59:05.4094324Z ##[section]Starting: ShellScript
2020-11-17T09:59:05.4102534Z ==============================================================================
2020-11-17T09:59:05.4102904Z Task : Shell script
2020-11-17T09:59:05.4103204Z Description : Run a shell script using Bash
2020-11-17T09:59:05.4103453Z Version : 2.165.2
2020-11-17T09:59:05.4103701Z Author : Microsoft Corporation
2020-11-17T09:59:05.4104086Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/shell-script
2020-11-17T09:59:05.4104506Z ==============================================================================
2020-11-17T09:59:05.7637654Z [command]/bin/bash /home/vsts/work/1/s/script.sh
2020-11-17T09:59:05.7692023Z https://www.vodafone.co.uk/good-stuff
2020-11-17T09:59:05.7700780Z - 000
2020-11-17T09:59:05.7797276Z https://www.vodafone.co.uk/help-and-information/cancel-your-account
2020-11-17T09:59:05.7798378Z - 000
2020-11-17T09:59:05.7851183Z https://www.vodafone.co.uk/help-and-information/complaints
2020-11-17T09:59:05.7866944Z - 000
2020-11-17T09:59:05.7908420Z https://www.vodafone.co.uk/help-and-information/complaints/code-of-practice
2020-11-17T09:59:05.7909144Z - 000
2020-11-17T09:59:05.7967261Z https://www.vodafone.co.uk/help-and-information/costs-and-charges
2020-11-17T09:59:05.7967920Z - 000
2020-11-17T09:59:05.8023329Z https://www.vodafone.co.uk/help-and-information/costs-and-charges/call-and-text-charges
2020-11-17T09:59:05.8024443Z - 000
2020-11-17T09:59:05.8095527Z https://www.vodafone.co.uk/help-and-information/costs-and-charges/data-charges
but if I replace my curl command with
curl -s -w '%{http_code}\n' -o /dev/null https://www.vodafone.co.uk/good-stuff
it gives an output of 200.
When I'm running this locally it works perfectly:
$ while read line ; do echo "$line - `curl -s -w '%{http_code}\n' -o /dev/null $line`" ;done < webpages.txt
https://www.vodafone.co.uk/good-stuff - 200
https://www.vodafone.co.uk/help-and-information/cancel-your-account - 200
(...)
I notice that a \n character is printed in your echo $line; that's probably causing the issue. What could solve it is:
replacing $line with ${line}, or
replacing echo with echo -n to omit newlines.
So this is what I did:
I replaced the whole curl command with a simple curl $line.
It gave me an error: curl: (3) Illegal characters found in URL
So I figured my URLs contained some unwanted characters.
I added line=${line%$'\r'} right after the do:
while read line ; do line=${line%$'\r'} ; echo "$line - `curl -s -w '%{http_code}\n' -o /dev/null $line`"; done < webpages.txt
and Voila it works!
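A small variation on the same idea (a sketch, not what I actually ran in the pipeline) is to strip the carriage returns from the whole file once, up front, instead of per line:
# Remove the CR half of the Windows CRLF line endings once, then loop as before.
tr -d '\r' < webpages.txt > webpages_unix.txt
while read -r line ; do echo "$line - `curl -s -w '%{http_code}' -o /dev/null $line`" ; done < webpages_unix.txt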
I am trying to copy files as part of a GitLab pipeline, but I am getting a
"BusyBox v1.22.1 multi-call binary" error on the copy command. It was working earlier but suddenly started showing this error.
Here is my script:
script:
  - mkdir -p ./input
  - git log -m -2 --name-only --diff-filter=d --pretty="format:" > ./input/changes.lst
  - |
    file="./input/changes.lst"
    while IFS= read -r line
    do
      printf '%s\n' "$line";
      if [ -e $line ]
      then
        `cp -R $line ./input/`;
      fi
    done < "$file"
only:
  - master
artifacts:
  when: always
  paths:
    - input
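(As a hedged aside, not a confirmed fix for the BusyBox message: the backticks around cp are unnecessary, since they make the shell try to run cp's output as another command, and the unquoted $line will break on paths containing spaces. The loop could be written as in the sketch below.)
  - |
    # Sketch only: copy every path from changes.lst that still exists into ./input/.
    while IFS= read -r line; do
      printf '%s\n' "$line"
      if [ -e "$line" ]; then
        cp -R "$line" ./input/
      fi
    done < ./input/changes.lst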
I have a test project for end2end tests based on Nightwatch.js, a NodeJS framework. I want to use a Jenkinsfile to build a pipeline for my end2end tests and run them with Jenkins in a Docker container. So I want to start a Docker container and execute the tests inside it, all driven by a Jenkinsfile. Everything works perfectly when I don't use a Jenkinsfile and instead put the shell commands directly into a manually created job. With the Jenkinsfile, I get a MultipleCompilationErrorsException when running the pipeline, and I don't know why.
This is my Jenkinsfile:
pipeline {
    agent any
    parameters {
        text(defaultValue: 'grme/nightwatch-chrome-firefox:0.0.3', description: '', name: 'docker_image')
        text(defaultValue: 'npm-test-chrome', description: '', name: 'run_script_method')
        text(defaultValue: '/Applications/Docker.app/Contents/Resources/bin/docker', description: '', name: 'docker')
    }
    stages {
        stage('Test') {
            steps {
                sh 'sudo chmod -R 777 $(pwd)'
                echo "------ stop all Docker containers ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
                echo "------ pull Docker image from Docker Cloud ------"
                sh 'sudo ${params.docker} pull "${params.docker_image}"'
                echo "------ start Docker container from image ------"
                sh 'sudo ${params.docker} run -d -t -i -v $(pwd):/my_tests/ "${params.docker_image}" /bin/bash'
                echo "------ execute end2end tests on Docker container ------"
                sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
                echo "------ cleanup all temporary files ------"
                sh 'sudo rm -Rf $(pwd)/tmp-*'
                sh 'sudo rm -Rf $(pwd)/.com.google*'
                sh 'sudo rm -Rf $(pwd)/rust_mozprofile*'
                sh 'sudo rm -Rf $(pwd)/.org.chromium*'
                echo "------ stop all Docker containers again ------"
                sh '(sudo ${params.docker} stop $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still stopped ------")'
                echo "------ remove all Docker containers again ------"
                sh '(sudo ${params.docker} rm $(sudo ${params.docker} ps -a -q) || sudo echo "------ all Docker containers are still removed ------")'
            }
        }
    }
}
And this is the exception I get when running the pipeline:
Started by user GRme
> git rev-parse --is-inside-work-tree # timeout=10
Setting origin to https://github.com/GRme/e2e-web-tests
> git config remote.origin.url https://github.com/GRme/e2e-web-tests # timeout=10
Fetching origin...
Fetching upstream changes from origin
> git --version # timeout=10
using GIT_ASKPASS to set credentials
> git fetch --tags --progress origin +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/master
Seen 1 remote branch
Obtained Jenkinsfile from 0eb7d8c437df1efc56e46171d945e7f2806b838b
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 23: Expected a symbol # line 23, column 9.
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args='-screen 0 1600x1200x24' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:129)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:123)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:516)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:479)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:269)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:419)
Finished: FAILURE
What am I doing wrong, and how can I solve this exception?
After escaping the ' characters in that line, the pipeline no longer has a syntax error :)
sh 'sudo ${params.docker} exec -i $(sudo ${params.docker} ps --format "{{.Names}}") bash -c "cd /my_tests && xvfb-run --server-args=\'-screen 0 1600x1200x24\' npm run ${params.run_script_method} || true && google-chrome --version && firefox --version"'
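One hedged follow-up on top of that fix: because these sh steps use single-quoted Groovy strings, Groovy never interpolates ${params.docker} or ${params.docker_image}; they reach the shell as literal text. If the parameters are actually meant to be substituted, the string needs Groovy interpolation (double quotes), and shell substitutions such as $(pwd) then have to be written as \$(pwd) so Groovy leaves them to the shell. A minimal sketch of just the pull step:
// Hedged sketch: double quotes let Groovy interpolate the pipeline parameters.
sh "sudo ${params.docker} pull '${params.docker_image}'"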
Currently, in my GitLab CI configuration I have some manual test stages, so I can decide whether a test passes or fails. The problem is that the manual steps are always skipped by default: while the normal stages are building, the pipeline jumps from one normal stage to the next without waiting for the manual steps. How can I make this work? Please help me with this.
stages:
  - start_pipeline
  - auto_testing
  - manual_test_PASS
  - manual_test_FAIL
  - UAT_test_PASS
  - UAT_test_FAIL
  - Validation_PASS
  - Validation_FAIL
  - merge_to_master

variables:

start_pipeline:
  stage: start_pipeline
  script:
    - if [[ -d "$USER_DIR" ]]; then echo -e "Direcory exists"; else sudo mkdir -p $USER_DIR; fi
    - sudo chown -R root:gitlab-runner ${TARGET}/*

auto_testing:
  stage: auto_testing
  script:
    - find . -type d -name "manifests" -exec chown -R gitlab-runner:gitlab-runner {} \;
    - find . -type d -name "manifests" -exec puppet parser validate {} \;
    - if [[ -d "$PRODUCTION_TARGET" ]]; then echo -e "Direcory exists"; else sudo mkdir -p $PRODUCTION_TARGET; fi
    - if [[ -d "$LAB_TARGET" ]]; then echo -e "Direcory exists"; else sudo mkdir -p $LAB_TARGET; fi

manual_test_FAIL:
  stage: manual_test_FAIL
  script:
    - echo "FAIL"
    - exit 1;
  when: manual

manual_test_PASS:
  stage: manual_test_PASS
  script:
    - echo "PASS"
    - sudo cp -r * ${TARGET}/${MODIFIED_COMMIT_USER}/
    - sudo cp -r * ${LAB_TARGET}/
    - sudo cp -r * ${PRODUCTION_TARGET}/
  dependencies:
    - auto_testing
I know this is likely too late to help you, but this is a known issue that they're targeting to fix in V9.0.
https://gitlab.com/gitlab-org/gitlab-ce/issues/26360
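For anyone hitting this on newer GitLab versions: a manual job can be made blocking by giving it allow_failure: false, so later stages wait until someone triggers it. A minimal sketch of how that could look for the passing job above (my adaptation, not from the linked issue):
manual_test_PASS:
  stage: manual_test_PASS
  when: manual
  allow_failure: false   # blocking manual job: later stages wait for it to run
  script:
    - echo "PASS"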