GitLab CI/CD Pipeline: Concatenating Two Variables Not Working - gitlab

The Problem: I am trying to concatenate two variables for a copy command in a before_script of a GitLab CI/CD pipeline job.
What I expect: myfile_filesuffix
What I get: _filesuffix
Can anyone see what I am doing wrong? When I run this for loop on my local CLI I have no problems. Thank you!
before_script:
  - rm -rf .terraform
  - terraform --version
  - mkdir ~/.aws
  - echo "[default]" > ~/.aws/credentials
  - echo "aws_access_key_id=$AWS_ACCESS_KEY_ID" >> ~/.aws/credentials
  - echo "aws_secret_access_key=$AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
  - mkdir ./deployments
  - ls common
  - common_files=$(find common -type f)
  - echo $common_files
  - prefix_common=$(echo $common_files | cut -d"/" -f 1)
  - echo $prefix_common
  - for f in $common_files;
    do
      common_file="$(basename $f)"
      cp $f ./deployments/""${common_file}"_"${prefix_common}"";
    done
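For what it's worth, the nested quotes on the cp line are a likely suspect. The loop can be reproduced locally with one pair of quotes around the whole destination path; the files created below are made up only so the sketch is self-contained:

```shell
# Sketch of the same loop with conventional quoting; "common" and
# "deployments" are created here only to make the snippet runnable.
set -eu
mkdir -p common deployments
echo 'x' > common/myfile

common_files=$(find common -type f)
prefix_common=$(echo $common_files | cut -d"/" -f 1)   # -> common

for f in $common_files; do
  common_file=$(basename "$f")
  # One pair of double quotes around the whole destination is enough.
  cp "$f" "./deployments/${common_file}_${prefix_common}"
done

ls deployments   # -> myfile_common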

You can use the GitLab repo settings -> CI/CD -> Variables to add a FILE type variable and use the mv command to move it to your folder.
For example, with a FILE type variable named ANSIBLE_VARIABLES_FILE:
script:
  - mv $ANSIBLE_VARIABLES_FILE ./deployments/variables_common.tf
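For context, a FILE-type variable expands to the path of a temporary file holding the value, which is why a plain mv works. Simulated outside CI below (the variable is created by hand, since the real one only exists inside a job):

```shell
# Simulate what GitLab does for a FILE-type variable: the value is
# written to a temp file and the variable holds that file's path.
ANSIBLE_VARIABLES_FILE=$(mktemp)
printf 'env = "dev"\n' > "$ANSIBLE_VARIABLES_FILE"

mkdir -p deployments
mv "$ANSIBLE_VARIABLES_FILE" ./deployments/variables_common.tf
cat ./deployments/variables_common.tf   # -> env = "dev"
```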

Related

Unable to create ~/.ssh file using .gitlab-ci.yml

The following code is from the deploy stage in my .gitlab-ci.yml file.
deploy_website:
  stage: deploy
  artifacts:
    paths:
      - public
  before_script:
    - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - pwd && ls
    - ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    # - apk add bash
    # - ls deploy
    # - bash ./deploy/deploy.sh
    - ssh $SSH_USER@$VM_IPADDRESS "hostname && echo 'Welcome!!!' > welcome.txt"
The line "ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts" failed to run when I execute my pipeline. Please help :(
You can start by echoing $VM_IPADDRESS to check that the IP variable is properly instantiated.
"failed to run"
Then it depends on the error message (or on whether the command simply froze).
Before the keyscan, you can test whether the network route is reachable from your GitLab CI runner with:
curl -v telnet://$VM_IPADDRESS:22
If it does not connect immediately, that would explain why the ssh-keyscan fails.
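A hedged debugging sketch along those lines, with the variable check and a timeout around the scan (VM_IPADDRESS falls back to localhost here only so the snippet runs standalone):

```shell
# Check the variable first, then give ssh-keyscan a bounded time to
# answer; a hang here usually means port 22 is not reachable.
VM_IPADDRESS=${VM_IPADDRESS:-127.0.0.1}
echo "Scanning host: $VM_IPADDRESS"
mkdir -p ~/.ssh
if timeout 10 ssh-keyscan "$VM_IPADDRESS" >> ~/.ssh/known_hosts 2>/dev/null; then
  echo "keyscan OK"
else
  echo "keyscan failed or timed out; check the route and port 22"
fi
```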

Bitbucket pipeline for merge commit only

I have this git command in my Bitbucket pipeline script, which generates a list of changed files with their paths.
export FILES=$(git diff-tree --no-commit-id --name-only -r HEAD^^..HEAD)
The problem is that it runs on every commit. How do I get the list of files only for a merge commit to master? Or how do I tell the pipeline script to run on merge events only?
After some experiments I extended my script like this:
image: python:3.5.7
pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update
            - apt-get -qq install zip curl
            - mkdir $BITBUCKET_REPO_SLUG
            - export VERSION_LABEL=$(date +%Y-%m-%d_%H:%M:%S)
            - export ZIP_FILE=package_$BITBUCKET_REPO_SLUG_$VERSION_LABEL.zip
            - export FILES=$(git diff-tree --no-commit-id --name-only -r HEAD^^..HEAD)
            - echo "Repo name is $BITBUCKET_REPO_SLUG & version is $VERSION_LABEL"
            - cp -R --parents $FILES $BITBUCKET_REPO_SLUG/ 2>/dev/null
            - rm -f $BITBUCKET_REPO_SLUG/bitbucket-pipelines.yml
            - rm -f $BITBUCKET_REPO_SLUG/.gitignore
            - zip -r $ZIP_FILE $BITBUCKET_REPO_SLUG/
Now it is executed when I merge into master, but it does nothing, and the raw report is:
+ cp -R --parents $FILES $BITBUCKET_REPO_SLUG/ 2>/dev/null
Searching for test report files in directories named [test-results, failsafe-reports, test-reports, TestResults, surefire-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
Do I need to change
HEAD^^..HEAD
into some other parameter?
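One thing to check: HEAD^^..HEAD walks back two first-parent generations, while the files a merge brings into master are usually the diff against the merge's first parent, i.e. HEAD^1..HEAD. A self-contained sketch (throwaway repo; file and branch names are made up for the demo):

```shell
# Build a tiny repo with one merge, then list only the files the
# merge commit introduced relative to its first parent.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo a > a.txt; git add a.txt; git commit -qm base
main=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git version
git checkout -qb feature
echo b > b.txt; git add b.txt; git commit -qm feature
git checkout -q "$main"
git merge -q --no-ff -m "merge feature" feature

git diff-tree --no-commit-id --name-only -r HEAD^1..HEAD   # -> b.txt
```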

Bitbucket pipeline zipped files without directories

This is my script:
image: python:3.5.7
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get -qq install zip curl
          - mkdir $BITBUCKET_REPO_SLUG
          - export VERSION_LABEL=$(date +%Y-%m-%d_%H:%M:%S)
          - export ZIP_FILE=update_$BITBUCKET_REPO_SLUG_$VERSION_LABEL.zip
          - export FILES=$(git diff-tree --no-commit-id --name-only -r HEAD^^..HEAD)
          - echo "Repo name is $BITBUCKET_REPO_SLUG & version is $VERSION_LABEL"
          - echo $FILES
          - cp -R $FILES $BITBUCKET_REPO_SLUG/
          - rm -f $BITBUCKET_REPO_SLUG/bitbucket-pipelines.yml
          - rm -f $BITBUCKET_REPO_SLUG/.gitignore
          - zip -r $ZIP_FILE $BITBUCKET_REPO_SLUG/
Why are all the files in the zip in the root, and not in directories as I see them when I echo them?
What is the problem?
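Separate from the directory question, the ZIP_FILE line has a quiet shell pitfall: "_" is a valid identifier character, so the shell looks up a variable literally named BITBUCKET_REPO_SLUG_. A quick demonstration with made-up values:

```shell
# "_" belongs to the identifier, so the slug silently disappears.
BITBUCKET_REPO_SLUG=myrepo
VERSION_LABEL=2024-01-01_12:00:00

ZIP_FILE=update_$BITBUCKET_REPO_SLUG_$VERSION_LABEL.zip
echo "$ZIP_FILE"    # -> update_2024-01-01_12:00:00.zip

# Braces delimit the variable name and restore the intended result.
ZIP_FILE=update_${BITBUCKET_REPO_SLUG}_${VERSION_LABEL}.zip
echo "$ZIP_FILE"    # -> update_myrepo_2024-01-01_12:00:00.zip
```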
I don't exactly understand your question, but my first impression is that it is a relative path/full path situation. Bitbucket Pipelines pulls the code into a standard directory, something like /opt/atlassian/pipelines/agent/build. This path is available via the built-in variable BITBUCKET_CLONE_DIR. Maybe you can combine this variable with your relative path to create your directory; something like mkdir -p $BITBUCKET_CLONE_DIR/$BITBUCKET_REPO_SLUG can be useful.
Reference: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/
I fixed my problem by changing the copy command to this:
cp -R --parents $FILES $BITBUCKET_REPO_SLUG/
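The difference --parents makes (a GNU coreutils cp option, not available on macOS) can be seen outside any pipeline; the paths below are made up for the demo:

```shell
# Without --parents the copy is flattened; with it, the full source
# path is recreated under the destination directory.
work=$(mktemp -d); cd "$work"
mkdir -p src/sub dest_flat dest_tree
echo hi > src/sub/file.txt

cp src/sub/file.txt dest_flat/            # -> dest_flat/file.txt
cp --parents src/sub/file.txt dest_tree/  # -> dest_tree/src/sub/file.txt

find dest_flat dest_tree -type f
```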

cp command not working in Bash Script in build pipeline of azure devops

I need to copy a folder with multiple files to another folder in a build pipeline.
I use
cp -R -v pathToSourceFolder pathToDestFolder
cp -R -v /Users/runner/runners/2.166.4/work/1/s/en.lproj/ /Users/runner/runners/2.166.4/work/1/s/platforms/ios/AppName/Resources
and I am getting an error with exit code 126:
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvXc] source_file target_file
       cp [-R [-H | -L | -P]] [-fi | -n] [-apvXc] source_file ... target_directory
Can anyone help with this, as I am new to Linux/macOS commands?
Note: the pipeline runs on macOS.
Since there is nothing obviously wrong in the cp command, I would just add some safety checks for the presence of the directories:
# Define the directories involved
from="pathToSourceFolder"
to="pathToDestFolder"
# Check existence
if [[ -d "$from" ]]
then
  # If the destination directory does not exist, create it.
  [[ -d "$to" ]] || mkdir -p "$to"
  if [[ -d "$to" ]]
  then
    cp -R -v "$from" "$to"
  else
    # If we get here, we likely have permission problems.
    echo "Cannot create directory $to"
    ls -ld "$to"
  fi
else
  echo "Source directory $from does not exist"
fi
I need to copy the folder with multiple files to another folder in build pipeline.
1. I assume /Users/runner/runners/2.166.4/work/1/s is the default working folder of your build, so please avoid hard-coding the path; instead you can use $(System.DefaultWorkingDirectory), which represents that path. See Azure DevOps predefined variables.
2. Since your original purpose is to copy files in an Azure DevOps pipeline, you don't need to care too much about the corresponding copy syntax on different OSes (Linux, macOS or Windows).
You can do what you want easily using official Copy Files task. This task requires three inputs: Source folder, Target folder and Contents we want to copy, that's it.
Classic UI format:
You can choose the source folder via Browse Source Folder option. And then use ** as Contents, $(System.DefaultWorkingDirectory)/platforms/ios/Fixi/Resources as Target folder.
Yaml format:
- task: CopyFiles@2
  displayName: 'My Copy Task'
  inputs:
    SourceFolder: en.lproj
    TargetFolder: '$(System.DefaultWorkingDirectory)/platforms/ios/Fixi/Resources'
The logic is handled behind the scenes for you, so you can use this task on macOS/Linux/Windows easily.
I know the accepted answer says the cp command is not broken, but it actually is in Azure Pipelines. Microsoft seems to append a "/" at the end by default, which breaks the behaviour of the command.
If you do:
cp -r ./src ./dest
it will copy the files from the src folder to the dest folder.
If you write:
cp -r ./src ./dest/
it will copy the src folder into the destination one, leaving you with /dest/src/*.
Also, my build pipeline failed when I tried to copy a file:
cp ./src/myfile.txt ./dest/myfile.txt
Adding the "/" at the end causes it to fail, as it attempts to dump the file into a directory that does not exist, since the actual command that runs is the following:
cp ./src/myfile.txt ./dest/myfile.txt/
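On a stock cp, outside any pipeline, the decisive factor is whether the destination directory already exists, which may be what the appended slash was surfacing. A quick check with made-up paths:

```shell
# If dest does not exist, it is created as a copy of src; if it does
# exist, src is nested inside it. Both calls use identical syntax.
work=$(mktemp -d); cd "$work"
mkdir -p src; echo hi > src/f.txt

cp -r src dest_new        # dest_new absent: created as a copy of src
mkdir dest_old
cp -r src dest_old        # dest_old present: src nested inside it

find dest_new dest_old -type f
# -> dest_new/f.txt
#    dest_old/src/f.txt
```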

GitLab CI syntax to write FOR loop statement?

Below is the script from the gitlab-ci.yml file. This GitLab CI configuration is valid, but when the CI/CD build runs, the job fails. Is it something to do with the FOR loop syntax?
deploy_dv:
  stage: deploy_dv
  variables:
    GIT_STRATEGY: none
  script:
    - echo "Deploying Artifacts..."
    - echo "Configure JFrog CLI with parameters of your Artifactory instance"
    - 'c:\build-tools\JFROG-CLI\jfrog rt config --url %ARTIFACTORY_WEBSITE% --user %ARTIFACTORY_USER% --apikey %APIKEY%'
    - 'cd ..\artifacts'
    - 'SETLOCAL ENABLEDELAYEDEXPANSION'
    - FOR %%i in (*) do (
      'c:\build-tools\curl\bin\curl.exe --header "PRIVATE-TOKEN:%HCA_ACCESS_TOKEN%" --insecure https://code.example.com/api/repository/tags/%CI_COMMIT_TAG% | c:\build-tools\jq\jq-win64.exe ".release.description" > temp.txt'
      'set /p releasenote=<temp.txt'
      'rem del temp.txt'
      'set mydate=%DATE:~6,4%-%DATE:~3,2%-%DATE:~0,2%'
      'c:\build-tools\JFROG-CLI\jfrog rt u "%%i" %ARTIFACTORY_ROOT_PATH%/%PROJECT_NAME%/%%i --build-name=%%i --build-number=%BUILDVERSION% --props releasenote=%releasenote%;releaseversion=%BUILDVERSION%;releasedate=%mydate% --flat=false'
      )
    - '%CURL% -X POST -F token=%REPOSITORY_TOKEN% -F ref=master -F "variables[RELEASE]=false" -F "variables[PROGRAM]=test" --insecure https://code.example.com/api/repository/trigger'
  only:
    - /^(dv-)(\d+\.)(\d+\.)(\d+)$/
I get the error below:
$ echo "Deploying Artifacts..."
"Deploying Artifacts..."
$ echo "Configure JFrog CLI with parameters of your Artifactory instance"
"Configure JFrog CLI with parameters of your Artifactory instance"
$ c:\build-tools\JFROG-CLI\jfrog rt config --url %ARTIFACTORY_WEBSITE% --user %ARTIFACTORY_USER% --apikey %APIKEY%
Artifactory server ID [Default-Server]: $ cd ..\artifacts
$ SETLOCAL ENABLEDELAYEDEXPANSION
$ FOR %%i in (*) do ( 'c:\build-tools\curl\bin\curl.exe --header "PRIVATE-TOKEN:%HCA_ACCESS_TOKEN%" --insecure https://code.example.com/api/repository/tags/%CI_COMMIT_TAG% | c:\build-tools\jq\jq-win64.exe ".release.description" > temp.txt' 'set /p releasenote=<temp.txt' 'rem del temp.txt' 'set mydate=%DATE:~6,4%-%DATE:~3,2%-%DATE:~0,2%' 'c:\build-tools\JFROG-CLI\jfrog rt u "%%i" %ARTIFACTORY_ROOT_PATH%/%PROJECT_NAME%/%%i --build-name=%%i --build-number=%BUILDVERSION% --props releasenote=%releasenote%;releaseversion=%BUILDVERSION%;releasedate=%mydate% --flat=false' )
The filename, directory name, or volume label syntax is incorrect.
ERROR: Job failed: exit status 255
Since there is still no good answer to this question, I will give it a try. I used this snippet to start multiple Docker builds for every directory in my repository. Notice the |+ and > characters, which let you put multi-line commands in YAML and are part of GitLab syntax.
Linux example:
build:
  stage: loop
  script:
    - |+
      for i in $(seq 1 3)
      do
        echo "Hello $i"
      done
Windows example:
build:
  stage: loop
  script:
    - >
      setlocal enabledelayedexpansion
      for %%a in ("C:\Test\*.txt") do (
      set FileName=%%~a
      echo Filename is: !FileName!
      )
      endlocal
Here is a working example of a job in a .gitlab-ci.yml with a loop, running on a GNU/Linux OS and using an sh/Bash shell:
edit:
  stage: edit
  script:
    - for file in $(find ${CI_PROJECT_DIR} -type f -name deployment.yml)
      do
        CURRENT_IMAGE=$(grep "image:" $file | cut -d':' -f2- | tr -d '[:space:]' | cut -d':' -f3)
        sed -ie "s/$CURRENT_IMAGE/$VERSION/g" "$file"
      done
  only:
    - master
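The grep/cut pipeline in that loop can be exercised outside CI; the sample deployment.yml contents and the registry path below are assumptions for the demo:

```shell
# Extract the current image tag (third ":"-separated field once spaces
# are stripped), then swap it for $VERSION in place.
work=$(mktemp -d); cd "$work"
VERSION=2.0.0
cat > deployment.yml <<'EOF'
spec:
  containers:
    - image: registry.example.com:5000/app:1.4.2
EOF

file=deployment.yml
CURRENT_IMAGE=$(grep "image:" $file | cut -d':' -f2- | tr -d '[:space:]' | cut -d':' -f3)
echo "$CURRENT_IMAGE"    # -> 1.4.2
sed -i -e "s/$CURRENT_IMAGE/$VERSION/g" "$file"
grep "image:" "$file"    # image line now carries the 2.0.0 tag
```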
I'm not an expert on GitLab Runner on Windows, but Windows Batch is the default shell there; you can also use PowerShell.
In .gitlab-ci.yml, anything you write under script: is shell, so a for loop works the same as it does in a shell script.
for var in ${NAME_1} ${NAME_2} ${NAME_3} ; do
  *----computations----*
done
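Filled in with placeholder values (the names below stand in for whatever CI/CD variables the job defines), the loop runs like any shell for loop:

```shell
# Placeholder values; in a pipeline these would be CI/CD variables.
NAME_1=alpha NAME_2=beta NAME_3=gamma

for var in ${NAME_1} ${NAME_2} ${NAME_3} ; do
  echo "processing $var"
done
# -> processing alpha
#    processing beta
#    processing gamma
```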