How can I set system PATH variables in a GitHub Actions workflow? On a local machine I would do something like:
export PATH="$PATH:$ANYTHING/SOMETHING:$AA/BB/bin"
You can use the following run command to set a system PATH variable in your Actions workflow.
Syntax:
echo "{path}" >> $GITHUB_PATH
- run: |
    echo "$AA/BB/bin" >> $GITHUB_PATH
Additionally, if you have downloaded some binaries and are trying to add them to the PATH: GitHub Actions checks your repository out into the directory referenced by the $GITHUB_WORKSPACE variable and uses it as the working directory, so you may need to prefix relative paths with that variable.
- run: |
    echo "$GITHUB_WORKSPACE/BB/bin" >> $GITHUB_PATH
If you are using the Bash shell:
- name: Add to PATH
  shell: bash
  run: |
    echo "Folder PATH" >> $GITHUB_PATH
For PowerShell as the shell:
- name: Add to PATH
  shell: pwsh
  run: |
    echo "Folder PATH" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
I am writing a bash script to automate setting environment variables for my project, but the variables do not persist after I execute it with sh env.sh (env.sh is my file name). I am able to get the value from AWS Secrets Manager, and an echo inside the script prints the variable, but when I run echo $variable after the script has finished, it returns nothing.
I tried replacing eval with source, but no luck.
I also searched Stack Overflow for the issue, but none of the answers helped.
Find the script below:
#!/usr/bin/env bash
if [[ "$OSTYPE" == "darwin"* ]]; then
  echo 'running'
  if ! [ -x "$(command -v aws)" ]; then
    echo 'AWS CLI is not installed. Installing aws............................' >&2
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    if ! [ "$(id -u)" = 0 ]; then
      echo "The script needs to be run as root." >&2
      exit 1
    fi
    sudo installer -pkg AWSCLIV2.pkg -target /
    if ! [ -x "$(command -v aws)" ]; then
      echo 'There was some issue installing the AWS CLI. Install aws-cli manually and then run the script!!!' >&2
      exit 1
    fi
    echo "Running aws configure; please enter the AWS access key and secret"
    aws configure
  fi
  # Dump the secret as KEY=VALUE lines, then export each one
  aws secretsmanager get-secret-value --secret-id abc --query SecretString --output text | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > /tmp/secrets.env
  eval "$(sed 's/^/export /' /tmp/secrets.env)"
fi
I am currently running this bash file on macOS, but I would like it to work on any OS.
If the file containing the environment variables is named setup_local_env.sh, try
source setup_local_env.sh
This will add them to your current session.
There is another, equivalent form known as dot sourcing:
. ./setup_local_env.sh
The reason directly running ./setup_local_env.sh does not work is that it creates a new bash process, sets the environment variables there, and they are lost once that new process exits.
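A quick way to see the difference, assuming setup_local_env.sh contains nothing but export MY_VAR="hello":
sh setup_local_env.sh    # child process: the export dies with it
echo "$MY_VAR"           # prints an empty line
. ./setup_local_env.sh   # current shell: the export persists
echo "$MY_VAR"           # prints "hello"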
The problem: I am trying to concatenate two variables for a copy command in a before_script of a GitLab CI/CD pipeline job.
What I expect: myfile_filesuffix
What I get: _filesuffix
Can anyone see what I am doing wrong? When I run this for loop on my local CLI I have no problems. Thank you!
before_script:
  - rm -rf .terraform
  - terraform --version
  - mkdir ~/.aws
  - echo "[default]" > ~/.aws/credentials
  - echo "aws_access_key_id=$AWS_ACCESS_KEY_ID" >> ~/.aws/credentials
  - echo "aws_secret_access_key=$AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
  - mkdir ./deployments
  - ls common
  - common_files=$(find common -type f)
  - echo $common_files
  - prefix_common=$(echo $common_files | cut -d"/" -f 1)
  - echo $prefix_common
  - for f in $common_files;
    do
    common_file="$(basename $f)"
    cp $f ./deployments/""${common_file}"_"${prefix_common}"";
    done
You can use the GitLab repo settings -> CI/CD -> Variables to add a FILE-type variable, then use the mv command to move it to your folder.
For example, with a FILE-type variable named ANSIBLE_VARIABLES_FILE:
script:
  - mv $ANSIBLE_VARIABLES_FILE ./deployments/variables_common.tf
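For context, a FILE-type variable expands to the path of a temporary file holding the value, which is why mv works on it. Roughly (the temporary path shown is illustrative, not exact):
echo "$ANSIBLE_VARIABLES_FILE"    # e.g. /builds/project/.tmp/ANSIBLE_VARIABLES_FILE
mv "$ANSIBLE_VARIABLES_FILE" ./deployments/variables_common.tf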
I have a file hosted in an S3 bucket with the following format:
var1=value1
var2=value2
var3=value3
I wish to create a bash script on my Linux box that, when executed, sets environment variables from the remote file. So far, I have tried the following:
#!/bin/sh
export $(aws s3 cp s3://secret-bucket/file.txt - | sed -e /^$/d -e /^#/d | xargs)
and:
#!/bin/sh
eval $(aws s3 cp s3://secret-bucket/file.txt - | sed 's/^/export /')
But neither of them seems to work, because when I execute printenv, the variables I need do not show up. Any help would be very much appreciated.
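As with the env.sh question above, the likely culprit is that executing the script runs it in a child process, so the exports disappear when it exits. Sourcing it into the current shell keeps them; a minimal sketch, assuming the script is saved as set_s3_env.sh (a hypothetical name):
. ./set_s3_env.sh
printenv | grep var1    # var1=value1 should now show up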
Below is the script mentioned in the .gitlab-ci.yml file. The GitLab CI configuration is valid, but when the CI/CD build is run, the job fails. Is it something to do with the FOR loop syntax?
deploy_dv:
  stage: deploy_dv
  variables:
    GIT_STRATEGY: none
  script:
    - echo "Deploying Artifacts..."
    - echo "Configure JFrog CLI with parameters of your Artifactory instance"
    - 'c:\build-tools\JFROG-CLI\jfrog rt config --url %ARTIFACTORY_WEBSITE% --user %ARTIFACTORY_USER% --apikey %APIKEY%'
    - 'cd ..\artifacts'
    - 'SETLOCAL ENABLEDELAYEDEXPANSION'
    - FOR %%i in (*) do (
      'c:\build-tools\curl\bin\curl.exe --header "PRIVATE-TOKEN:%HCA_ACCESS_TOKEN%" --insecure https://code.example.com/api/repository/tags/%CI_COMMIT_TAG% | c:\build-tools\jq\jq-win64.exe ".release.description" > temp.txt'
      'set /p releasenote=<temp.txt'
      'rem del temp.txt'
      'set mydate=%DATE:~6,4%-%DATE:~3,2%-%DATE:~0,2%'
      'c:\build-tools\JFROG-CLI\jfrog rt u "%%i" %ARTIFACTORY_ROOT_PATH%/%PROJECT_NAME%/%%i --build-name=%%i --build-number=%BUILDVERSION% --props releasenote=%releasenote%;releaseversion=%BUILDVERSION%;releasedate=%mydate% --flat=false'
      )
    - '%CURL% -X POST -F token=%REPOSITORY_TOKEN% -F ref=master -F "variables[RELEASE]=false" -F "variables[PROGRAM]=test" --insecure https://code.example.com/api/repository/trigger'
  only:
    - /^(dv-)(\d+\.)(\d+\.)(\d+)$/
I get the error below:
$ echo "Deploying Artifacts..."
"Deploying Artifacts..."
$ echo "Configure JFrog CLI with parameters of your Artifactory instance"
"Configure JFrog CLI with parameters of your Artifactory instance"
$ c:\build-tools\JFROG-CLI\jfrog rt config --url %ARTIFACTORY_WEBSITE% --user %ARTIFACTORY_USER% --apikey %APIKEY%
Artifactory server ID [Default-Server]: $ cd ..\artifacts
$ SETLOCAL ENABLEDELAYEDEXPANSION
$ FOR %%i in (*) do ( 'c:\build-tools\curl\bin\curl.exe --header "PRIVATE-TOKEN:%HCA_ACCESS_TOKEN%" --insecure https://code.example.com/api/repository/tags/%CI_COMMIT_TAG% | c:\build-tools\jq\jq-win64.exe ".release.description" > temp.txt' 'set /p releasenote=<temp.txt' 'rem del temp.txt' 'set mydate=%DATE:~6,4%-%DATE:~3,2%-%DATE:~0,2%' 'c:\build-tools\JFROG-CLI\jfrog rt u "%%i" %ARTIFACTORY_ROOT_PATH%/%PROJECT_NAME%/%%i --build-name=%%i --build-number=%BUILDVERSION% --props releasenote=%releasenote%;releaseversion=%BUILDVERSION%;releasedate=%mydate% --flat=false' )
The filename, directory name, or volume label syntax is incorrect.
ERROR: Job failed: exit status 255
Since there is still no good answer to this question, I will give it a try. I used this snippet to start multiple Docker builds for every directory in my repository. Notice the |+ and > characters, which are YAML block-scalar indicators that let you put multi-line commands in a script section.
Linux example:
build:
  stage: loop
  script:
    - |+
      for i in $(seq 1 3)
      do
        echo "Hello $i"
      done
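The |+ literal block scalar preserves each newline (the + also keeps trailing ones), so the shell receives the loop exactly as written, with no semicolons needed.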
Windows example:
build:
  stage: loop
  script:
    - >
      setlocal enabledelayedexpansion
      for %%a in ("C:\Test\*.txt") do (
        set FileName=%%~a
        echo Filename is: !FileName!
      )
      endlocal
Here is a working example of a job in a .gitlab-ci.yml with a loop, running on a GNU/Linux OS and using an sh/bash shell. Note the trailing semicolons, which keep the loop valid shell once YAML folds the lines into a single command:
edit:
  stage: edit
  script:
    - for file in $(find ${CI_PROJECT_DIR} -type f -name deployment.yml);
      do
      CURRENT_IMAGE=$(grep "image:" $file | cut -d':' -f2- | tr -d '[:space:]' | cut -d':' -f3);
      sed -ie "s/$CURRENT_IMAGE/$VERSION/g" "$file";
      done
  only:
    - master
I'm not an expert on GitLab Runner on Windows, but Windows Batch is the default shell used there; you can also use PowerShell.
In .gitlab-ci.yml, anything you write under script is shell, so a for loop works the same way it does in a shell script:
for var in ${NAME_1} ${NAME_2} ${NAME_3}; do
  # ----computations----
done
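For instance, filling the skeleton in with a concrete body (the NAME_* variables are placeholders):
for var in ${NAME_1} ${NAME_2} ${NAME_3}; do
  echo "processing $var"
done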
I have a bunch of Bash scripts and they each make use of the following:
BIN_CHATTR="/usr/bin/chattr";
BIN_CHKCONFIG="/sbin/chkconfig";
BIN_WGET="/usr/bin/wget";
BIN_TOUCH="/bin/touch";
BIN_TAR="/bin/tar";
BIN_CHMOD="/bin/chmod";
BIN_CHOWN="/bin/chown";
BIN_ECHO="/bin/echo";
BIN_GUNZIP="/usr/bin/gunzip";
BIN_PATCH="/usr/bin/patch";
BIN_FIND="/usr/bin/find";
BIN_RM="/bin/rm";
BIN_USERDEL="/usr/sbin/userdel";
BIN_GROUPDEL="/usr/sbin/groupdel";
BIN_MOUNT="/bin/mount";
Is there a way I could just wget a Bash script with global settings like that and then include them in the script I want to run?
Yes, you can put all those variables in a file like "settings.sh" and then do this in your scripts:
source settings.sh
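And since the question mentions wget, you can fetch the settings file first and then source it; a sketch with a hypothetical URL:
wget -q https://example.com/settings.sh -O /tmp/settings.sh
source /tmp/settings.sh
echo "$BIN_WGET"    # /usr/bin/wget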
You can keep your variables in a shell script and then source that file:
source /path/to/variables.sh
You should actually use ., which in bash is the same thing as source but offers better portability:
. /path/to/variables.sh
Yes you can. Just add your variables and functions to a file and source it at the top of any script that needs them; the file doesn't even need to be executable, since sourcing reads it rather than running it as a program. Here's an example:
$ pwd
/Users/joe/shell
$ cat lib.sh
#!/bin/bash
BIN_WGET="/usr/bin/wget"
BIN_MOUNT="/bin/mount"
function test() {
echo "This is a test"
}
$ cat script.sh
#!/bin/bash
. /Users/joe/shell/lib.sh
echo "wget=$BIN_WGET"
test
$ ./script.sh
wget=/usr/bin/wget
This is a test
Are you looking for the source command?
mylib.sh:
#!/bin/bash
JAIL_ROOT=/www/httpd
is_root(){
  # return 0 (success) when running as root, 1 otherwise
  [ "$(id -u)" -eq 0 ] && return 0 || return 1
}
test.sh:
#!/bin/bash
# Load mylib.sh using the source command
source mylib.sh
echo "JAIL_ROOT is set to $JAIL_ROOT"
# Invoke the is_root() and show message to user
is_root && echo "You are logged in as root." || echo "You are not logged in as root."
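Running test.sh from a normal (non-root) shell in the same directory prints:
$ bash test.sh
JAIL_ROOT is set to /www/httpd
You are not logged in as root.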
By the way, use rsync -a to mirror the scripts so that the +x flag is preserved.