Passing arguments to an inline script Bash task fails? [duplicate] - azure

This question already has answers here:
Are shell scripts sensitive to encoding and line endings?
(14 answers)
Unbound variable with bash script
(1 answer)
How to run a bash script with arguments in azure devops pipeline?
(3 answers)
Closed last month.
I have an inline script as shown below. The requirement is that different variables may need to be passed in, and they are secret. In Azure Pipelines, we can use variable groups and encrypt them; that idea is used below. According to the documentation (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/bash-v3?view=azure-pipelines), arguments can be passed only if we write a shell script file and execute that file through the task.
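For reference, the file-based form the documentation describes looks roughly like this (the path scripts/script.sh here is just a placeholder, not my actual layout):

- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/script.sh'
    arguments: '$(SOURCE_URL)'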
However, the shell script file fails at the set -eufo pipefail command and it throws the following error:
/home/vsts/work/1/s/script.sh: line 2: set: pipefail
: invalid option name
##[error]Bash exited with code '2'.
Hence, there is an inline script as below:
variables:
  - group: ScheduledUpdate

steps:
  - task: Bash@3
    inputs:
      targetType: 'inline'
      arguments: '$(SOURCE_URL)'
      script: |
        #!usr/bin/sh
        set -eufo pipefail
        echo "Argument 1 is $1"
        echo "done"
However, there is a failure with respect to passing arguments. I get the following error:
/home/pkapikad/myagent/_work/_temp/7dfddc0a-4a40-4235-9752-c1c0e136372c.sh: line 3: $1: unbound variable
##[error]Bash exited with code '1'.
Finishing: Bash
Why does this happen?
How can we pass arguments to an inline script inside a Bash task in Azure DevOps?

Related

Gitlab: Fail job in "after_script"?

Consider this .gitlab-ci.yml:
variables:
  var1: "bob"
  var2: "bib"

job1:
  script:
    - "[[ ${var1} == ${var2} ]]"

job2:
  script:
    - echo "hello"
  after_script:
    - "[[ ${var1} == ${var2} ]]"
In this example, job1 fails as expected but job2 succeeds, incomprehensibly. Can I force a job to fail in the after_script section?
Note: exit 1 has the same effect as "[[ ${var1} == ${var2} ]]".
The status of a job is determined solely by its script:/before_script: sections (the two are simply concatenated together to form the job script).
after_script: is a completely different construct -- it is not part of the job script. It is mainly for taking actions after a job is completed. after_script: runs even when jobs fail beforehand, for example.
Per the docs: (emphasis added on the last bullet)
Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. As a result, they:
- Have the current working directory set back to the default (according to the variables which define how the runner processes Git requests).
- Don't have access to changes done by commands defined in the before_script or script, including:
  - Command aliases and variables exported in script scripts.
  - Changes outside of the working tree (depending on the runner executor), like software installed by a before_script or script script.
- Have a separate timeout, which is hard-coded to 5 minutes.
- Don't affect the job's exit code. If the script section succeeds and the after_script times out or fails, the job exits with code 0 (Job Succeeded).
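Since after_script: cannot influence the job status, any check that should fail the job has to live in script: (or before_script:). A minimal sketch based on the .gitlab-ci.yml above:

job2:
  script:
    - echo "hello"
    # the comparison now runs as part of the job script, so a mismatch fails the job
    - "[[ ${var1} == ${var2} ]]"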

Why does `exec bash` not work in a CI pipeline?

I have written a GitHub workflow file. I want to run a Python program in GitHub Actions to validate a few changes. I have an environment.yml file which contains all the conda environment dependencies required by this program. The thing is, the actual program is not running at all, and my workflow completes with success.
Following is jobs section of workflow.yml file
jobs:
  build-linux:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Cache conda
        uses: actions/cache@v2
        env:
          # Increase this value to reset cache if etc/example-environment.yml has not changed
          CACHE_NUMBER: 0
        with:
          path: ~/conda_pkgs_dir
          key:
            ${{ runner.os }}-conda-${{ env.CACHE_NUMBER }}-${{ hashFiles('**/environment.yml') }}
      - uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: test-env
          environment-file: environment.yml
          use-only-tar-bz2: true  # IMPORTANT: This needs to be set for caching to work properly!
      - name: Test
        run: |
          export PATH="./:$PATH"
          conda init bash
          exec bash
          conda activate test-env
          echo "Conda prefix: $CONDA_PREFIX"
          python test.py
        shell: bash
I also tried removing shell: bash in the last step, but that gives the same result.
The logs in last step looks like this:
Run export PATH="./:$PATH"
export PATH="./:$PATH"
conda init bash
exec bash
conda activate test-env
echo "Conda prefix: $CONDA_PREFIX"
python test.py
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.11/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.11/x64/lib
CONDA_PKGS_DIR: /home/runner/conda_pkgs_dir
no change /usr/share/miniconda/condabin/conda
no change /usr/share/miniconda/bin/conda
no change /usr/share/miniconda/bin/conda-env
no change /usr/share/miniconda/bin/activate
no change /usr/share/miniconda/bin/deactivate
no change /usr/share/miniconda/etc/profile.d/conda.sh
no change /usr/share/miniconda/etc/fish/conf.d/conda.fish
no change /usr/share/miniconda/shell/condabin/Conda.psm1
no change /usr/share/miniconda/shell/condabin/conda-hook.ps1
no change /usr/share/miniconda/lib/python3.9/site-packages/xontrib/conda.xsh
no change /usr/share/miniconda/etc/profile.d/conda.csh
modified /home/runner/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
As we can clearly see, the line echo "Conda prefix: $CONDA_PREFIX" is not getting executed at all, and the workflow terminates with success. We should expect it to either run or fail the job, but nothing happens. The workflow simply ignores these commands and marks the workflow as success.
Your CI script contains the line:
exec bash
When this line is executed, the shell process is replaced with a new one, and the new shell process has no idea it should continue executing the script the previous process was: all the execution state is lost. GitHub Actions passes the script to execute as a command-line argument to the initial shell process and sets standard input to /dev/null; as the new shell process is started with an empty command line and an empty file on standard input, it simply exits immediately. The fact that this works well with an interactive shell is something of a lucky coincidence.
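A small standalone illustration of the same effect (this is not from the workflow, just a sketch you can run locally):

#!/usr/bin/env bash
echo "before exec"
# exec replaces the current shell process; the remainder of this script is
# never handed to the new process, so the last line below never runs
exec bash -c 'echo "hello from the replacement shell"'
echo "this line is never printed"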
The reason the installer directs you to restart your shell is to apply the environment variable changes added to the shell’s initialisation file. As such, it should probably be enough to replace the exec bash line with
source "$HOME/.bashrc"
However, even with this line the environment modifications will not be applied in subsequent steps, as the documentation of the setup-miniconda action warns:
Bash shells do not use ~/.profile or ~/.bashrc, so these shells need to be explicitly declared as shell: bash -l {0} on steps that need to be properly activated (or use a default shell). This is because bash shells are executed with bash --noprofile --norc -eo pipefail {0}, thus ignoring updates to bash profile files made by conda init bash. See the GitHub Actions documentation and this thread.
Based on this advice, I think the best course of action is to end the actions step at the point where you would put exec bash, and apply the shell: setting to all further steps (or at least those which actually need it):
- name: Set up Conda environment
  run: |
    export PATH="./:$PATH"
    conda init bash
- name: Perform Conda tests
  shell: bash -l {0}
  run: |
    export PATH="./:$PATH"
    conda activate test-env
    echo "Conda prefix: $CONDA_PREFIX"
    python test.py
As @user3840170 mentioned, bash shells do not use ~/.profile or ~/.bashrc. A way to make it work, then, is to run what the conda initialization would run. On GitHub Actions the path to the conda installation is in the variable $CONDA, so you can use it to run the initialization on each step that needs conda activate. The following code worked for me on GitHub Actions (the one provided above didn't work in my case).
- name: Set up Conda environment
  run: |
    echo "${HOME}/$CONDA/bin" >> $GITHUB_PATH
    conda init --all --dry-run
- name: On the steps you want to use
  shell: bash
  run: |
    source $CONDA/etc/profile.d/conda.sh
    conda activate test-env
    python test.py

Open multiple tabs and execute command in shell script

#!/bin/bash
tab="--tab"
cmd="bash -c 'python';bash"
foo=""
for i in 1 2 3; do
  foo+=($tab -e "$cmd")
done
gnome-terminal "${foo[@]}"
exit 0
I'm using this script to open multiple tabs from a shell script.
Call it multitab.sh and execute it this way: user@user:~$ sh multitab.sh
Currently this script is supposed to open 3 tabs, all of which execute the python command.
But when I execute it, it throws an error:
multitab.sh: 8: multitab.sh: Syntax error: word unexpected (expecting ")")
What is the reason for this error? How can I make this script execute 3 different commands?
I've already gone through the threads below, but none of them worked for me.
https://askubuntu.com/questions/315408/open-terminal-with-multiple-tabs-and-execute-application
https://askubuntu.com/questions/500357/opening-multiple-terminal-tabs-and-running-command
https://askubuntu.com/questions/521084/bash-script-for-multiple-tabs-program-running
This is because you are running the script with sh, where the += syntax to add elements is not available:
foo+=($tab -e "$cmd")
# ^^
So all you need to do is to run the script with Bash:
bash multitab.sh
Or just run ./multitab.sh (after making the file executable), since the shebang in the script (#!/bin/bash) already specifies Bash.
From the Bash Reference Manual:
Appendix B Major Differences From The Bourne Shell
- Bash supports the ‘+=’ assignment operator, which appends to the value of the variable named on the left hand side.
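To see the difference in isolation, here is a small sketch (assuming bash; the words collected into the array are only examples):

#!/bin/bash
# += appends to an array in bash; under dash (what `sh` often is) it is a syntax error
foo=()                                    # an explicit empty array is cleaner than foo=""
foo+=(--tab -e "bash -c 'python';bash")
printf '%s\n' "${foo[@]}"                 # print each collected word on its own line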

Unable to export the variable through script file [duplicate]

This question already has answers here:
Global environment variables in a shell script
(7 answers)
Closed 5 years ago.
I am trying to export variables through myDeploy.sh, but the export is not getting set: when I echo the variable, nothing is printed. However, when I set the variable explicitly at the command line, it sets properly and echoes too. Below is a snippet of my code.
myDeploy.sh
#!/bin/bash
# export the build root
export BUILD_ROOT=/tibco/data/GRISSOM2
export CUSTOM1=/tibco/data/GRISSOM2/DEPLOYMENT_ARTIFACTS/common/MDR_ITEM_E1/rulebase
export CLEANUP=$BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common
cd $BUILD_ROOT/DEPLOYMENT_ARTIFACTS/common
When I run echo $BUILD_ROOT it does not print the path for me. But when I do it explicitly at the command prompt, like
[root@krog3-rhel5-64 GRISSOM2]# export BUILD_ROOT=/tibco/data/GRISSOM2
it sets properly and echoes too. What am I missing?
Running your script like
. ./script
or
source script
would execute your script in the current shell context (without creating a subshell) and the environment variables set within the script would be available in your current shell.
From the manual:
. filename [arguments]
Read and execute commands from the filename argument in the current
shell context. If filename does not contain a slash, the PATH variable
is used to find filename. When Bash is not in POSIX mode, the current
directory is searched if filename is not found in $PATH. If any
arguments are supplied, they become the positional parameters when
filename is executed. Otherwise the positional parameters are
unchanged. The return status is the exit status of the last command
executed, or zero if no commands are executed. If filename is not
found, or cannot be read, the return status is non-zero. This builtin
is equivalent to source.
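A quick way to see the difference with the script from the question (assuming you run these commands from the directory containing myDeploy.sh):

./myDeploy.sh        # runs in a child process, so its exports disappear when it exits
echo "$BUILD_ROOT"   # prints an empty line

. ./myDeploy.sh      # runs in the current shell context
echo "$BUILD_ROOT"   # prints /tibco/data/GRISSOM2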

Launch shell scripts from Jenkins

I'm a complete newbie to Jenkins.
I'm trying to get Jenkins to monitor the execution of my shell scripts so that I don't have to launch them manually each time, but I can't figure out how to do it.
I found out about the "monitor external job" option but I can't configure it correctly.
I know that Jenkins can understand shell script exit codes, so this is what I did:
test1() {
    ls /home/user1 | grep $2
    case $? in
        0) msg_error 0 "Okay."
           ;;
        *) msg_error 2 "Error."
           ;;
    esac
}
It's a simplified version of my functions.
I execute them manually, but I want to launch them from Jenkins with arguments and get the results, of course.
Can this be done?
Thanks.
You might want to consider setting up an Ant build that executes your shell scripts by using Ant's Exec command:
http://ant.apache.org/manual/Tasks/exec.html
By setting the Exec task's failonerror parameter to true, you can have the build fail if your shell script returns an error code.
To use parameters in your shell you can always pass them directly. For example:
Define a string parameter in your job: Param1=test_param
In your shell you can use $Param1 and it will carry the value "test_param".
Regarding the output, everything you do under the shell section is only relevant to that shell session. You can try to write your output into a key=value text file in the workspace and inject the results using the EnvInject plugin. Then you can access the value as if you had defined it as a parameter for the job. In the example above, after injecting the file, executing echo $Param1 in a shell will print "test_param".
Hope it's helpful!
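A rough sketch of that second suggestion in an "Execute shell" build step (the parameter name Param1 and the file name results.properties are only examples):

#!/bin/bash
# Param1 is assumed to be a string parameter defined on the Jenkins job
echo "Param1 is $Param1"

# write results as key=value pairs; the EnvInject plugin can read this file
# back in so later build steps see TEST_RESULT as an environment variable
echo "TEST_RESULT=passed" > results.properties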
