Why does `exec bash` not work in a CI pipeline? - python-3.x

I have written a GitHub workflow file. I want to run a Python program in GitHub Actions to validate a few changes. I have one environment.yml file which contains all the conda environment dependencies required by this program. The thing is, the actual program is not running at all, and my workflow completes with success.
Following is the jobs section of the workflow.yml file:
jobs:
  build-linux:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Cache conda
        uses: actions/cache@v2
        env:
          # Increase this value to reset cache if etc/example-environment.yml has not changed
          CACHE_NUMBER: 0
        with:
          path: ~/conda_pkgs_dir
          key:
            ${{ runner.os }}-conda-${{ env.CACHE_NUMBER }}-${{ hashFiles('**/environment.yml') }}
      - uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: test-env
          environment-file: environment.yml
          use-only-tar-bz2: true # IMPORTANT: This needs to be set for caching to work properly!
      - name: Test
        run: |
          export PATH="./:$PATH"
          conda init bash
          exec bash
          conda activate test-env
          echo "Conda prefix: $CONDA_PREFIX"
          python test.py
        shell: bash
I also tried removing shell: bash from the last step, but that gives the same result.
The log of the last step looks like this:
Run export PATH="./:$PATH"
export PATH="./:$PATH"
conda init bash
exec bash
conda activate test-env
echo "Conda prefix: $CONDA_PREFIX"
python test.py
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.11/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.11/x64/lib
CONDA_PKGS_DIR: /home/runner/conda_pkgs_dir
no change /usr/share/miniconda/condabin/conda
no change /usr/share/miniconda/bin/conda
no change /usr/share/miniconda/bin/conda-env
no change /usr/share/miniconda/bin/activate
no change /usr/share/miniconda/bin/deactivate
no change /usr/share/miniconda/etc/profile.d/conda.sh
no change /usr/share/miniconda/etc/fish/conf.d/conda.fish
no change /usr/share/miniconda/shell/condabin/Conda.psm1
no change /usr/share/miniconda/shell/condabin/conda-hook.ps1
no change /usr/share/miniconda/lib/python3.9/site-packages/xontrib/conda.xsh
no change /usr/share/miniconda/etc/profile.d/conda.csh
modified /home/runner/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
As we can clearly see, the line echo "Conda prefix: $CONDA_PREFIX" is never executed, yet the workflow terminates with success. We would expect it to either run or fail the job, but nothing happens: the remaining commands are simply ignored and the workflow is marked as a success.

Your CI script contains the line:
exec bash
When this line is executed, the shell process is replaced with a new one, and the new shell process has no idea it should continue executing the script the previous process was running: all the execution state is lost. GitHub Actions passes the script to execute as a command-line argument to the initial shell process and sets standard input to /dev/null; since the new shell process is started with an empty command line and an empty file on standard input, it simply exits immediately. The fact that this works well in an interactive shell is something of a lucky coincidence.
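You can reproduce this behaviour locally, outside of any CI; a minimal sketch (plain bash, nothing GitHub-specific assumed):
# only "before" is printed: exec replaces the shell, the replacement bash
# reads /dev/null, sees EOF and exits, and "after" is never reached
bash -c 'echo before; exec bash </dev/null; echo after'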
The reason the installer directs you to restart your shell is to apply the environment variable changes added to the shell’s initialisation file. As such, it should probably be enough to replace the exec bash line with
source "$HOME/.bashrc"
However, even with this line the environment modifications will not be applied in subsequent steps, as the documentation of the setup-miniconda action warns:
Bash shells do not use ~/.profile or ~/.bashrc so these shells need to be
explicitly declared as shell: bash -l {0} on steps that need to be properly
activated (or use a default shell). This is because bash shells are executed
with bash --noprofile --norc -eo pipefail {0}, thus ignoring updates to bash
profile files made by conda init bash. See the
GitHub Actions documentation
and
this thread.
Based on this advice, I think the best course of action is to end the actions step at the point where you would put exec bash, and apply the shell: setting to all further steps (or at least those which actually need it):
- name: Set up Conda environment
  run: |
    export PATH="./:$PATH"
    conda init bash
- name: Perform Conda tests
  shell: bash -l {0}
  run: |
    export PATH="./:$PATH"
    conda activate test-env
    echo "Conda prefix: $CONDA_PREFIX"
    python test.py

As @user3840170 mentioned, bash shells do not use ~/.profile or ~/.bashrc. A way to make it work, then, is to run what the conda initialization would run. On GitHub Actions the path to the conda installation is in the variable $CONDA, so you can use it to run the initialization on each step that needs conda activate. The following code worked for me on GitHub Actions (the one provided above didn't work in my case).
- name: Set up Conda environment
  run: |
    echo "${HOME}/$CONDA/bin" >> $GITHUB_PATH
    conda init --all --dry-run
- name: On the steps you want to use
  shell: bash
  run: |
    source $CONDA/etc/profile.d/conda.sh
    conda activate test-env
    python test.py

Related

bash - Unable to set environment variable using script

I have a script that gets called in a Dockerfile entrypoint:
ENTRYPOINT ["/bin/sh", "-c", "/var/run/Scripts/entrypoint.sh"]
I need to set an environment variable based on a value in a file. I am using the following command to retrieve the value: RSYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)
Locally, this command works and even running the command from the same directory that the entrypoint script is located in will result in the env var being set.
However, adding this command after export (export SYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)) in the entrypoint script does not result in the env var being set.
Additionally, trying to use another script and sourcing the script within the entrypoint script also does not work:
#!/bin/bash
. ./rsyslog_listen_port.sh
I am unable to use source as I get a source: not found error - I have tried a few different ways to use source but it doesn't seem compatible.
Can anyone help? I have spent too much time trying to get this to work for what seems like a relatively simple task.
A container only runs one process, and then it exits. A pattern I find useful here is to make the entrypoint script be a wrapper that does whatever first-time setup is useful, then exec the main container process:
#!/bin/sh
# set the environment variable
export SYSLOG_LISTEN_PORT=$(sed -nE 's/.*port="([^"]+)".*/\1/p' /etc/rsyslog.d/0_base.conf)
# then run the main container command
exec "$@"
In your Dockerfile, set the ENTRYPOINT to this script (it must use JSON-array syntax, and it must not have an explicit sh -c wrapper) and CMD to whatever you would have set it to without this wrapper.
ENTRYPOINT ["/var/run/Scripts/entrypoint.sh"]
CMD ["rsyslog"]
Note that this environment variable will be set for the main container process, but not for docker inspect or a docker exec debugging shell. Since the wrapper sets up the environment variable and then runs the main container process, you can replace the command part (only) when you run the container to see this.
docker run --rm your-image env | grep SYSLOG_LISTEN_PORT
(source is a bash-specific extension. POSIX shell . does pretty much the same thing, and I'd always use . in preference to source.)
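A quick sketch of that difference, assuming dash is installed (it provides /bin/sh on Debian/Ubuntu images) and the rsyslog_listen_port.sh file from the question exists in the current directory:
dash -c '. ./rsyslog_listen_port.sh && echo "dot works"'  # the POSIX form succeeds
dash -c 'source ./rsyslog_listen_port.sh'                 # fails: source: not found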

Dockerfile set ENV based on npm package version [duplicate]

Is it possible to set a Docker ENV variable to the result of a command?
Like:
ENV MY_VAR whoami
I want MY_VAR to get the value "root" or whatever whoami returns.
As an addition to DarkSideF's answer:
You should be aware that each line/command in a Dockerfile is run in another container.
You can do something like this:
RUN export bleah=$(hostname -f);echo $bleah;
This is run in a single container.
At this time, a command result can be used with RUN export, but cannot be assigned to an ENV variable.
Known issue: https://github.com/docker/docker/issues/29110
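A minimal sketch of that limitation (requires Docker with BuildKit; the env-demo tag is arbitrary):
# each RUN is a separate shell in a separate layer, so the export does not survive
docker build --progress=plain -t env-demo - <<'EOF'
FROM alpine
RUN export NOW=$(date +%s) && echo "first RUN: NOW=$NOW"
RUN echo "second RUN: NOW=${NOW:-gone}"
EOF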
I had the same issue and found a way to set an environment variable as the result of a command by using RUN in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing the way it would each time I run:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and re-runs it only if you change the lines before it.
You can also try different ways to set an environment variable.
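As a hedged check of the claim (the image name my-image is assumed): on Debian-based images a login shell reads /etc/profile, which in turn pulls in /etc/bash.bashrc and /etc/profile.d/*.sh, so the variable should be visible like this:
docker run --rm my-image bash -l -c 'echo "$SECRET_KEY_BASE"'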
This answer is a response to @DarkSideF.
The method he is proposing is the following, in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
(adding an export to /etc/bash.bashrc)
It is good, but the environment variable will only be available to the /bin/bash process; if you try to run your Docker application, for example a Node.js application, /etc/bash.bashrc will be completely ignored and your application won't have a single clue what SECRET_KEY_BASE is when trying to access process.env.SECRET_KEY_BASE.
That is the reason why ENV is what everyone tries to use with a dynamic command: every time you run your container or use an exec command, Docker will check ENV and pipe every value into the process currently being run (similar to -e).
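A small illustration of that mechanism (the node:18-alpine image and the command are just examples): the value passed with -e lands directly in the process environment, no shell profile files involved:
docker run --rm -e SECRET_KEY_BASE=dummy node:18-alpine \
  node -e 'console.log(process.env.SECRET_KEY_BASE)'  # prints "dummy"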
One solution is to use a wrapper (credit to @duglin in this GitHub issue).
Have a wrapper file (e.g. envwrapper) in your project root containing:
#!/bin/bash
export SECRET_KEY_BASE="$(openssl rand -hex 64)"
export ANOTHER_ENV="hello world"
exec "$@"  # hand off to the wrapped command; "$@" preserves argument quoting
and then in your Dockerfile :
...
COPY . .
RUN mv envwrapper /bin/.
RUN chmod 755 /bin/envwrapper
CMD envwrapper myapp
If you run commands using sh, as seems to be the default in Docker, you can do something like this:
RUN echo "export VAR=`command`" >> /envfile
RUN . /envfile; echo $VAR
This way, you build an env file by redirecting output to the env file of your choice. It's more explicit than having to define profiles and so on.
Then, as the file will be available to other layers, it will be possible to source it and use the variables being exported. The way you create the env file isn't important.
Then, when you're done, you can remove the file to make it unavailable to the running container.
The . is how the env file is loaded.
As an addition to @DarkSideF's answer, if you want to reuse the result of a previous command in your Dockerfile during the build process, you can use the following workaround:
run a command, store the result in a file
use command substitution to get the previous result from that file into another command
For example:
RUN echo "bla" > ./result
RUN echo $(cat ./result)
For something cleaner, you can also use the following gist, which provides a small CLI called envstore.py:
RUN envstore.py set MY_VAR bla
RUN echo $(envstore.py get MY_VAR)
Or you can use python-dotenv library which has a similar CLI.
Not sure if this is what you were looking for, but in order to inject ENV vars or ARGs into your .Dockerfile build, this pattern works.
In your my_build.sh:
echo getting version of osbase image to build from
OSBASE=$(grep "osbase_version" .version | sed 's/^.*: //')
echo building docker
docker build \
  -f PATH_TO_MY.Dockerfile \
  --build-arg ARTIFACT_TAG=$OSBASE \
  -t my_artifact_home_url/bucketname:$TAG .
for getting an ARG in your .Dockerfile, the snippet might look like this (the ARG has to be declared before the FROM that uses it):
ARG ARTIFACT_TAG
FROM my_artifact_home_url/bucketname:${ARTIFACT_TAG}
alternatively for getting an ENV in your .Dockerfile the snippet might look like this:
FROM someimage:latest
ARG ARTIFACT_TAG
ENV ARTIFACT_TAG=${ARTIFACT_TAG}
The idea is that you run the shell script, and it calls the .Dockerfile with the args passed in as options to the build.

Create a spawn shell in Makefile

I want to create a makefile target on Ubuntu to spawn a poetry shell. Here are the things I want to do if I were in the command shell:
type poetry shell, which is going to spawn a shell within the virtual environment.
do something in the poetry shell, such as executing a python script using the command python ...
To facilitate the process, I want to create a makefile looking something like this:
# if I can set SHELL in a specific way
# SHELL = ?????
foo:
	poetry shell
	echo "Success"
	# many lines to be executed in the poetry shell, here is an example
	python <a_python_file>
The problem, as I found, is that execution hangs after poetry shell and echo "Success" is never executed.
I know this could be a general question on spawning a shell from command shell, and it is not limited to poetry. Any comments/suggestions would be appreciated.
As a comment pointed out, what I really want is python ... instead of poetry run python .... I edited it.
As a comment pointed out, I added some pseudo code in the makefile.
I think there's some misunderstanding. I'd never heard of poetry before but a quick look at its manual makes clear how it works.
If you run poetry shell then you get an interactive shell which you are expected to type commands into from your keyboard. The reason it "hangs" is that it started a poetry shell and is now waiting for you to enter commands. It's not hung, it's waiting for some input.
You don't want to run an interactive set of poetry commands, you have a predefined set of poetry commands you want to run. For that, you would use poetry run as mentioned in the comments above:
foo:
	poetry run python <first command>
	poetry run python <second command>
	...
	echo "Success"
If you want to run all the commands within a single instance of poetry, you have to combine them all into a single invocation, maybe something like this (I didn't try this so the quoting might be wrong):
foo:
	poetry run 'python <first command> && python <second command> ...'
	echo "Success"
You could do this:
foo:
	poetry run $(MAKE) in-poetry
	echo "Success"

in-poetry:
	python <command1>
	python <command2>
Now if you run make foo all the commands in the in-poetry target are run within the poetry environment, because poetry run runs a make program in its environment, and that make program runs a bunch of python.
But if someone ran make in-poetry directly (not via the foo target), then those python operations would not be run inside a poetry environment (unless the user set one up before they ran make).

shell script that matches string in git commit message and exports it

I am trying to write a shell script (bash).
The aim of the script is:
to get the message of the last git commit
to grab any content inside the []-parentheses of the last git commit message
to export that content into an environment variable called GIT_COMMIT_MESSAGE_CONTEXT
Example:
Last git commit message = "[Stage] - gitlab trial"
The exported environment variable should then give:
echo $GIT_COMMIT_MESSAGE_CONTEXT
Stage
I found the following, to get the message of the last git commit:
echo $(git log -1 --pretty=%B)
[Stage] - gitlab trial
I am new to bash scripts and therefore my attempt (see below) is somewhat poor so far.
Maybe somebody with more experience can help out here.
My bash script (my-bash-script.sh) looks as follows:
#!/usr/bin/bash
# get last git commit Message
last_git_commit_message="$(git log -1 --pretty=%B)"
export LAST_GIT_COMMIT_MESSAGE="$last_git_commit_message"
I run the bash-script in a terminal as follows:
bash my-bash-script.sh
After closing/re-opening Terminal, I type:
echo $LAST_GIT_COMMIT_MESSAGE
Unfortunately without any result.
Here are my questions:
Why do I not get any env-variable echo after running the bash script?
How do I extract the content of the []-parentheses of the last git commit message?
How do I re-write my script?
The script seems fine, but the approach is flawed. Bash can only export variables to subshells, not vice versa. When you call a script, a new shell is started; all variables in that shell, even the exported ones, are lost after the script exits. See also here.
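You can see this with a one-liner (plain bash, nothing from the question assumed):
bash -c 'export FOO=bar'  # FOO exists only inside that child shell
echo "FOO=${FOO:-unset}"  # prints "FOO=unset": the export died with the child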
Some possible ways around that problem:
Source the script.
Let the script print the value and capture its output: variable=$(myScript)
Write the script as a bash function inside your .bashrc.
Depending on what you want to do I recommend 2. or 3. To do 3., put the following in your ~/.bashrc (or ~/.bash_profile if you are using Mac OS) file, start a new shell and use the command extractFromLastCommit as if it were a script.
extractFromLastCommit() {
  export LAST_GIT_COMMIT_MESSAGE=$(
    git log -1 --pretty=%B |
      grep -o '\[[^][]*\]' |
      tr -d '[]' |
      head -n1 # only take the first "[…]" – remove this line if you want all
  )
}
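A hypothetical run, using the example commit message from the question:
extractFromLastCommit
echo "$LAST_GIT_COMMIT_MESSAGE"  # prints "Stage" for "[Stage] - gitlab trial"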
bash my-bash-script.sh
Starts a new bash process into which your var is exported, then it exits and takes its environment with it.
source my-bash-script.sh
Executes the script in the current shell context and will have the desired effect.

Global modified $PATH in Ansible not working as expected from regular Linux shell

I want to download some binaries like Helm and make them globally available using $PATH. To avoid downloading with root privileges, or alternatively having two steps (download with a standard user and move to some bin folder in $PATH like /usr/local/bin), my idea was to create $HOME/bin and add it to $PATH.
This blog article was used to add a custom path to /etc/environment. Then I reload it as described here. This is my POC playbook where I try to add exa:
- name: Test
  hosts: all
  vars:
    - bin: /home/vagrant/bin
  tasks:
    - name: Test task
      file:
        path: "{{bin}}"
        state: directory
    - name: Add {{bin}} to path
      become: yes
      lineinfile: >
        dest=/etc/environment
        state=present
        backrefs=yes
        regexp='PATH=(["]*)((?!.*?{{bin}}).*?)(["]*)$'
        line="PATH=\1\2:{{bin}}\3"
    - name: Check path1
      shell: echo $PATH
    - name: Download exa
      unarchive:
        src: https://github.com/ogham/exa/releases/download/v0.8.0/exa-linux-x86_64-0.8.0.zip
        dest: "{{bin}}"
        remote_src: yes
    - name: reload env file
      shell: for env in $( cat /etc/environment ); do export $(echo $env | sed -e 's/"//g'); done
    - name: Check path after reload env file
      shell: echo $PATH
    - name: Test exa from PATH
      shell: exa-linux-x86_64 --version
In the last task, Test exa from PATH, it throws the error:
"stderr": "/bin/sh: 1: exa-linux-x86_64: not found"
Both echo $PATH commands still print
"stdout": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
But the modified /etc/environment works. When I SSH into the machine without Ansible, $PATH is fine and exa-linux-x86_64 --version works, too:
~$ echo $PATH
/home/vagrant/bin:/home/vagrant/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/vagrant/bin:/snap/bin
Environment
Ubuntu 18.04 host system running Vagrant with a Ubuntu 16.04 box. Ansible was executed by Vagrant on Ubuntu 16.04.
Possible workarounds
Using separate env variable
When setting the environment variable like this
- name: Test exa from PATH
  shell: exa-linux-x86_64 --version
  environment:
    PATH: "{{ ansible_env.PATH }}:{{bin}}"
it works. But I have to apply those lines at least at play level. I would like to set it globally, like /etc/environment does for regular shells. This question seems to have the same target, but on Solaris. The answer seems to only set $PATH from the host, which isn't useful for me since the custom bin dir isn't there.
Use only absolute paths
- name: Test exa from PATH
  shell: "{{bin}}/exa-linux-x86_64 --version"
This causes less overhead, but you have to remember to always prefix your commands with the path variable. It also seems error-prone.
Understanding the problem
I want a real solution and to understand what is causing the problem. It's not clear to me why $PATH modification is so hard in Ansible, when it can be done quite easily in the underlying Linux system. This question says we don't have an interactive session in Ansible, so the modified $PATH isn't available. According to the documentation, we can achieve this by passing -l to bash. So I found the following working:
- name: Test exa from PATH
  shell: bash -l -c "exa-linux-x86_64 --version"
But the following results in an error:
- name: Test exa from PATH
  shell: exa-linux-x86_64 --version
  args:
    executable: /bin/bash -l
Here Ansible breaks the command with wrong quoting of the args:
"'/bin/bash -l' -c 'exa-linux-x86_64 --version'"
This ticket recommends that the Ansible team fix this, so that we get a login shell with $PATH. Since 2014, no real solution has been provided.
Questions
What is the purpose of the different shell types that do or do not get access to the modified $PATH?
Why does Ansible complicate things? Wouldn't it be easier to provide a login shell that solves this issue? Are there reasons why they did what they did?
How can we deal with the resulting problems? What is best practice?
To answer your questions:
1) What is the purpose of the different shell types that do or do not get access to the modified $PATH?
It's not worth the effort to unify this across all operating systems supported by Ansible. This also explains "It's not clear to me why $PATH modification is so hard in Ansible, when it can be done quite easily in the underlying Linux system."
2) Why does Ansible complicate things? Wouldn't it be easier to provide a login shell that solves this issue? Are there reasons why they did what they did?
On the contrary, it's easier not to care about it. "Keep it simple, stupid" is the reason.
3) How can we deal with the resulting problems? What is best practice?
Best practice over decades has been "tools, not policy".
