I'm using the Docker plugin for Bamboo and I need to execute a script in the Docker container.
The sh script contains:
echo \"ini_source_path\": \"${bamboo.ini_source_path}\",
and if I put this line directly in Container Command, ${bamboo.ini_source_path} will be replaced with the value of this variable.
The problem is when I put /bin/bash script.sh in Container Command, because then I get an error:
script.sh: line 35: \"${bamboo.ini_source_path}\",: bad substitution
Is there a way I can reach the bamboo.ini_source_path variable from my script in the Docker container?
Thanks!
What version of Bamboo are you using? This problem was fixed in Bamboo 6.1.0:
Unable to use variables in Container name field in Run docker task
Workaround:
Create a Script Task that runs before the Docker Task.
Run commands like
echo "export sourcepath=$ini_source_path" > scriptname.sh
chmod +x scriptname.sh
The Docker Task will map the ${bamboo.working.directory} to the Docker /data volume.
So the just-created scriptname.sh script is available in the Docker container. The script will be executed and will set the variable correctly.
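A minimal sketch of the two pieces, assuming the scriptname.sh file name from above and the /data mount; note that inside a Script Task, Bamboo exposes ${bamboo.ini_source_path} to the shell as $bamboo_ini_source_path:
# Script Task (runs on the agent, before the Docker Task)
echo "export ini_source_path=${bamboo_ini_source_path}" > scriptname.sh
chmod +x scriptname.sh
# Container Command of the Docker Task (the working directory is mounted at /data)
/bin/bash -c ". /data/scriptname.sh && /data/script.sh"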
To simplify test execution for several different containers, I want to create an alias so the same command can be used for every container.
For example for a backend container I want to be able to use docker exec -t backend test instead of docker exec -t backend pytest test
So I added this line in my backend Dockerfile:
RUN echo alias test="pytest /app/test" >> ~/.bashrc
But when I do docker exec -t backend test it doesn't work, whereas it works when I do docker exec -ti backend bash and then test.
I saw that this is because aliases in .bashrc only work in an interactive shell.
How can I get around that?
docker exec does not run the shell, so .bashrc is just never used.
Create an executable in the PATH, most probably in /usr/local/bin. Note that test is a very basic shell command, so use a different, unique name.
An alias will only work in interactive shells; if you want the same shortcut to work for other programs:
# printf instead of echo -e, so this also works when RUN uses /bin/sh (dash)
RUN printf '#!/bin/bash\npytest /app/test\n' > /usr/bin/mypytest && \
    chmod +x /usr/bin/mypytest
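With that in place, the call from the question becomes simply:
docker exec -t backend mypytest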
I'm running a set of commands in an Ubuntu Docker container, and I need to set a couple of environment variables for my scripts to work.
I have tried several alternatives but none of them seem to solve my problem.
Alternative 1: Using --env or --env-file
On my already running container, I run either:
docker exec -it --env TESTVAR="some_path" ai_pipeline_new_image bash -c "echo $TESTVAR"
docker exec -it --env-file env_vars ai_pipeline_new_image bash -c "echo $TESTVAR"
The content of env_vars:
TESTVAR="some_path"
In both cases the output is empty.
Alternative 2: Using a dockerfile
I create my image using the following Dockerfile:
FROM ai_pipeline_yh
ENV TESTVAR "A_PATH"
With this alternative the variable is set if I attach to the docker (aka if I run an interactive shell), but the output is blank if I run docker exec -it ai_pipeline_new_image bash -c "echo $TESTVAR" from the host.
What is the clean way to do this?
EDIT
Turns out that if I check the state of the variables from a shell script, they are set, but not if I check them directly with bash -c "echo $VAR". I would really like to understand why this is so. I provide a minimal example:
Run docker
docker run -it --name ubuntu_env_vars ubuntu
Create a file that echoes a VAR (inside the container)
root@afdc8c494e8a:/# echo "echo \$VAR" > env_check.sh
root@afdc8c494e8a:/# chmod +x env_check.sh
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "echo $VAR"
(Blank output)
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "/env_check.sh"
output: BLA
Why???????
I revealed my noobness. Answering my own question here:
Both options, --env-file file and -e foo=bar (--env foo=bar), are fine.
I forgot to escape the $ character when testing. Therefore the correct command to test if the variable exists is:
docker exec -it my_docker bash -c "echo \$MYVAR"
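Two variants that avoid the escaping pitfall altogether, using the names from the examples above:
docker exec -it -e MYVAR=BLA my_docker bash -c 'echo $MYVAR'   # single quotes: the container's shell expands $MYVAR
docker exec -e MYVAR=BLA my_docker printenv MYVAR              # no shell expansion involved at all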
A good option is to design your apps to be driven by environment variables at runtime (application start).
Don't bake environment variables in at the Docker build stage.
Sometimes the problem is not the environment variables or Docker; the problem is the app that reads the environment variables.
Anyway, you have these options to inject environment variables to a docker container at runtime:
-e foo=bar
This is the most basic way:
docker run -it -e foo=bar ubuntu
These variables will be available since the start of your container.
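A quick check (note the single quotes, so the expansion happens inside the container rather than in the host shell):
docker run --rm -e foo=bar ubuntu sh -c 'echo "$foo"'   # prints: bar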
remote variables
If you need to pass several variables, using -e will not be the best way.
Also, if you don't want to use .env files or any other kind of local file with variables, you should:
prepare your app to read environment variables
inject the variables in a docker entrypoint bash script, reading them from a remote variables manager
in the shell script, fetch the remote variables and load them using source /foo/bar/variables (a sketch follows below)
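A minimal sketch of such an entrypoint, assuming a hypothetical VARS_URL endpoint that returns plain KEY=value lines (the real variables manager's API and response format will differ):
#!/bin/sh
# entrypoint.sh - fetch variables from a remote manager, then start the app
curl -fsS "$VARS_URL" -o /foo/bar/variables   # VARS_URL is an assumed placeholder, not a real API
set -a                                        # auto-export everything sourced below
. /foo/bar/variables
set +a
exec "$@"                                     # hand control to the real command (CMD)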
With this approach you will have a variables manager for all of your containers. This variables manager has these features:
login
create applications
create variables by application
create global variables if a variable is required for two or more apps
expose an HTTP endpoint to be used by the client apps in order to get the variables
encryption
etc
You have these options:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Configurator (I'm the author)
Let me know if you want help using this approach.
I am trying to execute the following commands on the Execute Shell Script on Remote Host build step in jenkins.
docker exec -it container bash
cd /internal
But the /internal folder is not found, as if the docker exec command had not been executed.
Question
How do I run commands inside the Docker container in the Execute Shell Script on Remote Host build step in Jenkins?
Thanks in advance.
docker exec container bash -c 'cd /internal ; command 2 ; command 3'
solved my problem.
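If the list of commands grows, the same idea works with a heredoc (container name and commands are placeholders, as above):
docker exec -i container bash -s <<'EOF'
cd /internal
command2
command3
EOF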
I am new to Linux and would like to know how to run commands while opening the shell through /bin/bash.
E.g. the steps that I want to perform:
Step 1: Run the docker exec command to start the quickstart virtual machine.
$ docker exec -it 7f8c1a16e5b2 /bin/bash
Step 2: The above command gives the handle of the quickstart VM on the console. Now I want to run the below commands by default whenever someone starts the docker quickstart console (step 1):
cd
. ./.bash_profile
I need some guidance on how to do this. Obviously, putting all these statements in one shell script isn't helping, as the commands of Step 2 are to be executed in the newly opened shell (of the quickstart VM). The idea is to put all these statements in a single shell script and execute it when we want to get hold of the session within the VM console.
You can pass the commands you want to be executed inside the container to bash with the -c option.
That would look something like this:
docker exec -it 7f8c1a16e5b2 /bin/bash -c "cd && . ./.bash_profile && /bin/bash"
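For completeness, bash started as a login shell reads ~/.bash_profile on its own, so this may be enough if the profile takes care of everything (it does not run the cd step by itself):
docker exec -it 7f8c1a16e5b2 /bin/bash -l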
I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (echo Hello World) but I want to understand the reason behind the not found errors.
Note: This question might look similar to Run shell script on docker from shared volume, but it addresses permission related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
The mounted volumes information is captured below.
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should make a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash made by docker run means trying to do DiD (Docker in Docker). It is not relevant for your test.