Executing a command inside docker shows wrong $PATH - linux

I am trying to run a bash command inside docker from host:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "rosbag"
/bin/bash: rosbag: command not found
So I tried:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo \$PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
But when I run docker interactively:
$ docker exec -it -u weiss apollo_dev /bin/bash
weiss@docker$ echo $PATH
/usr/local/cuda-8.0/bin:/home/tmp/ros/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Any reason why I am getting different results for $PATH?

This PATH is most likely set in your .bashrc file, and that file is not read when the shell is non-interactive (see https://www.gnu.org/software/bash/manual/bash.html#Bash-Startup-Files).
So /bin/bash (an interactive shell) will load it, but /bin/bash -c will not.
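A possible workaround, assuming the extra PATH entries (cuda, ros) are appended in the user's ~/.bashrc, is to force an interactive shell with -i so that file is read (a sketch, not taken from the question):
$ docker exec -it -u weiss apollo_dev /bin/bash -ic "rosbag"
Note that Ubuntu's default ~/.bashrc returns early for non-interactive shells, so merely sourcing it with /bin/bash -c ". ~/.bashrc && rosbag" may not be enough.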

Here you are getting the $PATH of your host. The variable is expanded before the command reaches the container, so it is replaced with the host's $PATH.
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo \$PATH"
You need to pass the command without expanding the variable, so that the shell inside the container resolves $PATH itself.
$ docker exec -it -u weiss apollo_dev /bin/bash -c 'echo $PATH'
The single quotes are the key.

Related

No such file when doing `ls /mnt` in docker run

I have a super simple test Dockerfile:
FROM ubuntu:18.04
RUN apt-get update -y && apt-get upgrade -y
CMD ["/bin/bash"]
I build it with docker build . -t dockertest.
Then I try to run it with a test command and get a weird error:
> docker run -it dockertest "ls /mnt"
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "ls /mnt": stat ls /mnt: no such file or directory: unknown.
But when I do just ls, everything is fine:
> docker run -it dockertest "ls"
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
Doing docker run -it dockertest "/bin/bash -c ls /mnt" yields the same error.
What exactly am I doing wrong? Thanks!
The documentation for the run command can be found here: https://docs.docker.com/engine/reference/commandline/container_run/
Essentially it states:
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
It works with "ls" since ls is a valid unix command. However, you are passing the command and the args together as a single command value. Docker fails because there is no command called "ls /mnt"; you need to pass them separately as command and arg: "ls" "/mnt".
ubuntu@vps-f116ed9f:/opt/docker_projects/stack_example$ docker container run -it stack_test "ls /bin"
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "ls /bin": stat ls /bin: no such file or directory: unknown.
ubuntu@vps-f116ed9f:/opt/docker_projects/stack_example$ docker container run -it stack_test "ls" "/bin"
bash chmod findmnt mount sleep zcat
bunzip2 chown grep mountpoint stty zcmp
bzcat cp gunzip mv su zdiff
bzcmp dash gzexe nisdomainname sync zegrep
bzdiff date gzip pidof tar zfgrep
bzegrep dd hostname ps tempfile zforce
bzexe df kill pwd touch zgrep
bzfgrep dir ln rbash true zless
bzgrep dmesg login readlink umount zmore
bzip2 dnsdomainname ls rm uname znew
bzip2recover domainname lsblk rmdir uncompress
bzless echo mkdir run-parts vdir
bzmore egrep mknod sed wdctl
cat false mktemp sh which
chgrp fgrep more sh.distrib ypdomainname
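If you do want to hand over the whole thing as one string, wrap it in a shell inside the container instead of letting Docker try to exec it directly (a sketch using the image from the question):
> docker run -it dockertest /bin/bash -c "ls /mnt"
Here /bin/bash is the command and -c plus the string are its arguments, so the runtime only has to find /bin/bash.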

Dockerfile set runtime ENV dynamically by sourcing a script

Basically, I need to keep the functionality of an ubuntu:18.04 image, but with some environment variables set every time I execute a docker run or a docker exec. These variables are dynamic, so I can't use the ENV keyword in the Dockerfile; I need to use a script that gets sourced. For simplicity, the file I will be using for this post is:
$ cat setenv.sh
#!/usr/bin/env bash
# Set some dynamic variables
export TEST="Hello World"
I have tried different approaches without success, here is my research:
Using an entrypoint
The files I used for this example:
$ cat entrypoint.sh
#!/usr/bin/env bash
echo "Setting environment"
. /setenv.sh
exec $@
$ cat Dockerfile
FROM ubuntu:18.04
COPY setenv.sh /
COPY entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
I built this Dockerfile with the following command: docker build -f Dockerfile -t test_img .
This works fine except for two problems:
1. exec does not support double ampersands &&, pipes |, or escape chars \
As I previously stated, I require my container to have the same functionality as the ubuntu image. For example, with ubuntu I can run the following container:
$ docker run --rm ubuntu:18.04 bash -c "echo \"Hello World\" && ls | head -n1 "
Hello World
bin
But if I use the image I created:
$ docker run --rm test_img bash -c "echo \"Hello World\" && ls | head -n1"
Setting environment
It truncates the command every time it finds a quote (it doesn't honor the escape character), a double ampersand, or a pipe. Here is an example with the commands in a different order:
$ docker run --rm ubuntu:18.04 bash -c "ls | head -n1 && echo \"Hello World\""
bin
Hello World
$ docker run --rm test_img bash -c "ls | head -n1 && echo \"Hello World\""
Setting environment
bin
boot
dev
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
setenv.sh
srv
sys
tmp
usr
var
In this case, the command is truncated at the pipe |.
2. The entrypoint is only called for the parent shell.
If I run an ephemeral container I can see that my env variable is there:
$ docker run --rm test_img env | grep TEST
TEST=Hello World
But if I want a keep-alive container, the env var is not set:
$ docker create -ti --name=test test_img bash
e0e5278c46bdcf33195661fac5911326b701586e9a9c638f71a6e08021ee2f57
$ docker start test
test
$ docker exec test env | grep TEST
What is happening here is that the shell I create when running docker create is calling the entrypoint, but the shell I create when running docker exec is a different one.
If you log in to the container you can see the shells are different:
$ docker exec -ti test bash
root@e0e5278c46bd:/# ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:21 pts/0 00:00:00 bash
root 15 0 0 15:29 pts/1 00:00:00 bash
root 29 15 0 15:29 pts/1 00:00:00 ps -fe
root@e0e5278c46bd:/# env | grep TEST
If, instead of an entrypoint script setting the environment variable TEST, I had used the ENV keyword in my Dockerfile (ENV TEST "Hello World"), the variable would be set in every shell created by docker run and docker exec. Here is the example:
$ cat Dockerfile
FROM ubuntu:18.04
ENV TEST "Hello World"
$ docker build -f Dockerfile -t test_img .
Sending build context to Docker daemon 4.096kB
Step 1/2 : FROM ubuntu:18.04
---> 6526a1858e5d
Step 2/2 : ENV TEST "Hello World"
---> Using cache
---> eebe9952bb76
Successfully built eebe9952bb76
Successfully tagged test_img:latest
$ docker create -ti --name=test test_img bash
c1e508dae0f398a40c4c5534cf2811cdfe284a4f6601198f0ca97fdea100c376
$ docker start test
test
$ docker exec test env | grep TEST
TEST=Hello World
$ docker exec -ti test bash
root@c1e508dae0f3:/# env | grep TEST
TEST=Hello World
Sourcing in bashrc
I modified the Dockerfile to look like this and built the image with the same build command:
$ cat Dockerfile
FROM ubuntu:18.04
COPY setenv.sh /
RUN echo ". /setenv.sh" >> /etc/bash.bashrc
The problem with this approach is the shell used by docker run: the bashrc file is not sourced there, only in interactive bash shells. Here is the output:
$ docker run --rm test_img echo $SHELL
/bin/bash
$ docker run --rm test_img env | grep TEST
$ docker run --rm test_img bash -c "env" | grep TEST
$ docker run --rm -ti test_img bash
root@1187568e1bec:/# env | grep TEST
TEST=Hello World
First I tried adding setenv.sh to the /etc/profile.d directory, but the problem is that /etc/profile is only called for login shells, so I would need to change the commands to explicitly use a login shell; in other words, instead of docker run test_img env I would need docker run test_img bash -lc "env" (the -l is for login).
Create the Dockerfile dynamically
This is the best solution so far, but it is not the cleanest: I have to keep a Dockerfile.pre that creates a container and saves the generated variables to a file, then use that file to create a final Dockerfile with all those ENV lines written into it.
Combining two approaches
By using an entrypoint and sourcing the script in the bashrc file I was able to get the variables set in all cases; the problem is the exec $@ command, which doesn't support full bash command lines. Is there any way to modify my entrypoint script, or is there another approach to this problem?
You can create an environment file and just pass it to your container with the --env-file flag. This will make all the variables in the file available in the container.
ubuntu@vps-f116ed9f:~$ cat my_env_file
TEST=Hello World
ubuntu@vps-f116ed9f:~$ docker container run -it --rm --env-file my_env_file ubuntu bash -c "echo \$TEST"
Hello World
ubuntu@vps-f116ed9f:~$ docker container run -it --rm --env-file my_env_file ubuntu bash -c "echo \$TEST | wc -c"
12
Here you can see I have used the latest ubuntu image, I pass my_env_file to it, and then, using the bash shell, I print the value of the variable. (Note I have to escape the $, otherwise the shell will interpolate it before passing it to docker; this could be avoided by using single quotes, since the shell won't interpolate variables inside single quotes.)
I also don't see any issues using pipes or &&:
ubuntu@vps-f116ed9f:~$ docker container run -it --rm --env-file my_env_file ubuntu bash -c 'ls | head -n1 && echo "$TEST"'
bin
Hello World
This also persists in detached containers:
ubuntu@vps-f116ed9f:~$ docker container run -itd --rm --name=c1 --env-file my_env_file ubuntu bash
3d7705f2f91f3f30c45e855778bd80f08a35616bbe822545c20d5a8886139693
ubuntu@vps-f116ed9f:~$ docker container exec c1 sh -c "ls | head -1 && echo \$TEST"
bin
Hello World
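As a side note on the entrypoint approach from the question: the truncation comes from the unquoted exec $@, where word splitting breaks the single bash -c string into several arguments. A sketch of a fixed entrypoint.sh (same files as in the question, only the last line changed):
#!/usr/bin/env bash
echo "Setting environment"
. /setenv.sh
exec "$@"
Quoting "$@" preserves each original argument, so pipes, && and escaped quotes inside a bash -c string survive intact (the docker exec case still needs the bashrc approach).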

Why am I unable to cd within a docker container?

I would like to automatically execute tasks inside a docker container. The task should run inside a specific, mounted directory. To do this, I am using this command:
docker run --rm -v /a/dir/on/my/host:/tmp some_container /bin/bash -c "cd /tmp/dir/inside/volume && echo \"$PWD\""
, followed by the actual task, which I omit for brevity.
PWD should give me /tmp/dir/inside/volume, but prints /a/dir/on/my/host. Why is that?
$PWD is expanded before you run your container. Use single quotes instead of double quotes to defer expansion. Also, it's simpler to use --workdir or -w instead of a cd ... && prefix and a subshell.
docker run --rm -v /a/dir/on/my/host:/tmp some_container /bin/bash -c 'cd /tmp/dir/inside/volume && echo "$PWD"'
or I suggest:
docker run --rm -v /a/dir/on/my/host:/tmp -w /tmp/dir/inside/volume some_container pwd

How to write a bash script which automates entering a docker container and doing other things?

I want to implement an automatic bash script which enters a running docker container and does some things:
# cat docker.sh
#!/bin/bash -x
docker exec -it hammerdb_net8 bash
cd /data/oracle/tablespaces/
pwd
Executing the script on terminal:
# ./docker.sh
+ docker exec -it hammerdb_net8 bash
[root@npar1 /]#
The output shows only the login to the docker container; the other operations are not executed.
Is there any method to automate entering a docker container and doing other things?
You can use bash -c:
docker exec -it hammerdb_net8 bash -c 'cd /data/oracle/tablespaces/; pwd; ls'
For running a series of commands, use a here-doc in bash:
docker exec -i hammerdb_net8 bash <<'EOF'
cd /data/oracle/tablespaces/
pwd
ls
EOF
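Putting the first suggestion back into the original script, a sketch of docker.sh could look like this (note that in the original script the cd and pwd lines run on the host after the interactive session ends, not inside the container):
#!/bin/bash -x
docker exec -it hammerdb_net8 bash -c 'cd /data/oracle/tablespaces/ && pwd && ls'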

Bash variable inside third remote server

I need to pass a variable to a third Linux system; here is the scheme:
From my laptop > docker server > a container,
#!/bin/bash
domain=$1
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 /bin/bash -c curl -Is $domain"
Of course the variable reaches only the docker server, not the container.
The first option to test is to pass $domain as an environment variable to your docker run command:
docker run -it --rm -e "domain=$domain" 931967fb3e32 /bin/bash -c curl -Is $domain
(note the use of -it, to be sure to have a tty in an interactive session)
If the curl somehow doesn't pick up the right value (you can test it by replacing /bin/bash -c curl -Is $domain with /bin/bash -c echo $domain), wrap it in a script (which means your image should include that script).
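Such a wrapper could be as small as this (a hypothetical /usr/local/bin/fetch.sh that would have to be copied into the image; it reads the value passed with -e):
#!/bin/bash
curl -Is "$domain"
and it would be invoked with docker run --rm -e "domain=$domain" 931967fb3e32 /usr/local/bin/fetch.sh.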
As discussed in the comments, it seems to work without the bash -c:
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 curl -Is $domain"
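For completeness, a sketch of the original script combining both ideas: the laptop expands $domain once, the docker server passes it on with -e, and the single quotes around the bash -c string let the container do the final expansion (this assumes simple values without spaces or quotes):
#!/bin/bash
domain=$1
ssh -i "$SSH_KEY" docker@10.10.10.10 "docker run --rm -e domain='$domain' 931967fb3e32 /bin/bash -c 'curl -Is \$domain'"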
