docker-entrypoint.sh : Only exec "$@" not working - linux

I have been struggling to fix this issue for a couple of days.
File 1 : docker-entrypoint.sh
#!/bin/bash
set -e
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir /tmp/test222222
exec "$#"
Dockerfile :
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
docker run command :
docker run -it hadoop-hive:v1 /bin/mkdir /tmp/test1
The challenge, or what I am trying to do, is to execute whatever command is passed as a command-line argument to the docker run command. Please note these commands require Kerberos authentication.
1) I have noticed /tmp/test222222, but I could not see a directory like /tmp/test1 with the above docker run command. I think the exec "$@" in docker-entrypoint.sh is not executing. But I can confirm the script is executing, as I can see /tmp/test222222.
2) Is there a way that we can assign the values from environment variables?
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]

Your container will exit as soon as it creates the directory. Your container's life is the life of the exec'd command (or of docker-entrypoint.sh), so your container will die soon after exec "$@".
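One way to confirm whether the exec'd command really ran, even though the container exits immediately, is to inspect the stopped container afterwards (a minimal sketch, assuming the image tag from the question and that the entrypoint's kinit step succeeds):
# run the one-shot command; the container exits as soon as mkdir finishes
docker run --name mkdir-test hadoop-hive:v1 /bin/mkdir /tmp/test1
# the container no longer shows in plain docker ps, but it still exists in the stopped state
docker ps -a --filter name=mkdir-test
# docker diff lists the filesystem changes the container made before it exited
docker diff mkdir-test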
If you are looking for a way to create a directory from an environment variable, then you can try this:
#!/bin/bash
set -x
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir $MY_DIR
ls
exec "$#"
Now pass MY_DIR as an environment variable, but keep the long-running process in mind:
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "your_long_running_process_to_exec"
for example
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "<hadoop command>"
If you want to run a process taken from an environment variable in exec, you can also try:
#!/bin/sh
set -x
mkdir $MY_DIR
ls
exec ${START_PROCESS}
so you can pass it at run time:
docker run -it -e MY_DIR=abcd -e START_PROCESS=my_process hadoop-hive:v1
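Because ${START_PROCESS} is expanded unquoted, a value that contains arguments is word-split into a full command line, so something like this should also work (sleep is only an illustration):
docker run -it -e MY_DIR=abcd -e START_PROCESS="sleep 3600" hadoop-hive:v1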

Related

I can't get env var in the Docker container

I've run my Docker container using this command:
docker run --name test1 -d -e FLAG='***' rastasheep/ubuntu-sshd
Now, when I connect to it via SSH, I can't get my env there via printenv FLAG.
How can I fix this? When running with -it and sh, I can get my env via printenv FLAG.
You are doing two different things:
docker run -it -e FLAG='***' rastasheep/ubuntu-sshd sh will run a container in interactive mode with a shell, and this shell session will have the environment variable you passed on the command line. With docker run -d -e FLAG='***' rastasheep/ubuntu-sshd, an SSH daemon process will start with the defined env vars.
When you connect to the container with SSH, you create a new shell session which does not have these environment variables set.
This can be observed by running a container, connecting to it using SSH, and showing all processes and their environment variables:
docker run -d -p 2222:22 -e FLAG='test' rastasheep/ubuntu-sshd
ssh root@localhost -p 2222
...
We are now connected to the container; we can see the SSH daemon process (PID 1) and our SSH session process (PID 7):
root@788fa982c2d0:~# ps -xf
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /usr/sbin/sshd -D # <== does have the FLAG env var
7 ? Ss 0:00 sshd: root@pts/0 # <== no FLAG env var
Let's check it out: print our current process's env var, and the env var of the SSH daemon process:
root@788fa982c2d0:~# printenv FLAG # Nothing
root@788fa982c2d0:~# cat /proc/1/environ # We see the FLAG env var!
[..]FLAG=test[...]
As pointed out by @Dmitrii, you can read Dockerize an SSH service for more details.
Try using the command below:
docker exec <container-id> bash -c 'echo "$<variable-name>"'
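With the container and variable from the question, that would look like this (note the single quotes, so $FLAG is expanded inside the container rather than on the host):
docker exec test1 bash -c 'echo "$FLAG"'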
As suggested by the docs, you might need to create your own Dockerfile with the following changes:
Project
|--Dockerfile
|--entrypoint.sh
Dockerfile
FROM rastasheep/ubuntu-sshd
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
File: entrypoint.sh
#!/bin/bash
echo "export FLAG=$FLAG" >> /etc/profile
exec "$#"
Command:
docker build -t your-ubuntu-sshd .
docker run --name test1 -d -e FLAG='abc' -p 2222:22 your-ubuntu-sshd
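If the entrypoint appended the export as intended, a fresh SSH login shell should now see the variable, because login shells source /etc/profile (a sketch; the output shown is the expected value):
ssh -p 2222 root@localhost
printenv FLAG   # expected to print: abc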

Environment variables in docker containers - how do they work?

I can't understand something: as we know, we can pass the argument -e SOME_VAR=13 to docker run.
Then each process launched (for example using docker exec ping localhost -c $SOME_VAR) can see this variable.
How does it work? After all, environments are about bash, and we didn't launch bash. Can you explain how -e works without a shell?
For example, let's look at following example:
[user@user~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123 ?
The problem is that your $XYZ is getting interpolated in the host shell environment, not your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
The environment is not specific to shells. Even ordinary processes have environments. They work the same for both shells and ordinary processes. This is because shells are ordinary processes.
When you do SOMEVAR=13 someBinary, you define an environment variable called SOMEVAR for the new process, someBinary. You do this with -e in Docker because you are asking another process, the Docker daemon, to start your process.
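A quick way to convince yourself that a plain, non-shell process carries its own environment is to start one and read it back from /proc, with no Docker involved (a minimal sketch for a Linux host):
env SOMEVAR=13 sleep 300 &                       # start an ordinary process with the variable set
tr '\0' '\n' < /proc/$!/environ | grep SOMEVAR   # the kernel reports SOMEVAR=13 in its environment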

How to docker exec a shell builtin of docker container specifically on Ubuntu docker image/container

Thank you for reading my post.
Problem:
# docker ps
CONTAINER ID IMAGE COMMAND
35c8b832403a ubuntu1604:1 "sh -c /bin/sh"
# docker exec -i -t 35c8b832403a type type
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"type\": executable file not found in $PATH"
# Dockerfile
FROM ubuntu:16.04
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && apt-get -y upgrade
ENTRYPOINT ["sh", "-c"]
CMD ["/bin/bash"]
Description:
My objective is to get the type shell builtin to execute by writing docker exec as below:
docker exec -i -t 35c8b832403a type type (FAILED)
NOT
docker exec -i -t 35c8b832403a sh -c "type type" (PASSED)
I have googled around and made some modifications in the container (changing /etc/profile, /etc/environment, bashrc) but failed.
The Docker documentation itself states:
COMMAND will run in the default directory of the container. If the underlying image has a custom directory specified with the WORKDIR directive in its Dockerfile, this will be used instead.
COMMAND should be an executable; a chained or a quoted command will not work. Example: docker exec -ti my_container "echo a && echo b" will not work, but docker exec -ti my_container sh -c "echo a && echo b" will.
But it seems it IS POSSIBLE, since I am able to get the right output FROM DOCKER FEDORA (Dockerfile: FROM fedora:25):
# docker ps
CONTAINER ID IMAGE COMMAND
2a17b2338518 fedora25:1 "sh -c /bin/sh"
# docker exec -i -t 2a17b2338518 type type
type is a shell builtin
Question:
Is there any way to enable this on Ubuntu docker? Image/Container tweaks? Vagrantfile Configuration? Please help.
Others:
Using docker run, I am able to get the right output because of the ENTRYPOINT in the Dockerfile. However, the image needs to be saved instead of exported.
Just in case: to be able to execute type as you expect, it would need to be on the PATH. Being a shell builtin doesn't help because, as you said, you don't want to execute /bin/bash -c 'type type'.
If you want type executed as a shell builtin, that means you need to start a shell (/bin/bash or /bin/sh) and then run 'type type' in it, making it /bin/bash -c 'type type'.
After all, as @Henry said, docker exec takes the full command that will be executed, and there is no place for CMD or ENTRYPOINT in it.
CMD and ENTRYPOINT are meaningless if you run docker exec. The remaining arguments are taken as the command and executed inside the already existing container.
Maybe you wanted to use docker run?
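To make the difference concrete with the image and container from the question (a sketch, assuming the ubuntu1604:1 tag was built from the Dockerfile above):
docker run --rm ubuntu1604:1 "type type"           # goes through ENTRYPOINT ["sh", "-c"], so it prints: type is a shell builtin
docker exec -i -t 35c8b832403a sh -c "type type"   # exec ignores ENTRYPOINT/CMD, so the shell must be spelled out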

Execute two commands with docker exec

I'm trying to do two commands in docker exec. Concretely, I have to run a command inside a specific directory.
I tried this, but it didn't work:
docker exec [id] -c 'cd /var/www/project && composer install'
The -c parameter is not detected.
I also tried this:
docker exec [id] cd /var/www/project && composer install
But the command composer install is executed after the docker exec command.
How can I do it?
In your first example, you are giving the -c flag to docker exec. That's an easy answer: docker exec does not have a -c flag.
In your second example, your shell is parsing this into two commands before Docker even sees it. It is equivalent to this:
if docker exec [id] cd /var/www/project
then
composer install
fi
First, the docker exec is run, and if it exits 0 (success), composer install will try to run locally, outside of Docker.
What you need to do is pass both commands in as a single argument to docker exec using a string. Then they will not be interpreted by a shell until already inside the container.
docker exec [id] "cd /var/www/project && composer install"
However, as you noted in the comments, this also does not work. That's because cd is a shell builtin, and doesn't exist on its own. Trying to execute it as the initial command will fail. So the next step is to hand this off to a shell to execute.
docker exec [id] "bash -c 'cd /var/www/project && composer install'"
And finally, at this point the && has moved into an inner set of quote marks, so we don't really need the quotes around the bash command... you can drop them if you prefer.
docker exec [id] bash -c 'cd /var/www/project && composer install'
Everything after the container id is the command to run, so in the first example -c isn't an option to exec, but a command docker tries to run and fails since that command doesn't exist.
Most likely you found this syntax from a docker run command where the entrypoint was set to /bin/sh. However, exec bypasses the entrypoint, so you need to include the full command to run. As others have pointed out, that command includes a shell like bash or in the below example, sh:
docker exec [id] /bin/sh -c 'cd /var/www/project && composer install'
The other answers are fine if you want to run 2 arbitrary commands. But if the first command is simply cd, then you should use the -w option to set the working directory instead.
docker exec -w {dir} {container} {commands}
So in your example:
docker exec -w /var/www/project {container} composer install
As Nehal J Wani said in his comment, the correct syntax is the following:
docker exec [id] /bin/bash -c 'cd /var/www/project && composer install'
many thanks!
I would like to add my example here because it is a bit more complex than the ones that were shown above. This example also illustrates how to find the container id that should be used in the docker exec command.
I needed to execute a composite docker exec command against a docker container over ssh.
I managed to achieve this in 2 steps:
- definition of a variable that contains the command, exported as an environment variable
- an ssh command that runs it
Environment variable definition:
export COMMAND="bash -c 'php bin/console --version && composer --version'"
The ssh command that runs it on the remote system:
ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user@111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND
As you can see, I left $COMMAND outside the single quotes to pass its actual value to the SSH process.
The output of the command execution is:
Warning: Permanently added '111.222.111.222' (ECDSA) to the list of known hosts.
Cannot load Xdebug - it was already loaded
Symfony 4.3.5 (env: dev, debug: true)
Cannot load Xdebug - it was already loaded
Composer version 1.9.1 2019-11-01 17:20:17
Connection to 111.222.111.222 closed.
If you wish to execute this command on a single line, you can use a slightly modified version of my first example:
COMMAND="bash -c 'php bin/console --version && composer --version'" ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user#111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND

Execute configuration bash script during docker build

During docker build I need to run a bash script, which sets up some environment variables.
The script looks something like this:
#!/bin/bash
export ENVVAR=TEST
export HOST=local
export PORT=port
I tried to call this script in my Dockerfile in different ways, but none of them worked. I tried these:
ADD ./myscript.sh /
RUN chmod +x /myscript.sh;\
/bin/bash -c 'source ./myscript.sh';\
/bin/bash -c 'source /myscript.sh';\
/bin/bash -c source ./myscript.sh;\
/bin/bash -c source /myscript.sh;\
source ./myscript.sh;\
source /myscript.sh;\
/myscript.sh;\
./myscript.sh;\
ENTRYPOINT ["/bin/bash"]
Of course, I only had one of these commands in my RUN at a time; I have just grouped them here.
If I run the container and use source ./myscript.sh it works as expected.
Because of multiple restrictions and other reasons it is not possible for me to use docker compose, the -e argument, ENV KEY VALUE in dockerfile or similar approaches. I need to set up the environment variables during the docker build process.
You're simply not sourcing your script in the shell that is started by your ENTRYPOINT. Just add your myscript.sh to your image (use COPY instead of ADD).
COPY myscript.sh /usr/local/bin
Then source it in the shell that is actually started by your entrypoint:
docker run myimage source /usr/local/bin/myscript.sh
By the way, myscript.sh is pretty non-descriptive. You could use env.sh, for instance.
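If you need every command the container runs to see those variables, not just the one-off source above, one option consistent with the entrypoint pattern used earlier in this thread is a small wrapper that sources the script and then execs whatever was requested (a sketch; the file names and paths are only examples):
Dockerfile additions:
COPY env.sh entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/bash"]
File: entrypoint.sh
#!/bin/bash
set -e
source /usr/local/bin/env.sh   # makes ENVVAR, HOST and PORT visible to the child process
exec "$@"                      # run whatever command the container was given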
