Redirecting command output in docker - linux

I want to do some simple logging for my server which is a small Flask app running in a Docker container.
Here is the Dockerfile
# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv
# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz
# Run server
EXPOSE 80
CMD ["python", "index.py", "1>server.log", "2>server.log"]
As you can see on the last line I redirect stderr and stdout to a file. Now I run this container and shell into it
docker run -d -p 80:80 perfektimprezy
docker exec -it "... id of container ..." bash
And observe the following things:
The server is running and the website working
There is no /srv/server.log
ps aux | grep python yields:
root 1 1.6 3.2 54172 16240 ? Ss 13:43 0:00 python index.py 1>server.log 2>server.log
root 12 1.9 3.3 130388 16740 ? Sl 13:43 0:00 /usr/bin/python index.py 1>server.log 2>server.log
root 32 0.0 0.0 8860 388 ? R+ 13:43 0:00 grep --color=auto python
But there are no logs... HOWEVER, if I docker attach to the container I can see the app generating output in the console.
How do I properly redirect stdout/err to a file when using Docker?

When you specify a JSON array as CMD in a Dockerfile, it will not be executed in a shell, so the usual shell features, like stdout and stderr redirection, won't work.
From the documentation:
The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words not single-quotes (').
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ].
What your command actually does is execute your index.py script and pass the strings "1>server.log" and "2>server.log" as command-line arguments into that python script.
Use one of the following instead (both should work):
CMD "python index.py > server.log 2>&1"
CMD ["/bin/sh", "-c", "python index.py > server.log 2>&1"]

To use docker run in a shell pipeline or under shell redirection, making run accept stdin and output to stdout and stderr appropriately, use this incantation:
docker run -i --log-driver=none -a stdin -a stdout -a stderr ...
e.g. to run the alpine image and execute the UNIX command cat in the contained environment:
echo "This was piped into docker" |
docker run -i --log-driver=none -a stdin -a stdout -a stderr \
alpine cat - |
xargs echo This is coming out of docker:
emits:
This is coming out of docker: This was piped into docker
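The same flags also let a container sit under plain shell redirection, not just in a pipeline. For example (a sketch; notes.txt is a hypothetical host file):
# compress a host file through a containerized gzip: stdin in, stdout out
docker run -i --log-driver=none -a stdin -a stdout alpine gzip -c < notes.txt > notes.txt.gz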

Just as a complement: when using docker-compose, you could also try:
command: bash -c "script_or_command > /path/to/log/command.log 2>&1"
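In context, that might look like this in a docker-compose.yml (a sketch; the service name, image, and paths are assumptions):
services:
  app:
    image: perfektimprezy
    # the shell inside the container handles the redirection
    command: bash -c "python index.py > /srv/server.log 2>&1"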

I personally use:
ENTRYPOINT ["python3"]
CMD ["-u", "-m", "swagger_server"]
The "-u" is the key :)

Related

I can't get env var in the Docker container

I've run my Docker container using this command:
docker run --name test1 -d -e FLAG='***' rastasheep/ubuntu-sshd
Now, when I connect to it via SSH, I can't get my env there via printenv FLAG.
How can I fix this? When running with -it and sh, I can get my env via printenv FLAG.
You are doing two different things:
docker run -it -e FLAG='***' rastasheep/ubuntu-sshd sh will run a container in interactive mode with a shell, and this shell session will have the environment variable you passed on the command line. With docker run -d -e FLAG='***' rastasheep/ubuntu-sshd, an SSH daemon process will start with the defined env vars.
When you connect to the container with SSH, you create a new shell session, which does not have these environment variables set.
This can be observed by running a container, connecting to it using ssh, and showing all processes and their environment variables:
docker run -d -p 2222:22 -e FLAG='test' rastasheep/ubuntu-sshd
ssh root@localhost -p 2222
...
We are now connected inside the container and can see the SSH daemon process (PID 1) and our SSH session process (PID 7):
root@788fa982c2d0:~# ps -xf
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /usr/sbin/sshd -D # <== does have the FLAG env var
7 ? Ss 0:00 sshd: root@pts/0 # <== no FLAG env var
Let's check it out: print our current process's env var, and the env var of the SSH daemon process:
root@788fa982c2d0:~# printenv FLAG # Nothing
root@788fa982c2d0:~# cat /proc/1/environ # We see the FLAG env var!
[..]FLAG=test[...]
As pointed out by @Dmitrii, you can read Dockerize an SSH service for more details.
Try using the command below:
docker exec <container-id> bash -c 'echo "$<variable-name>"'
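With the container and variable from the question, that would be:
docker exec test1 bash -c 'echo "$FLAG"'
This works because docker exec starts the new process with the container's environment, where FLAG is already set; only a fresh SSH session misses it.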
As suggested by the docs, you might need to create your own Dockerfile with the following changes:
Project
|--Dockerfile
|--entrypoint.sh
Dockerfile
FROM rastasheep/ubuntu-sshd
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]
File: entrypoint.sh
#!/bin/bash
echo "export FLAG=$FLAG" >> /etc/profile
exec "$#"
Command:
docker build -t your-ubuntu-sshd .
docker run --name test1 -d -e FLAG='abc' -p 2222:22 your-ubuntu-sshd
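To verify (a sketch): an SSH login shell sources /etc/profile, so a new session should now see the variable:
ssh -p 2222 root@localhost
# inside the SSH session:
printenv FLAG
# prints: abc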

docker exec with standard output logged in a file inside the docker container

I am currently running a cronjob from a host machine (Linux Redhat), executing a script in a docker container. The problem I have is that when I redirect the standard output to a file with a path inside the docker container, the cronjob fails with an error basically saying that the path of the log file cannot be found. But if I change the output log file path to a path on the host machine, it works fine.
Below does not work
0 9 * * 1-5 sudo docker exec -i /path/in/docker/container/script.sh > /path/in/docker/container/script.shout
But this one works
0 9 * * 1-5 sudo docker exec -i /path/in/docker/container/script.sh > /path/in/host/script.shout
How do I get the first cronjob working so I can have the output file in the docker container using the path in the docker container?
I don't want to run the cronjob as root and that's why I need sudo before docker exec. Please note, only root has access to the docker volume path in the host machine, which is why I can't use the docker volume path either.
Cron runs your command with a shell, so the output redirect is handled by the shell running on your host, not inside your container. To get shell commands like this to run inside the container, you need to run a shell as your docker command, and escape or quote any of those shell operators to avoid having them interpreted until you are inside the container. E.g.
0 9 * * 1-5 sudo docker exec -i container_name /bin/sh -c \
"/path/in/docker/container/script.sh > /path/in/docker/container/script.shout"
I would rather pass the redirection path as a parameter to the script (so remove the '>'), and make the script itself redirect its output to that parameter file.
Since the script is executed in a docker container, it will see that path (as opposed to the cron job, which by default sees host paths), as sketched below.
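A sketch of that approach (the parameter handling and container_name are hypothetical; the paths are the question's):
#!/bin/sh
# script.sh: take the log path as $1 and redirect all of our own output there
LOG="${1:-/tmp/script.out}"
exec > "$LOG" 2>&1
echo "doing work..."
The cron entry then needs no '>' at all:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh /path/in/docker/container/script.shout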
We can use bash -c and put the redirected command between double quotes, as in this command:
docker exec ${CONTAINER_ID} bash -c "./path/in/container/script.sh > /path/in/container/out"
And we have to be sure /path/in/container/script.sh is an executable file, either by using this command from inside the container:
chmod +x /path/in/container/script.sh
or by using this command from the host machine:
docker exec ${CONTAINER_ID} chmod +x /path/in/container/script.sh
You can use tee: a program that reads stdin and writes the same to both stdout AND the file specified as an arg.
echo 'foo' | tee file.txt
will write the text 'foo' to file.txt
Your desired command becomes:
0 9 * * 1-5 sudo docker exec -i /path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout
The drawback is that you also dump to stdout.
You may check this SO question for further possibilities and workarounds.

Environment variables in docker containers - how do they work?

I can't understand something. As we know, we can pass docker run the argument -e SOME_VAR=13.
Then each process launched (for example using docker exec ping localhost -c $SOME_VAR) can see this variable.
How does it work? After all, environment variables are a shell thing, and we didn't launch bash. Can you explain how -e works without a shell?
Let's look at the following example:
[user@user ~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123?
The problem is that your $XYZ is getting interpolated in the host shell environment, not in your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
The environment is not specific to shells. Even ordinary processes have environments. They work the same for both shells and ordinary processes. This is because shells are ordinary processes.
When you do SOMEVAR=13 someBinary you define an environment variable called SOMEVAR for the new process, someBinary. With -e you do the same thing in docker, because you are asking another process, the docker daemon, to start your process for you.
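You can watch the daemon do this, with no shell involved inside the container, by reading the environment of the container's PID 1 directly (a sketch reusing the question's values):
docker run -d --name envtest -e XYZ=123 ubuntu sleep 10000
# /proc/1/environ is the raw, NUL-separated environment the daemon gave PID 1
docker exec envtest cat /proc/1/environ | tr '\0' '\n' | grep XYZ
# prints: XYZ=123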

How to docker exec a shell builtin of docker container specifically on Ubuntu docker image/container

Thank you for reading my post.
Problem:
# docker ps
CONTAINER ID IMAGE COMMAND
35c8b832403a ubuntu1604:1 "sh -c /bin/sh"
# docker exec -i -t 35c8b832403a type type
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"type\": executable file not found in $PATH"
# Dockerfile
FROM ubuntu:16.04
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && apt-get -y upgrade
ENTRYPOINT ["sh", "-c"]
CMD ["/bin/bash"]
Description:
My objective is to get the "type" shell builtin executed by writing docker exec as below:
docker exec -i -t 35c8b832403a type type (FAILED)
NOT
docker exec -i -t 35c8b832403a sh -c "type type" (PASSED)
I have googled around and done some modifications in the container (changing /etc/profile, /etc/environment, bashrc) but failed.
The docker documentation itself states:
COMMAND will run in the default directory of the container. If the underlying image has a custom directory specified with the WORKDIR directive in its Dockerfile, this will be used instead.
COMMAND should be an executable, a chained or a quoted command will not work. Example: docker exec -ti my_container "echo a && echo b" will not work, but docker exec -ti my_container sh -c "echo a && echo b" will.
But it seems it IS POSSIBLE, since I am able to get the right output FROM DOCKER FEDORA (Dockerfile: FROM fedora:25):
# docker ps
CONTAINER ID IMAGE COMMAND
2a17b2338518 fedora25:1 "sh -c /bin/sh"
# docker exec -i -t 2a17b2338518 type type
type is a shell builtin
Question:
Is there any way to enable this on Ubuntu docker? Image/Container tweaks? Vagrantfile Configuration? Please help.
Others:
Using docker run, I am able to get the right output because of the "ENTRYPOINT" in the Dockerfile. However, the image needs to be saved instead of exported.
Just in case: to be able to execute type as you expect, it would need to be an executable on the $PATH. Being a shell builtin wouldn't help because, as you said, you don't want to execute /bin/bash -c 'type type'.
If you want to have type executed as a shell builtin, you need to execute a shell (/bin/bash or /bin/sh) and have it run 'type type', making it /bin/bash -c 'type type'.
After all, as @Henry said, docker exec is the full command that will be executed, and there is no place for CMD or ENTRYPOINT in it.
CMD and ENTRYPOINT are meaningless if you run docker exec. The remaining arguments are taken as the command and executed inside the already existing container.
Maybe you wanted to use docker run?

What is the purpose of the "-i" and "-t" options for the "docker exec" command?

To be honest, I have always been confused about docker exec -it …, docker exec -i … and docker exec -t …, so I decided to do a test:
docker exec -it …:
# docker exec -it 115c89122e72 bash
root@115c89122e72:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
It works normally.
docker exec -i …:
# docker exec -i 115c89122e72 bash
^C
The command hangs and I have to use Ctrl + C to interrupt it.
docker exec -t …:
# docker exec -t 115c89122e72 bash
root@115c89122e72:/# ls
^C
It enters the container successfully but hangs on executing the first command.
So it seems there is no point in having the docker exec -i … and docker exec -t … commands. Could anyone elaborate on why there exist -i and -t options for the docker exec command?
-i, --interactive keeps STDIN open even if not attached, which you need if you want to type any command at all.
-t, --tty Allocates a pseudo-TTY, a pseudo terminal which connects a user's "terminal" with stdin and stdout. (See container/container.go)
If you do an echo, only -t is needed.
But for an interactive session where you enter inputs, you need -i.
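A quick way to see the difference (a sketch, assuming a running container named web):
# -t only: output gets terminal formatting, but nothing you type reaches the command
docker exec -t web ls
# -i only: no pseudo-terminal, but piped stdin flows through
echo hello | docker exec -i web cat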
Since -i keeps stdin open, it is also used in order to pipe input to a detached docker container. That would work even with -d (detach).
See "When would I use --interactive without --tty in a Docker container?":
$ echo hello | docker run -i busybox cat
hello
-i keeps STDIN open even if not attached, what is the status of STDOUT in this case?
It is, for docker exec, the one set by docker run.
But, regarding docker exec, there is a current issue (issue 8755: Docker tty is not a tty with docker exec):
unfortunately your discovery only amounts to a difference between the behaviour of tty in centos6 vs ubuntu:14.04. There is still not a functional tty inside the exec - just do ls -la /proc/self/fd/0 and see that it's a broken link pointing to a pts which doesn't exist.
the actual bug we're dealing with is that certain standard libraries assume that the symlinks in /proc/self/fds/ must be valid symlinks
The problem is that the tty is created outside on the host and there is no reference to it in the container like how /dev/console is setup in the primary container.
One options to fix this would be allocate and bind mount the devpts from the host in to the containers.
Note (Q4 2017): this should have been fixed by now (docker 17.06-ce).
See PR 33007.
That PR now allows (since 17.06):
zacharys-pro:dev razic$ docker run --rm -t -d ubuntu bash
83c292c8e2d13d1b1a8b34680f3fb95c2b2b3fef71d4ce2b6e12c954ae50965a
zacharys-pro:dev razic$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83c292c8e2d1 ubuntu "bash" 2 seconds ago Up 1 second xenodochial_bardeen
zacharys-pro:dev razic$ docker exec -ti xenodochial_bardeen tty
/dev/pts/1
(before 17.06, tty was returning "not a tty")
