Reading up on the Dockerfile documentation for ENTRYPOINT, I am having an issue trying to rewrite one of my commands:
As it runs today, without issues:
# Startup
ENTRYPOINT ["/etc/init.d/hook", "/run/apache2/apache2.pid", "/etc/init.d/apache2 start"]
According to various sources, I should start my hook process using exec, so I have simply changed the entrypoint to
ENTRYPOINT ["exec", "/etc/init.d/hook", "/run/apache2/apache2.pid", "/etc/init.d/apache2 start"]
But now I receive the following error:
container_linux.go:247: starting container process caused "exec: \"exec\": executable file not found in $PATH"
Why can exec not be found? Is this not a bash builtin?
If I attach to the container, I can run exec without issue
$ docker exec -it $( docker ps | grep imagename | awk '{print $1}' ) bash
root@f704bfe5d6c6:/# exec echo hi
hi
How can I use exec in my ENTRYPOINT directive?
edit
Here is a Dockerfile that reproduces the error
FROM ubuntu:16.10
ENTRYPOINT ["exec", "echo", "hi"]
Try splitting the arguments into separate array elements, and drop exec (it is a shell builtin, not an executable on $PATH):
ENTRYPOINT ["/etc/init.d/hook", "/run/apache2/apache2.pid", "/etc/init.d/apache2", "start"]
Check the doc:
https://docs.docker.com/engine/reference/builder/#/entrypoint
The shell form should also work:
ENTRYPOINT /etc/init.d/hook /run/apache2/apache2.pid /etc/init.d/apache2 start
Interestingly, I can make this work simply by taking the parameters out of the array (i.e. using the shell form).
This will work as expected
ENTRYPOINT exec echo hi
While this will generate the error
ENTRYPOINT ["exec", "echo", "hi"]
Related
I have a template file which contains various variables. I would like to substitute these variables with values before I copy the file to a
running container. This is what I do right now:
export $(grep -v '^#' /home/user/.env | xargs) && \
envsubst < template.yaml > new_connection.yaml && \
docker cp new_connection.yaml docker_container:/app && \
rm new_connection.yaml
This works, however I'm sure there is a way to skip the file creation/copy/remove steps and do something like echo SOME_TEXT > new_connection.yaml straight into the container. Could you help?
This seems like a good application for an entrypoint wrapper script. If your image has both an ENTRYPOINT and a CMD, then Docker passes the CMD as additional arguments to the ENTRYPOINT. That makes it possible to write a simple script that rewrites the configuration file, then runs the CMD:
#!/bin/sh
envsubst < template.yaml > /app/new_connection.yaml
exec "$#"
In the Dockerfile, COPY the script in and make it the ENTRYPOINT. (If your host system doesn't correctly preserve executable file permissions or Unix line endings you may need to do some additional fixups in the Dockerfile as well.)
COPY entrypoint.sh ./
ENTRYPOINT ["/app/entrypoint.sh"] # must be JSON-array form
CMD same command as the original image
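If the executable bit or the line endings do get mangled on the host, the "additional fixups" mentioned above might look like this (a sketch, assuming the image's WORKDIR is /app, as the ENTRYPOINT path suggests):
RUN chmod +x /app/entrypoint.sh
# and, if the file was edited on Windows, strip carriage returns too:
RUN sed -i 's/\r$//' /app/entrypoint.sh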
When you run the container, you need to pass in the environment file, but there's a built-in option for this
docker run --env-file /home/user/.env ... my-image
If you want to see this working, any command you provide after the image name replaces the Dockerfile CMD, but it will still be passed to the ENTRYPOINT as arguments. So you can, for example, see the rewritten config file in a new temporary container:
docker run --rm --env-file /home/user/.env my-image \
cat /app/new_connection.yaml
Generally, I agree with David Maze and the comment section. You should probably build your image in such a way that it picks up env vars on startup and uses them accordingly.
However, to answer your question, you can pipe the output of envsubst to the running container.
$ echo 'myVar: ${FOO}' > env.yaml
$ FOO=bar envsubst < env.yaml | docker run -i busybox cat
myVar: bar
If you want to write that to a file using redirection, you need to wrap it in sh -c, because otherwise the redirection is interpreted by the host shell and the container's output gets redirected to a path on the host.
FOO=bar envsubst < env.yaml | docker run -i busybox sh -c 'cat > my-file.yaml'
I did it here with docker run, but you can do the same with docker exec.
FOO=bar envsubst < env.yaml | docker exec -i <container> cat
I have been struggling to fix this issue for a couple of days.
File 1 : docker-entrypoint.sh
#!/bin/bash
set -e
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir /tmp/test222222
exec "$#"
Dockerfile :
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
docker run command :
docker run -it hadoop-hive:v1 /bin/mkdir /tmp/test1
The challenge, or what I am trying to do, is to execute whatever command is passed as a command-line argument to the docker run command. Please note these commands require Kerberos authentication.
1) I have noticed /tmp/test222222, but I could not see a directory like /tmp/test1 with the above docker run command. I think my exec "$@" in docker-entrypoint.sh is not executing. But I can confirm the script is executing, as I can see /tmp/test222222.
2) Is there a way that we can assign the values from environment variables?
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
Your container will exit as soon as it has created the directory. The container's life is the life of the exec'd command (i.e. of docker-entrypoint.sh and whatever it execs), so your container will die soon after exec "$@" finishes.
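A quick way to see that lifetime behaviour with the image from the question (a sketch):
docker run -it hadoop-hive:v1 /bin/mkdir /tmp/test1   # exits as soon as mkdir returns
docker run -it hadoop-hive:v1 /bin/bash               # stays alive while the shell runs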
If you are looking for a way to create a directory from env then you can try this
#!/bin/bash
set -x
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir $MY_DIR
ls
exec "$#"
So now pass MY_DIR as an environment variable, but keep in mind that the exec'd command needs to be a long-running process:
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "your_long_running_process_to_exec"
for example
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "<hadoop command>"
If you want the process that exec runs to come from an environment variable as well, you can try
#!/bin/sh
set -x
mkdir $MY_DIR
ls
exec ${START_PROCESS}
So you can pass it at run time:
docker run -it -e MY_DIR=abcd -e START_PROCESS=my_process hadoop-hive:v1
I created an image from minio/mc and I want to cat a file inside my container, doing something like this:
docker run -it --entrypoint='sh cat hello.txt' my/miniomc > hello.txt
How do I indicate that this cat is an sh command to execute?
In my situation I can't use docker cp
Thanks for your help
The --entrypoint parameter takes a single executable, with no arguments; any arguments go after the image name.
Try this:
docker run --rm -it --entrypoint=/bin/cat my/miniomc hello.txt > hello.txt
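If the command ever needs shell features (pipes, globs, redirection inside the container), a variant that keeps the shell as the entrypoint would be (a sketch, assuming the image contains /bin/sh):
docker run --rm -it --entrypoint=/bin/sh my/miniomc -c 'cat hello.txt' > hello.txt
Everything after the image name becomes the arguments to /bin/sh, so the container runs sh -c 'cat hello.txt'.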
Thank you for reading my post.
Problem:
# docker ps
CONTAINER ID IMAGE COMMAND
35c8b832403a ubuntu1604:1 "sh -c /bin/sh"
# docker exec -i -t 35c8b832403a type type
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"type\": executable file not found in $PATH"
# Dockerfile
FROM ubuntu:16.04
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && apt-get -y upgrade
ENTRYPOINT ["sh", "-c"]
CMD ["/bin/bash"]
Description:
My objective is to get the "type" shell builtin to execute by writing docker exec as below:
docker exec -i -t 35c8b832403a type type (FAILED)
NOT
docker exec -i -t 35c8b832403a sh -c "type type" (PASSED)
I have googled around and made some modifications in the container (changing /etc/profile, /etc/environment, bashrc) but failed.
The docker documentation itself states that:
COMMAND will run in the default directory of the container. If the
underlying image has a custom directory specified with the WORKDIR
directive in its Dockerfile, this will be used instead.
COMMAND should be an executable, a chained or a quoted command will
not work. Example: docker exec -ti my_container "echo a && echo
b" will not work, but docker exec -ti my_container sh -c "echo a &&
echo b" will.
But it seems it IS POSSIBLE, because I am able to get the right output FROM DOCKER FEDORA (Dockerfile: FROM fedora:25):
# docker ps
CONTAINER ID IMAGE COMMAND
2a17b2338518 fedora25:1 "sh -c /bin/sh"
# docker exec -i -t 2a17b2338518 type type
type is a shell builtin
Question:
Is there any way to enable this on Ubuntu docker? Image/Container tweaks? Vagrantfile Configuration? Please help.
Others:
Using docker run, I am able to get the right output because of the "ENTRYPOINT" in the Dockerfile. However, the image needs to be saved instead of exported.
Just in case: to be able to execute type as you expect, it would need to be an executable on the path. Being a shell builtin wouldn't help because, as you said, you don't want to execute /bin/bash -c 'type type'.
If you want type executed as a shell builtin, you need to start a shell (/bin/bash or /bin/sh) and then run 'type type' in it, which makes it /bin/bash -c 'type type'.
After all, as @Henry said, what you pass to docker exec is the full command that will be executed, and there is no place for CMD or ENTRYPOINT in it.
CMD and ENTRYPOINT are meaningless if you run docker exec. The remaining arguments are taken as the command and executed inside the already existing container.
Maybe you wanted to use docker run?
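A sketch of the difference, using the container and image from the question:
# docker exec ignores the image's ENTRYPOINT, so the shell has to be spelled out:
docker exec -i -t 35c8b832403a sh -c "type type"
# docker run does apply ENTRYPOINT ["sh", "-c"], so the builtin is found:
docker run --rm ubuntu1604:1 "type type"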
I want to do some simple logging for my server which is a small Flask app running in a Docker container.
Here is the Dockerfile
# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv
# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz
# Run server
EXPOSE 80
CMD ["python", "index.py", "1>server.log", "2>server.log"]
As you can see on the last line I redirect stderr and stdout to a file. Now I run this container and shell into it
docker run -d -p 80:80 perfektimprezy
docker exec -it "... id of container ..." bash
And observe the following things:
The server is running and the website working
There is no /srv/server.log
ps aux | grep python yields:
root 1 1.6 3.2 54172 16240 ? Ss 13:43 0:00 python index.py 1>server.log 2>server.log
root 12 1.9 3.3 130388 16740 ? Sl 13:43 0:00 /usr/bin/python index.py 1>server.log 2>server.log
root 32 0.0 0.0 8860 388 ? R+ 13:43 0:00 grep --color=auto python
But there are no logs... HOWEVER, if I docker attach to the container I can see the app generating output in the console.
How do I properly redirect stdout/err to a file when using Docker?
When you specify a JSON list as CMD in a Dockerfile, it will not be executed in a shell, so the usual shell functions, like stdout and stderr redirection, won't work.
From the documentation:
The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words not single-quotes (').
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ].
What your command actually does is execute your index.py script, passing the strings "1>server.log" and "2>server.log" as command-line arguments to that python script.
Use one of the following instead (both should work):
CMD "python index.py > server.log 2>&1"
CMD ["/bin/sh", "-c", "python index.py > server.log 2>&1"]
To use docker run in a shell pipeline or under shell redirection, making run accept stdin and output to stdout and stderr appropriately, use this incantation:
docker run -i --log-driver=none -a stdin -a stdout -a stderr ...
e.g. to run the alpine image and execute the UNIX command cat in the contained environment:
echo "This was piped into docker" |
docker run -i --log-driver=none -a stdin -a stdout -a stderr \
alpine cat - |
xargs echo This is coming out of docker:
emits:
This is coming out of docker: This was piped into docker
Just to complement the other answers: when using docker-compose, you could also try:
command: bash -c "script_or_command > /path/to/log/command.log 2>&1"
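For context, a minimal docker-compose sketch of that idea (the service name is hypothetical; the image and script are taken from the question above):
services:
  web:
    image: perfektimprezy
    command: bash -c "python index.py > /srv/server.log 2>&1"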
I personally use:
ENTRYPOINT ["python3"]
CMD ["-u", "-m", "swagger_server"]
The "-u" is the key :)