I created an image from minio/mc and I want to cat a file inside my container, to do something like this:
docker run -it --entrypoint='sh cat hello.txt' my/miniomc > hello.txt
How do I specify that this cat is a shell command to execute?
In my situation I can't use docker cp
Thanks for your help
The --entrypoint parameter expects a single executable, not a full command string; any extra arguments go after the image name.
Try this:
docker run --rm -it --entrypoint=/bin/cat my/miniomc hello.txt > hello.txt
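If you really need a shell command rather than a single binary, you can point --entrypoint at the shell itself and pass the rest as arguments; a sketch, assuming the image ships /bin/sh:
docker run --rm --entrypoint /bin/sh my/miniomc -c 'cat hello.txt' > hello.txt
Here -c 'cat hello.txt' is passed to /bin/sh as its command, and dropping -t avoids the TTY mangling line endings in the redirected output.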
I have a file template which contains various variables. I would like to substitute these variables with values before I copy the file to a running container. This is what I do right now:
export $(grep -v '^#' /home/user/.env | xargs) && \
envsubst < template.yaml > new_connection.yaml && \
docker cp new_connection.yaml docker_container:/app && \
rm new_connection.yaml
This works. However, I'm sure there is a way to skip the file creation/copy/remove steps and do something like echo SOME_TEXT > new_connection.yaml straight into the container. Could you help?
This seems like a good application for an entrypoint wrapper script. If your image has both an ENTRYPOINT and a CMD, then Docker passes the CMD as additional arguments to the ENTRYPOINT. That makes it possible to write a simple script that rewrites the configuration file, then runs the CMD:
#!/bin/sh
envsubst < template.yaml > /app/new_connection.yaml
exec "$@"
In the Dockerfile, COPY the script in and make it the ENTRYPOINT. (If your host system doesn't correctly preserve executable file permissions or Unix line endings you may need to do some additional fixups in the Dockerfile as well.)
COPY entrypoint.sh ./
# ENTRYPOINT must be in JSON-array form
ENTRYPOINT ["/app/entrypoint.sh"]
CMD same command as the original image
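For context, a minimal, self-contained Dockerfile sketch is below; the base image, the gettext install (which provides envsubst on Alpine), and the final CMD are assumptions, since the original image isn't shown:
FROM alpine:3.19
# envsubst comes from the gettext package (assumption: the base image doesn't already ship it)
RUN apk add --no-cache gettext
WORKDIR /app
COPY template.yaml entrypoint.sh ./
# restore the executable bit in case the host didn't preserve it
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
# hypothetical: same command as the original image
CMD ["my-app", "--config", "/app/new_connection.yaml"]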
When you run the container, you need to pass in the environment file, but there's a built-in option for this:
docker run --env-file /home/user/.env ... my-image
If you want to see this working, any command you provide after the image name replaces the Dockerfile CMD, but it will still be passed to the ENTRYPOINT as arguments. So you can, for example, see the rewritten config file in a new temporary container:
docker run --rm --env-file /home/user/.env my-image \
cat /app/new_connection.yaml
Generally, I agree with David Maze and the comment section. You should probably build your image in such a way that it picks up env vars on startup and uses them accordingly.
However, to answer your question, you can pipe the output of envsubst to the running container.
$ echo 'myVar: ${FOO}' > env.yaml
$ FOO=bar envsubst < env.yaml | docker run -i busybox cat
myVar: bar
If you want to write that to a file using redirection, you need to wrap it in sh -c, because otherwise the redirection is treated as redirecting the container's output to some path on the host.
FOO=bar envsubst < env.yaml | docker run -i busybox sh -c 'cat > my-file.yaml'
I did it here with docker run but you can do the same with exec.
FOO=bar envsubst < env.yaml | docker exec -i <container> cat
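Putting the two together for the original question, you can render the template straight into the already-running container with no temporary file (container name and paths are taken from the question; this assumes the container image has a shell):
export $(grep -v '^#' /home/user/.env | xargs) && \
envsubst < template.yaml | docker exec -i docker_container sh -c 'cat > /app/new_connection.yaml'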
I can copy one file from a docker container to the server with
docker cp docker_session_name:/root/mydir/ .
I would like to know how to copy only the files from mydir with a given extension, say, pdf.
I don't think you can do this with the docker cp command.
Instead, you can mount a volume into the container and then run a regular cp command with a wildcard to copy the files into it.
Mount:
docker run -d --name containerName -v myvol2:/app imageName:tag
Inside Container:
cp /app/*.pdf /destination
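For the copied files to actually land outside the container, the destination directory should itself be a mount; a sketch using a hypothetical host bind mount, with the source path taken from the question (this assumes the image has a shell):
docker run -d --name containerName -v /host/output:/destination imageName:tag
docker exec containerName sh -c 'cp /root/mydir/*.pdf /destination/'
The PDFs then appear on the host under /host/output.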
It looks like you can't just use a wildcard the way you would on Linux (see this similar thread).
Not like this:
docker cp docker_session_name:/root/mydir/*.pdf .
Simple answer
Use this script:
path="/root/mydir"
for file in $(docker exec docker_session_name sh -c "ls ${path}/*.pdf"); do
docker cp docker_session_name:${file} .
done
credits to this thread
More cumbersome setup, but easier to use (no script)
You could, however, create a bind mount between the host and the wanted path in the docker run command, like so:
docker run -v /host/path/:/root/mydir/ my-image
Then run cp with the *.pdf wildcard against the host path /host/path/ used in the docker run command.
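For example, once the container is running with that bind mount, the copy happens entirely on the host (the destination directory is just a placeholder):
cp /host/path/*.pdf /some/destination/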
I have been struggling to fix this issue for a couple of days.
File 1 : docker-entrypoint.sh
#!/bin/bash
set -e
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir /tmp/test222222
exec "$@"
Dockerfile :
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
docker run command :
docker run -it hadoop-hive:v1 /bin/mkdir /tmp/test1
The challenge, or what I am trying to do, is to execute whatever command is passed as a command-line argument to the docker run command. Please note that these commands require Kerberos authentication.
1) I have noticed /tmp/test222222, but I could not see a directory like /tmp/test1 with the above docker run command. I think the exec "$@" in docker-entrypoint.sh is not executing. But I can confirm the script does run, because I can see /tmp/test222222.
2) Is there a way that we can assign the values from environment variables?
ENTRYPOINT ["/home/dept-test-hadoop/docker-entrypoint.sh"]
Your container will exit as soon as it creates the directory. Your container's life is the life of the exec command (i.e. of the docker-entrypoint), so your container will die soon after exec "$@".
If you are looking for a way to create the directory from an environment variable, then you can try this:
#!/bin/bash
set -x
source /home/${HADOOP_INSTALL_USERNAME}/.bashrc
kinit -kt /home/${HADOOP_INSTALL_USERNAME}/keytab/dept-test-hadoop.${HADOOP_INSTALL_ENV}.keytab dept-test-hadoop
mkdir $MY_DIR
ls
exec "$@"
So now pass MY_DIR as an environment variable, but keep the long-running process in mind:
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "your_long_running_process_to_exec"
for example
docker run -it -e MY_DIR=abcd hadoop-hive:v1 "<hadoop command>"
If you also want the process that exec runs to come from an environment variable, you can try:
#!/bin/sh
set -x
mkdir $MY_DIR
ls
exec ${START_PROCESS}
so you can pass it at run time:
docker run -it -e MY_DIR=abcd -e START_PROCESS=my_process hadoop-hive:v1
I can't understand something. As we know, we can pass the argument -e SOME_VAR=13 to docker run.
Then each process launched (for example using docker exec ping localhost -c $SOME_VAR) can see this variable.
How does that work? After all, environment variables seem like a bash thing, and we haven't launched bash. Can you explain to me how -e works without a shell?
For example, consider the following:
[user@user ~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123?
The problem is that $XYZ is being interpolated by the host shell, not inside your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
Environment variables are not specific to shells; ordinary processes have environments too. They work the same way for shells and for other processes, because a shell is just an ordinary process.
When you do SOMEVAR=13 someBinary you define an environment variable called SOMEVAR for the new process, someBinary. With docker run -e you do the same thing, except that you are asking another process, the docker daemon, to start your process for you.
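A minimal illustration: env is an ordinary binary, not a shell, and it simply prints the environment that Docker set up for the container's main process:
docker run --rm -e SOME_VAR=13 ubuntu env | grep SOME_VAR
# prints: SOME_VAR=13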
I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (Hello World), but I want to understand the reason behind the 'not found' errors.
Note: This question might look similar to Run shell script on docker from shared volume, but it addresses permission related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
The mounted volumes information is captured below.
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should make a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash made by docker run means trying to do DiD (Docker in Docker). It is not relevant for your test.