How to pass backtick to Dockerfile CMD? - linux

I have a Docker image running a Java application. Its main class is dynamic, stored in a file called start-class. Traditionally, I started the application like this:
java <some_options_ignored> `cat start-class`
Now I want to run these applications in Docker containers. This is my Dockerfile:
FROM openjdk:8
##### Ignored
CMD ["java", "`cat /app/classes/start-class`"]
I built the image and ran a container. The command actually executed was this:
$ docker ps --no-trunc | grep test
# show executed commands
"java '`cat /app/classes/start-class`"
Single quotes were automatically wrapped around the backticks. How can I fix this?

You're trying to run a shell command (expanding a sub-command) without a shell (the json/exec syntax of CMD). You need to switch to the shell syntax (or explicitly run a shell with the exec syntax). That would look like:
CMD exec java `cat /app/classes/start-class`
Without the JSON formatting, Docker will run:
sh -c "exec java `cat /app/classes/start-class`"
The exec in this case will replace the shell in pid 1 with the java process to improve signal handling.
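The difference is easy to see outside Docker: the exec form passes the backtick string to java literally, while a shell expands it first. Here is a minimal simulation of what the shell-form CMD does, using a stand-in path and echo in place of java:

```shell
# Stand-in for the question's /app/classes/start-class:
mkdir -p /tmp/app/classes
echo "com.example.Main" > /tmp/app/classes/start-class

# Shell-form CMD is run as `sh -c "..."`, so the backticks expand
# before the (stand-in) java command sees its arguments:
sh -c 'exec echo java `cat /tmp/app/classes/start-class`'
# → java com.example.Main
```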

Related

How to redirect terminal STDIN when running a script in Docker?

I have a script that consumes users input. It is run like this:
./script <<EOF
> input
> another input
> more input
> EOF
I need to distribute the script as a Docker image.
I am able to run the script in two steps.
First, I get into the container's shell.
docker run -it my-docker-tag sh
And then, inside the shell, I execute the script itself.
Question: Is it possible to run the script in one shot (without having to navigate to the container's shell)?
I tried this:
docker run -it my-docker-tag ./script <<EOF
> input
> another input
> more input
> EOF
But it fails with:
the input device is not a TTY
The Docker run reference notes:
Specifying -t is forbidden when the client is receiving its standard input from a pipe, as in:
$ echo test | docker run -i busybox cat
If you remove the -t option, shell pipes and redirections around a (foreground) docker run command will work as you expect.
# A heredoc as in the question should work too
sudo docker run --rm my-docker-tag ./script <script-input.txt
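The same stdin plumbing can be sanity-checked without Docker at all, using a stand-in for the script (the script body here is hypothetical — any program that reads stdin behaves the same way):

```shell
# A stand-in for ./script that consumes stdin line by line:
cat > /tmp/script <<'SH'
#!/bin/sh
while IFS= read -r line; do echo "got: $line"; done
SH
chmod +x /tmp/script

# Feed it a heredoc, exactly as you would feed `docker run -i` (no -t):
/tmp/script <<EOF
input
another input
EOF
# → got: input
# → got: another input
```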

Docker: keeping CMD alive to exec into for inspection. sleep not suitable, ping not available by default

CMD ["sleep", "100000"]
gets stuck and becomes unresponsive to Ctrl+C.
Any suggestions?
The issue is that when the CMD is not running properly, it is usually easier to docker exec -it into the container and do those things manually to get them up and running properly.
Without a CMD, run will exit, and therefore exec won't be possible.
I've used sleep for this, and I've seen ping used, but ping is not installed by default in Ubuntu 18, and perhaps there are better ways than installing it for this simple purpose.
You can provide an alternate command when you run the image. That can be anything you want -- a debugging command, an interactive shell, an alternate process.
docker run --rm myimage ls -l /app
docker run --rm -it myimage bash
# If you're using Compose
docker-compose run myservice bash
This generally gets around the need to "keep the container alive" so that you can docker exec into it. Say you have a container command that doesn't work right:
CMD ["my_command", "--crash"]
Without modifying the Dockerfile, you can run an interactive shell as above. When you get a shell prompt, you can run my_command --crash, and when it crashes, you can look around at what got left behind in the filesystem.
It's important that CMD be a complete command for this to work. If you have an ENTRYPOINT, it needs to use the JSON-array syntax, and it needs to run the command that gets passed to it as command-line parameters (often, it's a shell script that ends in exec "$@").
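An entrypoint that runs its arguments with exec can be sketched like this (the file path is arbitrary; inside an image this would be installed and referenced as ENTRYPOINT ["/entrypoint.sh"]):

```shell
# Minimal entrypoint that hands off to whatever command was passed:
cat > /tmp/entrypoint.sh <<'SH'
#!/bin/sh
# ... setup steps would go here ...
exec "$@"
SH
chmod +x /tmp/entrypoint.sh

# CMD (or a `docker run` override like `bash` or `ls -l /app`)
# arrives as "$@" and replaces the shell:
/tmp/entrypoint.sh echo "now running the real command"
# → now running the real command
```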

Pass file content to docker exec

I am learning Docker containers and I'd like to pass a .sql file to a database using docker exec.
How can I do that?
I am searching for about an hour now and found this:
cat file.sql | docker exec -it mariadb sh -c 'mysql -u<user> -p<pass>'
or this
docker exec -it mariadb sh -c 'mysql -u<user> -p<pass> "$(< /path/file.sql)"'
but neither of them worked. I think the problem is that I am passing it into sh -c and it tries to load that file from inside the container. How can I do it?
There's more than one way to do it, of course; most of your invocations are close, but if you execute docker with -t it will allocate a terminal for I/O, and that will interfere with stream operations.
My recent invocation from shell history was :
docker exec -i mysql mysql -t < t.sql
in my case, of course, mysql is the running container name. You'll note that I do not pass -t to docker exec; I do, however, pass it to the mysql command-line program that I exec on the Docker host, so don't get confused there.
The shell that interprets that command and executes docker is also the one that opens t.sql and redirects that file descriptor to docker's stdin, so t.sql here is in the current working directory of the host shell, not the container shell.
That having been said, here's why yours didn't work. First, as I said, exec -it allocates a terminal, which interferes with the stdin stream that the shell set up from cat. Second, you're really close, but /path/file.sql wasn't in the Docker image, so I'm guessing it threw a 'no such file or directory': file.sql is on the host, not in the container, yet it's referenced inside the -c parameter passed to the container's sh. Of course, that invocation would also need -t removed; in neither case does a terminal need to be allocated (you already have one, and it will already be the stdout of the docker execution, since it inherits the shell's un-redirected stdout).
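The host-vs-container expansion difference comes down to quoting, and can be demonstrated without Docker (file path and content here are just placeholders):

```shell
echo 'SELECT 1;' > /tmp/file.sql

# Double quotes: the HOST shell expands $(...) before sh -c ever runs,
# so the file only needs to exist on the host. With single quotes, the
# inner shell would have to find the file itself (inside a container,
# it can't):
sh -c "echo \"got: $(cat /tmp/file.sql)\""
# → got: SELECT 1;
```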

Connecting to a specific shell instance in a docker container?

Let's say I have a running docker container my_container. I start a new shell session with:
docker exec -it my_container bash
And then I start a process (a Python script, for example), and detach from the container with ctrl-p then ctrl-q to keep the script running in the background. If I do this a few different times with a few different scripts, how do I reconnect to a specific shell instance so I can see the stdout of my scripts? If I use docker attach my_container, I'm always placed into the first shell instance I initiated with my docker run command.
What I usually do is to start tmux inside the first shell. And then start any other processes inside a new window.
Although it is theoretically possible to do so, docker exec still has many issues and it is always better to avoid it for now.
This is a trivial approach, but maybe it helps. Instead of the echo "..." below, substitute your script names.
Run the container, then run your scripts directly with docker exec and redirect their output to different files.
docker exec -ti containerId /bin/bash -c 'echo "Hello script1" > /var/log/1.log'
docker exec -ti containerId /bin/bash -c 'echo "Hello script2" > /var/log/2.log'
Then you can look at the files by docker exec(uting) some other commands like cat, grep, tail or whatever you want:
docker exec -ti containerId /bin/tail -f /var/log/1.log
docker exec -ti containerId /bin/tail -f /var/log/2.log
Remember you could also use
docker logs containerId
to see the output redirected to /dev/stdout from commands running in the container, but, if I understood your need, in this case you would get the output from many scripts mixed together in stdout.

docker container started in Detached mode stopped after process execution

I create my docker container in detached mode with the following command:
docker run [OPTIONS] --name="my_image" -d container_name /bin/bash -c "/opt/init.sh"
so I need "/opt/init.sh" executed when the container is created. What I saw is that the container stops after the script finishes executing.
How can I keep the container running in detached mode, with the script/services executed at container creation?
There are 2 modes of running a docker container:
Detached mode - in this mode you execute a command, and the container terminates after the command is done
Foreground mode - in this mode you run a bash shell, but the container also terminates after you exit the shell
What you need is background mode. This is not given as a parameter, but there are many ways to do it.
Run an infinite command in detached mode so the command never ends and the container never stops. I usually use "tail -f /dev/null", simply because it is quite lightweight and /dev/null is present in most Linux images:
docker run -d --name=name container tail -f /dev/null
Then you can bash in to running container like this:
docker exec -it name /bin/bash -l
If you use the -l parameter, it will start a login shell, which executes .bashrc like a normal bash login. Otherwise, you need to start another bash manually inside.
Entrypoint - you can create any sh script, such as /entrypoint.sh. In entrypoint.sh you can run any never-ending command as well:
#!/bin/sh
#/entrypoint.sh
service mysql restart
...
tail -f /dev/null  # <- this never ends
After you save this entrypoint.sh, chmod a+x it, and exit the docker bash, start it like this:
docker run --name=name --entrypoint /entrypoint.sh container
This allows each container to have its own start script, and you can run them without worrying about attaching the start script each time.
A Docker container will exit when its main process ends. In this case, that means when init.sh ends. If you are only trying to start a single application, you can just use exec to launch it at the end, making sure to run it in the foreground. Using exec will effectively turn the called service/application into the main process.
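For example, an init.sh that ends by exec-ing the real service keeps the container alive exactly as long as that service runs. In this sketch, sleep 0 stands in for a real foreground service (e.g. exec java -jar app.jar):

```shell
cat > /tmp/init.sh <<'SH'
#!/bin/sh
echo "one-time setup done"
# Replace this shell with the foreground service; the container lives
# exactly as long as this process does:
exec sleep 0
SH
chmod +x /tmp/init.sh
/tmp/init.sh
# → one-time setup done
```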
If you have more than one service to start, you are best off using a process manager such as supervisord or runit. You will need to start the process manager daemon in the foreground. The Docker documentation includes an example of using supervisord.
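A minimal supervisord configuration for the multi-service case might look like this (program names and paths are hypothetical):

```ini
; /etc/supervisor/conf.d/app.conf (hypothetical)
[supervisord]
nodaemon=true              ; keep supervisord in the foreground as the main process

[program:mysql]
command=mysqld_safe

[program:worker]
command=/opt/app/worker.sh
```

The Dockerfile would then start supervisord itself as the container command, e.g. something along the lines of CMD ["/usr/bin/supervisord"], so that it becomes the foreground process that keeps the container alive.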
