docker logs within a bash script doesn't work - linux

I'm experiencing some weird behaviour of Docker in a bash script.
Consider these two examples:
logs-are-showed() {
docker rm -f mybash &>/dev/null
docker run -it --rm -d --name mybash bash -c "echo hello; tail -f /dev/null"
docker logs mybash
}
# usage:
# $ localtunnel 8080
localtunnel() {
docker rm -f localtunnel &>/dev/null
docker run -it -d --network host --name localtunnel efrecon/localtunnel --port $1
docker logs localtunnel
}
In the first function, logs-are-showed, the docker logs command returns the logs of the mybash container.
In the second function, localtunnel, the docker logs command doesn't return anything.
After calling the localtunnel function, if I ask for the container's logs from outside the script, they show up correctly.
Why does this happen?

Processes take time to start up. There may be no logs right after starting a container because the process has not written anything yet. Wait a bit before reading them.
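For example, a minimal sketch of the localtunnel function that polls for output instead of reading the logs immediately (the 5-second cap is arbitrary):
localtunnel() {
    docker rm -f localtunnel &>/dev/null
    docker run -it -d --network host --name localtunnel efrecon/localtunnel --port "$1"
    # Give the container a few seconds to produce its first output.
    for _ in 1 2 3 4 5; do
        [ -n "$(docker logs localtunnel 2>&1)" ] && break
        sleep 1
    done
    docker logs localtunnel
}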

Running a docker logs command in background remotely over ssh

We have a use case where docker commands are executed remotely on a separate server:
users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build and run, which are working fine.
At the end of the script I have a command which is supposed to write the docker logs in the background to an EFS file system that is mounted on both
servers A and B. This way users can access the log file from server A without actually logging in to server B (to prevent access to the containers).
I have tried all the available solutions related to this, and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The code below is the script on the remote server. The user calls it from server A over ssh, like ssh id@serverB-IP docker_script.sh
loc=${args[0]}
cd $loc
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop $imagename && sleep 10 || true
docker build -t $imagename $loc |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p $desired_port:$port)
docker run -d --rm --name $imagename $port_config $imagename
sleep 10
docker_gen_log $loc $imagename
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
}
docker_gen_log(){
loc=${args[0]}
cd $loc
imagename=${args[1]}
docker logs -f $imagename &> run.log &
}
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will finish within 10 seconds, that's probably not a concern for you.
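If you do need incremental logs while a long-running container is up, and the log follower has to survive the ssh session ending, one sketch (reusing the variables from the script above) is to fully detach it from the session:
docker run -d --name "$imagename" $port_config "$imagename"
# Follow logs in the background; nohup plus the stdin/stdout redirections let
# ssh return immediately instead of hanging on the open file descriptors.
nohup docker logs -f "$imagename" > "$loc/run.log" 2>&1 < /dev/null &
disown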

Environment variables in docker containers - how do they work?

I can't understand something: as we know, we can pass docker run the argument -e SOME_VAR=13.
Then every process launched afterwards (for example with docker exec ping localhost -c $SOME_VAR) can see this variable.
How does it work? After all, environment variables are supposed to be a shell thing, and we never launched bash. I can't understand it. Can you explain how -e works without a shell?
For example, let's look at following example:
[user@user ~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo test
test
[user@user ~]$ sudo docker exec -it 2543e7235fa9 echo $XYZ
<empty row>
Why did I get <empty row> instead of 123?
The problem is that your $XYZ is getting interpolated in the host shell environment, not your container.
$ export XYZ=456
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec -it $(docker ps -ql) echo $XYZ
456
$ docker exec -it $(docker ps -ql) sh -c 'echo $XYZ'
123
You have to quote it so it's passed through as a string literal to the container. Then it works fine.
The environment is not specific to shells. Ordinary processes have environments too, and they work the same way for shells and for ordinary processes, because a shell is just an ordinary process.
When you run SOMEVAR=13 someBinary, you define an environment variable called SOMEVAR for the new process, someBinary. With docker you do this via -e because you are asking another process, the docker daemon, to start your process for you.
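A quick way to see this without any shell involved, reusing the commands from the answer above: env is an ordinary binary, not a shell, and it still sees the variable inside the container.
$ docker run -d -e XYZ=123 ubuntu sleep 10000
$ docker exec $(docker ps -ql) env | grep XYZ
XYZ=123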

How to retain docker alpine container after "exit" is used?

For example, if I use the command docker run -it alpine /bin/sh,
it starts a shell in which I can install packages and so on. When I use the exit command, it goes back to the main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh Alpine container.
Please help.
The container lives as long as the process given to the run command is still running. When you specify /bin/sh as the command, the sh process dies once you exit, and so does your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
Pull an image:
docker image pull alpine
See that the image is there:
docker image ls    (or just: docker images)
See what is inside the alpine image:
docker run alpine ls -al
Now your question is how to stay in the shell:
docker container run -it alpine /bin/sh
You are now inside the container's shell command line; some distributions may have a bash shell. For example:
docker exec -it 5f4 sh
/ # (<-- you can run linux commands here!)
At this point you can use the Alpine command line, e.g.
ls -al
and type exit to come out.
You can also run it in detached mode and it will keep running.
With the exec command we can log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up and copy the first 2-3 characters of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just the first 2-3 characters> sh
You will need to stop it, otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps.
For example,
$ docker ps
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine bash
$ docker start -i -a 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Now, whenever you need to install packages inside it, you just get into the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using
docker start myalpine
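To illustrate that the same container, with whatever you installed in it, sticks around, a session might look like this (the curl package is only an example):
docker run -d --name myalpine alpine tail -f /dev/null
docker exec -it myalpine sh -c 'apk add --no-cache curl'   # install something
docker exec -it myalpine sh                                # curl is still available here
docker stop myalpine                                       # stop it when done
docker start myalpine                                      # same container, packages intact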

Docker tool in Jenkins container (with mounted Docker socket) is not finding a Docker daemon to connect to

I just started a Jenkins docker container with a mounted docker socket like the following:
docker run -d \
--publish 8080:8080 \
--publish 50000:50000 \
--volume /my_jenkins_home:/var/jenkins_home \
--volume /var/run/docker.sock:/var/run/docker.sock \
--restart unless-stopped \
--name my_jenkins_container \
company/my_jenkins:latest
Then I bash into the container like this:
docker exec -it my_jenkins_container bash
A tool 'docker' command in a Jenkins pipeline script has automatically installed a Docker binary at the following path: /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker
However, when I try to run Docker commands from that Docker binary (assuming that it will connect with the Docker socket that has been mounted at /var/run/docker.sock) it returns the following error:
$ /var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
How can I ensure that this Docker binary (the binary that has been automatically installed via the Jenkins' tool 'docker' command) runs its Docker commands by connecting to the mounted Docker socket at /var/run/docker.sock?
Short Answer:
The file permissions of the mounted Docker socket file had to be revised.
Long Answer:
When I simply tried to execute /path/to/dockerTool/bin/docker ps -a on the Docker container, it was producing an error.
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Then, when I tried to execute /path/to/dockerTool/bin/docker ps -a with user=root, it worked fine.
$ docker exec -it --user=root my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 49 seconds ago Up 49 seconds 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
So it means I just needed to set the right permissions on the Docker socket. All I had to do was chgrp the socket file to the jenkins group so that the jenkins group/users can read/write to it (the before & after of the chgrp command is shown here):
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root 999 0 Jan 15 08:29 /var/run/docker.sock
$ docker exec -it --user=root my_jenkins_container bash -c "chgrp jenkins /var/run/docker.sock"
$ docker exec -it my_jenkins_container bash -c "ls -l /var/run/docker.sock"
srw-rw---- 1 root jenkins 0 Jan 15 08:29 /var/run/docker.sock
After that, executing /path/to/dockerTool/bin/docker ps -a as a non-root user worked fine
$ docker exec -it my_jenkins_container bash -c "/var/jenkins_home/tools/org.jenkinsci.plugins.docker.commons.tools.DockerTool/docker/bin/docker ps -a"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9dd56411efe company/my_jenkins:latest "/bin/tini -- /usr/lo" 3 minutes ago Up 3 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp my_jenkins_container
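An alternative that avoids changing the socket's group inside the container is to start Jenkins with the host's docker group added as a supplementary group (a sketch; assumes a Linux host, where stat -c %g prints the group ID owning the socket):
docker run -d \
--publish 8080:8080 \
--publish 50000:50000 \
--volume /my_jenkins_home:/var/jenkins_home \
--volume /var/run/docker.sock:/var/run/docker.sock \
--group-add "$(stat -c %g /var/run/docker.sock)" \
--restart unless-stopped \
--name my_jenkins_container \
company/my_jenkins:latest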

Issue shell commands on the remote server from local machine

The following command, issued from a Mac terminal, causes the docker commands to fail on the remote shell.
However, it works if I log in to the server and run the command there, replacing ";" with "&&".
ssh -i "myKey.pem" user@host 'docker stop $(docker ps -a -q --filter ancestor=name/kind); docker rm $(docker ps -a -q --filter ancestor=name/kind); docker rmi name/kind; docker build -t name/kind .; sudo docker run -it -d -p 80:80 name/kind'
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
I need to run this command from the local terminal because it is part of a bigger command which first builds the project locally and scps it to the server:
$bigger-command && then-the-ssh-as-shown-above
How do I go about it? Thanks
The best way to pass very complex commands to ssh is to create a script on the server side.
If you need to pass some parameters, proceed this way:
create a .sh file on your local host
scp it to your remote host
run ssh user@remotehost 'bash scriptfile.sh'
This should do the trick without giving you headaches about escaping.
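A minimal sketch of that workflow (the script name deploy.sh is a placeholder):
# On your Mac, after the local build step:
scp -i myKey.pem deploy.sh user@host:deploy.sh
ssh -i myKey.pem user@host 'bash deploy.sh'
# Alternatively, keep the script local and feed it to a remote bash via stdin:
ssh -i myKey.pem user@host 'bash -s' < deploy.sh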
