How to Find The User Who Stopped Docker Container - linux

I want to know which user stopped a docker container.
There are several user accounts on my server. I suspect that one of them sometimes stops the container.
How can I find the user that performed this operation?

You can use su -c history username to check the command history of a user. I don't know how many users you have, but you could loop through them and grep for commands that take Docker containers down (a sketch follows).
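A rough sketch of that loop, assuming bash users whose history ends up in the default ~/.bash_history files (note that bash only writes history when a shell exits, so very recent commands may be missing):
for home in /home/*; do
    # print matching stop/kill/rm commands, prefixed with the history file they came from
    grep -H -E 'docker (stop|kill|rm)' "$home/.bash_history" 2>/dev/null
done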

You can install the GNU Accounting Utilities to see which commands users have executed:
# CentOS:
yum install psacct
# Ubuntu:
apt-get install acct
# Also make sure that the corresponding service is enabled:
/etc/init.d/psacct status
Then, after you notice that the container has stopped, execute:
lastcomm --command docker
# or
lastcomm --command kill
to see who executed the above command(s).
You can use the above in combination with:
docker container logs <name-of-the-container>
to find the exact time at which the container was stopped (e.g. you may see a message in the logs like "stopping service..") and match it against the lastcomm output.
Other useful commands that come with the above package: sa, ac
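For example, to line the two up (my-container is a placeholder name; docker logs accepts a --timestamps flag, and lastcomm shows when each command ran):
docker container logs --timestamps my-container | tail -n 5
lastcomm --command docker
# compare the timestamp of the "stopping service.." log line
# with the times in the lastcomm output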

Related

Can't run docker as a normal user

I can't run docker commands as my own user, but I know that the service is running because I can run commands with sudo:
$ docker ps
Cannot connect to the Docker daemon at unix:///run/user/1000/docker.sock. Is the docker daemon running?
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
(snip) (snip) (snip) 13 days ago Up 2 hours (healthy) 9000/tcp (snip)
I am successfully running a few containers, and they each work, but I have another, not listed in the output above, that I need to run as my own user.
I am part of the docker group:
$ groups
docker www-data video tim
I'm not sure what else to check. I do have this:
$ echo $DOCKER_HOST
unix:///run/user/1000/docker.sock
Also:
$ uname -r
5.4.0-65-generic
$ docker --version
Docker version 19.03.6, build 369ce74a3c
This is on Ubuntu 18.04.5 LTS
As far as I can tell you followed all the post-installation steps correctly, so my best guess is that it has to do with the DOCKER_HOST environment variable.
Does it help if you unset DOCKER_HOST? (Perhaps you need to log out and back in for it to take effect.)
On my system, docker ps works with sudo, but once I set DOCKER_HOST=unix:///run/user/1000/docker.sock, I get the same error as you.
For some background, here is a question about the DOCKER_HOST variable. In essence, that variable should normally not be set.
Return to the default socket path (unix:///var/run/docker.sock) by unsetting DOCKER_HOST and removing any errant config files:
unset DOCKER_HOST
rm -r ~/.docker
The Docker daemon must be restarted after creating the “docker” group:
sudo service docker restart
Then, ensure you add your current user to the group:
sudo usermod -a -G docker $USER
This will ensure your user has access to the socket file.
UPDATE: 12/2022
I recently had to do this on Ubuntu 22.04 LTS and ran into the login shell persisting the previous group membership.
Since the UI manages the login shell, either a restart is required or you need to replace the process with exec. Until you restart, you can work around the issue by replacing your current shell process (use $0 instead if $SHELL doesn't match your preferred shell):
exec sudo -u $USER -E $SHELL
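A commonly used alternative, not mentioned in the original answer, is to start a subshell with the new group membership already active:
newgrp docker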

start docker container interactively

I have a very simple dockerfile with only one line, namely "FROM ubuntu". I created an image from this dockerfile with the command docker build -t ubuntu_ .
I know that I can create a new docker container from this image and run it interactively with the command
docker run -it my_new_container
I can later start this new container with the command
docker start my_new_container
As I understand it, I should also be able to use this container interactively with
docker start -i my_new_container
But, it does not work. It just runs and exits. I don't get to the container's command prompt as I do when I use run. What am I doing wrong?
If I understood correctly, you want to see the logs from the container in the terminal, the same as when you run the image with docker run. If that's the case, then try:
docker start -a my_docker_container
You can enter a running container with:
docker exec -it <container name> /bin/bash
example:
docker exec -it my_new_container /bin/bash
You can replace bash with sh if bash is not available in the container.
If you need to explicitly use a UID, like root = UID 0, you can specify it:
docker exec -it -u 0 my_new_container /bin/bash
which will log you in as root.
Direct answer:
To run an interactive shell for a non-running container, first find the image that the container is based on.
Then:
docker container run -it [yourImage] bash
If your eventual container is based on an Alpine image, replace bash with sh.
Technically, this will create a NEW container, but it gets the job done.
EDIT [preferred method]:
An even better way is to give the container something harmless to do. A nice solution from the VS Code docs is to put the following command into the service definition in your docker-compose.yml file:
services:
  my-app-service:
    command: ["sleep", "infinity"]
    # other relevant parts of your service def...
The idea here is that you're telling your container to sleep for an infinite amount of time. Since the container must keep that sleep process alive, it is forced to keep running.
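Hypothetical usage, assuming the service name above (use docker-compose instead of docker compose on older installs):
docker compose up -d
docker compose exec my-app-service bash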
This is how I run containers. Best wishes to whoever needs this nugget of info. We're all learning :)
You cannot get a shell into a container in its stopped state, or restart it directly with another entry point. If the container keeps exiting and you need to examine it, the only option I know of is to commit the container as a new image and then start a new container with such image, as per a related answer.
If you don't need that container anymore and just want it to stay up, you should run it with a process that will not exit. An example with an Ubuntu image would be (you don't need a Dockerfile for this):
docker run -d --name carrot ubuntu tail -f /dev/null
You will see that this container stays up, and you can now run bash in it to access the CLI:
docker exec -ti carrot bash
If the container has stopped for whatever reason, such as a machine restart, you can bring it back up:
docker start carrot
And it will continue to stay up again.

default user not added to docker group, have to do su $USER?

I have Ubuntu 18.04, and after installing Docker I added my user to the docker group with the command
sudo usermod -aG docker ${USER}
and logged in with
su - ${USER}
and if I check id, my user has been added to the docker group.
But when I reopen the terminal I can't run docker commands without sudo unless I explicitly run su ${USER} again.
Also, with the default user I can't find the docker group.
What am I missing here?
#larsks already replied to the main question in a comment; however, I would like to elaborate on the implications of that change (adding your default user to the docker group).
Basically, the Docker daemon socket is owned by root:docker, so in order to use the Docker CLI commands, you need either to be in the docker group, or to prepend all docker commands by sudo.
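You can verify the ownership on your own machine; the output below is illustrative:
ls -l /var/run/docker.sock
# srw-rw---- 1 root docker 0 Mar  3 09:00 /var/run/docker.sock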
As indicated in the documentation of Docker, it is risky to follow the first solution on your personal workstation, because this just amounts to providing the default user with root permissions without sudo-like password prompt protection. Indeed, users in the docker group are de facto root on the host. See for example this article and that one.
Instead, you may want to follow the second solution, which can be somewhat simplified by adding to your ~/.bashrc file an alias such as:
alias docker="sudo /usr/bin/docker"
Thus, docker run --rm -it debian will be automatically expanded to sudo /usr/bin/docker run --rm -it debian, thereby preserving sudo’s protection for your default user.

Docker fails at first run after install. Error Post http://..... permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?

I'm following step one of this docker tutorial.
I have installed ubuntu version 14.04 on a virtual box vm.
I intentionally downgraded my docker version so that when I type "docker version" I get Client version: 1.5.0. This is because the server I intend to communicate with is on 1.5.0.
When trying the command "docker run hello-world" I get the response:
"Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"
When running "sudo docker run hello-world" I get the response:
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
Can someone please explain to me what's happening and how can fix it?
Thanks.
Edit: I tried to follow the solution for Linux here.
I tried to follow El Mesa's instructions in that post. However, when I got to running sudo docker -d I got an Error running DeviceCreate (createPool) dm_task_run failed. I don't think I need to start anything up, since I was just following the tutorial, and the tutorial ran docker run hello-world immediately after installing docker.
Pay attention to the text that immediately precedes Are you trying to connect to a TLS-enabled daemon without TLS in the error message. In the question asked here it is permission denied, but it could also be no such file or directory (or possibly something else). The former is more likely to mean that the current user lacks permissions to access docker, and the latter is more likely to mean that there is a problem with the docker service itself, including the possibility that it is not running at all.
So, depending on your situation, look for the answers on this and the linked question page that focus on the respective problem area.
In my case (CentOS Linux release 7.1.1503 (Core), docker-1.7.1-108.el7.centos.x86_64) it was permission denied. I had added the user to the docker group (sudo usermod -a -G docker user), but the docker command still didn't work when I ran it as that user, while it ran fine under sudo. What I forgot to do was log the user out and back in after adding it to the docker group, a step necessary for the group membership to take effect.
Restarting the machine will also solve this issue, but it is a more drastic step; it works because it implies the log out / log in step. I would recommend trying to log out and back in before restarting, because if that works it gives you more confidence that group membership was the actual issue. If it doesn't work you can always try restarting, though if things only work after that, the restart probably took care of some other underlying issue.
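After logging back in, you can confirm the membership is active before retrying docker, for example:
id -nG | grep -qw docker && echo "docker group active"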
And one more thing, in case you come across it and find yourself in doubt: when you first install docker and wish to add a user to the docker group, you may notice (as I did in my case) that a "dockerroot" group exists but no "docker" group. Do not add the user to the dockerroot group assuming that is the one you need. Instead, create a new docker group and add the user to it.
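A sketch of those last two steps (replace user with the actual account name):
sudo groupadd docker
sudo usermod -a -G docker user
# then log out and back in so the membership takes effect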
It may be that your docker daemon is not running.
I have ubuntu/docker on a desktop with wireless LAN.
It acts a bit finicky compared to the wired computers from which docker works OK, and duplicates the error message you reported:
$ docker run -it ubuntu:latest /bin/bash
FATA[0000] Post http:///var/run/docker.sock/v1.17/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
However, after running:
sudo service docker start
It behaves correctly (at least until the host is rebooted):
$ docker run -it ubuntu:latest /bin/bash
root@2cea4e5f5028:/#
If the system is not starting the docker daemon on boot, as was the case here, then the docker daemon can be automatically started on boot by editing /etc/rc.local to do so. Add the line below immediately before the exit line. This will fork a new bash shell, wait 30 sec for the network setup, etc., to settle, and start the docker daemon. sudo is unnecessary here because /etc/rc.local runs as root.
( sleep 30; /usr/sbin/service docker start ) &
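On distributions that use systemd rather than /etc/rc.local, the usual equivalent is to enable the service at boot:
sudo systemctl enable docker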

Automatically Start Services in Docker Container

I'm doing some initial tests with Docker. At the moment I have my images and I can get some containers running, which I can see with:
docker ps
I do docker attach container_id and start the apache2 service.
Then from the main console I commit the container to the image.
After exiting the container, if I try to start it again or run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and you want apache2 to start automatically when the container starts, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) on a new line at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
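A sketch of that edit, run inside the container (the apachectl path is the one from this answer's example; substitute your own start command):
echo '/usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf' >> /etc/bash.bashrc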
An option you could use would be to run a process manager such as Supervisord to manage multiple processes. Someone accomplished this with sshd and mongodb: https://github.com/justone/docker-mongodb
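A minimal sketch of the Supervisord approach; the paths, file locations, and program name here are illustrative, not taken from the linked repo:
# /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND
Then make it the container's main process in the Dockerfile:
CMD ["/usr/bin/supervisord"]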
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You can also create the image via a manual commit, as you do, and run it from the command line; then you supply the command it should run (which is exactly what the Dockerfile CMD does). You can also override a Dockerfile's CMD this way: only the latest CMD is executed, which is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you could of course create a convenience script.
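A minimal Dockerfile sketch of this approach; the package and binary names assume an Ubuntu base image and are illustrative:
FROM ubuntu
RUN apt-get update && apt-get install -y apache2
# CMD runs when the container starts; -D FOREGROUND keeps apache as the root process
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]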
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
Try adding a start script to the entrypoint in the Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash
