I'm running docker following this procedure:
$ sudo service docker start
$ docker pull epgg/eg
$ docker run -p 8081:80 --name eg -it epgg/eg bash
root@35f54d7d290f:~#
Notice at the last step it creates a root prompt root@35f54d7d290f:~#.
When I do
root@35f54d7d290f:~# exit
exit
The Docker process ends and the Apache server inside the container dies.
How can I exit the container safely, and how can I get back into the
container's prompt later?
When you run the following command, it performs two operations:
$ docker run -p 8081:80 --name eg -it epgg/eg bash
It creates a container named eg.
It runs a single process, bash, which you have overridden via the CMD parameter.
That means that when the bash shell exits, the container has nothing left to run, and hence your Docker container also enters the stopped state.
Ideally, you should create the container with the Apache server as the main process (either via the default ENTRYPOINT or CMD):
$ docker run -p 8081:80 --name eg -d epgg/eg
And then, using the following command, you can enter the running container:
$ docker exec -it eg bash
Here, the name of your container is eg.
(Note: since you already have a container named "eg", you may want to remove it first.)
$ docker rm -f eg
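Once the container is running detached, a quick check (a sketch; the curl assumes Apache answers on the mapped port):
$ docker ps --filter "name=eg"      # should show the container as Up
$ curl -I http://localhost:8081     # Apache should respond on the mapped port
$ docker exec -it eg bash           # re-enter whenever you need a shell
$ exit                              # leaves the exec shell; the container keeps running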
Since the purpose of any container is to run a process (or processes), the container stops/exits when that process finishes. So to keep the container running in the background, it must have an active process inside it.
You can use the -d shorthand when running the container to detach
it from the terminal, like this:
docker run -d -it --name {container_name} {image}:{tag}
But this doesn't guarantee that any process is actively running in the
background, so even in this case the container will stop when the
process comes to an end.
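You can see this for yourself with a short-lived command (the container name shortlived is just illustrative):
$ docker run -d --name shortlived alpine echo hello
$ docker ps -a --filter "name=shortlived"   # STATUS shows Exited (0) almost immediately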
To keep the Apache server running, start the container with httpd as its main process using the -DFOREGROUND flag:
/usr/sbin/httpd -DFOREGROUND (for CentOS/RHEL)
Running httpd in the foreground keeps the container's main process alive, so combined with -d your Apache service keeps running while the container is detached.
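For example, on a CentOS/RHEL-based image that has httpd installed (my-httpd-image here is a hypothetical image name), you could start the server as the container's main process like this:
docker run -d --name web -p 8081:80 my-httpd-image /usr/sbin/httpd -DFOREGROUND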
In other cases, to keep your services running in detached mode,
simply pass /bin/bash as the command; this will keep a bash
shell active in the background:
docker run -d -it --name {container_name} {image}:{tag} /bin/bash
Anyway, to step outside a running container without stopping it or its process, simply press Ctrl+P followed by Ctrl+Q.
To attach container again to the terminal use this:
docker attach {container_name}
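A quick demonstration of the detach/attach cycle (the container name demo is just an example):
$ docker run -d -it --name demo alpine /bin/sh
$ docker attach demo
/ #                                # press Ctrl+P then Ctrl+Q here to detach
$ docker ps --filter "name=demo"   # the container is still running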
I'm trying to run the following shell script, which needs the id/name of the container it should run in, determined dynamically.
One way could be to do docker ps and then copy the container id from the output, but that isn't dynamic.
So is there a way to do this dynamically?
#!/bin/bash
docker exec <container id/name> /bin/bash -c "useradd -m <username> -p <password>"
You can give your container a specific name when running it, using the --name option.
docker run --name mycontainer ...
Then your exec command can use the specified name:
docker exec -it mycontainer ...
You can start your container and store the container id inside a variable like so:
container_id=$(docker run -it --rm --detach busybox)
Then you can use the container id in your docker exec command like so:
docker exec $container_id ls -la
or
docker stop $container_id
Note: Using a (unique) id instead of a name for the container is inspired by this article on how to treat your servers/containers as cattle, not pets.
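Putting both ideas together for the original script, a minimal sketch (note that busybox ships adduser rather than useradd, and the user name newuser is just an example):
#!/bin/bash
# start a long-running container and capture its id
container_id=$(docker run --rm --detach busybox tail -f /dev/null)
# run the user-creation command inside it
docker exec "$container_id" /bin/sh -c "adduser -D newuser"
# stop (and, because of --rm, remove) the container when done
docker stop "$container_id"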
I just figured out a way to do this. I'm constantly going into my container with bash, but each time I have to look up the id of the running container, which is a pain. So I use the --filter option, like so:
docker ps -q --filter "name={name of container}"
Then the only output is the id of the container, which allows me to run:
docker exec -it $(docker ps -q --filter "name={name of container}") bash
...which is what I really want to do in this case.
You can filter by
id, name, label, exited, status, ancestor,
before, since, volume, network, publish, expose,
health, isolation, or is-task.
The documentation for filter is here.
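For example, other filters work the same way:
docker ps -q --filter "status=exited"     # ids of stopped containers
docker ps -q --filter "ancestor=alpine"   # containers created from the alpine image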
For example, if I use the command docker run -it alpine /bin/sh,
it starts a shell in which I can install packages and so on. But when I use the exit command, it goes back to the main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh Alpine container.
Please help
The container lives only as long as the process from the specified run command is running. When you tell it to run /bin/sh, then once you exit, the sh process dies and so does your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
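A quick check that tail is what keeps the container alive:
docker ps --filter "name=alpine"   # the container shows as Up
docker exec alpine ps              # tail -f /dev/null runs as PID 1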
Pull an image
docker image pull alpine
See that the image is there
docker image ls OR just docker images
See what is inside the alpine image
docker run alpine ls -al
Now your question is how to stay in the shell:
docker container run -it alpine /bin/sh
You are now at a shell prompt inside the container. Some distributions may have a bash shell as well.
If a container is already running, you can open a shell in it with docker exec, using just the first few characters of its ID:
docker exec -it 5f4 sh
/ # (<-- you can run Linux commands here!)
At this point you can use the Alpine command line and do
ls -al
Type exit to come out.
You can also run it in detached mode and it will keep running.
With the exec command you can then log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up and copy the first 2-3 characters of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just 2-3 digits> sh
You will need to stop the container yourself; otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in the background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/id using docker ps -a.
For example,
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND
4c01db0b339c        alpine              bash
$ docker start -i -a 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Whenever you need to install packages inside it, you just get a shell in the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can still start it again using
docker start myalpine
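The full lifecycle then looks like this (a sketch using the names from above):
docker stop myalpine              # stop it when you are done
docker start myalpine             # bring the same container back later
docker exec -it myalpine /bin/sh  # packages you installed earlier are still there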
I wanted to create a new container with Node.js and start a bash shell in it where I could interactively verify something.
So I ran docker run node /bin/bash, but it exited instantly.
What did I do wrong?
You missed the -it: docker run -it <image-name> /bin/bash
--interactive, -i: Keep STDIN open even if not attached
--tty, -t: Allocate a pseudo-TTY
docker run reference
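To see the difference (a quick sketch; the node image ships bash):
$ docker run node /bin/bash        # no TTY, stdin closed: bash exits immediately
$ docker run -it node /bin/bash    # interactive shell that stays open
root@<container-id>:/#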
I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after the command (a binary file) executes, the container stops and I get no shell into it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is a ubuntu:trusty container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash (contrary to docker run -it fedora bash)
According to its Dockerfile, what it will do is execute
phoronix-test-suite /bin/bash
Meaning, it will pass /bin/bash as a parameter to phoronix-test-suite, which will exit immediately. That leaves you no time to execute docker exec -it <container> bash in order to open a bash session in the active container.
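If all you want is a shell in that image, one option is to override the entrypoint instead (a sketch, assuming the image ships bash, which an ubuntu:trusty base normally does):
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite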
Have you tried restarting Docker? You might need to restart the Docker daemon, or even reboot the host.
I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (Hello World), but I want to understand the reason behind the "not found" errors.
Note: This question might look similar to Run shell script on docker from shared volume, but it addresses permission related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
(The mounted volume information was shown in a screenshot, omitted here.)
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should make a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash opened by docker run would mean trying to do Docker in Docker (DinD), which is not relevant for your test.
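For example (assuming the docker-machine VM is named default):
$ docker-machine ssh default
docker@default:~$ docker exec -it mounttest sh -c '/home/helloworld.sh'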