I wanted to create a new container with Node.js and start a bash-shell in it where I can interactively verify something.
So I ran docker run node /bin/bash, but it exited instantly.
What did I do wrong?
You missed the -it flags: docker run -it <image-name> /bin/bash
--interactive, -i: Keep STDIN open even if not attached
--tty, -t: Allocate a pseudo-TTY
docker run reference
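As a sketch (assuming the official node image from Docker Hub), the fixed command drops you into an interactive shell:

```shell
# -i keeps STDIN open, -t allocates a pseudo-TTY;
# together they give you an interactive shell.
docker run -it node /bin/bash

# Inside the container you can verify things interactively:
#   node --version
#   node -e "console.log(1 + 1)"
# Type exit (or press Ctrl+D) to leave; the container then stops.
```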
Related
I'm running docker following this procedure:
$ sudo service docker start
$ docker pull epgg/eg
$ docker run -p 8081:80 --name eg -it epgg/eg bash
root@35f54d7d290f:~#
Notice that the last step leaves me at a root prompt, root@35f54d7d290f:~#.
When I do
root@35f54d7d290f:~# exit
exit
the container stops and the Apache server inside it dies.
How can I exit the container safely, and how can I get back into the
container's prompt later?
When you run the following command, it performs two operations.
$ docker run -p 8081:80 --name eg -it epgg/eg bash
It creates a container named eg.
It runs a single process, bash, which you have overridden via the command parameter.
That means when the bash shell exits, the container has nothing left to run, so it enters the stopped state.
Ideally you should create a container that runs the Apache server as its main process (either via the default ENTRYPOINT or CMD).
$ docker run -p 8081:80 --name eg -d epgg/eg
Then you can enter the running container with the following command.
$ docker exec -it eg bash
Here eg is the name of your container.
(Note since you already have a container named "eg" you may want to remove it first)
$ docker rm -f eg
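Putting the answer together, a minimal sketch of the corrected workflow (assuming the epgg/eg image starts Apache by default):

```shell
# Remove the old container that was started with bash as its command
docker rm -f eg

# Start a fresh container detached (-d); the image's default
# entrypoint/CMD keeps Apache running as the main process
docker run -p 8081:80 --name eg -d epgg/eg

# Open an interactive shell inside the running container
docker exec -it eg bash

# Exiting this shell only ends the exec'd bash process;
# Apache (the main process) and the container keep running.
```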
Since the purpose of any container is to run a process (or processes), the container stops/exits when that process finishes. So to keep a container running in the background, it must have an active process inside it.
You can use the -d shorthand while running the container to detach
it from the terminal, like this:
docker run -d -it --name {container_name} {image}:{tag}
But this alone doesn't guarantee that any process stays active in the
background, so the container will still stop when its main
process ends.
To keep the Apache server running, you need to start httpd with the
-DFOREGROUND flag inside the container:
/usr/sbin/httpd -DFOREGROUND (for CentOS/RHEL)
This keeps httpd in the foreground as the container's main process, so the container stays up even while detached.
In other cases, to keep a container running in detached mode,
simply pass the /bin/bash command; this keeps the bash
shell active in the background.
docker run -d -it --name {container_name} {image}:{tag} /bin/bash
Anyway, to get out of a running container without stopping it or its process, simply press Ctrl+P followed by Ctrl+Q.
To attach container again to the terminal use this:
docker attach {container_name}
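The detach/attach cycle above can be sketched end to end (the container name demo is just an example):

```shell
# Start an interactive container with a shell as its main process
docker run -it --name demo alpine /bin/sh

# Inside the shell, press Ctrl+P then Ctrl+Q to detach;
# the shell (and therefore the container) keeps running.

# Back on the host, confirm the container is still up
docker ps

# Re-attach to the same shell session
docker attach demo
```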
For example, if I use the command docker run -it alpine /bin/sh,
it starts a shell in which I can install packages and so on. But when I use the exit command, I'm back in my main terminal.
So how can I access the same container again?
When I run that command again, I get a fresh Alpine container.
Please help
The container lives as long as the specified run command's process is still running. When you specify to run /bin/sh, once you exit, the sh process will die and so will your container.
If you want to keep your container running, you have to keep the process inside it running. For your case (I am not sure what you want to achieve; I assume you are just testing), the following will keep it running:
docker run -d --name alpine alpine tail -f /dev/null
Then you can sh into the container using
docker exec -it alpine sh
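The keep-alive works because tail -f /dev/null never exits, so the container's main process never finishes. You can see the same behaviour in a plain shell, outside Docker:

```shell
# tail -f /dev/null blocks forever, reading a file that never grows;
# this is exactly why it keeps a container's main process alive.
tail -f /dev/null &
pid=$!

sleep 1
# A second later the process is still running (kill -0 just probes it)
kill -0 "$pid" && echo "still running"

kill "$pid"
```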
Pull an image:
docker image pull alpine
See that the image is there:
docker image ls (or just docker images)
See what is inside the alpine image:
docker run alpine ls -al
Now, your question is how to stay in the shell:
docker container run -it alpine /bin/sh
You are now at the shell's command line. (Some distributions have bash instead.) If a container is already running, you can open a shell in it by ID, abbreviated here to 5f4:
docker exec -it 5f4 sh
/ # (<-- you can run Linux commands here!)
At this point you are at the Alpine command line and can run, for example:
ls -al
Type exit to come out.
You can run the container in detached mode and it will keep running.
With the exec command you can log in again:
docker container run -it -d alpine /bin/sh
Verify that it is up, and copy the first 2-3 characters of the container ID:
docker container ls
Log in with the exec command:
docker exec -it <CONTAINER ID or just 2-3 digits> sh
You will need to stop the container, otherwise it will keep running:
docker stop <CONTAINER ID>
Run Alpine in background
$ docker run --name alpy -dit alpine
$ docker ps
Attach to Alpine
$ docker attach alpy
You should use docker start, which allows you to start a stopped container. If you didn't name your container, you'll need to get its name/ID using docker ps -a (the -a also lists stopped containers).
For example,
$ docker ps -a
CONTAINER ID IMAGE COMMAND
4c01db0b339c alpine "bash"
$ docker start -a -i 4c01db0b339c
What you should do is the following:
docker run -d --name myalpine alpine tail -f /dev/null
This makes sure that your container doesn't die. Now, whenever you need to install packages inside it, you just get inside the container using sh:
docker exec -it myalpine /bin/sh
If for some reason your container dies, you can start it again using:
docker start myalpine
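For completeness, the stop/start cycle with this named container can be sketched as:

```shell
# List all containers, including stopped ones
docker ps -a

# Restart the stopped container; its filesystem changes
# (e.g. packages you installed earlier) are preserved
docker start myalpine

# Get a shell in it again
docker exec -it myalpine /bin/sh
```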
I am trying to get a shell inside the Docker container moul/phoronix-test-suite on Docker Hub using this command
docker run -t -i moul/phoronix-test-suite /bin/bash
but just after the command (the binary) executes, the container stops and I get no shell in it.
[slazer#localhost ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0993189463e6 moul/phoronix-test-suite "phoronix-test-suite " 7 seconds ago Exited (0) 3 seconds ago kickass_shockley
It is an ubuntu:trusty container. How can I get a shell into it, so that I can send arguments to the command phoronix-test-suite?
docker run -t -i moul/phoronix-test-suite /bin/bash will not give you a bash (contrary to docker run -it fedora bash)
According to its Dockerfile, what it will do is execute
phoronix-test-suite /bin/bash
Meaning, it will pass /bin/bash as a parameter to phoronix-test-suite, which will exit immediately. That leaves you no time to run docker exec -it <container> bash in order to open a bash session in the running container.
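If you only want a shell in that image, one option (a sketch, not tested against this particular image) is to override the entrypoint so bash becomes the main process instead of phoronix-test-suite:

```shell
# --entrypoint replaces the image's ENTRYPOINT for this run,
# so bash is the container's main process
docker run -it --entrypoint /bin/bash moul/phoronix-test-suite

# From that shell you can invoke the tool with whatever
# arguments you need, e.g.:
#   phoronix-test-suite list-available-tests
```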
Have you tried restarting Docker? You might need to restart the daemon, or even reboot the host.
I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (echo Hello World) but I want to understand the reason behind the not found errors.
Note: This question might look similar to Run shell script on docker from shared volume, but it addresses permission related issues.
References:
The helloworld.sh file:
#!/bin/sh
echo 'Hello World'
The mounted volumes information is captured below.
Considering the default ENTRYPOINT for the 'ubuntu' image is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should open a new boot2docker session (docker-machine ssh), and in that new boot2docker shell session, try the docker exec.
Trying docker exec from within the bash opened by docker run would mean trying to do DinD (Docker in Docker), which is not relevant for your test.
I just started using Docker, and I like it very much, but I have a clunky
workflow that I'd like to streamline. When I'm iterating on my Dockerfile script
I will often test things out after a build by launching a
bash session, running some commands, finding out that such
and such package didn't get installed correctly, then
going back and tweaking my Dockerfile.
Let's say I have built my image and tagged it as buildfoo, I'd run it like
this:
$> docker run -t -i buildfoo
... enter some bash commands.. then ^D to exit
Then I will have a container running that I have to clean up. Usually I just nuke everything like this:
docker rm --force `docker ps -qa`
This works OK for me.. However, I'd rather not have to manually remove the
container.
Any tips gratefully accepted !
Some Additional Minor Details:
Running minimal centos 7 image and using bash as my shell.
Please use the --rm flag of the docker run command: --rm=true or just --rm.
It automatically removes the container when it exits (incompatible with -d). Example:
docker run -i -t --rm=true centos /bin/bash
or
docker run -i -t --rm centos /bin/bash
Even though the above still works, the command below uses Docker's newer syntax:
docker container run -it --rm centos bash
I use the alias dr
alias dr='docker run -it --rm'
That gives you:
dr myimage
ls
...
exit
No more container running.
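The whole --rm flow can be sketched end to end (same alias and image as above):

```shell
alias dr='docker run -it --rm'

# Start a throwaway container
dr centos bash

# ... poke around inside, then type exit (or press Ctrl+D)

# Back on the host: because of --rm, nothing is left behind
docker ps -a   # the container no longer appears
```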