I have a Docker image (MyBaseImage) with the OpenSSH server installed. Now, in the Dockerfile, I have the following.
#Download base image
FROM MyBaseImage
RUN service ssh start
I build the new image by typing
docker build .
Docker builds the image fine, giving the following information.
Step 1/2 : FROM MyBaseImage
---> 56f88e347f77
Step 2/2 : RUN service ssh start
---> Running in a1afe0c2ce71
* Starting OpenBSD Secure Shell server sshd [ OK ]
Removing intermediate container a1afe0c2ce71
---> 7879cebe8b6a
But when I run the new image by typing
docker run -it 7879cebe8b6a
Typing the following in the terminal of the container
service ssh status
gives
* sshd is not running
I then have to manually start the OpenSSH server by typing service ssh start.
What could be the reason for it?
If you look at your build, you can see the ssh service start in an intermediate container which is deleted in the next build step:
---> Running in a1afe0c2ce71
* Starting OpenBSD Secure Shell server sshd [ OK ]
Removing intermediate container a1afe0c2ce71
To start a service from a Dockerfile, you should use either a CMD or ENTRYPOINT instruction as the last line (depending, normally, on whether you might want to pass an argument in the docker run ... command).
Generally, however, a service will start in the background as a daemon, so having this as your last line:
CMD ["service", "ssh", "start"]
Will not work, as the container will exit because it has nothing to do.
What you probably want (from the docker docs) is this:
CMD ["/usr/sbin/sshd", "-D"]
Which starts the service in the foreground so that the container will stay alive.
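Putting that together, a minimal Dockerfile sketch (assuming, as in the question, that MyBaseImage already has OpenSSH installed; the mkdir line is a common extra step on Debian/Ubuntu images and may not be needed in yours):
FROM MyBaseImage
# Some Debian/Ubuntu images need the privilege-separation directory created first
RUN mkdir -p /var/run/sshd
# Run sshd in the foreground as PID 1 so the container stays alive
CMD ["/usr/sbin/sshd", "-D"]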
This link has useful info about the difference between CMD & ENTRYPOINT, and also the difference between the exec & shell formats.
Depending on which Linux distro you are using, the command changes slightly.
If you are using Ubuntu, your start command should work.
But if your base image is CentOS/RHEL, try service sshd start
While nginx might not be a good example for this case, there are similar cases where we need to run a process and access it inside a container without recreating/rerunning it.
I already run nginx as a container using the following command:
docker run -d --name=my_container nginx:latest
I didn't activate a terminal or interactive mode during the docker run command. Now, I want to run a bash (using docker exec ...) in detached mode and then attach to it (using docker attach ...) later.
As you know, we can run a new process inside a container, e.g.:
docker exec -itd my_container bash
This way, a new bash process will run inside this container.
Now my question is: how do I attach to this process later?
I tried to run the following command, but it just shows the live nginx log:
docker attach my_nginx2
If I understand your use case, you can do this:
docker run -itd --name=my_container nginx:latest bash -i -c 'nginx; bash -i'
This allows you to do:
docker attach my_container
You can detach from the container and leave it running using the CTRL-p CTRL-q key sequence.
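Putting the pieces together, the full round trip looks something like this (names follow the question's example):
# Start nginx in the background and keep an interactive bash as the container's main process
docker run -itd --name=my_container nginx:latest bash -i -c 'nginx; bash -i'
# Attach to that bash whenever you need it
docker attach my_container
# ...work in the shell, then press CTRL-p CTRL-q to detach and leave the container running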
I build a docker image which runs flask on an uWSGI server. I have a uwsgi.ini file with configuration. When I use
CMD ["uwsgi", "--ini", "uwsgi.ini"]
in the Dockerfile, everything works fine and the server runs; I can access my Flask app via the browser. The only problem is that I cannot use the container's bash while it's running, as it is blocked by the uWSGI process. I found that appending an & should make uWSGI run in the background. However, when I use
CMD ["uwsgi", "--ini", "uwsgi.ini", "&"]
I get an error saying
unable to load configuration from &
I get that when I try this, uWSGI thinks I'm passing another argument that it should process. However, I cannot find any way to tell it that this is not the case. Using docker run with the -d argument also only detaches the container from the current terminal on the host, but when I use docker attach, I get a bash that I can't do anything with.
Is there a way to tell uWSGI explicitly that I want it to run in the background? Am I missing something?
You can execute a command on your container by using the exec command.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9fb488c77d55 nginx "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp distracted_brown
So here I have an nginx image running in a container called distracted_brown. I can ask the container to run a command using exec. In this case, the command I want to run is the shell, sh. I also pass the -it flag so I can run interactively with STDIN and STDOUT.
docker container exec -it distracted_brown sh
This will give me shell access to the container where nginx is running as PID 1. As a side note, you don't normally want to run your CMD process in the background, as the container will close when PID 1 closes.
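Applied to the uWSGI question above, the same pattern would look like this (the container name is illustrative):
# Leave uWSGI running as PID 1 and open a shell alongside it
docker container exec -it my_uwsgi_container bash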
I have a container with a docker-compose file like this:
services:
  app:
    build:
      context: app
    restart: always
version: '3.5'
I launch a node app with: docker-compose run -d --name my-app app node myapp.js
The app is made to either run to completion or throw, and the goal is to have Docker restart it in an infinite loop, regardless of the exit code. I'm unsure why, but Docker doesn't restart it.
How can I debug this? I have no clue what exit code node is sending, nor do I know which exit code docker uses to decide to restart or not.
I am also on Mac and haven't tested on Linux yet. Edit: It does restart on Linux; I don't have another Mac to check whether the behavior is isolated to mine.
It is important to understand the following two concepts:
Ending your Node app doesn't mean the end of your container. Your container runs a shared process from your OS, and your Node app is only a subprocess of that. (Assuming your application runs with the daemon.)
The restart option indicates the "starting" policy - it will never terminate a running container and start it again.
Having said that, what you need is a way to really restart your container from within the application. The best way to do this is via Docker healthchecks:
https://docs.docker.com/engine/reference/builder/#healthcheck
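As a sketch of that route (the healthcheck.js script and the timings are illustrative, not from the question's setup):
# In the Dockerfile: flag the container unhealthy when the app stops responding.
# healthcheck.js is a hypothetical script that exits non-zero on failure.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD node healthcheck.js || exit 1
CMD ["node", "myapp.js"]
Note that a failing healthcheck only marks the container as unhealthy; something still has to watch that status (an autoheal-style companion container, for example) to actually restart it.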
Or, here are some answers on restarting a container from within the application.
Stopping docker container from inside
From the GitHub issue, it seems like it does not respect --restart, or from @Charlie's comment, it seems like it varies from platform to platform.
The docker-compose run command is for running "one-off" or "ad-hoc" tasks. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
docker-compose run
Also, if it's like docker run -it, then I am not seeing an option for restart=always, but it should then respect the restart option in compose.
Usage:
run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] [-l KEY=VALUE...]
SERVICE [COMMAND] [ARGS...]
Options:
-d, --detach Detached mode: Run container in the background, print
new container name.
--name NAME Assign a name to the container
--entrypoint CMD Override the entrypoint of the image.
-e KEY=VAL Set an environment variable (can be used multiple times)
-l, --label KEY=VAL Add or override a label (can be used multiple times)
-u, --user="" Run as specified username or uid
--no-deps Don't start linked services.
--rm Remove container after run. Ignored in detached mode.
-p, --publish=[] Publish a container's port(s) to the host
--service-ports Run command with the service's ports enabled and mapped
to the host.
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
-v, --volume=[] Bind mount a volume (default [])
-T Disable pseudo-tty allocation. By default `docker-compose run`
allocates a TTY.
-w, --workdir="" Working directory inside the container
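If the goal is a service that restarts forever, starting it via up instead of run is the usual route, and docker inspect shows the exit code and restart count when debugging. A sketch (the container name is a placeholder):
# 'restart: always' from the compose file applies here, unlike with 'run'
docker-compose up -d app
# Show what exit code the node process returned and how often Docker restarted it
docker inspect --format '{{.State.ExitCode}} restarts={{.RestartCount}}' <container>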
I can't start a container on Bluemix with Debian, CentOS, Alpine, or others. Is there a way, or is it blocked?
Image from docker hub.
Is there any getting-started guide for the run command?
I suppose that I need a file for parameters.
A big difference between running a container in local Docker vs. in the container service is that all containers in the container service are effectively running with -d (i.e. daemon/disconnected mode). If you're just using the base images you listed, most of those do not have a long-running process in the container and expect that you will be running them interactively.
The result of that is that in the container service, the container starts, then exits again because it's non-interactive and there's no other process to keep it alive. You can try adding a wait as the "cmd" for it.
I.e., for your Dockerfile:
FROM alpine
Build that to your registry, then run it with something like cf ic run --name alpinetest -m 512 registry.ng.bluemix.net/yourregistryhere/alpine sh -c "sleep 1000000"
Then, to get an interactive shell, you can exec into the container using cf ic exec -ti alpinetest /bin/sh
Obviously, to have it do something useful, you're probably going to want to put an actual server in there, running as the foreground app, and set that as the CMD or ENTRYPOINT, but this will give you a running container to poke at.
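As a sketch, the long-running command can also be baked into the image instead of passed on the command line:
FROM alpine
# Keep the container alive in the non-interactive container service
CMD ["sh", "-c", "sleep 1000000"]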
I'm doing some initial tests with Docker. At the moment I have my images and I can get some containers running, which I can see with:
docker ps
I do docker attach container_id and start the apache2 service.
Then, from the main console, I commit the container to an image.
After exiting the container, if I try to start the container or try to run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and you want to automatically start apache2 when the container starts, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) on a new line at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
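For example, the line appended to /etc/bash.bashrc might look like this (the paths follow the answer's example and may differ in your image):
# Last line of /etc/bash.bashrc inside the container: start apache2 whenever bash starts
/usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf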
Another option would be to use a process manager such as Supervisord to run multiple processes. Someone accomplished this with sshd and MongoDB: https://github.com/justone/docker-mongodb
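A minimal supervisord.conf sketch of that idea (the program names and paths are illustrative, not taken from the linked repo):
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/usr/local/apache2/bin/apachectl -D FOREGROUND
The image's CMD then runs supervisord in the foreground, and supervisord keeps both services alive.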
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You could also create the image via a manual commit, as you do, and run it from the command line. Then, you supply the command it should run (which is exactly what the Dockerfile CMD actually does). You can also override the Dockerfile's CMD this way: only the latest CMD is executed, which is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you could of course create a convenience script.
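A sketch of that manual flow (the image name and container ID are placeholders):
# Commit the configured container to a new image
docker commit <container_id> myapache
# Supply the start command at run time; it overrides any CMD baked into the image
docker run -i -t myapache /usr/local/apache2/bin/apachectl -D FOREGROUND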
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
Try adding the start script to the ENTRYPOINT in the Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash