With Docker, you can define CMD and ENTRYPOINT together such that the former becomes the arguments to the latter:
ENTRYPOINT [CMD]
This way, the end-user doesn't have to know or care about the entrypoint.
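For reference, a minimal Dockerfile sketch of that pattern (image, script, and flag names here are made up for illustration):

```dockerfile
FROM alpine:3.19
COPY entrypoint.sh /entrypoint.sh
# exec form for both, so CMD becomes the arguments to ENTRYPOINT
ENTRYPOINT ["/entrypoint.sh"]
CMD ["--default-flag", "value"]
```

With this, `docker run myimage --other-flag` replaces only the CMD part; the entrypoint still runs.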
How can this be accomplished with Azure Container Instances? The documentation suggests that it can't, providing only --command-line, which seems to replace both.
With plain Docker, the arguments you pass after the image name in docker run override CMD, and the --entrypoint flag overrides ENTRYPOINT. Azure Container Instances exposes only --command-line, which replaces the entrypoint and its arguments together, so there is no way to override just the CMD while keeping the image's ENTRYPOINT.
The article that you linked explains how --command-line can be used to override ENTRYPOINT and CMD at run time, similar to how you would do it with a docker run command.
pm2-docker is supposed to run inside docker containers. According to http://pm2.keymetrics.io/docs/usage/docker-pm2-nodejs/:
To split each processes in his own Docker, you can use the --only [app-name] option:
CMD ["pm2-docker", "process.yml", "--only", "APP"]
What is this --only option for?
"Split each process in his own Docker"? Its own Docker what? A container?
So pm2-docker runs inside a container and spawns containers inside of it?
Apparently, you can specify several services in the process.yml file, and this flag will start only the one specified with --only. It has nothing to do with docker or containers, since it's also a plain pm2 argument.
I have an application running inside Docker that requires hugepages to run. I tried the following set of commands:
CMD ["mkdir", "-p" ,"/dev/hugepages"]
CMD ["mount" ,"-t", "hugetlbfs" , "none", "/dev/hugepages"]
CMD ["echo 512" ,">" ,"/proc/sys/vm/nr_hugepages"]
CMD ["mount"]
But I don't see hugepages mounted in the output of the mount command. Why?
Could anyone please point out whether this is possible?
There are a number of things at hand:
First of all, a Dockerfile only has a single command (CMD); only the last one takes effect, so what you're doing won't work. If you need to perform multiple steps when the container starts, consider using an entrypoint script; for example, see the entrypoint script of the official mysql image.
Second, running mount in a container requires additional privileges. You can use --privileged, but that is probably far too broad and gives far too many privileges to the container. You can try running the container with --cap-add SYS_ADMIN instead.
Alternative solution
A much cleaner solution is to mount hugepages on the host and give the container access to that device, e.g.:
docker run --device=/dev/hugepages:/dev/hugepages ....
I am experimenting with Docker and understanding concepts around use of volumes. I have a tomcat app which writes files to a particular volume.
I write a Dockerfile with ENTRYPOINT of "dosomething.sh"
The issue I have with the entrypoint script is:
In "dosomething.sh", I could potentially have malicious code that deletes all files on the volume!
Is there a way to guard against this? I was planning on sharing this Dockerfile and script with my dev team too, and the care I'd have to take for a production rollout seems scary!
One thought is not to have an "ENTRYPOINT" at all for all the containers that have volumes.
Experienced folks, please advise on how you deal with this...
If you are using a data volume container to isolate your volume, such containers never run: they are only created (docker create).
That means you need to mount that data volume container into other containers for them to access that volume.
That mitigates the dangerous entrypoint a bit: a plain docker run would have access to nothing, since no -v volume-mount option would have been set.
Another approach is to at least declare the script as CMD, not ENTRYPOINT (and set the ENTRYPOINT to [ "/bin/sh", "-c" ]). That way, it is easier to docker run with an alternative command (passed as a parameter, overriding CMD), instead of always having to execute the script just because it is the ENTRYPOINT.
I would like to run a docker container that hosts a simple web application, however I do not understand how to design/run the image as a server. For example:
docker run -d -p 80:80 ubuntu:14.04 /bin/bash
This will start and immediately shut down the container. Instead, we can start it interactively:
docker run -i -p 80:80 ubuntu:14.04 /bin/bash
This works, but now I have to keep an interactive shell open for every container that is running? I would rather just start it and have it run in the background. A hack would be to use a command that never returns:
docker run -d -p 80:80 {image} tail -F /var/log/kern.log
But now I cannot connect to the shell anymore to inspect what is going on if the application acts up.
Is there a way to start the container in the background (as we would do for a vm), in a way that allows for attaching/detaching a shell from the host? Or am I completely missing the point?
The final argument to docker run is the command to run within the container. When you run docker run -d -p 80:80 ubuntu:14.04 /bin/bash, you're running bash in the container and nothing more. You actually want to run your web application in a container and to keep that container alive, so you should do docker run -d -p 80:80 ubuntu:14.04 /path/to/yourapp.
But your application probably depends on some configuration in order to run. If it reads its configuration from environment variables, you can use the -e key=value arguments with docker run. If your application needs a configuration file to be in place, you should probably use a Dockerfile to set up the configuration first.
This article provides a nice complete example of running a node application in a container.
Users of docker tend to assume a container is a complete VM, while the docker design concept is focused on optimal containerization rather than mimicking a VM within a container.
Both views are valid, but some implementation details are not easy to get familiar with in the beginning. I will try to summarize some of the implementation differences in a way that is easier to understand.
SSH
SSH would be the most straightforward way to get inside a Linux VM (or container), but many docker images do not have an ssh server installed. I believe this is for optimization and security reasons.
docker attach
docker attach can be handy as an out-of-the-box tool. However, as of this writing it is not stable - https://github.com/docker/docker/issues/8521. The issue might be associated with the SSH setup, and it's not clear when it will be completely fixed.
docker recommended practices (nsenter etc.)
Some alternatives (or best practices in some sense) recommended by Docker at https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
This practice basically separates the mutable elements out of the container and maps them to locations on the docker host, so they can be manipulated from outside the container and/or persisted. This could be a good practice in a production environment, but less so now, while most docker-related projects are still around dev and staging environments.
bash command line
"docker exec -it {container id} bash" could be a very handy and practical tool to get into the machine.
Some basics
"docker run" creates a new container, so previous changes will not be there.
"docker start" will start an existing container, so previous changes will still be in the container; however, you need to find the correct container-id among many with the same image-id. Use "docker commit" to save the changes as a new image if desired.
Ctrl-C will stop the container when exiting. Append "&" at the end of the command so the process runs in the background and gives you the prompt back when you hit the enter key.
To the original question, you can tail some file, like you mentioned, to keep the process running.
To reach the shell, instead of "attach", you have two options:
docker exec -it <container_id> /bin/bash
Or
run an ssh daemon in the container, map the ssh port, and then ssh into the container.
I'm doing some initial tests with docker. At the moment I have my images and some containers running, which I can list with:
docker ps
I run docker attach container_id and start the apache2 service.
Then from the main console I commit the container to the image.
After exiting the container, if I try to start the container or run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and if you want to automatically start apache2 when the container is started, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) to a newline at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
An option you could use would be to run a process manager such as Supervisord to manage multiple processes. Someone accomplished this with sshd and mongodb: https://github.com/justone/docker-mongodb
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You can also create the image via a manual commit, as you do, and run it from the command line, supplying the command it should run (which is exactly what the Dockerfile CMD does). You can also override a Dockerfile's CMD this way: only the last CMD is executed, which will be your command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you can of course create a convenience script.
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
Try adding a start script to the entrypoint in the Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash