I'm trying to containerize an application that consists of two services. One is a basic server running on a specific port and the second is an SSH server. The first service creates users by executing standard Unix commands (managing users and their SSH keys), and those users can then log in to the second, SSH service. Each service runs in a separate container and I use Docker Compose to run everything together.
The issue is that I'm unable to define Docker volumes so that a user created by the first service can use the second service. I need to share files like /etc/passwd, /etc/shadow and /etc/group between both services.
What I tried:
1) Define volumes for each file
Not supported in Docker Compose (it can use only directories as volumes)
2) Replace /etc/passwd with a symlink to a copy in a volume
as described here
3) Set the whole /etc as a volume
doesn't work (only some files make it from the image into the volume)
is ugly
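For reference, a minimal sketch of what that third attempt looks like with a named volume (shown with plain docker run for brevity; the volume and image names are assumptions):
$ docker volume create etc_shared
$ docker run -d --name user-admin -v etc_shared:/etc user-admin-image
$ docker run -d --name sshd -v etc_shared:/etc sshd-image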
I would be thrilled with any suggestion or workaround that wouldn't require putting both services in one container.
Related
I have two Docker containers, A and B. On container A a Django application is running. On container B a WebDAV source is mounted.
Now I want to check from container A whether a folder exists in container B (in the WebDAV mount destination).
What is the best solution to do something like that? Currently I have solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory which should be checked. Of course there are further possible solutions, like doing it with HTTP requests.
Because there are so many ways to solve a problem like that, I want to know if there is a best practice (considering security, effort to implement, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just directly use this. This requires almost no setup other than providing the other container's name in configuration (and if you're using plain docker run putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) and a non-Docker development environment.
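For example, checking whether a folder exists can be a single PROPFIND request from container A. A minimal sketch, assuming a hypothetical http://webdav/dav/ endpoint reachable over the shared network:
# 207 Multi-Status means the folder exists, 404 means it does not
$ curl -s -o /dev/null -w '%{http_code}\n' -X PROPFIND -H 'Depth: 0' \
    http://webdav/dav/some/folder/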
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice to execute commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount the same filesystem read-only in one container and read-write in the other container.
See this answer: Docker, mount volumes as readonly
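A minimal sketch of that split, assuming a shared host directory and hypothetical image names:
$ docker run -d --name webdav-writer -v /srv/shared:/data:rw writer-image
$ docker run -d --name django-reader -v /srv/shared:/data:ro reader-image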
I'm trying to run a ghost docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official docker hub image for ghost.
Unfortunately the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command line param to map a volume. The Docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
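For reference, a hedged sketch of setting that value with the Azure CLI (the resource group and app names are placeholders):
$ az webapp config appsettings set \
    --resource-group my-rg --name my-ghost-app \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true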
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new container instances, as they provide volume support (see the sketch after this list).
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
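For the container instances option, a hedged Azure CLI sketch that mounts an Azure Files share over Ghost's content path (all names and the storage key are placeholders):
$ az container create \
    --resource-group my-rg --name my-ghost \
    --image ghost:1-alpine --ports 2368 \
    --azure-file-volume-account-name mystorageaccount \
    --azure-file-volume-account-key "$STORAGE_KEY" \
    --azure-file-volume-share-name ghost-content \
    --azure-file-volume-mount-path /var/lib/ghost/content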
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies. But if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts inside the container's file system tree that point to a directory on the outside. From your text I understand that you want to store the content of the inside /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bringing your own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page on the .scm.azurewebsites.net site. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
I'm using the predefined build of Docker on Azure (Edge Channel) and one of the features is the logging feature. Checking with docker ps on the manager node I saw there is this editions_logger container (docker4x/logger-azure), which catches all the container logs and writes them to an Azure storage account.
How do I use this container directly to get the logs of my containers?
My first approach was to find the right storage account and share and download the logs directly from the Azure portal.
The second approach was to connect to the container directly using docker exec -ti editions_logger cat /logmnt/xxx.log.
Running docker service logs xxx just throws "only supported with experimental daemon".
All approaches (though not the third one) seem quite overcomplicated. Is there a better way?
I checked both approaches on our cluster, but we found a fairly easy way to check the logs for now. The Azure OMS approach is really good and I can recommend it, but the setup is too heavy for us at the moment. The Logstash approach is also good.
Luckily the tail command supports wildcards, and using this we can view our logs nicely.
docker exec -ti editions_logger bash
cd /logmnt
tail -f service_name*
Thank you very much for the different approaches! I'm looking forward to the new Swarm features (there is already the docker service logs command, so in the future it should be even easier to check the logs).
Another way: we can use volumes to store container logs on the host, then use Logstash to collect the logs from those volumes.
On the host machine, create a fixed directory D, mount each container's logs into a sub-directory of D, and then mount D into the Logstash container. In this way, the Logstash container can collect all logs from the other containers.
It works roughly like this:
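(A minimal sketch; the directory paths, image names and Logstash pipeline path below are assumptions.)
# one sub-directory of D per service, mounted as that service's log directory
$ mkdir -p /var/log/docker-apps/service_a
$ docker run -d --name service_a \
    -v /var/log/docker-apps/service_a:/var/log/app my-app-image
# Logstash gets the whole D directory read-only; its pipeline would use a file
# input watching /var/log/docker-apps/*/*.log
$ docker run -d --name logstash \
    -v /var/log/docker-apps:/var/log/docker-apps:ro \
    -v "$PWD"/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro \
    docker.elastic.co/logstash/logstash:7.17.9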
Red Hat OpenShift runs Docker containers with random user IDs.
This works fine for some containers, but the NGINX container requires file permissions to be set to world read/write/execute in order to work.
Is there a more correct way to build/run a container for use with OpenShift?
For example, does OpenShift provide any kind of process ownership groups or rules?
Here is the nginx image I pull down, and the chmod command we currently run to make it work in OpenShift.
registry-nginxinc.rhcloud.com/nginx/rhel7-nginx:1.9.2
RUN chmod -R 777 /var/log/nginx /var/cache/nginx/ \
&& chmod -R 777 /var/run \
&& chmod -R 777 /etc/nginx/*
References:
http://mailman.nginx.org/pipermail/nginx-devel/2015-November/007511.html
https://github.com/fsimorbrian/openshift-secure-routes
Why does this openshift route succeed in CDK but fail in RHEL7 Atomic?
Best practice is that you do not run your containers as root. Many Docker images out there, even some official images, ignore this and require you to run as root. The advice is generally that you should set up the image so that your application doesn't require root and can start up as a non-root user you set up in the Dockerfile. Even this advice, though, isn't the most secure option, for a couple of reasons.
The first is that they will say to use USER username, where username is obviously not root. For a platform that is hosting that image, that doesn't actually guarantee your application isn't running as root. This is because a named user such as username could be mapped to uid of 0 in the container and so still running with root privileges. To allow a platform to properly verify that your image isn't set up to run as root, you should use a uid instead of username. That should be anything except uid of 0.
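For example, a hedged two-line Dockerfile fragment declaring a numeric uid (the value 1001 is arbitrary):
# a platform can verify this is not root because the uid is numeric and non-zero
USER 1001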
The second problem is that although running as a specific non-root user in your own Docker service instance may be fine, it isn't when you consider a multi-tenant environment, be that for different users or even different applications, where it is important that the different applications can't access each other in any way.
In a multi-tenant environment, the safest thing you can do is to run all applications owned by a specific account, or in different projects, as different users. One reason this is important is from the perspective of data access on persistent volumes. If everything ran as the same user and one application managed to get access to persistent volumes it shouldn't, it could see data from other applications.
As far as OpenShift goes, by default it runs with the highest level of security to protect you. Thus, applications in one project are run with a different user to applications in another project.
You can reduce the security measures and override this if you have the appropriate privileges, but you should only make changes if you are comfortable that the application you are doing it for has a low risk profile. That is, you don't grab some arbitrary Docker image off the Internet you don't know anything about and let it run as root.
To learn more about changing the security context constraints around a specific application start by reading through:
https://docs.openshift.com/enterprise/latest/admin_guide/manage_scc.html
You can override the default and say that an image can run as the user it declares in the Dockerfile or even run it as root if need be.
The better way, if you want the best security, is to construct the Docker image so that it can run as any user and not just a specific user.
The general guidelines for how to do this are:
Create a new user account in the container to run the application as. Make the primary group of this account be group ID 0. That is, its group will be that of root, but the user will not. It needs to be group ID 0 as that is what UNIX will default the group to if running as a user that has no entry in the UNIX passwd file.
Any directories/files that the application needs read access to should be readable/accessible by others, or readable/accessible by group root.
Any directories/files that the application needs write access to should be writable by group root.
The application should not require the ability to bind privileged ports. Technically you could work around that by using Linux capabilities, but some build systems for Docker images, such as Docker Hub automated builds, appear not to support using Linux capabilities, so you wouldn't be able to build images on those systems if you needed setcap.
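Putting these guidelines together, here is a minimal Dockerfile sketch for an image that can run as an arbitrary uid (the base image, paths and uid are assumptions, not a complete NGINX setup):
FROM nginx:alpine
# create an unprivileged account whose primary group is root (gid 0)
RUN adduser -u 1001 -G root -D app
# make the directories the server writes to group root and group-writable
RUN chgrp -R 0 /var/cache/nginx /var/log/nginx /var/run /etc/nginx && \
    chmod -R g+rwX /var/cache/nginx /var/log/nginx /var/run /etc/nginx
# declare a numeric, non-zero uid so the platform can verify it is not root
USER 1001
# listen on an unprivileged port instead of 80
EXPOSE 8080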
Finally, you will find that if you are using OpenShift Local (CDK) from Red Hat, or the all-in-one VM for OpenShift Origin, none of this is required. This is because those VM images have purposely been set up with a default policy that allows any image to run, even images wanting to run as root. This is purely so that it is easier to try out arbitrary images you download, but in a production system you really want to be running images in a more secure way, with the ability to run images as root turned off by default.
If you want to read more about some of the issues around running containers as root and the issues that can come up when a platform runs containers as an arbitrary user ID, have a look at the series of blog posts at:
http://blog.dscpl.com.au/2016/01/roundup-of-docker-issues-when-hosting.html
My Tomcat Container needs data that has to be well protected, i.e. passwords for database access and certificates and keys for Single Sign On to other systems.
I've seen some suggestions to use -e or --env-file to pass secret data to a container, but this can be discovered with docker inspect (--env-file also shows all the properties of the file in docker inspect).
Another approach is to link a data container with the secrets to the service container, but I don't like the concept of having this data container in my registry (accessible to a broader range of people). I know I can set up a private registry, but I would need different registries for test and production, and still everyone with access to the production registry could access the secret data.
I'm thinking about setting up my servers with a directory that contains the secret data and mounting that data into my containers. This would work nicely with test and production servers having different secrets. But it creates a dependency of the containers on my specific servers.
So my question is: how do you handle secret data, and what's the best solution to that problem?
Update January 2017
Docker 1.13 now has the docker secret command, used with docker swarm.
See also "Why is ARG in a DOCKERFILE not recommended for passing secrets?".
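A hedged sketch of that workflow (the secret, service and image names are placeholders):
$ printf 's3cr3t' | docker secret create db_password -
$ docker service create --name tomcat --secret db_password my-tomcat-image
# inside the running task the secret appears as the file /run/secrets/db_password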
Original answer (Sept 2015)
The notion of a docker vault, alluded to by Adrian Mouat in his previous answer, was actively discussed in issue 1030 (the discussion continues in issue 13490).
It was rejected for now as being out of scope for Docker, but the discussion also included:
We've come up with a simple solution to this problem: a bash script that, once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command and deletes the keys afterwards.
Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:
RUN ONVAULT npm install --unsafe-perm
Our first implementation around this concept is available at dockito/vault.
To develop images locally we use a custom development box that runs the Dockito Vault as a service.
The only drawback is that it requires the HTTP server to be running, so no Docker Hub automated builds.
Mount the encrypted keys into the container, then pass the password via a pipe. The difficulty comes with detached mode, which will hang while reading the pipe within the container. Here is a trick to work around it:
cid=$(docker run -d -i alpine sh -c 'read A; echo "[$A]"; exec some-server')
docker exec -i $cid sh -c 'cat > /proc/1/fd/0' <<< _a_secret_
First, start the container with the -i option; the read A command will hang waiting for input from /proc/1/fd/0.
Then run the second docker command, reading the secret from stdin and redirecting it to the hanging read in the first container.