In a simplified version:
There are two users on our shared Server System:
user_1 (me)
user_2
Docker is installed system-wide across users.
I, user_1, created a docker container with a standard docker run, running my process. But user_2 has access to this container: he or she can not only view it, but also stop and remove it.
How can I prevent user_2 or other users from accessing this container?
Note: no users have root access through sudo. Thanks!
If users have access to the docker socket, they all have root access on the host. You've lost all security at that point. If you don't believe this, see what access you have in /host with:
docker run --privileged --net=host --pid=host -v /:/host debian /bin/bash
There are projects to limit access to the docker socket with authz plugins, including Twistlock and Open Policy Agent. There's quite a bit of setup needed for these, including revoking filesystem access to the socket and using TLS keys to access an encrypted and authenticated port. You could also go the commercial route and use Docker EE with UCP to manage users and their access rights.
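As a hedged illustration of what that setup involves, the daemon side might look roughly like this; the certificate paths and the plugin name are placeholders, not something prescribed by the projects above:

# start the daemon on a TLS-protected TCP port instead of the filesystem socket,
# requiring client certificates and routing every API call through an authz plugin
dockerd \
  --host tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert /etc/docker/ca.pem \
  --tlscert /etc/docker/server-cert.pem \
  --tlskey /etc/docker/server-key.pem \
  --authorization-plugin my-authz-plugin

Only clients that present a certificate signed by that CA can reach the daemon at all, and the plugin then decides per request who may stop or remove which containers.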
Related
I want to know how I can add the local users of my server to a docker container. I don't need to import their files; I just need a username/password/privileges with a new home directory in the docker container for every user in my system. For example, suppose my host system contains the following users:
Host System:
admin: who has root access and rw access to all
bob: a regular non-sudo user
joe: another regular non-sudo user
Then the Docker Container must have users:
admin: who has root access and rw access to all
bob: a regular non-sudo user
joe: another regular non-sudo user
The Docker container and the system are both running Linux, though the system is Red Hat and the container is Ubuntu.
EDIT: I don't want to mount /etc/ files if possible, as this can create a two-way security vulnerability, as pointed out by @caveman.
You would have to mount all relevant Linux files using -v, like /etc/passwd, /etc/shadow, /etc/group, and /etc/sudoers. Though I can't recommend this due to the security risks: if anyone gets root access in the container, they can add users on the host or change passwords, since the mount works both ways.
The list of files is not exhaustive; for example, you also have to make sure the shell executables exist within the container. When testing this I had to make a symbolic link from /usr/bin/zsh to /bin/bash, for example, since my user has the zsh shell configured, which was not present in the docker image.
If you want to use these users to interact with mounted files, you also have to make sure that user namespace remapping is disabled, or specify that you want to use the same user namespace as the host with the --userns=host flag. Again, not recommended since it is a security feature, so use with care.
Note: Once you have done all this, you can use su - {username} to switch to all your existing users. The -u option doesn't work, since docker checks the /etc/passwd file before mounting and will give an error.
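Putting those pieces together, a minimal sketch of such a run might look like this; the image name is just an example, and the read-only (:ro) mounts are an extra precaution given the risks described above:

# mount the host's account databases into the container and keep the host's
# user namespace so uids/gids line up with any mounted files
docker run -it --userns=host \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  -v /etc/group:/etc/group:ro \
  -v /etc/sudoers:/etc/sudoers:ro \
  ubuntu /bin/bash

# inside the container, switch to one of the host's users
su - bob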
I have a docker container built on one system, where uid = 1000 and gid = 300.
When I then try to use that same container on the Jenkins server, these uid/gid values are taken by different users and groups, so when I try to execute commands in the container, it fails due to permissions.
How can I run the container with the users mapped to the correct uid/gid?
There are a few options to handle uid/gid mismatches with host volumes.
Open permissions up on the host directory. This is often considered bad by the security team, for good reason, but it is also really easy to do.
Build a new image for each host where it may run, with a different uid/gid.
Switch to named volumes instead. If you don't need direct access to the volume contents from the host, this is recommended.
Fix permission problems with an entrypoint (see the sketch below).
I go into more detail on some of these in my DockerCon talk here.
There's also another example of the entrypoint solution where I configured a Jenkins container to fix the docker gid to match the mounted docker socket: https://github.com/sudo-bmitch/jenkins-docker
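A minimal sketch of that entrypoint approach, assuming an image with a non-root user named app, a volume mounted at /data, and gosu installed (all of which are illustrative choices, not fixed names):

#!/bin/sh
# entrypoint.sh: align the container user's uid/gid with the mounted volume,
# then drop privileges and exec the requested command
set -e
VOLUME=/data
TARGET_UID=$(stat -c %u "$VOLUME")
TARGET_GID=$(stat -c %g "$VOLUME")
# adjust the app user and group to match the volume's owner
groupmod -o -g "$TARGET_GID" app
usermod -o -u "$TARGET_UID" app
exec gosu app "$@"

The container starts as root just long enough to fix the ids, then execs the real process as the corrected user.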
I'm using docker-machine to configure a production docker environment, also with docker swarm. When you create a new docker-machine, the default password of the docker user is tcuser. You'll need this to link your machines from other sites when configuring the .ssh/authorized_keys file.
When you begin launching services, you'll want to change the default docker user password so you can protect your systems from unauthorized access (the docker user is a sudoer!), but when you run a passwd command and reboot the machine, you'll see your changes have been reverted.
The question: Where or how should I change the docker user password to keep the changes between reboots?
Boot2Docker is an ISO image that starts your lightweight machine with the docker daemon running on it. It has two directories where it stores persistent information:
/var/lib/boot2docker: to store operating system configuration.
/var/lib/docker: to store docker daemon configuration.
These two folders persist across reboots and are restored into the tmpfs filesystems during startup, so this is where you have to make your changes.
So, to change the password, you have to log in to the machine by executing docker-machine ssh YourMachine. Then edit the file /var/lib/boot2docker/profile and add this line to it:
echo "docker:YourNewPassword"|sudo chpasswd
Then you have to reboot the machine, for example by launching docker-machine restart YourMachine from your host. This way, the profile script is executed on startup and resets the password to the one you chose on every reboot.
You can check that the old password tcuser no longer works, as expected.
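The whole sequence, as a hedged sketch (the machine name and password are placeholders):

# log in to the machine
docker-machine ssh YourMachine
# on the machine: append the password reset to the persistent profile
echo 'echo "docker:YourNewPassword" | sudo chpasswd' | sudo tee -a /var/lib/boot2docker/profile
exit
# back on the host: reboot so the profile runs and applies the new password
docker-machine restart YourMachine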
P.S.: I still don't understand why it's been so difficult to find a solution to this issue on Stack Overflow or by Googling... I really think it's important enough.
I see a /etc/docker/key.json on a Fedora 23 machine. This file seems to be a private key for authentication.
https://github.com/docker/docker/issues/7667
At what time is it generated (it's not present in the output of rpmls docker), and how do I obtain a corresponding public key?
My use case is to enable a non-root user to run the docker ps command without sudo, i.e. by the use of public/private keys.
What should I do?
You don't care about the key.json file, at least as far as I understand your question.
If you want to enable unprivileged users to connect to your Docker daemon using certificates for authentication, you will first need to enable a listening TCP socket (either bound to localhost, or to a public address if you want to provide access to the daemon from somewhere other than the docker host), and then you will need to configure appropriate TLS certificates as described in the documentation.
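Once the daemon is listening with TLS verification enabled, the unprivileged user's side might look like this sketch; the host name and certificate paths are placeholders:

# talk to the remote daemon using a client certificate signed by the CA it trusts
docker --tlsverify \
  --tlscacert ~/.docker/ca.pem \
  --tlscert ~/.docker/cert.pem \
  --tlskey ~/.docker/key.pem \
  -H tcp://dockerhost:2376 ps

Each user gets their own client certificate; the daemon rejects connections that don't present one signed by its CA.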
You can also provide access to Docker by managing the permissions on the Docker socket (typically /var/run/docker.sock).
Note that giving someone access to docker is equivalent to giving them root access (because they can always run docker run -v /etc:/hostetc ... and then edit your sudoers configuration or passwd and shadow files, etc.).
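For the socket-permission route, the conventional pattern is a dedicated group that owns the socket; a short sketch (the user name is a placeholder):

# the docker group usually exists already; the socket is owned by root:docker
sudo groupadd docker
# let alice use docker without sudo (she must log out and back in to apply)
sudo usermod -aG docker alice

Given the warning above, membership in this group should be handed out as carefully as root access itself.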
Up to now we use several Linux users:
system_foo@server
system_bar@server
...
We want to put the system users into docker container.
linux user system_foo --> container system_foo
The changes inside the servers are not a problem, but remote systems use these users to send us data.
We need to make ssh system_foo@server keep working. The remote systems can't be changed.
It would be very easy if there were just one system user per Linux operating system (just forward port 22 to the container). But there are several.
How can we change from the old scheme to docker containers and keep the service ssh system_foo@server available without changes at the remote site?
Please leave a comment if you don't understand the question. Thank you.
Let's remember, however, that having ssh support in a container is typically an anti-pattern (unless ssh is your container's only concern, but then what would be the point of being able to ssh in?). Refer to http://techblog.constantcontact.com/devops/a-tale-of-three-docker-anti-patterns/ for information about that anti-pattern.
nsenter could work for you. First ssh to the host and then nsenter to the container.
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
Source: http://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/
Judging by the comments, you might be looking for a solution like dockersh. dockersh is used as a login shell, and lets you place every user that logins to your instance into an isolated container.
This probably won't let you use sftp, though.
Note that dockersh includes security warnings in their README, which you'll certainly want to review:
WARNING: Whilst this project tries to make users inside containers have lowered privileges and drops capabilities to limit users' ability to escalate their privilege level, it is not certain to be completely secure. Notably when Docker adds user namespace support, this can be used to further lock down privileges.
Some months ago, I helped someone like this. It's not nice, but it works. Pub-key auth needs to be used, though.
This script gets called via the command option in .ssh/authorized_keys:
#!/usr/bin/python
import os
import subprocess
import sys

# Forward the incoming ssh session to the container's ssh daemon,
# which listens on port 2222 (ssh takes the port via -p, not host:port).
cmd = ['ssh', '-p', '2222', 'user@localhost']
if 'SSH_ORIGINAL_COMMAND' not in os.environ:
    cmd.extend(sys.argv[1:])
else:
    cmd.append(os.environ['SSH_ORIGINAL_COMMAND'])
sys.exit(subprocess.call(cmd))
File on system_foo@server: .ssh/authorized_keys
command="/home/modwork/bin/ssh-wrapper.py" ssh-rsa AAAAB3NzaC1yc2EAAAAB...
If the remote system does ssh system_foo@server, the SSH daemon at server executes the command given in .ssh/authorized_keys. This command does an ssh to a different SSH daemon.
In the docker container, an ssh daemon needs to run, listening on port 2222.
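A hedged sketch of the container side; the image name is a placeholder for any image that runs sshd:

# run the container's sshd and publish it on the host's port 2222,
# where the wrapper script above expects it
docker run -d --name system_foo -p 2222:22 my-sshd-image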