For documentation purposes on our project I am looking for the following information:
We are using Docker to deploy various applications which require entropy for SSL/TLS and other cryptographic purposes. These applications may use /dev/random, /dev/urandom, getrandom(2), etc. I would like to know how these requests are handled in Docker containers, as opposed to one virtual machine running all services (and accessing one shared entropy source).
So far I have (cursorily) looked into libcontainer and runC. Unfortunately I have not found any answers to my question, although I do have a gut feeling that these requests are passed through to the equivalent call on the host.
Can you lead me to any documentation supporting this claim, or did I get it wrong and these requests are actually handled differently?
A Docker container is "chroot on steroids". In any case, the kernel is the same for all Docker containers and the host system, so all kernel calls go into the same kernel.
So we can do on our host (in any folder, as root):
mknod -m 444 urandom_host c 1 9
and in some Linux chroot:
wget -O - <alpine chroot tarball> | tar -xz -C <some folder>
chroot <some folder>
mknod -m 444 urandom_in_chroot c 1 9
and in Docker we can do
docker run -ti --rm alpine sh -l
mknod -m 444 urandom_in_docker c 1 9
Then all open(2) and read(2) calls by any program on urandom_in_docker, urandom_in_chroot, and urandom_host will go into the same kernel, to the same urandom module bound to the special character file with major number 1 and minor number 9, which according to the kernel's device number list is the non-blocking random number generator (urandom).
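A quick way to see that these nodes are all the same device (a small sketch, run for example inside the container next to the node created above):
ls -l /dev/urandom urandom_in_docker        # both show the same major, minor pair: 1, 9
head -c 16 urandom_in_docker | od -An -tx1  # the bytes come from the kernel's urandom pool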
As for a virtual machine, the kernel is different (if there is any kernel at all). So all calls to block/special character files are handled by a different kernel (possibly on a different, virtualized architecture with a different instruction set). From the host, the virtual machine is visible as a single process (implementation dependent), which may or may not read the host's /dev/urandom when the virtualized system or program reads its own /dev/urandom. In virtualization anything can happen; it depends on the particular implementation.
So, requests to /dev/urandom in Docker are handled the same way as on the host machine. As for how urandom is handled inside the kernel, the kernel source and its documentation for the random driver are a good place to start.
If you require more entropy, be sure to install and use haveged.
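For example, on a Debian/Ubuntu based host, you can check the kernel's entropy estimate and install haveged like this:
cat /proc/sys/kernel/random/entropy_avail   # current entropy estimate of the pool
apt-get install haveged                     # daemon that keeps feeding the pool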
I'm having an issue with my script that calculates integrity on this version of Ubuntu:
cyber#ubuntu:/$ hostnamectl
Static hostname: ubuntu
Icon name: computer-vm
Chassis: vm
Machine ID: 48d13c046d74421781e6c6f771f6ac31
Boot ID: 847b838897ac47eb932f6427361232d1
Virtualization: vmware
Operating System: Ubuntu 20.04.4 LTS
Kernel: Linux 5.13.0-51-generic
Architecture: x86-64
I'm wondering if /sys/kernel/tracing/per_cpu/cpu45 is by any chance a "live" file,
because calculating the hash of the files inside takes infinite time.
If you want to check filesystem integrity, skip the whole /sys folder - it is an interface to the kernel.
It would also be better to skip the /proc (also a kernel interface) and /dev (special or device files) folders. For example, you can read from /dev/zero or /dev/urandom forever. Network mounts can give you a lot of bright moments too.
Your script can also freeze on reading pipes - if it has enough permissions, it can read from a pipe forever.
If I were building such a script, I would start from the mounts, check their filesystems and scan only the ones that are needed. For example, if a mount is tmpfs, its contents are located in RAM and will be wiped after a reboot.
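As a minimal sketch (assuming GNU find and sha256sum are available), something like this prunes the kernel interfaces and device files and hashes only regular files; redirect the output to wherever you keep your baseline:
find / -path /proc -prune -o -path /sys -prune -o -path /dev -prune -o \
    -type f -print0 | xargs -0 sha256sum
The -type f test also keeps pipes, sockets and device nodes out of the run; add -xdev or separate per-mount invocations if you only want specific filesystems.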
And you totally should check out the Filesystem Hierarchy Standard:
https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
I have a tcpdump application in a CentOS container. I was trying to run tcpdump as a non-root user. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced this), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (one with permission to run tcpdump) and got "Operation Not Permitted". I then tried to run it as root, which had previously been working, and also got "Operation Not Permitted". After running getcap, I verified that the capabilities were what they should be. I thought it might be my specific use case, so I tried running the setcap command against several other executables. Every single executable returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape the network isolation settings applied to the container. Therefore, explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you need to add the capability to the container in the command used to start it, e.g.:
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw) which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
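For example, a sketch inside a container started with the default capabilities (the path /usr/sbin/tcpdump is an assumption; adjust for your image):
setcap cap_net_raw+ep /usr/sbin/tcpdump   # as root inside the container
getcap /usr/sbin/tcpdump                  # should show cap_net_raw in the permitted and effective sets
tcpdump -p -i eth0 -c 5                   # as the unprivileged user; -p skips promiscuous mode, which would need NET_ADMIN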
I am trying to make a chroot'ed, sandboxed build environment which creates itself from a Git checkout before proceeding with building the application. One of the requirements is that the developers doing the git checkout and invoking the build should not need admin privileges on the host machine.
unshare -r chroot
works fine - except there is no /proc, which again means a lot of standard stuff won't work.
The various methods I have found to create /proc with mount require sudo rights.
Docker does this, but the developers would have to be in the "docker" group, which effectively gives them uncontrolled root access - in that case I would rather give them sudo rights.
I have found proot, which does some kind of emulation to achieve this; however, it comes with performance penalties.
You also need a mount namespace, which gives you the ability to perform recursive bind mounts (and plain bind mounts where there are no child mounts), pivot_root, and the ability to mount tmpfs, so use unshare -rm.
With a PID namespace you can also mount fresh instances of procfs.
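A sketch of the combined invocation (rootdir is a placeholder for your checked-out chroot tree and needs an empty proc directory inside it; this assumes a util-linux unshare that supports --mount-proc):
unshare -rmpf --mount-proc=rootdir/proc chroot rootdir /bin/sh
Here -r maps your user to root inside the new user namespace, -m and -p create new mount and PID namespaces, -f forks so the chroot'ed shell becomes PID 1 of the new PID namespace, and --mount-proc mounts a fresh procfs at rootdir/proc just before chroot runs.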
I ended up using bubblewrap (bwrap). For a few things using ttys, I had to let it run with pseudo uid 0 to make them work.
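For reference, a sketch of the kind of bwrap invocation this describes (rootdir is a placeholder; --uid 0 provides the pseudo uid 0 mentioned above):
bwrap --unshare-all --uid 0 --gid 0 \
    --bind rootdir / \
    --proc /proc --dev /dev --tmpfs /tmp \
    /bin/sh
Add --share-net if the build needs network access.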
If I were doing it now, I think I would use podman.
I have developed and maintain a web application which acts as a front end for some scripts in cgi-bin, which in turn call C programs on the server. The web server is Apache2, hosted both on my office Linux box for testing and on Amazon EC2 for the real deployment.
My problem is that I'm off travelling, mostly without any internet connection and with only a small portable Linux machine, yet I want to develop the next release of the web pages, scripts, data sets and programs. Testing static web pages is no issue, but testing pages which call server-side cgi-bin scripts is always problematic, so my idea is to put a minimal HTTP server on the portable Linux box (Ubuntu 14.04) which will allow the server and client to be on the same machine without any internet (and maybe with just a socket) in between.
Of course I can and do test scripts and programs directly, but this does not exercise features such as handling top-bit set characters in $POST_DATA or setting and retrieving cookies so would inevitably result in some divergence of code-base.
So:
Is this approach sensible, or is there a better or simpler means to do what I want?
If it is sensible, what HTTP server would you recommend? I thought of miniWeb but have no experience with it.
PS: I'm an expert in the (maths of the) server-side programs but have much less experience as an Apache sysadmin.
For many things this is sufficient:
python3 -mhttp.server --cgi
Unfortunately, it's so minimal that it doesn't support things like setting the HTTP status: https://bugs.python.org/issue10487
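A throwaway CGI script is enough to exercise POST data and cookies (the file name is an assumption; the --cgi handler executes anything executable under ./cgi-bin/):
#!/bin/sh
# cgi-bin/echo-test.sh - echo back the request method, cookies and POST body
printf 'Content-Type: text/plain; charset=utf-8\r\n'
printf 'Set-Cookie: test=1\r\n\r\n'
echo "method:  $REQUEST_METHOD"
echo "cookies: $HTTP_COOKIE"
if [ "$REQUEST_METHOD" = "POST" ]; then
    echo "body:"
    head -c "${CONTENT_LENGTH:-0}"    # read exactly the POST body from stdin
fi
Make it executable, start the server in the directory above cgi-bin/, and test with e.g. curl -b 'a=b' --data 'x=äöü' http://localhost:8000/cgi-bin/echo-test.sh.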
I'm not using lighttpd because I don't want to have to write a configuration file. Another minimal server that can be used is mini-httpd:
sudo apt install mini-httpd
/usr/sbin/mini_httpd -D -p 8000 -c 'cgi-bin/*'
The -D option keeps the server in the foreground instead of daemonizing it. The -p option is the port and -c is a pattern for my cgi scripts.
I also found that the built-in webserver of busybox can handle cgi scripts just fine:
busybox httpd -p 8000 -f
Up to now we use several Linux users:
system_foo@server
system_bar@server
...
We want to put the system users into docker container.
linux user system_foo --> container system_foo
The changes inside the servers are not a problem, but remote systems use these users to send us data.
We need to make ssh system_foo@server work. The remote systems can't be changed.
It would be very easy if there were just one such system user per host (pass port 22 through to the container), but there are several.
How can we change from the old scheme to Docker containers and keep the service ssh system_foo@server available without changes at the remote site?
Please leave a comment if you don't understand the question. Thank you.
Let's remember, however, that having ssh support in a container is typically an anti-pattern (unless it's your container's only 'concern', but then what would be the point of being able to ssh in?). Refer to http://techblog.constantcontact.com/devops/a-tale-of-three-docker-anti-patterns/ for information about that anti-pattern.
nsenter could work for you. First ssh to the host and then nsenter to the container.
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
source http://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/
Judging by the comments, you might be looking for a solution like dockersh. dockersh is used as a login shell, and lets you place every user that logs in to your instance into an isolated container.
This probably won't let you use sftp though.
Note that dockersh includes security warnings in its README, which you'll certainly want to review:
WARNING: Whilst this project tries to make users inside containers
have lowered privileges and drops capabilities to limit users ability
to escalate their privilege level, it is not certain to be completely
secure. Notably when Docker adds user namespace support, this can be
used to further lock down privileges.
Some months ago, I solved this like this. It's not nice, but it works. Pub-key auth needs to be used.
Script which gets called via the command= option in .ssh/authorized_keys:
#!/usr/bin/python
import os
import subprocess
import sys

# Forward the connection to the sshd of the matching container,
# which is reachable on localhost port 2222.
cmd = ['ssh', '-p', '2222', 'user@localhost']

if 'SSH_ORIGINAL_COMMAND' not in os.environ:
    # Interactive login: pass any arguments through.
    cmd.extend(sys.argv[1:])
else:
    # The remote side requested a command: forward it.
    cmd.append(os.environ['SSH_ORIGINAL_COMMAND'])

sys.exit(subprocess.call(cmd))
File .ssh/authorized_keys on system_foo@server:
command="/home/modwork/bin/ssh-wrapper.py" ssh-rsa AAAAB3NzaC1yc2EAAAAB...
If the remote system does ssh system_foo@server, the SSH daemon at server executes the command given in .ssh/authorized_keys. That command then does an ssh to a different SSH daemon.
In the Docker container, an SSH daemon needs to run that is reachable as localhost:2222 from the host.
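One way to wire up the container side (a sketch; the image name system_foo-sshd is hypothetical, and its sshd is assumed to listen on the default port 22, published as 2222 on the host's loopback interface):
docker run -d --name system_foo -p 127.0.0.1:2222:22 system_foo-sshd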