I have a Docker-based application written in Java which calls a shell script to collect data. I want to add a few commands to this script to collect host machine/VM data, like the ones below:
firewall-cmd --list-all >> firewall.txt
journalctl >> journal.log
hostnamectl >> hostname-config.txt
iptables-save >> iptables.txt
As these commands/resources are not directly accessible to the container, is there any way I can achieve this? Basically, what I am looking for is a way to access/run commands on the host from inside the container. If yes, please answer with examples associated with any of the above commands.
A principal design goal of Docker is that processes in containers can't directly run commands on the host and can't directly access the host's filesystem, network configuration, init system, or other details.
If you want to run a detailed low-level diagnostic tool on this system, it needs to run directly on the host system, and probably as root. It can't run in a container, virtual machine, or other isolation system.
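If you do need that data available to the container, one approach consistent with this answer is a small script that runs on the host (as root, e.g. from cron or a systemd timer) and writes into a directory that is bind-mounted read-only into the container, so the Java application only reads the results. A minimal sketch, where the host directory /opt/hostdata and the image name my-java-app are placeholders:

#!/bin/sh
# collect-host-data.sh -- run on the host as root, NOT inside the container
OUT=/opt/hostdata
mkdir -p "$OUT"
firewall-cmd --list-all > "$OUT/firewall.txt"
journalctl > "$OUT/journal.log"
hostnamectl > "$OUT/hostname-config.txt"
iptables-save > "$OUT/iptables.txt"

# then start the Java container with that directory mounted read-only:
# docker run -v /opt/hostdata:/hostdata:ro my-java-app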
I came across this blog: using Go as a scripting language, and tried to create a custom image that I can use to run Golang scripts, i.e.:
FROM golang:1.15
RUN go get github.com/erning/gorun
RUN mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
RUN echo ':golang:E::go::/go/bin/gorun:OC' | tee /proc/sys/fs/binfmt_misc/register
It fails with error:
mount: /proc/sys/fs/binfmt_misc: permission denied.
ERROR: Service 'go_saga' failed to build : The command '/bin/sh -c mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc' returned a non-zero code: 32
It's a read-only file system, so I can't change the permissions either. The task I'm trying to achieve is well documented here. Please help me with the following questions:
Is that even possible, i.e. mounting /proc/sys/fs/binfmt_misc and writing to the file /proc/sys/fs/binfmt_misc/register?
If yes, how do I do that?
I guess it would be great if we could run Golang scripts in the container.
First a quick disclaimer that I haven't done this binfmt trick to run go scripts. I suppose it might work, but I just use go run when I want to run something on the fly.
There's a lot to unpack in this. Container isolation runs an application with a shared kernel in an isolated environment. The namespaces, cgroups, and security settings are designed to prevent one container from impacting other containers or the host.
Why is that important? Because /proc/sys/fs/binfmt_misc is interacting with the kernel, and pushing a change to that would be considered a container escape since you're modifying the underlying host.
The next thing to cover is building an image vs running a container. When you build an image with the Dockerfile, you are defining the image filesystem and some metadata (labels, entrypoint, exposed ports, etc). Each RUN command executes that command inside a temporary container, based on the previous step's result, and when the command finishes it captures the changes to the container filesystem. When you mount another filesystem, that doesn't change the underlying container filesystem, so even if you could, the mount command would be a noop during the image build.
So if this is possible, you'll need to do it inside the running container rather than at build time; that container will need to be privileged, since doing things like mounting filesystems and modifying /proc requires access not normally given to containers, and you'll be modifying the host kernel in the process. You'd need to make the container entrypoint run the mount and register the binfmt_misc entry, and figure out what to do if the entry is already set up/registered, but possibly pointing to a different directory in another container.
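If you do go that route, a rough, untested sketch of such an entrypoint (the file name entrypoint.sh is my own; the container would have to be started with --privileged):

#!/bin/sh
# entrypoint.sh -- mount binfmt_misc if it isn't there yet, then register gorun
if [ ! -f /proc/sys/fs/binfmt_misc/register ]; then
    mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
fi
# skip registration if a "golang" entry already exists (possibly from another container)
if [ ! -f /proc/sys/fs/binfmt_misc/golang ]; then
    echo ':golang:E::go::/go/bin/gorun:OC' > /proc/sys/fs/binfmt_misc/register
fi
exec "$@"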
As an aside, when dealing with binfmt_misc and containers, the F flag is very important, though in your use case it's important that you don't have it. Typically you need the F flag so the binary is found on the host filesystem rather than searched for within the container filesystem namespace. The typical use case of binfmt_misc and containers is configuring the host to be able to run containers for different architectures, e.g. Docker Desktop can run amd64, arm64, and a bunch of other platforms today using this.
In the end, if you want to run a container as a one-off to run a go command as a script, I'd skip the binfmt_misc trick and make an entrypoint that does a go run instead. But if you're using the container for longer-running processes where you want to periodically run a go file as a script, you'll need to do that in the container, as a privileged container that has the ability to escape to the host.
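For comparison, a minimal sketch of the go run alternative mentioned above (the image tag go-runner and the script name are placeholders; no privileges needed):

# Dockerfile -- build with: docker build -t go-runner .
FROM golang:1.15
WORKDIR /scripts
ENTRYPOINT ["go", "run"]

# run a script mounted from the host:
# docker run --rm -v "$PWD":/scripts go-runner ./hello.go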
I want to start the following docker container and have terminal access to it:
docker run -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
The problem is that inside the terminal I cannot find any of the files from my file system; there is no ~/Desktop or similar directories.
Question: how to access the file system of my local PC from within the docker container?
By default, containers cannot see the file system of their host.
If you want to achieve this, you will have to explicitly "mount" whatever directories you want to see using the -v flag, like this:
docker run -v ~/Desktop:/host-desktop -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
If you run that command, you will see the contents of your desktop in the container's file system, at /host-desktop.
You really would not want your containers to be able to see the entire host file system. That would be dangerous, especially if the container has write permission. You should always mount only the exact files/directories you want the container to access.
For the most part, any project I have worked on that uses Docker does "volume mounting" so that the container can write files and the developer can easily access them on the host (e.g. Selenium tests taking screenshots), or so that the developer can edit source code and the container will see the update and hot-reload (e.g. Node.js development). When doing the latter (the hot-reload example), it is usually wise to mount in read-only mode; see the sketch below.
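For example, a sketch of the read-only variant using the image from the question (the source path is a placeholder):

docker run -v ~/my-project/src:/app/src:ro -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash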
See the docs for more details: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host; I want to trigger the copy from inside the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option since I don't have the login/password/keys to the host from the running container.
Is it even possible to pull/copy a file into a container from a service running inside that container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. An idea is that if:
the host has a web server (or FTP server) up and running
and the file is located in the appropriate directory (so that it can be served)
maybe you can use wget or curl to get the file. Keep in mind that you might need credentials though...
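A rough sketch of that idea, assuming the host serves the directory on port 8000 and is reachable from the container at 172.17.0.1, the usual default bridge gateway (both are assumptions; adjust to your setup):

# on the host, serve the directory containing the file (one of many options):
python3 -m http.server 8000 --directory /path/to/files

# inside the container, pull the file:
curl -o /tmp/myfile http://172.17.0.1:8000/myfile
# or: wget -O /tmp/myfile http://172.17.0.1:8000/myfile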
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (passed as the parameter above) and use the file as required.
You could validate it in the Docker entrypoint script.
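A minimal sketch of that idea (the script name and paths are made up):

#!/bin/sh
# entrypoint.sh -- expects the file path (as seen inside the container) as the first argument
FILE_PATH="$1"
if [ -z "$FILE_PATH" ] || [ ! -f "$FILE_PATH" ]; then
    echo "error: expected an existing file path as the first argument" >&2
    exit 1
fi
# ... read/use the file as required ...
cp "$FILE_PATH" /app/input/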
In my application a user uploads his JMeter test plan (*.jmx file) and I need to execute it on my server. I want to verify that the jmx file does not contain any code that can harm my server. Are there any plugins or tools that can help me?
JMeter is very flexible and there is no way to stop the user from doing harm, for example:
It is possible to delete any file or folder using Beanshell or JavaScript
It is possible to read any file and send it over to anyone via email
It is possible to fork too many processes or kick off too many threads and bring your server to its knees by overloading it
So there is no guaranteed way to verify a JMeter test; the best thing you can do is run it in an isolated mode (see the sketch after this list), for example:
Create a user with a very limited permission set before executing the test and run the test as this user
Use container mechanism like:
Windows Containers
Linux Containers
FreeBSD Jails
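For instance, a hedged sketch of both ideas on Linux (the user name, image name and resource limits are placeholders):

# dedicated low-privilege user:
sudo useradd --no-create-home --shell /usr/sbin/nologin jmeter-runner
sudo -u jmeter-runner jmeter -n -t uploaded-plan.jmx -l results.jtl

# or a container with resource limits so a hostile plan cannot exhaust the host:
docker run --rm --memory=1g --cpus=1 \
    -v "$PWD/uploaded-plan.jmx:/plan.jmx:ro" \
    my-jmeter-image jmeter -n -t /plan.jmx -l /tmp/results.jtl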
After looking through solutions like chroot, FreeBSD Jails and Docker, we chose Docker. The advantages we found were:
very easy setup and cool documentation
the container starts in less than a second and there are lots of actions you can do with a container: copy a file into it, mount a directory, run a process inside it, etc.
I've created one container with JMeter in it. Every time I want to run some JMeter file, I start the container, copy the jmx file into the container and run JMeter inside the container. Note that I call jmeter.sh from outside the container and get the JMeter output on the console, again outside the container. When the JMeter process is over, I stop the container.
Some commands I have used:
docker create --name container_name -it my_image_with_jmeter   # create a container from an image; my_image_with_jmeter is the name of the image I've created
docker start container_name
docker cp /path/to/main/server/file container_name:/path/to/container/   # copy a file from the main server into the container
docker exec -it container_name /usr/local/jmeter/jmeter.sh   # run JMeter inside the container
docker stop container_name
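Put together, a small wrapper sketch of that flow (same placeholder names as above):

#!/bin/sh
# run-jmx.sh <path-to-jmx> -- run one uploaded plan inside the prepared container
JMX_FILE="$1"
docker start container_name
docker cp "$JMX_FILE" container_name:/path/to/container/
docker exec -it container_name /usr/local/jmeter/jmeter.sh   # output appears on this console
docker stop container_name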
I just created a secure Docker Registry and ran it on a remote VM (using docker run ...). I then ran docker ps and saw that it is in fact running. I exited the machine and then SSHed back in. Again, I ran docker ps and verified it "survived" me exiting the SSH session.
This has me wondering: do Docker containers actually run as Linux services? If not, is there any way of getting them to run as traditional (upstart- or systemd-based) services? Is there even any reason/merit to do so?
The docker engine runs as a daemon.
That is mentioned in "Host integration":
As of Docker 1.2, restart policies are the built-in Docker mechanism for restarting containers when they exit. If set, restart policies will be used when the Docker daemon starts up, as typically happens after a system boot. Restart policies will ensure that linked containers are started in the correct order.
If restart policies don’t suit your needs (i.e., you have non-Docker processes that depend on Docker containers), you can use a process manager like upstart, systemd or supervisor instead.
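If you do want to tie one specific container to systemd, a minimal sketch of a unit (the container name my-registry and the unit file name are placeholders) could look like:

# /etc/systemd/system/docker-my-registry.service
[Unit]
Description=My Docker registry container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a my-registry
ExecStop=/usr/bin/docker stop my-registry

[Install]
WantedBy=multi-user.target

systemctl enable docker-my-registry would then have systemd start the container at boot; in that case the container's own restart policy should typically be left unset so the two managers don't fight each other.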
That involves (when a container runs with certain options, such as --net host) some security concerns, by the way: see issue 14767 and issue 6401:
The container (with the --net host option) is the host when it comes to the network stack, so any services running on the host are accessible to the container. It just so happens that you communicate to upstart (and others) this way.
This feature is a runtime only option, just like the --privileged flag, therefore an image cannot request this, it must be explicitly set at runtime.