Obtaining Linux host metrics from within a docker container

Looking for some recommendations on how to report Linux host metrics such as CPU and memory utilization and disk usage stats from within a Docker container. The host will contain a number of Docker containers. One thought was to run top and other basic Linux commands from outside the container and push their output into a container folder with the appropriate authorization so that it can be consumed. Another thought was to use the Docker API to run docker stats for the containers, but I'm not sure this is the best approach, as it may not report on other processes running on the host that are not containerized. A third option would be to somehow execute something like top and other commands on the host from within the container; this option would be the most ideal for my situation. I was just looking for some proven design patterns that others have used. Also, I don't have the ability to install a bunch of tools on the host, as this would be a customer host and I have no control over what is already installed.

You may run your container in privileged mode, but be aware that this could compromise the host's security, as your container will no longer be in a sandboxed environment.
docker run --rm -it --privileged --pid=host alpine:3.8 sh
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
Good reference: https://security.stackexchange.com/a/218379
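As a rough sketch of the third option (assuming a privileged, host-PID container is acceptable on the customer host): once the container shares the host's PID namespace, host-wide process and memory information is readable from inside it. The extra bind mount of /proc under /host/proc is optional and only there to make the source explicit; the container and service names are placeholders.

# keep a small container around that shares the host's PID namespace
docker run -d --name host-metrics --privileged --pid=host \
  -v /proc:/host/proc:ro alpine:3.8 tail -f /dev/null

# host process list (batch mode, one iteration) and host memory, as seen from inside the container
docker exec host-metrics top -b -n 1
docker exec host-metrics cat /host/proc/meminfo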

Related

Docker in Docker

We have an app that spins up short-lived Docker containers. Right now it runs on an Ubuntu 16.04 server (VM) where we installed Docker and Node.js. The Node.js app runs on the same server; whenever a request comes in, it spins up a Docker container and executes the user's input inside that container. Once the container finishes its job, or if it exceeds admin-defined resource limits, it is forcefully killed (docker kill) and removed (docker rm).
Now my question is: is it best practice to run an Ubuntu 16.04 Docker container, and run the Node.js app and the short-lived Docker containers inside that Ubuntu 16.04 container?
In short, run Docker inside another Docker container.
Docker-in-Docker is generally considered fragile and hard to maintain and using it isn’t a best practice. https://hub.docker.com/_/docker/ has a little discussion on this.
A straightforward (but potentially dangerous) way to rearrange this is to give the server process access to the host’s Docker socket, with docker run -v /var/run/docker.sock:/var/run/docker.sock. Then it could launch its own Docker containers as needed. Note that if you do this, these sub-containers’ docker run -v options refer to the host’s filesystem, not the calling container’s filesystem, so if you’re trying to use the filesystem to transfer data this can get tricky. Also note that being able to run any Docker command this way gives unlimited access to the host, so you need to be extremely careful about how you launch containers.
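A minimal sketch of that socket-sharing approach (the image name my-node-server is a placeholder, and the server image is assumed to bundle the docker CLI):

# give the server container access to the host's Docker daemon
docker run -d --name job-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-node-server:latest

# any docker command run inside it talks to the HOST daemon, so the new container is a
# sibling on the host, and any -v paths it uses refer to the host's filesystem
docker exec job-runner docker run --rm alpine echo "hello from a sibling container"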
A larger redesign would be to introduce some sort of message-queueing system; I’ve successfully used RabbitMQ in the past but there are many other options. Instead of the server process launching a subprocess directly, it writes a message to a queue. Instead of the workers being short-lived processes that start and stop frequently, you have a long-lived worker that reads jobs off the queue and does them. This puts you in a much more established Docker space where nothing needs to dynamically start and stop containers, and you can easily test the Node-Rabbit-worker stack in a non-Docker environment.

Is it possible to prevent RedHat/CentOS Docker host root access from within a container?

I want to provide a minimal CentOS/RedHat VM to a staff member to log into with a non-root user account. I made the Docker socket available to that user to run Docker 1.12 CLI commands by chgrp'ing the socket and adding the account to the docker group.
Leaving the TCP API and all CaaS/PaaS products out of this question: on a VM, is it possible, using SELinux, manipulation of seccomp and/or Linux capabilities, or anything else (including GRSec/PaX), to prevent Docker containers from being used to gain root access on the Docker host?
This post does not appear to turn up a definitive answer.
If you are exposing your host's Docker socket to the container, then you've essentially given them root privileges to the host.
If you are trying to provide an isolated Docker environment within a container, you should use Docker-in-Docker instead. See the dind-tagged images of the docker image on Docker Hub. This is how Play with Docker works.
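A rough sketch of that Docker-in-Docker setup (names are placeholders, and disabling TLS here is only to keep the sketch short; note that dind itself still needs --privileged, just against its own nested daemon rather than the host's socket):

# run an isolated Docker daemon in its own container
docker network create dind-net
docker run -d --name isolated-dockerd --privileged --network dind-net \
  -e DOCKER_TLS_CERTDIR="" docker:dind

# a client container for the staff member that talks to the isolated daemon, not the host's socket
docker run -it --rm --network dind-net \
  -e DOCKER_HOST=tcp://isolated-dockerd:2375 docker sh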

Kubernetes Docker OS parameters vs Host OS parameters

I am running NGINX and Tomcat in Docker containers (the container OS is Red Hat Linux), deployed through Kubernetes pods. The host OS is also Red Hat Linux.
My query is: which OS parameters will be effective, the host OS's or the container OS's? During performance tuning, do I need to tune both, or are only the host OS parameters effective?
Examples of the parameters I am referring to are ulimit -n (open files), net.ipv4.tcp.*, fs.file-max, etc.
As Crazykev already mentioned, you can set ulimits using the respective docker run flags.
Parameters like net.ipv4.tcp.* are kernel parameters. Docker containers are run in the same Linux kernel as the host system; for this reason, parameters set on the host will also be effective in the container.
Usually, you will not be able to set these parameters from inside a container. You can (not saying you should) start a container with the --privileged flag, which might (untested) give you access to setting kernel parameters from within the container. The Kubernetes docs also describe how to start privileged containers.
A Docker container shares the host's kernel, so I'm not sure it can really be called a separate OS in this sense.
By the way, some of the parameters in your examples may not be settable directly inside a Docker container, for safety and other reasons. You may need to check the Docker docs (for example, for ulimit: docker run --ulimit nofile=262144:262144).
Kubernetes right now does not support setting ulimits; there is an open issue in Kubernetes for that.
Similar question which asks about setting ulimit is answered here.
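As a rough illustration of the per-container knobs that do exist on the plain Docker side (the values and container name are arbitrary): namespaced kernel parameters such as net.ipv4.tcp_* can be set per container with --sysctl, while non-namespaced ones such as fs.file-max can only be set on the host:

# raise the open-files limit and set a namespaced kernel parameter for one container
docker run -d --name tuned-nginx \
  --ulimit nofile=262144:262144 \
  --sysctl net.ipv4.tcp_fin_timeout=30 \
  nginx

# fs.file-max is not namespaced, so it has to be set on the host itself, e.g.
# sysctl -w fs.file-max=2097152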

How to scale an application with multiple exposed ports and multiple mounted volumes using Docker Swarm?

I have a Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker, using Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: a web port (9080) and a database connection port (1521). It also needs a few host mounts, such as a log directory for each container mounted on the host system.
Simple Docker example
docker run -it -d --name web1 -h web1 -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3 replicas, which host port will each one bind to, and which mounted directory will each newly created container use?
Can anyone help me understand how scaling and replication work in this type of scenario?
I have also gone through --publish and --mode global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the Statefulset feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
I would suggest describing your stack in a docker-compose v3 file, which can then be deployed onto a Swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes its service available over the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder/nginx-proxy if you prefer a container-based approach. I can also recommend Rancher, which comes with an internal load balancer.
In Swarm mode you have to use the overlay network driver and create the network yourself, otherwise the services would only be accessible from the local host itself; a sketch of this is shown below.
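A rough sketch of that with the service CLI (the network name, proxy image, and ports are only illustrative; note that a bind mount like this points to the same host path on whichever node a replica lands on):

# create an overlay network shared by the replicated service and the proxy
docker network create --driver overlay web-net

# three replicas; ports 9080/1521 are reachable on web-net only, not on the host
docker service create --name jboss_service --replicas 3 --network web-net \
  --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ \
  MYIMAGE

# a single published reverse proxy / load balancer in front of the replicas
docker service create --name web-proxy --network web-net -p 80:80 nginx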
Regarding logging, you should redirect your logs to stdout and collect them with a logging driver (fluentd, syslog, graylog2).
For persistent storage you should have a look at Flocker. However, not every database supports those storage implementations; e.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read. :)
https://docs.docker.com/

Docker security concerns using unofficial images

How to ensure, that docker container will be secure, especially when using third party containers or base images?
Is it correct that, when using a base image, it may start arbitrary services or mount arbitrary partitions of the host filesystem under the hood, and potentially send sensitive data to an attacker?
So if I use a third-party container whose Dockerfile appears to be safe, should I traverse the whole chain of base images (potentially very long) to make sure the container is actually safe and does what it claims to do?
How can I establish the trustworthiness of a Docker container in a systematic and definite way?
Consider Docker images similar to Android/iOS mobile apps: you are never quite sure they are safe to run, but the probability of safety is higher when they come from an official source such as Google Play or the App Store.
More concretely, Docker images coming from Docker Hub go through security scans whose details are as yet undisclosed, so the chances of pulling a malicious image from Docker Hub are low.
However, one can never be paranoid enough when it comes to security. There are two ways to make sure all images coming from any source are secure:
Proactive security: do a security source-code review of each Dockerfile corresponding to the Docker image, including the base images, as you have already noted in the question.
Reactive security: run Docker Bench, open-sourced by Docker Inc., which runs as a privileged container and looks for known malicious runtime activities by containers (a minimal way to run it is sketched after the references below).
In summary, whenever possible use Docker images from Docker Hub. Perform security code reviews of Dockerfiles. Run Docker Bench or any other equivalent tool that can catch malicious activities performed by containers.
References:
Docker security scanning formerly known as Project Nautilus: https://blog.docker.com/2016/05/docker-security-scanning/
Docker bench: https://github.com/docker/docker-bench-security
Best practices for Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
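As referenced above, a minimal way to run Docker Bench is to clone the upstream repository and run its script directly on the host (script name taken from that repository; try it on a test machine first):

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh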
Docker images are self-contained, meaning that unless you run a container from them with host volumes or a host network mode, they have no way of accessing your host's network stack or storage.
For example if I run an image inside a container by using the command:
docker run -it --network=none ubuntu:16.04
This will start an ubuntu:16.04 container with no mounts onto the host's storage, and it will not share any network stack with the host. You can test this by running ifconfig inside the container and on your host and comparing the output.
Regarding checking what an image or base image does: the conclusion from the above is that it cannot harm your host (unless you mount something like /important/directory_on_host into the container and the container then removes its contents).
You can check what an image or base image contains and does when run by inspecting its Dockerfile(s) or docker-compose .yml files.
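A couple of quick, read-only ways to poke at an image before trusting it (the image name is just an example):

# show the layer commands that built the image, without running it
docker history --no-trunc ubuntu:16.04

# inspect the image's configured entrypoint, command, volumes, and exposed ports
docker inspect --format '{{json .Config}}' ubuntu:16.04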
