Reserve CPU and Memory for Linux Host in Docker - linux

I am running several docker containers via docker-compose on a server.
The problem is that the load from the containers for some reason always crashes my server after a while...
I can only find resources and answered questions on how to limit a container's CPU/memory usage, but what I want to achieve is capping the total CPU and memory usage of all containers combined at, say, 85%, and reserving the rest for the Linux host so that the server itself doesn't crash.
Does anyone have an idea how to achieve this?

You could use docker-machine, I guess... Then you would define a VM within which all containers would run, and you limit the VM's total memory, leaving the rest for the host.
Otherwise, Docker is running as a native process on the machine, and there isn't a way to place a total limit on "all Docker processes".
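A hedged sketch of what the docker-machine route could look like (the VirtualBox driver, machine name, and sizes below are only illustrative):

# Create a VM with a fixed memory/CPU budget and run all containers inside it;
# whatever the VM doesn't get stays reserved for the host.
docker-machine create --driver virtualbox \
  --virtualbox-memory "6144" \
  --virtualbox-cpu-count "3" \
  constrained-docker
eval "$(docker-machine env constrained-docker)"   # point the docker CLI at the VM
docker-compose up -d                              # containers now run inside the VM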

The best idea I have right now is to set a CPU and memory limit on each service/container so that the sum never reaches 85%, but in the long run you should investigate why the server crashes. Maybe it is a cooling or PSU issue?
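As a rough sketch of what a per-container limit looks like with plain docker run (the values and image name are placeholders; Compose has equivalent per-service keys such as cpus/mem_limit or deploy.resources.limits, depending on your file version):

# Cap one container at half a CPU and 512 MB of RAM; repeat per service so the
# combined limits stay below ~85% of what the host has.
docker run -d --name worker1 \
  --cpus="0.5" \
  --memory="512m" \
  --memory-swap="512m" \
  my-image:latest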

Related

Limit a process's CPU and memory usage, with Docker perhaps?

Are there any ways to run a process inside a Docker container without building the container and all the other isolation (IO, etc.)?
My end goal is not to build an isolated environment, but rather to limit CPU and memory usage (the ability to malloc). Using VM instances is just too much overhead. The ulimit, systemd, cpulimit, and other Linux tools don't seem to provide a good solution here. For example, it seems that systemd only kills the process if RES/VIRT exceeds the threshold.
Docker seems to do the trick without performance degradation, but are there any simple methods to run e.g. a Python script without all the extra hassle and configuration?
Or are there any other ways to limit CPU and memory usage?
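If Docker is on the table, one low-hassle pattern is to mount the script into a stock image and set the limits on the docker run line, so no image build is needed. A rough sketch (image tag, paths, and limit values are just placeholders):

# Run an existing Python script under CPU/memory limits, no Dockerfile required.
docker run --rm \
  --cpus="1.0" \
  --memory="256m" \
  -v "$(pwd)":/app -w /app \
  python:3 python my_script.py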

Limit resource usage of a process on Linux without killing it

I need to run a Bash script on a Linux machine and need to limit its resource usage (RAM and CPU).
I am using cgroups, but they kill the process when it exceeds the limits, and I don't want that; I just want the process to keep running with the maximum amount of memory and CPU I gave it, without being killed.
Any solution for that? Or is it possible to configure cgroups for this case?
Thank you
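One possible direction, assuming a reasonably recent systemd and the cgroup v2 (unified) hierarchy: MemoryHigh throttles and reclaims instead of invoking the OOM killer (unlike MemoryMax), and CPUQuota throttles rather than kills. A hedged sketch (the limits and script path are placeholders):

# Run the script in a transient scope with soft memory pressure and a CPU cap.
sudo systemd-run --scope \
  -p MemoryHigh=1G \
  -p CPUQuota=50% \
  /path/to/script.sh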

How to find the reason for 100% CPU utilization on AWS EC2?

I have a fleet of EC2 instances: A and B (both are in the same AWS account, same Linux OS version, same region, but in different AZs and behind different load balancers).
When I give the same load to EC2 instances A and B, they behave differently.
EC2 A works normally with an average CPU utilization of up to 60%; on the other hand, EC2 B shows spikes in CPU utilization up to 100%, then it starts again from 0, and the same pattern is found in other instances in the fleet.
Has anyone experienced this in the past?
SSH to host B, check the system activity via top, and look for the process consuming the most CPU.
You can also inspect the process with the lsof command, or with:
ps -fp "PID of the process"
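A few hedged examples of how those commands are typically combined (the PID is a placeholder you would take from top):

top -o %CPU                 # sort processes by CPU usage
ps -fp "$PID"               # full details of the suspect process
sudo lsof -p "$PID"         # files and sockets the process has open
pidstat -u -p "$PID" 5      # per-process CPU usage every 5 seconds (sysstat package)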
After analysis it was found that a couple of security patches were being executed, which was causing these spikes.
This has happened to me twice now with an MS Server instance running on EC2. In both cases it was the MS Update Service which had taken 100% of the CPU and burnt off all my CPU credits.
The only way to get back on and fix it was to set "Instance T2/T3 Unlimited" and stop/disable the MS Update Service.

What is the performance impact if we put a Node.js process inside a Docker container?

My backend is a Node.js application running on Ubuntu Linux. It needs to spawn a Node.js sub-process when there is a request from a client. The sub-process usually takes less than 20 seconds to finish. These processes need to be managed if many concurrent requests come in. I am thinking of moving the spawned process inside a Docker container, meaning a new Docker container would be created to run the process whenever a request comes from a client. That way, I can use Kubernetes to manage these containers. I am not sure whether this is a good design, or whether putting the process inside a Docker container causes any performance issues.
The reason I am thinking of using a Docker container instead of spawn is that Kubernetes offers all the features to manage these containers, such as auto-scaling if there are too many requests, limiting the CPU and memory of the container, scheduling, monitoring, etc. I would have to implement this logic myself if I used spawn.
You can easily measure the overhead yourself: get any basic docker image (e.g. a Debian base image) and run
time bash -c true
time docker run debian bash -c true
(Run each a few times and ignore the first runs.)
This will give you the startup and cleanup costs. During actual runtime, there is negligible/no further overhead.
Kubernetes itself may add some more overhead - best measure that too.
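If you want a slightly steadier number than a single run, a hedged variant is to time a small batch (the debian image and 10 iterations are arbitrary choices):

docker pull debian > /dev/null    # make sure the image is already cached
time bash -c 'for i in $(seq 10); do bash -c true; done'
time bash -c 'for i in $(seq 10); do docker run --rm debian bash -c true; done'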
From the Docker documentation on network settings:
Compared to the default bridge mode, the host mode gives significantly better networking performance since it uses the host’s native networking stack whereas the bridge has to go through one level of virtualization through the docker daemon. It is recommended to run containers in this mode when their networking performance is critical, for example, a production Load Balancer or a High Performance Web Server.
So, answers which say there is no significant performance difference are incorrect as the Docker docs themselves say there is.
This is just in the case of networking. There may or may not be impacts on accessing disk, memory, CPU, or other kernel resources. I'm not an expert on Docker, but there are other good answers to this question around, for example here, and blogs detailing Docker-specific performance issues.
Ultimately, it will depend on exactly what your application does as to how it is impacted. The best advice will always be that, if you're highly concerned about performance, you should set your own benchmarks and do your own testing in your environment. That doesn't answer your question because there is no generic answer. Importantly, though, "there's virtually no impact" does not appear to be correct.
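For reference, host networking is opted into per container; a minimal sketch (the nginx image is only an illustration, and published-port mappings are ignored in this mode):

docker run --rm --network host nginx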
Docker is in fact just a wrapper around core functionality of Linux itself, so there is no significant impact; it just separates the process into a container. So the question is more about the levels of virtualization on your host. If it is Linux in Windows, or Docker on Windows, it can affect your app, since virtualization is heavyweight in that case. Docker lets you separate dependencies with almost no impact on performance.

Unable to locate the memory hog on openvz container

I have a very odd issue on one of my OpenVZ containers. The memory usage reported by top, htop, free and the OpenVZ tools seems to be ~4GB out of the allocated 10GB.
When I list the processes by memory usage or use the ps_mem.py script, I only get ~800MB of memory usage. Similarly, when I browse the process list in htop, I find myself unable to pinpoint the memory-hogging offender.
There is definitely a process leaking RAM in my container, but even when it hits critical levels and I stop everything in that container (except for ssh, init and shells), I cannot reclaim the RAM. Only restarting the container helps; otherwise the OOM killer eventually starts kicking in inside the container.
I was under the assumption that a leaky process releases all its RAM when killed, and that you can observe its misbehavior via top or similar tools.
If anyone has ever experienced behavior like this, I would be grateful for any hints. The container is running icinga2 (which I suspect of leaking RAM), although most of the time the monitoring process sits idle, as it manages to execute all its scheduled checks in a more than timely manner, so I'd expect the RAM usage to drop at those times. It doesn't, though.
I had a similar issue in the past, and in the end it was solved by the hosting company where I had my OpenVZ container. I think the best approach would be to open a support ticket with your hoster, explain the problem to them, and ask them to investigate. Maybe they use an outdated kernel version or they made changes on the server that impact your OVZ container.
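If you want to gather some data before opening that ticket, memory that no process accounts for often sits in kernel slab caches or tmpfs mounts. A hedged set of starting points (inside an OpenVZ container some of these views may be virtualized or limited):

free -m
grep -E 'Slab|SReclaimable|Shmem|Cached' /proc/meminfo
df -h | grep tmpfs        # files kept in tmpfs are held in memory
sudo slabtop -o           # one-shot snapshot of kernel slab usage, if available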
