Limit a process's resource usage on Linux without killing it - linux

I need to run a Bash script on a Linux machine and need to limit its resource usage (RAM and CPU).
I am using cgroups, but it kills the process when the limits are exceeded. I don't want that; I just want the process to keep running within the maximum amount of memory and CPU I gave it, without being killed.
Any solution for that? Or is it possible to configure cgroups for this case?
Thank you
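
For reference, a minimal sketch of one way cgroups v2 can be configured to throttle rather than kill, assuming cgroups v2 is mounted at /sys/fs/cgroup and you have root (the group name "bashjob" and the limits are placeholders): cpu.max only ever throttles, and memory.high reclaims and slows the process down instead of invoking the OOM killer (unlike memory.max, which is a hard kill limit).

    # Create a child group (the cpu and memory controllers may first need to be
    # enabled in /sys/fs/cgroup/cgroup.subtree_control on some systems).
    sudo mkdir /sys/fs/cgroup/bashjob

    # CPU: at most 50 ms of CPU time per 100 ms period, i.e. roughly 50% of one core.
    echo "50000 100000" | sudo tee /sys/fs/cgroup/bashjob/cpu.max

    # Memory: a soft ceiling that throttles and reclaims instead of killing.
    echo "512M" | sudo tee /sys/fs/cgroup/bashjob/memory.high

    # Move the current shell into the group, then start the script so it inherits it.
    echo $$ | sudo tee /sys/fs/cgroup/bashjob/cgroup.procs
    ./myscript.sh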

Related

Can process exporter tell us which processes are running on which CPU core?

I am supposed to build a drill-down for my CPU metrics. Right now, I am using two exporters, i.e. the Linux node exporter and the process exporter.
With the node exporter I am able to derive my overall CPU utilization as well as per-core utilization.
With the process exporter, I am able to see which processes are running, how much memory they are using, their IOPS, start time, etc.
As per my requirements, I have built a drill-down on my CPU utilization dashboard that lets me know which running processes are causing high utilization, but...
I am not able to find a metric in either of the two exporters that tells me which processes are running on which CPU core and how much of that core each process is using.
Basically, per-core usage by process is what I am looking for.
Is this possible? Can I achieve this goal with either of the two exporters?
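
For what it's worth, the kernel does expose which core a process last ran on, so per-core usage by process can at least be inspected locally with standard tools (whether either exporter surfaces this as a metric is a separate question):

    # psr = the CPU core the process last ran on; sorted by CPU usage.
    ps -eo pid,psr,pcpu,comm --sort=-pcpu | head

    # pidstat (from the sysstat package) samples per-process CPU% every second
    # and includes a CPU column showing the core the process is currently on.
    pidstat -u 1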

Limit a process's CPU and memory usage, with Docker perhaps?

Is there any way to run a process inside a Docker container without building the container and taking on all the other isolation (I/O, etc.)?
My end goal is not to build an isolated environment, but rather to limit CPU and memory usage (the ability to malloc), and using VM instances is just too much overhead. ulimit, systemd, cpulimit, and other Linux tools don't seem to provide a good solution here; for example, it seems that systemd only kills the process if RES/VIRT exceeds the threshold.
Docker seems to do the trick without performance degradation, but is there a simple way to run e.g. a Python script without all the extra hassle and configuration?
Or are there other ways to limit CPU and memory usage?
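
A sketch of running a one-off script under a stock image with Docker's resource flags, so no custom build is needed (the image tag, mount path, and limits are placeholders): --cpus throttles without killing, --memory is a hard limit (the container is OOM-killed if it allocates past it), and --memory-reservation is a soft limit that only applies under host memory pressure.

    # Run a local script with a stock Python image and cgroup-backed limits.
    docker run --rm \
      --cpus=1.5 \
      --memory=512m \
      --memory-reservation=256m \
      -v "$PWD:/app" -w /app \
      python:3.12 python script.py

If even Docker feels like too much, the same cgroup machinery can be used directly on a systemd host, e.g. systemd-run --scope -p CPUQuota=150% -p MemoryHigh=512M python3 script.py (limits again placeholders).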

Reserve CPU and Memory for Linux Host in Docker

I am running several docker containers via docker-compose on a server.
The problem is that the load from the containers always crashes my server after a while for some reason...
I can only find resources and answered questions on how to limit a single container's CPU/memory usage, but what I want to achieve is giving all containers in total, let's say, 85% of the CPU and memory, and reserving the rest for the Linux host so that the server itself doesn't crash.
Does anyone have an idea how to achieve this?
You could use docker-machine, I guess... Then you would define a VM within which all containers would run, and you limit the VM's total memory, leaving the rest for the host.
Otherwise, Docker runs as a native process on the machine, and there isn't a way to place a total limit on "all Docker processes".
The best idea I have right now is to set the CPU limit of each service/container so that the sum never reaches 85%, but in the long run you should investigate why the server crashes. Maybe it is a cooling or PSU issue?
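
Following that per-container suggestion, a sketch using docker update, which adjusts the cgroup limits of already-running containers in place (the container names and limits are placeholders; sizing them so the totals leave headroom for the host is up to you):

    # Cap each running container so the sum stays below ~85% of the host.
    docker update --cpus 1.5 --memory 2g --memory-swap 2g web_1
    docker update --cpus 0.5 --memory 1g --memory-swap 1g worker_1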

Node process memory usage exceeds --max_old_space_size

I'm having trouble limiting a Node process's memory usage.
I am running an application on AWS, and the servers crash when memory usage exceeds what is available. I have tried the --max_old_space_size flag, but the process still seems to use more memory than the value I specify, and the server crashes.
I'm fine with the process failing; I just really need the server not to crash.
I know that the flag is working because if I specify --max_old_space_size=1 the node script is killed immediately.
My question is: are there other ways to limit the memory usage of a Node process (and its subprocesses)? I'm running on Ubuntu and have heard cgroups could maybe achieve this. Or am I doing something wrong? Are there other flags for Node processes that would limit overall memory usage?
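
One thing worth noting is that --max_old_space_size only caps V8's old-generation heap, not native allocations such as Buffers, so total RSS can exceed it. A sketch of the cgroup route on Ubuntu via systemd-run, assuming a cgroup v2 system (the unit name, limits, and app path are placeholders); child processes started inside the unit are covered by the same limit:

    # MemoryMax is a hard cgroup limit: the kernel OOM-kills the unit instead of
    # letting the whole server run out of memory. MemorySwapMax=0 stops the
    # limit from being dodged via swap.
    sudo systemd-run --unit=myapp \
      -p MemoryMax=1500M -p MemorySwapMax=0 \
      node --max-old-space-size=1024 /srv/myapp/server.js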

Why does Linux keep killing Apache?

I have a long-running Apache web server with lots of requests. After some time I find the Apache server stopped, with a
Killed
line at the end.
What can I do to solve this problem or prevent the system from killing the Apache instance?
Linux usually kills processes when resources like memory are running low. You might want to have a look at the memory consumption of your Apache processes over time.
You might find some more details here:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
You can also monitor your processes using the MMonit software; have a look here: https://serverfault.com/questions/402834/kill-processes-if-high-load-average
There is a utility, top, which shows per-process resource consumption (e.g. memory, CPU, user, etc.); you can use it to keep an eye on the Apache processes.
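
If you want to confirm it was the kernel's out-of-memory killer and keep an eye on Apache's memory over time, a couple of standard commands (the process name may be httpd rather than apache2 depending on the distribution):

    # Check the kernel log for OOM killer activity.
    dmesg -T | grep -iE "out of memory|killed process"
    sudo journalctl -k | grep -i "killed process"

    # Watch memory usage, sorted by resident size.
    top -o %MEM
    ps -o pid,rss,pcpu,cmd -C apache2 --sort=-rss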
