How to limit gearman job server memory allocation?

How can I limit how much memory the gearman job server allocates? It seems that gearmand does not limit its own memory usage. Is there any way to keep the gearman job server below 1 GB?

You are right, the gearman server does not currently have an option to limit its memory usage.
However, here are two workarounds:
Run gearmand inside a virtual machine such as VirtualBox.
If you are running Linux, try using the ulimit command to limit memory usage.
$ ulimit -Sv 1000000
$ gearmand [the rest of your command line parameters]
where:
-S Specifies the soft limit for the given resource
-v Specifies virtual memory allocation, in kbytes
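To verify the limit took effect, you can inspect the running server's limits through /proc (a minimal check, assuming gearmand was started from the same shell with -d to daemonize, and using the 1000000 kbyte value from the example above):
$ grep "Max address space" /proc/$(pidof gearmand)/limits
Max address space         1024000000           unlimited            bytes
Note that once gearmand reaches the limit, further allocations will simply fail, so expect errors or a crash rather than graceful throttling.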

Related

How to limit the resources of process submitted by non-slurm on CentOS

We have a cluster with 1 node, 200 CPU cores and 2 TB of RAM. The server is shared by 15+ people, who are required to submit jobs through Slurm (the compute node and the login node are the same machine). But some people are unwilling to do so!
So, is there a way to limit the resources of processes users launch directly from the command line, rather than through Slurm?
For example, a non-Slurm job should be restricted to 2 CPUs and 4 GB of RAM:
$ resource-consuming-program # a job launched directly from the command line should be restricted.
$ cat slurmjob.sh
#!/bin/sh
#SBATCH -J TEST
#SBATCH --cpus-per-task=1
#SBATCH --mem=700G
# We recommend using SLURM to run resource-consuming jobs.
resource-consuming-program
$ sbatch slurmjob.sh # job submitted by SLURM won't be restricted.
All in all, we just want to limit tasks that are not submitted through Slurm. Thanks. ☺️
Here is an ad-hoc solution to your problem: https://unix.stackexchange.com/questions/526994/limit-resources-cpu-mem-only-in-ssh-session. The idea there is to confine users to a cgroup whenever they are in an SSH session.
Other than that, there is a tool called Arbiter2 that was created for the purpose of controlling resource usage on login nodes.
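As a minimal sketch of the cgroup approach using systemd alone (assuming systemd with cgroup v2 and a user whose UID is 1000; the values mirror the CPU:2, RAM:4G example above):
$ sudo systemctl set-property user-1000.slice CPUQuota=200% MemoryMax=4G
$ systemctl show user-1000.slice -p MemoryMax
CPUQuota=200% corresponds to 2 full CPUs. Processes spawned by slurmd live under Slurm's own cgroup hierarchy rather than the user's session slice, so jobs submitted with sbatch should not be affected by these caps.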

How to get rid of the kswapd0 process running in Linux

I frequently face the issue of kswapd0 running on one of our Linux machines. What could be the reason for that? Looking into the issue further, I understood that it happens because of low memory. I tried the options below to avoid it:
echo 1 > /proc/sys/vm/drop_caches
cat /proc/sys/vm/drop_caches
sudo cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness=60
but they did not yield fruitful results. What would be the best method to avoid this, or does some action need to be taken on the machine's RAM? Any suggestions?
Every time we observe the issue, all the running apps are killed automatically and kswapd0 occupies all of the CPU and memory.
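Since the running apps are being killed automatically, the kernel's OOM killer is most likely firing while kswapd0 scans for reclaimable pages, so it is worth confirming real memory pressure before tuning anything (a diagnostic sketch, not a fix):
$ free -h                          # how much RAM and swap is actually free
$ dmesg | grep -i "out of memory"  # OOM-killer events and their victim processes
$ top -o %MEM                      # which processes hold the most memory
If these confirm the machine is genuinely running out of RAM, the cure is reducing the working set or adding memory/swap; dropping caches only hides the symptom briefly.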

Soft virtual memory limit (ulimit -v)

I have a Linux user with the soft virtual memory limit (ulimit -v) set to around 5 GB.
Having this in mind, I try to do the following:
get all the user's processes with ps -u "$USER" -o pid --no-headers;
for each pid, open the file /proc/<pid>/status;
get the VmSize parameter and sum it up over all pids.
After doing so, my sum of VmSize values is 22 GB, which is not what I expected.
My question is: is my assumption that ulimit -v >= sum of VmSize values correct? If not, what does the soft limit actually mean? Is it possible for a specific user's processes to exceed the soft limit in aggregate and still be okay?
Btw, ulimit -v -H is set to unlimited, if it makes any difference.
The virtual memory limit is per process, not per user.
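You can observe the per-process behaviour directly (a sketch; the 5000000 kbyte value mirrors the ~5 GB limit above):
$ ulimit -Sv 5000000
$ grep "Max address space" /proc/$$/limits   # each process carries its own copy
$ for pid in $(ps -u "$USER" -o pid --no-headers); do grep VmSize /proc/$pid/status; done
Each process may grow up to the limit on its own, so the VmSize values across a user's processes can legitimately sum to far more than 5 GB.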

Find out what is causing a memory leak in a application

I have a Linux (CentOS) server on which I run a game server that recently started leaking memory after an update. How can I find out what is causing the memory leak?
Memory profiling
Use the perf tool to check for leaks.
Run the per-process command shown below for each process in the application and tally the results to find out what is causing the memory leak.
A sample usage of probes with perf could be to check libc's malloc() and free() calls:
$ perf probe -x /lib64/libc.so.6 malloc
$ perf probe -x /lib64/libc.so.6 free
Added new event:
probe_libc:malloc (on 0x7eac0)
A probe has been created. Now, let's record the global usage of malloc and free across the whole system for 4 seconds:
$ perf record -e probe_libc:malloc -agR sleep 4
$ perf record -e probe_libc:free -agR sleep 4
Or count malloc and free calls for a single process over 4 seconds:
$ perf stat -e probe_libc:free -e probe_libc:malloc -ag -p $(pgrep $process_name) sleep 4
Output:
Performance counter stats for process id '1153':
            11,312      probe_libc:free
            11,644      probe_libc:malloc
       4.001091828 seconds time elapsed
If the gap between the malloc and free counts grows each time the perf command is run, that is a hint of a memory leak.
$ perf record -e probe_libc:free -e probe_libc:malloc -agR sleep 2
Run the above command to check the whole application.
Then run:
$ perf report
to get the report of the above run.
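When you are finished, the probe points created earlier can be removed again (assuming the probe names perf reported when they were added):
$ perf probe --del probe_libc:malloc
$ perf probe --del probe_libc:free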

Linux: memory usage summary for program

I need a command-line utility that can run a specified command and measure the process group's memory usage at peak and on average (RSS, virtual and shared). As I understand it, that should be a combination of ptrace(2) and libprocps, but I can't find anything similar.
Any ideas?
/usr/bin/time -f "max RSS: %MKb" <command>
See man time for more details.
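If you also need the average and shared figures mentioned in the question, GNU time's verbose mode prints a fuller summary, including the maximum resident set size (though many of the average fields are reported as 0 on Linux, because the kernel does not fill them in):
$ /usr/bin/time -v <command>
Use the full path (or \time) so you get GNU time rather than the shell's built-in time keyword, which does not accept these options.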
