Linux: How to lock the pages of a process in memory

I have a LINUX server running a process with a large memory footprint (some sort of a database engine). The memory allocated by this process is so large that part of it needs to be swapped (paged) out.
What I would like to do is lock the memory pages of all the other processes (or a subset of the running processes) in memory, so that only the pages of the database process get swapped out. For example, I would like to make sure that I can continue to connect remotely and monitor the machine without those processes being impacted by swapping, i.e. I want sshd, X, top, vmstat, etc. to have all of their pages memory resident.
On Linux there are the mlock() and mlockall() system calls, which seem to offer the right knob to do the pinning. Unfortunately, it seems to me that I need to make an explicit call inside every process and cannot invoke mlock() from a different process or from the parent (mlock() is not inherited after fork() or execve()).
Any help is greatly appreciated. Virtual pizza & beer offered :-).

It has been a while since I've done this so I may have missed a few steps.
Make a GDB command file that contains something like this:
call mlockall(3)
detach
Then on the command line, find the PID of the process you want to mlock. Type:
gdb --pid [PID] --batch -x [command file]
If you get fancy with pgrep that could be:
gdb --pid $(pgrep sshd) --batch -x [command file]
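Putting that together, a minimal sketch (the path /tmp/mlock.gdb is just an example; 3 is MCL_CURRENT | MCL_FUTURE, and newer gdb versions may want the explicit (int) cast):
cat > /tmp/mlock.gdb <<'EOF'
call (int) mlockall(3)
detach
EOF
sudo gdb --pid "$(pgrep -o sshd)" --batch -x /tmp/mlock.gdb   # pgrep -o picks the oldest matching sshd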

Actually locking the pages of most of the stuff on your system seems a bit crude/drastic, not to mention being such an abuse of the mechanism it seems bound to cause some other unanticipated problems.
Ideally, what you probably actually want is to control the "swappiness" of groups of processes so the database is first in line to be swapped while essential system admin tools are the last, and there is a way of doing this.
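For example, the cgroup v1 memory controller exposes a per-group memory.swappiness knob, so something along these lines should make the admin tools far less likely to be swapped than everything else (a sketch only; the group name sysadmin is made up, and a v1 hierarchy mounted at /sys/fs/cgroup/memory is assumed):
sudo cgcreate -g memory:sysadmin
echo 0 | sudo tee /sys/fs/cgroup/memory/sysadmin/memory.swappiness   # avoid swapping this group's pages if at all possible
for pid in $(pgrep sshd); do echo $pid | sudo tee -a /sys/fs/cgroup/memory/sysadmin/cgroup.procs; done   # move sshd into the group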

While searching for mlockall information I ran across this tool. You may be able to find it for your distribution. I only found the man page.
http://linux.die.net/man/8/memlockd

Nowadays, the easy and right way to tackle the problem is cgroups.
Just restrict the memory usage of the database process:
1. create a memory cgroup
sudo cgcreate -g memory:$test_db -t $User:$User -a $User:$User
2. limit the group's RAM usage to roughly 1 GB (hard limit)
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.limit_in_bytes
or set a soft limit, which is only enforced when the system is under memory pressure
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.soft_limit_in_bytes
3. run the database program in the $test_db cgroup
cgexec -g memory:$test_db $db_program_name
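Note that those memory.* file names are cgroup v1. On a cgroup v2 (unified hierarchy) system the equivalent would look roughly like this (a sketch; the group name, the 1G/800M values and $DB_PID are just placeholders):
sudo mkdir /sys/fs/cgroup/test_db
echo 1G | sudo tee /sys/fs/cgroup/test_db/memory.max       # hard limit
echo 800M | sudo tee /sys/fs/cgroup/test_db/memory.high    # throttling/reclaim starts here
echo $DB_PID | sudo tee /sys/fs/cgroup/test_db/cgroup.procs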

Related

Which tool or command gives very accurate memory usage status in Linux?

I have been asked in my project to profile the memory usage of a C++ application that runs on Linux on an embedded-like device. We need to know this in order to decide how much RAM we need.
I have done some research and found many tools or commands to find the max memory usage of a process when it is running.
Here are those:
top
Command: top -p $Pid
ps
Command: ps -o rss= -p $pid
pmap
Command: pmap -x $pid
valgrind massif
valgrind --tool=massif --pages-as-heap=yes program
smaps
Used the following link: Script
Linux system monitor app
But I get a different memory usage from each of them. I have tried to understand them in depth, but I'm left confused about which one is close enough to trust. So could someone with experience share which one they use, and also why we have so many ways to measure memory that all give different results?
The VM, RSS and shared values differ between all of them.
Thanks
You can get the maximum resident set size of the process during its lifetime, in kilobytes, by using the following command:
/usr/bin/time -f %M
followed by the execution of your C++ binary.
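For example (./my_app stands in for your binary; note this is the GNU time binary in /usr/bin, not the shell's time keyword):
/usr/bin/time -f "max RSS: %M KB" ./my_app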

ulimit Linux connection limit

I have a question about ulimit:
ulimit -u unlimited
ulimit -n 60000
If I execute these in a screen, will they be kept as a setting within the screen until I kill the screen, or do I have to run them every time I run the program?
What I want to do is irrelevant; I just want to know if they will be kept as a setting within the screen.
ulimit is a bash builtin. It invokes the setrlimit(2) system call.
That syscall modifies some limit in the shell process itself (likewise, the cd builtin calls chdir(2) and modifies the working directory of your shell process).
In a bash shell, $$ expands to the pid of that shell process. So you can use ps $$ (and even compose it, e.g. touch /tmp/foo$$ or cat /proc/$$/status).
So the ulimit applies to your shell and stays in effect until you run another ulimit command (or until your shell terminates).
The limits of your shell process (and also its working directory) are inherited by every process started by fork(2) from your shell. These processes include those running your commands in that same shell. Notice that changing the limit (or the working directory) of some process doesn't affect those of the parent process, and that execve(2) doesn't change limits or working directories.
Limits (and the working directory) are properties of processes (not of terminals, screens, windows, etc.). Each process has its own limits, working directory, virtual address space, file descriptor table, and so on. You can use proc(5) to query them (try running cat /proc/self/limits, cat /proc/$$/maps and ls -l /proc/self/cwd /proc/self/fd/ in some shell). See also this. Limits (and the working directory) are inherited by child processes started with fork(2), each of which gets its own copy (so limits are not shared, but copied, by fork).
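A quick way to see that inheritance from a shell (a minimal sketch):
ulimit -n 1024                      # lower the open-files limit of this shell
bash -c 'ulimit -n'                 # prints 1024: the child shell inherited the limit
grep 'open files' /proc/$$/limits   # query the current shell's limits via proc(5)
ulimit -n 60000                     # may fail: an unprivileged process cannot raise its hard limit again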
But if you start another terminal window, it is running another shell process (which has its own limits and working directory).
See also credentials(7). Be sure to understand how fork(2) and execve(2) work, and how your shell uses them (practically every command, apart from builtins, starts a new process).
You mention kill(1) in some comments. Be sure to read its man page (and every man page mentioned here!). Read also kill(2) and signal(7).
A program can itself call setrlimit(2) (or chdir(2)), but that won't affect the limits (or working directory) of its parent process (often your shell). Of course it would affect the future fork-ed child processes of the process running that program.
I recommend reading ALP (a freely downloadable book about Linux programming) which has several chapters explaining all that. You need several books to explain the details.
After ALP, read intro(2), be aware of the existing syscalls(2), play with strace(1) and your own programs (writing a small shell is very instructive; or study the code of some existing one), and perhaps read Operating Systems: Three Easy Pieces.
NB. The screen(1) utility manages several terminals, each typically having its own shell process. I don't know if you are referring to that utility. Read also about terminal emulators, and the tty demystified page.
The only way to really kill some screen (the physical monitor, that is) is with a hammer.
Don't do that, you'll be sorry.
Apparently, you are talking about sending a signal (with kill(1), killall(1) or pkill(1)) to some process running the screen(1) program, or to its process group. That is not the same thing.

Using the Top command with ps and kill

For my Computing Controlled Assessment I am looking into some of the basic commands of the Linux OS Debian. For the final question I have to write a short essay on using the top command, along with ps and kill, to investigate a misbehaving system. The question asks me to get help from PC specialists (or just any experienced Debian users), so it would be great if anyone could share how a specialist would use these commands, or anything helpful about them in general. Remember, I'm here for information, not an answer. Thanks
top is used to display a list of processes and, by default, sorts it by CPU usage - so in your case it's a handy tool to see whether a specific process is taking up most of the CPU and causing the system to run slowly. It also displays the process ID (PID) as well as the user running it. Think of it as the Linux equivalent of Task Manager on Windows.
ps is similar to top, but instead of constantly refreshing it prints out all of the processes currently running on the server, along with their PIDs (important). Usually it is used as ps aux, or, to be more specific, you can combine it with grep to search for a particular process, e.g. ps aux | grep httpd to display the Apache processes currently running.
kill is used to kill a process running on the system, so if a script were taking up most of the resources and you wanted to stop it forcefully, you'd use kill. You can also use the killall command to kill all processes matching a name, e.g. killall httpd.
The steps I'd take to investigate a misbehaving system would be to:
1) Use top or ps to locate the process taking up the most resources, and remember the process ID.
2) If I wanted to kill the process, I'd use: kill <process ID>.
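For example, a typical session might look like this (the script name and PID are made up):
top                          # watch the %CPU column and note the PID of the worst offender
ps aux | grep runaway.sh     # or find the PID by name
kill 12345                   # polite: sends SIGTERM
kill -9 12345                # forceful: SIGKILL, only if SIGTERM is ignored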
If you need anything else clarifying or explaining - feel free to comment!
EDIT: https://serverfault.com/ - This may be the best place to post future questions like this.
The best way to learn about these commands is to read the man (manual) pages. To discover information about top, just type:
$ man top
at the command line and enjoy. Similarly, you can display the man pages for most Unix command line tools using:
$ man <command>

Limit the percentage of CPU a process tree is allowed to use?

Can I limit the percentage of CPU a running process and all its current and future children can use, combined? I've heard about the cpulimit tool, but that seems to ignore child processes.
Edit: So, the answer I found requires cpulimit to keep running for as long as we want the limit to stay in effect, since it does the limiting by actively sending suspend and then continue signals to the process. Are there perhaps other ways to achieve this limiting effect, without the need for such a secondary process running in the background?
Yes!
Just as I was writing this question, I found out that I had been trying an old version of cpulimit.
The new version supports limiting child processes too.
$ cpulimit -h
Usage: cpulimit [OPTIONS...] TARGET
OPTIONS
-l, --limit=N percentage of cpu allowed from 0 to 400 (required)
-v, --verbose show control statistics
-z, --lazy exit if there is no target process, or if it dies
-i, --include-children limit also the children processes
-h, --help display this help and exit
TARGET must be exactly one of these:
-p, --pid=N pid of the process (implies -z)
-e, --exe=FILE name of the executable program file or path name
COMMAND [ARGS] run this command and limit it (implies -z)
Report bugs to <marlonx80@hotmail.com>.
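So a sketch of the usage I was after (my_heavy_job is a made-up command name) is:
cpulimit -l 50 -i my_heavy_job    # cap the command and all its children at ~50% of one core
cpulimit -l 50 -i -p 12345        # or attach to an already-running PID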
I've been researching this problem for the last few days, and I found at least two more options: cgroups and CPU affinity
Given that this topic has been viewed more than 2k times, and it's been difficult to find a good source of information, let me post my notes here for future reference.
cgroups
Caveats: you can't use cgroups inside of Docker and you need root access for a one-time setup.
There are cgroups v1 and v2. As of 2020-04-22, only Red Hat has switched to v2 by default, and you can't use both at the same time - i.e., you're probably on v1.
You need root to create a cgroup directory / configure your system to create one at startup and delegate access to your non-root user, like so:
v1: mkdir /sys/fs/cgroup/cpu/<directory>/ && chown -R user /sys/fs/cgroup/cpu/<directory>/ (this is specific to restricting CPU usage - there are other cgroup 'controllers' that use different directories; a process can be in multiple cgroups)
v2: mkdir /sys/fs/cgroup/unified/<directory> && chown -R user /sys/fs/cgroup/unified/<directory> (this is a unified cgroup hierarchy, and you can control all cgroup 'controllers' via a single cgroup; a process can be in only one cgroup, and that cgroup must not contain other cgroups - i.e., a leaf cgroup)
Configure the cgroup by writing to control files in this tree:
Configure CPU quota using cpu.cfs_quota_us and cpu.cfs_period_us, e.g., echo 10000 > cpu.cfs_quota_us
Add a process to the new cgroup by writing its pid to the cgroup.procs control file. (All subprocesses are automatically in the same cgroup.)
This link has more info:
https://drill.apache.org/docs/configuring-cgroups-to-control-cpu-usage/
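Putting the v1 steps together, a minimal sketch (the group name mygroup and the 20% figure are just examples):
sudo mkdir /sys/fs/cgroup/cpu/mygroup
sudo chown -R $USER /sys/fs/cgroup/cpu/mygroup
echo 100000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us   # 100 ms scheduling period
echo 20000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us     # 20 ms per period, roughly 20% of one CPU
echo $$ > /sys/fs/cgroup/cpu/mygroup/cgroup.procs            # this shell and all of its future children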
CPU affinity
You can only use CPU affinity to limit CPU usage to an integer number of logical CPUs (aka cpu cores), not to a specific percentage. On today's multi-core systems, that may be good enough.
The documentation for this feature is at $ man sched_setaffinity. Note that cgroups also support setting CPU affinity through the cpuset controller.
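For example, with the taskset(1) utility (the PID and command name are placeholders):
taskset -cp 0,1 12345     # pin an already-running process (and its future children) to CPUs 0 and 1
taskset -c 0 my_command   # or start a command restricted to CPU 0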

How can I record what process or kernel activity is using the disk in GNU/Linux?

On a particular Debian server, iostat (and similar) report an unexpectedly high volume (in bytes) of disk writes going on. I am having trouble working out which process is doing these writes.
Two interesting points:
I have tried turning off system services one at a time, to no avail. Disk activity remains fairly constant and unexpectedly high.
Despite the writing, I do not seem to be consuming more overall space on the disk.
Both of those make me think that the writing may be something that the kernel is doing, but I'm not swapping, so it's not clear to me what Linux might try to write.
I could try out atop:
http://www.atcomputing.nl/Tools/atop/
but I would like to avoid patching my kernel.
Any ideas on how to track this down?
iotop is good (great, actually).
If you have a kernel from before 2.6.20, you can't use most of these tools.
Instead, you can try the following (which should work for almost any 2.6 kernel IIRC):
sudo -s
dmesg -c                             # clear the kernel ring buffer
/etc/init.d/klogd stop               # stop the kernel logger so that logging the flood of messages doesn't itself cause disk writes
echo 1 > /proc/sys/vm/block_dump     # log every block read/write/dirty to the kernel log
rm /tmp/disklog                      # start with a fresh log file
watch "dmesg -c >> /tmp/disklog"     # keep appending the (small) kernel buffer to a file
# press CTRL-C when you're done collecting data
echo 0 > /proc/sys/vm/block_dump     # turn block-dump logging back off
/etc/init.d/klogd start
exit                                 # quit the root shell
cat /tmp/disklog | awk -F"[() \t]" '/(READ|WRITE|dirtied)/ {activity[$1]++} END {for (x in activity) print x, activity[x]}'| sort -nr -k2
The dmesg -c lines clear your kernel log. The kernel logger is then stopped (so that writing the flood of messages to syslog doesn't itself generate disk activity), block_dump is enabled, and the log buffer is repeatedly (via watch) dumped to a file, because the in-memory buffer is small. Let it run for about five minutes or so, then CTRL-C the watch process. After shutting off the logging and restarting klogd, analyze the results using the little bit of awk at the end.
If you are using a kernel newer than 2.6.20, this is much easier, as that is the first version of the Linux kernel that includes I/O accounting. If you are compiling your own kernel, be sure to include:
CONFIG_TASKSTATS=y
CONFIG_TASK_IO_ACCOUNTING=y
Kernels from Debian packages already include these flags, so there is no need to recompile your kernel. The standard utility for accessing I/O accounting data in real time is iotop(1). It gives you a complete list of processes managed by the I/O scheduler, and displays per-process statistics for read, write and total I/O bandwidth used.
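For example (a sketch; the flags are documented in iotop(1)):
sudo iotop -o -P -a   # only processes actually doing I/O, per process rather than per thread, accumulated totals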
You may want to investigate iotop for Linux. There are some Solaris versions floating around, but there is a Debian package for example.
You can use the UNIX command lsof (list open files). It prints out the process, process id and user for every open file.
You could also use htop, enabling the IO_RATE column. htop is an excellent top replacement.
Brendan Gregg's iosnoop script can (heuristically) tell you what is currently using the disk on recent kernels (example iosnoop output).
You could try SystemTap; it has a lot of examples and, if I'm not mistaken, it shows how to do this sort of thing.
I've recently heard about Mortadelo, a Filemon clone, but have not checked it out myself yet:
http://gitorious.org/mortadelo
