Getting Linux process resource usage (CPU, disk, network)

I want to use /proc to find the resource usage of a particular process every second. The resources are CPU time, disk usage, and network usage. I looked at /proc/pid/stat, but I am not sure whether it gives me the details I need. I want all three resources, and I want to monitor them every second.

Some newer kernels have a /proc/<pid_of_process>/io file. This is where the per-process I/O stats live.
It is not documented in man proc, but you can try to figure out the numbers yourself.
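A minimal sketch of reading it, assuming a kernel built with CONFIG_TASK_IO_ACCOUNTING so the file exists (each line has a simple "key: value" layout, e.g. "read_bytes: 12345"):

/* Minimal sketch: dump the per-process I/O counters from /proc/<pid>/io.
 * Assumes a kernel with CONFIG_TASK_IO_ACCOUNTING, so the file exists.
 * Build: cc -o procio procio.c */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/io", argv[1]);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    /* Each line looks like "read_bytes: 12345". */
    char key[64];
    unsigned long long value;
    while (fscanf(f, "%63[^:]: %llu\n", key, &value) == 2)
        printf("%-24s %llu\n", key, value);

    fclose(f);
    return 0;
}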
Hope it helps.
Alex.

getrusage() covers CPU time, memory, and block I/O counts, among other things.
man 2 getrusage
I don't know about network.
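Note, though, that getrusage(2) only reports on the calling process or its waited-for children, not on an arbitrary PID. A minimal sketch of the call:

/* Minimal sketch: print the calling process's resource usage via
 * getrusage(2). ru_inblock/ru_oublock count block I/O operations;
 * there are no network counters here. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return 1;
    }

    printf("user CPU:   %ld.%06ld s\n", (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    printf("system CPU: %ld.%06ld s\n", (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    printf("max RSS:    %ld kB\n", ru.ru_maxrss);
    printf("block in:   %ld\n", ru.ru_inblock);
    printf("block out:  %ld\n", ru.ru_oublock);
    return 0;
}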

Check out glances.
It has CPU, disk, and network all on one screen. It's not per-process, but it's better than looking at three separate tools.

I don't think there is a way to get disk and network information on a per-process basis.
The best you can have is global disk and network figures, plus per-process CPU time.
All of this is documented in man proc.
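For the per-process CPU part, a minimal sketch that samples /proc/<pid>/stat once per second; fields 14 and 15 (utime, stime) are in clock ticks:

/* Minimal sketch: sample a process's CPU time once per second from
 * /proc/<pid>/stat. Fields 14 and 15 (utime, stime) are in clock
 * ticks; divide by sysconf(_SC_CLK_TCK) to convert to seconds. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/stat", argv[1]);
    long hz = sysconf(_SC_CLK_TCK);

    for (;;) {
        char buf[4096];
        FILE *f = fopen(path, "r");
        if (!f || !fgets(buf, sizeof buf, f)) {
            perror(path);
            return 1;
        }
        fclose(f);

        /* The comm field may itself contain spaces and parentheses,
         * so parse from the last ')' rather than from the start. */
        char *p = strrchr(buf, ')');
        unsigned long utime, stime;
        if (!p || sscanf(p + 2,
                "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
                &utime, &stime) != 2)
            return 1;

        printf("utime=%.2fs stime=%.2fs\n",
               (double)utime / hz, (double)stime / hz);
        sleep(1);
    }
}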

netstat -an
Shows all connections to the server, including source and destination IPs and ports, if you have the proper permissions.
ac
Prints statistics about users' connect time.

The best way to approach problems like this is to look up the source code of tools that perform similar monitoring and reporting.
Although there is no guarantee that they are using /proc directly, they will lead you to an efficient way to tackle the problem.
For your case, top(1), iotop(8) and nethogs(8) come to mind.

You can use sar:
-x: report statistics for a given process.
See this for more details:
http://www.linuxcommand.org/man_pages/sar1.html
Example:
sar -u -r -b 1 -x PID | grep -v Average | grep -v Linux
(Note: newer sysstat releases dropped sar's per-process options; pidstat(1) took over that job.)

You can use top:
SYNOPSIS
top -hv|-bcHiOSs -d delay -n limit -u|U user -p PID -o field -w [columns]

Related

Linux / CentOS - Need to monitor a specific process

Please help. I've been given a task to monitor a particular process in CentOS.
There are certain requirements:
I can't monitor by PID, because once the process is killed or dies, the solution is of no use.
It would also be great if I could know how much each thread of the process consumes.
I've researched plenty, but with no success.
Thanks in advance.
I am uncertain what exactly you are trying to achieve, but this is how I would proceed:
Suggested Approaches
Multiple Process IDs per process name
top -H -p $(pgrep <process_name> | paste -s -d, -)
Single Process ID per process name
top -H -p $(pgrep <process_name>)
Further reading
Reuse command output
Thread Monitoring with top
Turn stdout into comma-separated string
Suggestion
Maybe think about implementing a monitoring solution such as Prometheus with Node Exporter, Zabbix, or Nagios.

How to check memory used by a process in Linux?

How do I check the memory used by a process at a given instant? The input is a process name or ID, and I need the actual memory the process is using at runtime.
What command should I enter to get that?
You have multiple options to find the actual memory consumption, e.g.
top (memory/CPU stats), free (used/unused memory), mpstat (CPU stats). That said,
I'd like to mention the following, as it suits your requirement:
top -p process_id
Try this one:
ps -e -orss=,args= | sort -b -k1,1n
You can use top in general, or for a particular user:
General: top
For a particular user: top -U username
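If you want the figure programmatically rather than interactively, a minimal sketch that reads the VmRSS line from /proc/<pid>/status (the same number ps reports as RSS):

/* Minimal sketch: print a process's resident set size (VmRSS)
 * from /proc/<pid>/status, the same figure ps reports as RSS. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64], line[256];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            fputs(line, stdout);   /* e.g. "VmRSS:     12345 kB" */
            break;
        }
    }
    fclose(f);
    return 0;
}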

LINUX: How to lock the pages of a process in memory

I have a Linux server running a process with a large memory footprint (some sort of database engine). The memory allocated by this process is so large that part of it needs to be swapped (paged) out.
What I would like to do is lock the memory pages of all the other processes (or a subset of the running processes) in memory, so that only the pages of the database process get swapped out. For example, I would like to make sure that I can continue to connect remotely and monitor the machine without those processes being impacted by swapping, i.e. I want sshd, X, top, vmstat, etc. to have all of their pages memory-resident.
On Linux there are the mlock() and mlockall() system calls, which seem to offer the right knobs for the pinning. Unfortunately, it seems that I have to make an explicit call inside every process and cannot invoke mlock() from a different process or from the parent (mlock() is not inherited across fork() or execve()).
Any help is greatly appreciated. Virtual pizza & beer offered :-).
It has been a while since I've done this, so I may have missed a few steps.
Make a GDB command file that contains something like this:
call mlockall(3)
detach
Then on the command line, find the PID of the process you want to mlock. Type:
gdb --pid [PID] --batch -x [command file]
If you get fancy with pgrep that could be:
gdb --pid $(pgrep sshd) --batch -x [command file]
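For reference, the literal 3 in the command file is MCL_CURRENT | MCL_FUTURE (1 | 2); gdb just cannot see the macro names. Called natively from inside a process, it would look like this minimal sketch:

/* Minimal sketch: pin all of the calling process's pages in RAM.
 * MCL_CURRENT | MCL_FUTURE == 1 | 2 == 3, which is why the gdb
 * command file above passes the literal 3. Requires CAP_IPC_LOCK
 * (or a sufficient RLIMIT_MEMLOCK). */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }
    puts("all current and future pages locked in memory");
    /* ... the rest of the program runs with its pages pinned ... */
    return 0;
}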
Actually locking the pages of most of the stuff on your system seems a bit crude/drastic, not to mention such an abuse of the mechanism that it seems bound to cause other unanticipated problems.
Ideally, what you probably want is to control the "swappiness" of groups of processes, so that the database is first in line to be swapped out while essential system-administration tools are last, and there is a way of doing this.
While searching for mlockall information I ran across this tool. You may be able to find it for your distribution. I only found the man page.
http://linux.die.net/man/8/memlockd
Nowadays, the easy and right way to tackle the problem is cgroups.
Just restrict the memory usage of the database process:
1. Create a memory cgroup:
sudo cgcreate -g memory:$test_db -t $User:$User -a $User:$User
2. Limit the group's RAM usage to about 1 GB, either as a hard limit:
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.limit_in_bytes
or as a soft limit:
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.soft_limit_in_bytes
3. Run the database program in the $test_db cgroup:
cgexec -g memory:$test_db $db_program_name
(These paths are for cgroup v1; on a cgroup v2 system the equivalent knobs are memory.max and memory.high.)

Limit the memory and cpu available for a user in Linux

I am a little concerned about the amount of resources I can use on a shared machine. Is there any way to check whether the administrator has set a limit on the resources I can use? And, to make the question more complete: how would one set up such a limit?
For process-related limits, you can have a look at /etc/security/limits.conf (read the comments in the file, use Google, or run man limits.conf for more information). And as jpalecek points out, you may use ulimit -a to see (and possibly modify) all such limits currently in effect.
You can use the command quota to see if a disk quota is in effect.
You can try running
ulimit -a
to see what resource limits are in effect. Also, if you are allowed to change such limits, you can change them with the ulimit command, e.g.
ulimit -c unlimited
lifts any limit on the size of the core file a process can produce.
At the C level, the relevant functions (actually syscalls(2)...) would be setrlimit(2), setpriority(2), and sched_setattr(2). In practice you would usually set these from your shell, whose ulimit builtin calls setrlimit for you; a minimal sketch follows.
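A minimal sketch of the setrlimit(2) side, imposing the same 60-second CPU cap that ulimit -t 60 would:

/* Minimal sketch: cap the calling process's CPU time at 60 seconds
 * via setrlimit(2) -- the syscall behind the shell's "ulimit -t".
 * Children inherit the limit across fork()/execve(). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl = { .rlim_cur = 60, .rlim_max = 60 };

    if (setrlimit(RLIMIT_CPU, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* From here on, exceeding 60s of CPU time delivers SIGXCPU. */
    return 0;
}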
See also proc(5) (try cat /proc/self/limits) and sched(7).
You may want to use the renice(1) command.
If you run a long-lasting program (for several hours) not requiring user interaction, you could consider using some batch processing. Some Linux systems have a batch or at command.

How can I record what process or kernel activity is using the disk in GNU/Linux?

On a particular Debian server, iostat (and similar tools) report an unexpectedly high volume (in bytes) of disk writes going on. I am having trouble working out which process is doing these writes.
Two interesting points:
I tried turning off system services one at a time, to no avail. Disk activity remains fairly constant and unexpectedly high.
Despite all the writing, overall disk usage does not seem to be growing.
Both of those make me think that the writing may be something the kernel is doing, but I'm not swapping, so it's not clear what Linux might be trying to write.
I could try out atop:
http://www.atcomputing.nl/Tools/atop/
but I would like to avoid patching my kernel.
Any ideas on how to track this down?
iotop is good (great, actually).
If you have a kernel from before 2.6.20, you can't use most of these tools.
Instead, you can try the following (which should work for almost any 2.6 kernel IIRC):
sudo -s
dmesg -c
/etc/init.d/klogd stop
echo 1 > /proc/sys/vm/block_dump
rm /tmp/disklog
watch "dmesg -c >> /tmp/disklog"
CTRL-C when you're done collecting data
echo 0 > /proc/sys/vm/block_dump
/etc/init.d/klogd start
exit (quit root shell)
cat /tmp/disklog | awk -F"[() \t]" '/(READ|WRITE|dirtied)/ {activity[$1]++} END {for (x in activity) print x, activity[x]}'| sort -nr -k2
The dmesg -c invocations clear your kernel ring buffer. klogd is then shut off so it does not consume the messages, and the buffer is manually (via watch) dumped to disk every couple of seconds (the in-memory buffer is small, which is why we need to do this). Let it run for about five minutes or so, then Ctrl-C the watch process. After shutting off block_dump logging and restarting klogd, analyze the results with the bit of awk at the end.
If you are using a kernel newer than 2.6.20, this is very easy, as that is the first Linux kernel version to include I/O accounting. If you are compiling your own kernel, be sure to include:
CONFIG_TASKSTATS=y
CONFIG_TASK_IO_ACCOUNTING=y
Kernels from Debian packages already include these flags, so there is no need to recompile. The standard utility for accessing I/O accounting data in real time is iotop(1). It gives you a complete list of processes managed by the I/O scheduler and displays per-process statistics for read, write, and total I/O bandwidth used.
You may want to investigate iotop for Linux. There are Solaris versions floating around, but there is a Debian package, for example.
You can use the Unix command lsof (list open files). It prints the process name, PID, and user for every open file.
You could also use htop, enabling the IO_RATE column. htop is an excellent top replacement.
Brendan Gregg's iosnoop script can (heuristically) tell you which processes are currently using the disk on recent kernels (see the example iosnoop output).
You could try SystemTap; it has a lot of examples, and if I'm not mistaken, one of them shows how to do this sort of thing.
I've recently heard about Mortadelo, a Filemon clone, but have not checked it out myself yet:
http://gitorious.org/mortadelo
