I have to get the CPU usage on Linux from a file located on my hard drive. But after some research, I didn't find a proper file that reports the CPU usage.
The ideal solution would be a file that contains this information and is updated frequently.
If you know of a file that does this, it would be great, thank you.
This sort of information is available under the /proc filesystem. See man proc.
In particular:
/proc/loadavg gives load averages;
/proc/stat gives information on the amount of time the system has spent in user/kernel mode, etc. (thanks #Mat!);
/proc/[pid]/stat contains information on CPU times for the given process.
man proc will give further details.
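To make that concrete, here is a minimal C sketch (not a polished implementation) that samples the aggregate cpu line of /proc/stat twice and derives a CPU usage percentage from the deltas; the field order is the one documented in man 5 proc.

#include <stdio.h>
#include <unistd.h>

/* Read the aggregate "cpu" line from /proc/stat and return the total
 * and idle jiffy counts (idle time includes iowait here). */
static int read_cpu_times(unsigned long long *total, unsigned long long *idle)
{
    unsigned long long usr, nic, sys, idl, iow, irq, sirq, steal;
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &usr, &nic, &sys, &idl, &iow, &irq, &sirq, &steal);
    fclose(f);
    if (n < 8)
        return -1;
    *idle = idl + iow;
    *total = usr + nic + sys + idl + iow + irq + sirq + steal;
    return 0;
}

int main(void)
{
    unsigned long long t1, i1, t2, i2;
    if (read_cpu_times(&t1, &i1) != 0)
        return 1;
    sleep(1);                               /* sampling interval */
    if (read_cpu_times(&t2, &i2) != 0)
        return 1;
    double busy = (double)((t2 - t1) - (i2 - i1)) / (double)(t2 - t1);
    printf("CPU usage over the last second: %.1f%%\n", busy * 100.0);
    return 0;
}

The counters in /proc/stat are cumulative since boot, which is why two samples are needed; tools like top do exactly this kind of delta calculation.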
You could use /proc/loadavg for determining the load from a file.
I am trying to debug the performance of my program. What would be ideal is a way to see in detail when a thread was doing useful work, when it was blocked by page faults, when it was executing memory writes and reads, etc...
I would simply like to have a detailed understanding of what's going on. Is it possible?
The Linux kernel sources come with the perf tool, which can measure a large number of performance counters, including all of those you listed. It can print statistics about them, annotate symbols, instructions and source lines with them (if debug symbols are available), and can trace any process or individual logical CPU cores.
Your Linux distribution most likely ships the tool in a standalone package. Some kernel hardening options may limit what information root or non-root users can collect with it.
You can use perf and visualize its output file graphically with hotspot.
I would like to know in which file the value for the CPU architecture (e.g. x86_64) is stored on Linux.
I know several methods (shell commands) to access this value, such as lscpu or uname -a.
Furthermore, I'm aware of the Qt method QSysInfo::currentCpuArchitecture(). They all provide the required information.
But I would like to create an OS interface which retrieves the information from the underlying operating system (in my case Linux) using only "Linux tools", such as information from files in /proc/....
I also know that I can run a shell command in my program using popen() and access the results via the std stream classes. That's no problem. But unfortunately we don't like running shell commands in our software.
I have looked in several files like
/proc/version, /proc/cpuinfo, /proc/devices or in files in subfolders of /proc.
But unfortunately it seems that I keep overlooking this piece of information. I'm sure it has to be in a file, because the method QSysInfo::currentCpuArchitecture() accesses this information, too.
So if somebody knows where this information is located on Linux, I would be happy if he or she let me know.
With kind regards
According to man lscpu:
lscpu gathers CPU architecture information from sysfs,
/proc/cpuinfo ...
Looking for the information under /proc in cpuinfo was the right idea; the information is there, but not in the format you were expecting. It is somewhat "hidden" in the flags line. You want to look for lm (long mode).
cat /proc/cpuinfo | grep "flags\| lm "
If the flag for long mode is set you are on x86_64.
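If you want to do that check from a program instead of the shell (you wanted to avoid popen()), here is a hedged C sketch reading the same file; the parsing is deliberately simple and assumes the flags line fits into the buffer, which it does on current systems:

#include <stdio.h>
#include <string.h>

/* Scan /proc/cpuinfo for the "flags" line and check whether the
 * "lm" (long mode) flag is present, i.e. the CPU is x86_64 capable. */
int main(void)
{
    char line[4096];
    int long_mode = 0;
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("/proc/cpuinfo");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) == 0 &&
            (strstr(line, " lm ") || strstr(line, " lm\n"))) {
            long_mode = 1;
            break;
        }
    }
    fclose(f);
    printf("CPU is %s\n",
           long_mode ? "x86_64 capable (lm flag set)" : "32-bit only (no lm flag)");
    return 0;
}

As an aside, the uname(2) system call fills a struct utsname whose machine field already contains the architecture string (e.g. "x86_64"), which avoids parsing any file at all.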
Thanks / Credits to
What do the flags in /proc/cpuinfo mean?
CPUID, bit 29 (LM)
How the information is gathered and processed can be seen in sys-utils/lscpu.c, for example in lines 369-378.
Other CPU modes are:
Real mode, 16-bit, Intel 8086 compatibility mode; all x86 CPUs start in this mode after reset
Protected mode, 32-bit
In /proc/cpuinfo I find a strange parameter, "cpu MHz", which changes all the time. I want to study how it works. Is there some syscall that can help me get this parameter? I would like to understand how the cpu MHz value is calculated.
No, there is no syscall that will tell you the current speed (MHz) of your CPU. If you want to know the value without writing a kernel module for it, you can read the /proc/cpuinfo file, which is there exactly for this purpose (making this info available to user space programs). There also is a good post here which lists more ways to obtain such information.
If you want to know how the values are calculated you can look at the source code of the Linux kernel. In particular, the fs/proc/cpuinfo.c file is a good starting point.
You might find the information in the /sys filesystem easier to parse:
$ grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:900014
/sys/devices/system/cpu/cpu1/cpufreq/scaling_cur_freq:900016
/sys/devices/system/cpu/cpu2/cpufreq/scaling_cur_freq:883064
/sys/devices/system/cpu/cpu3/cpufreq/scaling_cur_freq:862357
Under /sys/devices/system/cpu/cpu*/ you will find more interesting properties of each CPU on your system, all of them in an easy-to-parse format (usually just a single line).
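If you need those values from a program rather than the shell, reading the sysfs files is straightforward; a small C sketch (paths as shown above, values are in kHz):

#include <stdio.h>

/* Print the current frequency of each CPU by reading the cpufreq
 * scaling_cur_freq files under sysfs; stops at the first CPU index
 * without such a file (no more CPUs, or no cpufreq support). */
int main(void)
{
    for (int cpu = 0; ; cpu++) {
        char path[128];
        unsigned long khz;
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;
        if (fscanf(f, "%lu", &khz) == 1)
            printf("cpu%d: %.1f MHz\n", cpu, khz / 1000.0);
        fclose(f);
    }
    return 0;
}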
How can I find out how big a Linux process's page table is, along with any other variable-size process accounting?
If you are really interested in the page tables, do a
$ cat /proc/meminfo | grep PageTables
PageTables: 24496 kB
Since Linux 2.6.10, the amount of memory used by a single process' page tables has been exposed via the VmPTE field of /proc/<pid>/status.
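For example, here is a small C sketch that pulls the VmPTE line out of /proc/self/status (substitute any PID for self to inspect another process):

#include <stdio.h>
#include <string.h>

/* Print the page table size of the current process by reading the
 * VmPTE line from /proc/self/status (the value is reported in kB). */
int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) {
        perror("/proc/self/status");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmPTE:", 6) == 0) {
            fputs(line, stdout);    /* e.g. "VmPTE:      60 kB" */
            break;
        }
    }
    fclose(f);
    return 0;
}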
Not sure about Linux, but most UNIX variants provide sysctl(3) for this purpose. There is also the sysctl(8) command line utility.
Hmmm, back in Ye Olden Tymes, we used to call nlist(3) to get the system address for the data we were interested in, then open /dev/kmem, seek to the address, then read the data. Not sure if this works in Linux, but it might be worth typing "man 3 nlist" and seeing what comes back.
You should describe your actual problem rather than ask about details. If you fork too much (especially from a process with a large address space), all kinds of things can go wrong (including running out of memory); hitting a page table maximum size is IMHO not a realistic problem.
That said, I would also be interested in reading a process's page table share on Linux.
As a simple rule of thumb, however, you can assume that each process occupies a share of the page tables proportional to its virtual size, for example 6 bytes for each page. So, for example, if you have an Oracle database with an 8GB SGA and 500 processes sharing it, each of the processes will use 14MB of page tables, which results in 7GB of page tables + 8GB SGA. (Sample numbers from http://kevinclosson.wordpress.com/2009/07/25/little-things-doth-crabby-make-%E2%80%93-part-ix-sometimes-you-have-to-really-really-want-your-hugepages/)
I'm trying to find the best way to use 'top' as semi-permanent instrumentation in the development of a box running embedded Linux. (The instrumentation will be removed from the final-test and production releases.)
My first pass is to simply add this to init.d:
top -b -d 15 >/tmp/toploop.out &
This runs top in "batch" mode every 15 seconds. Let's assume that /tmp has plenty of space…
Questions:
Is 15 seconds a good value to choose for general-purpose monitoring?
Other than disk space, how seriously is this perturbing the state of the system?
What other (perhaps better) tools could be used like this?
Look at collectd. It's a very lightweight system monitoring framework coded for performance.
We use sysstat to monitor things like this.
You might find that vmstat and iostat with a delay and no repeat counter are a better option.
I suspect 15 seconds would be more than adequate unless you actually want to watch what's happening in real time, but that doesn't appear to be the case here.
As far as load, on an idling PIII 900Mhz w/ 768MB of RAM running Ubuntu (not sure which version, but not more than a year old) I have top updating every 0.5 seconds and it's about 2% CPU utilization. At 15s updates, I'm seeing 0.1% CPU utilization.
Depending upon what exactly you want, you could use the output of uptime, free, and ps to get most, if not all, of top's information.
If you are looking for overall load, uptime is probably sufficient. However, if you want specific information about processes, you are adventurous, and have the /proc filesystem enabled, you may want to write your own tools. The primary benefit in this environment is that you can focus on exactly what you want and minimize the load introduced to the system.
The proc filesystem gives your application read access to the kernel data structures that keep track of many of the interesting variables. Reading from /proc is one of the lightest ways to get this information. Additionally, you may be able to get more information than top provides. I've done this in the past to get the amount of time a process spent in user and system mode. You can also use it to get the number of file descriptors opened by a process, or detailed information about how the network stack is working.
Much of this information is pre-processed by other applications, which you can use if they give you what you need. However, it is rather straightforward to read the raw information. Do a man proc for more information.
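As one illustration of the "write your own tools" approach, here is a hedged C sketch (not a complete tool) that reads a process's accumulated user and system CPU time from /proc/<pid>/stat; the field positions follow man 5 proc:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Print the user and system CPU time consumed by a process, read
 * from /proc/<pid>/stat (fields 14 and 15, given in clock ticks). */
int main(int argc, char **argv)
{
    long pid = (argc > 1) ? atol(argv[1]) : (long)getpid();
    char path[64], buf[2048];
    snprintf(path, sizeof path, "/proc/%ld/stat", pid);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    if (!fgets(buf, sizeof buf, f)) {
        fclose(f);
        return 1;
    }
    fclose(f);

    /* The command name (field 2) is in parentheses and may itself
     * contain spaces, so parse from the last ')' onwards. */
    char *p = strrchr(buf, ')');
    if (!p)
        return 1;

    unsigned long utime, stime;
    /* After ')': state ppid pgrp session tty_nr tpgid flags
     * minflt cminflt majflt cmajflt utime stime ...           */
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
               &utime, &stime) != 2)
        return 1;

    long hz = sysconf(_SC_CLK_TCK);
    printf("pid %ld: user %.2f s, system %.2f s\n",
           pid, (double)utime / hz, (double)stime / hz);
    return 0;
}

Run without arguments it reports on itself; with a PID argument it reports on that process. Counting the entries under /proc/<pid>/fd gives the number of open file descriptors mentioned above.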
Pity you haven't said what you are monitoring for.
You should decide whether 15 seconds is ok or not. Feel free to drop it way lower if you wish (and have a fast HDD)
No worries unless you are running a soft real-time system
Have a look at the tools suggested in other answers. I'll add another suggestion: iotop, for answering the "who is thrashing the HDD" question.
At work for system monitoring during stress tests we use a tool called nmon.
What I love about nmon is it has the ability to export to XLS and generate beautiful graphs for you.
It generates statistics for:
Memory Usage
CPU Usage
Network Usage
Disk I/O
Good luck :)