How do I extract current process CPU usage by path/command and print it to the console - linux

I'd like to get the current CPU/memory usage (%) of a process by its name/path and print it to the console.
The command should output a single number and not provide an ongoing stream of data.
ps -p PID doesn't work because:
I don't have the process number (I only have the process path)
It doesn't print a single current measurement to the console
So, for example, it should look something like:
$command -getCPU | grep processPath

You actually do know the PID if you know the process path, since each running process has an entry under /proc/<pid>.
You can calculate the CPU usage with this method; it involves several steps, though.
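A minimal sketch of that calculation, assuming a single matching process found by the made-up name "myprog": sample the process's utime+stime from /proc/<pid>/stat and the system-wide jiffies from /proc/stat twice, then take the ratio of the deltas. Note this reports the share of total CPU capacity over the interval, not the per-core percentage that top shows.

#!/bin/sh
# Minimal sketch; "myprog" is a placeholder process name.
pid=$(pgrep -o myprog) || exit 1      # oldest PID whose name matches

sample() {
    # utime+stime: fields 14 and 15 of /proc/PID/stat, read after the closing ')'
    # of the command name so spaces in the name cannot shift the fields
    proc_ticks=$(awk '{ sub(/.*\) /, ""); print $12 + $13 }' /proc/"$pid"/stat)
    # total jiffies across all CPUs: sum of the aggregate "cpu " line
    total_ticks=$(awk '/^cpu / { for (i = 2; i <= NF; i++) t += $i; print t }' /proc/stat)
}

sample; p1=$proc_ticks; t1=$total_ticks
sleep 1                                # sampling interval
sample; p2=$proc_ticks; t2=$total_ticks

awk -v p=$((p2 - p1)) -v t=$((t2 - t1)) 'BEGIN { printf "%.1f%%\n", 100 * p / t }'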

Related

Calculating CPU usage of a program in Linux

I want to calculate the CPU usage % for a given program in Linux. Let's say I want to calculate how much CPU is being used by oracle. When I do ps -elf | grep oracle I get multiple processes. How can I get the cumulative result?
You cannot simply do ps -ef | grep oracle, because -ef outputs the full information for every process, including the command path. Any process whose path happens to contain the string oracle (in this case) will also be selected, which will make your calculation incorrect.
I would use pgrep and ps to pick out exactly the processes you want, list only their CPU usage, and finally sum it up:
ps -fho '%C' -p $(pgrep -d, oracle) | awk '{s+=($0+0)} END {printf "CPU Usage:%.2f%%",s}'
pgrep -d, oracle lists the processes whose name contains oracle; you can use -x for an exact match if you are sure of the process name you are searching for. This outputs all the PIDs in comma-separated format, like 123,234.
ps -fho '%C' -p '123,234' outputs only the CPU usage for the given PIDs, without a header, one percentage per line.
The final awk script sums the values and prints the total. The output should look like:
CPU Usage:xx.xx%
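If you need this repeatedly, the same idea can be wrapped in a small bash function; a rough sketch (cpu_by_name is a made-up name, and pcpu= is used instead of -fho '%C' simply because the trailing '=' suppresses the header portably):

cpu_by_name() {
    local pids
    pids=$(pgrep -d, "$1") || { echo "no process matching '$1'" >&2; return 1; }
    # pcpu= prints only the %CPU column with an empty (suppressed) header
    ps -o pcpu= -p "$pids" | awk '{ s += $1 } END { printf "CPU Usage:%.2f%%\n", s }'
}

# Example:
cpu_by_name oracle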

How to get output of top command

I'm wondering how to get the output of the 'top' command. I'm looking to obtain the number of currently running processes.
I tried redirecting it to a text file and got garbage. I'm unsure how to approach this. My assignment is to determine the number of processes considered by the short-term scheduler for process allocation (processes currently ready to run) at any given time.
Use top in batch mode:
top -bn1 > output.txt
See also this previous answer.
Easier than parsing top, you could count the lines in the process list:
ps -ef | wc -l
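Note that ps -ef | wc -l also counts the header line (and ps itself). If what the assignment really wants is the number of processes that are ready to run, a hedged alternative is to count by process state:

# Count processes currently in the runnable state (R)
ps -eo stat= | grep -c '^R'

# Or read the kernel's own counter of runnable tasks
grep procs_running /proc/stat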

Bash: How to record highest memory/cpu consumption during execution of a bash script?

I have a function in a bash script that executes a long process called runBatch. Basically runBatch takes a file as an argument and loads the contents into a db. (runBatch is just a wrapper function for a database command that loads the content of the file)
My function has a loop that looks something like the below, where I am currently recording start time and elapsed time for the process to variables.
for batchFile in `ls $batchFilesDir`
do
    echo "Batch file is $batchFile"
    START_TIME=$(($(date +%s%N)/1000000))
    runBatch $batchFile
    ELAPSED_TIME=$(($(($(date +%s%N)/1000000))-START_TIME))
    IN_SECONDS=$(awk "BEGIN {printf \"%.2f\",${ELAPSED_TIME}/1000}")
done
Then I am writing some information on each batch (such as time, etc.) to a table in a html page I am generating.
How would I go about recording the highest memory/cpu usage while the runBatch is running, along with the time, etc?
Any help appreciated.
Edit: I managed to get this done. I added a wrapper script around this script that runs this script in the background. I pass its PID, obtained with $!, to another script in the wrapper that monitors the process's CPU and memory usage with top every second. I compile everything into an HTML page at the end, once the PID is no longer alive. Cheers for the pointers.
You should be able to get the PID of the process using $!,
runBatch $batchFile &
myPID=$!
and then you can run a top -b -p $myPID to print out a ticking summary of CPU.
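To actually record the highest values rather than just watch them, one hedged approach is to poll ps for that PID in a background loop and keep the maxima (keep in mind that ps's pcpu is the average over the process lifetime, so it is a coarse measure; see the last question on this page for a more precise /proc based calculation):

runBatch "$batchFile" &
myPID=$!

maxCPU=0; maxMEM=0
while kill -0 "$myPID" 2>/dev/null; do
    sample=$(ps -o pcpu=,rss= -p "$myPID") || break   # process may have exited
    read -r cpu rss <<< "$sample"
    # keep the running maxima; awk handles the fractional %CPU comparison
    maxCPU=$(awk -v a="$maxCPU" -v b="$cpu" 'BEGIN { if (b + 0 > a + 0) print b; else print a }')
    maxMEM=$(awk -v a="$maxMEM" -v b="$rss" 'BEGIN { if (b + 0 > a + 0) print b; else print a }')
    sleep 1
done
wait "$myPID"
echo "Peak CPU: ${maxCPU}%  Peak RSS: ${maxMEM} KiB"

For peak memory alone, GNU time also works if it is installed: /usr/bin/time -v runBatch "$batchFile" reports a "Maximum resident set size" line when the command finishes.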
Memory:
cat /proc/meminfo
Then grep whatever you want.
CPU is more complicated; see /proc/stat explained.
Average load:
cat /proc/loadavg
For timing "runBatch" use
time runBatch
like
time sleep 10
Once you've got the PID of your process (e.g. as answered here) you can use the proc(5) file system (with watch(1) and cat(1) or grep(1)), e.g.
watch cat /proc/$myPID/stat
(or use /proc/$myPID/status or /proc/$myPID/statm, or /proc/$myPID/maps for the address space, etc...)
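A nice detail of /proc/$myPID/status is that the kernel already tracks memory high-water marks there, so peak usage does not need to be sampled; for example (field names as found on reasonably recent kernels):

# VmPeak = peak virtual memory size, VmHWM = peak resident set size ("high water mark")
grep -E 'VmPeak|VmHWM' /proc/$myPID/status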
BTW, to run batch jobs you should consider batch (and you might look into crontab(5) to run things periodically)

Get the load, cpu usage and time of executing a bash script

I have a bash script that I plan to run every 5 or 15 mins using crontab based on the load it puts on server.
I can find the running time of the script, but I am not sure how to find the load, memory usage and CPU usage.
Can someone help me?
Also, any suggestions for a rough benchmark that would help me decide whether the script puts too much load on the server and should be run every 15 mins rather than every 5 mins.
Thanks in Advance!
You can use "top -b", top gives the CPU usage, memory usage etc,
Insert these lines in your script, this will process in background and will terminate the process as soon as your testing overs.
ssh server_name "nohup top -b -d 0.5 >> file_name &"
The top process will run in the background because of the &; -d 0.5 gives you the CPU status every 0.5 seconds; redirect the output to file_name for later analysis.
To kill the process after your test, insert the following in your script:
ssh server_name "kill \`ps -elf | grep 'top -b' | grep -v grep | sed 's/ */ /g' |cut -d ' ' -f4\`"
Your main testing script should go between the top command and the command that kills top.
I presumed you are running the script from the client side; if not, ignore "ssh server_name".
If you are running it from the client side, you will be asked for the password because of "ssh"; to avoid this, follow these 3 simple steps.
This will definitely solve the issue.
You can check the following utilities:
pidstat for CPU load, man page
pmap for memory load, man page
You might also need to take measurements for the child processes of your executable in order to collect a summarized total.
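Hedged examples of both, assuming the sysstat package provides pidstat and $PID is the process of interest:

# CPU usage of one process, sampled every second, 5 samples
pidstat -p $PID 1 5

# The same, broken down per thread
pidstat -t -p $PID 1 5

# Memory map summary; the "total" line at the bottom gives mapped and resident size
pmap -x $PID | tail -n 1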
For memory, use free -m. Your actual memory available is the second number next to +/- buffers/cache (in megabytes with -m) (source).
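For example, on older versions of free that still print the -/+ buffers/cache line (newer versions show an "available" column instead):

# Free memory in MB, after discounting buffers/cache
free -m | awk '/buffers\/cache/ { print $4 }'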
For CPU, it's a bit more complicated. Start by looking at cat /proc/stat | grep 'cpu ' (note the space). You'll see something like this:
cpu 2255 34 2290 22625563 6290 127 456
The columns are from left to right, "user, nice, system, idle". CPU usage is usually calculated as (user+nice+system) / (user+nice+system+idle). However, these numbers show the number of "time units" that the CPU has spent doing that thing since boot, and thus are always increasing. If you were to do the aforementioned calculation, you'd get the CPU usage average since boot. To get a point-in-time usage, you have to take 2 samples, find their difference, and calculate the usage from that. To be clear, that will be the average CPU usage between your samples. (source)
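A minimal sketch of that two-sample calculation in bash, using a 1-second interval and only the first four columns, exactly as in the formula above:

# First sample of user, nice, system, idle from the aggregate "cpu " line
read -r _ user1 nice1 sys1 idle1 _ < /proc/stat
sleep 1
# Second sample
read -r _ user2 nice2 sys2 idle2 _ < /proc/stat

busy=$(( (user2 + nice2 + sys2) - (user1 + nice1 + sys1) ))
total=$(( busy + (idle2 - idle1) ))
awk -v b="$busy" -v t="$total" 'BEGIN { printf "CPU usage: %.1f%%\n", 100 * b / t }'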

Get CPU usage in shell script?

I'm running some JMeter tests against a Java process to determine how responsive a web application is under load (500+ users). JMeter will give the response time for each web request, and I've written a script to ping the Tomcat Manager every X seconds which will get me the current size of the JVM heap.
I'd like to collect stats on the server of the % of CPU being used by Tomcat. I tried to do it in a shell script using ps like this:
PS_RESULTS=`ps -o pcpu,pmem,nlwp -p $PID`
...running the command every X seconds and appending the results to a text file. (for anyone wondering, pmem = % mem usage and nlwp is number of threads)
However I've found that this gives a different definition of "% of CPU Utilization" than I'd like - according to the manpages for ps, pcpu is defined as:
cpu utilization of the process in "##.#" format. It is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage.
In other words, pcpu gives me the % CPU utilization for the process for the lifetime of the process.
Since I want to take a sample every X seconds, I'd like to be collecting the CPU utilization of the process at the current time only - similar to what top would give me (CPU utilization of the process since the last update).
How can I collect this from within a shell script?
Use top -b (and other switches if you want different outputs). It will just dump to stdout instead of jumping into a curses window.
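If it's a single process you care about, a hedged one-liner along those lines ($PID being the Tomcat PID from the question; two iterations because top's first sample shows averages rather than a current reading, and assuming the default column layout where %CPU is the ninth field):

top -b -n 2 -d 1 -p "$PID" | tail -n 1 | awk '{ print $9 }'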
The most useful tool I've found for monitoring a server while performing a test such as JMeter on it is dstat. It not only gives you a range of stats from the server, it outputs to csv for easy import into a spreadsheet and lets you extend the tool with modules written in Python.
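For example (flags may vary between dstat versions; jmeter-run.csv is just a placeholder file name):

# CPU, disk, network, paging and system stats every second, mirrored to a CSV file
dstat -cdngy --output jmeter-run.csv 1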
User load: top -b -n 2 |grep Cpu |tail -n 1 |awk '{print $2}' |sed 's/.[^.]*$//'
System load: top -b -n 2 |grep Cpu |tail -n 1 |awk '{print $3}' |sed 's/.[^.]*$//'
Idle load: top -b -n 1 |grep Cpu |tail -n 1 |awk '{print $5}' |sed 's/.[^.]*$//'
Each command outputs a whole-number percentage (the trailing sed strips the decimal part).
Off the top of my head, I'd use the /proc filesystem view of the system state - Look at man 5 proc to see the format of the entry for /proc/PID/stat, which contains total CPU usage information, and use /proc/stat to get global system information. To obtain "current time" usage, you probably really mean "CPU used in the last N seconds"; take two samples a short distance apart to see the current rate of CPU consumption. You can then munge these values into something useful. Really though, this is probably more a Perl/Ruby/Python job than a pure shell script.
You might be able to get the rough data you're after with /proc/PID/status, which gives a Sleep average for the process. Pretty coarse data though.
Also, use 1 as the iteration count, so you get the current snapshot without waiting for another one after $delay time:
top -b -n 1
This will not give you a per-process metric, but the Stress Terminal UI is super useful for knowing how badly you're punishing your boxes. Add the -c flag to make it dump the data to a CSV file.
