CPU % usage of all pid [duplicate] - linux

This question already has answers here:
How to get overall CPU usage (e.g. 57%) on Linux [closed]
(6 answers)
Closed 9 years ago.
I can't obtain the CPU % usage of all PIDs without knowing any program names.
I feel I am close to the solution; this is what I've done so far:
for line in $(pgrep -f chrome); \
do echo -n $line" - "; \
ps -p $line -o %cpu | sed -n 2p | sed 's/ //'; done
In this example I obtain only the Chrome PIDs; as the next step I want the PIDs of all running processes.

You can do this easily with the top command alone.
To order by CPU percentage (descending), you could use top -o -cpu (BSD/macOS top) or top -o %CPU (procps top on Linux).

If you don't want to use top for some reason, here are a couple of other ways I can think of doing this.
> ps -e -o "%p-%C"
Or, if you wanted to do it in a script, something like the following (alternatively, you could just parse ps again or check /proc/<pid>/stat for CPU usage):
#!/bin/bash
shopt -s extglob
for line in /proc/+([0-9]); do
    echo -n "${line##*/}- "
    ps -p "${line##*/}" -o %cpu | sed -n 2p | sed 's/ //'
done
Where
shopt -s extglob Turns on extended file globing in bash
+([0-9]) Matches any file names consisting of one or more digits
${line##*/} Strips everything before and including the last / character
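If you just need the CPU percentage for every PID, the whole loop can also be collapsed into a single ps invocation; a sketch assuming Linux's procps ps (--sort is a procps extension, not available in BSD/macOS ps):

```shell
# One ps call lists every PID with its CPU%, highest first.
ps -eo pid,%cpu --sort=-%cpu | head -n 5
```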

Related

Limit number of parallel jobs in bash [duplicate]

This question already has answers here:
Bash: limit the number of concurrent jobs? [duplicate]
(14 answers)
Closed 1 year ago.
I want to read links from file, which is passed by argument, and download content from each.
How can I do it in parallel with 20 processes?
I understand how to do it with an unlimited number of processes:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
url="$line"
name_download_file_sha="$(echo $url | sha256sum | awk '{print $1}').jpeg"
curl -L $url > saved/$name_download_file_sha &
done < "$filename"
wait
You can add this test:
until [ "$( jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
sleep 1
done
This will maintain at most 20 instances of curl in parallel: the loop waits until 19 or fewer jobs are running before starting another one.
If you are using GNU sleep, you can use sleep 0.5 to shorten the wait time.
So your code will be:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
until [ "$( jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
sleep 1
done
url="$line"
name_download_file_sha="$(echo $url | sha256sum | awk '{print $1}').jpeg"
curl -L $url > saved/$name_download_file_sha &
done < "$filename"
wait
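To see the throttle in action without downloading anything, here is a toy version of the same loop; the sleep commands are stand-ins for the curl calls:

```shell
#!/bin/bash
# Launch 12 background jobs, but never let more than 5 run at once.
max_jobs=5
for i in $(seq 1 12); do
    until [ "$(jobs -r | wc -l)" -lt "$max_jobs" ]; do
        sleep 0.1
    done
    sleep 0.2 &    # stand-in for: curl -L "$url" > saved/... &
done
wait
echo "all done"
```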
xargs -P is the simple solution. It gets somewhat more complicated when you want to save to separate files, but you can use sh -c to add this bit.
: ${processes:=20}
< $filename xargs -P $processes -I% sh -c '
line="$1"
url_file="$line"
name_download_file_sha="$(echo $url_file | sha256sum | awk "{print \$1}").jpeg"
curl -L "$url_file" > "saved/$name_download_file_sha"
' -- %
Based on triplee's suggestions, I've lower-cased the environment variable and changed its name to 'processes' to be more correct.
I've also made the suggested corrections to the awk script to avoid quoting issues.
You may still find it easier to replace the awk script with cut -f1, but you'll need to specify the cut delimiter if it's spaces (not tabs).
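A minimal demonstration of the -P plumbing, with echo standing in for curl and a throwaway /tmp/urls.txt (both are placeholders, not part of the original question):

```shell
# Five fake "URLs", processed by at most 4 parallel sh workers.
printf '%s\n' a b c d e > /tmp/urls.txt
< /tmp/urls.txt xargs -P 4 -I% sh -c 'echo "got: $1"' -- %
```

Output order is not deterministic, since the workers run concurrently.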

Is it possible to do watch logfile with tail -f and pipe updates/changes over netcat to another local system? [duplicate]

This question already has answers here:
Piping tail output though grep twice
(2 answers)
Closed 4 years ago.
There is a file located at $filepath, which grows gradually. I want to print every line that starts with an exclamation mark:
while read -r line; do
if [ -n "$(grep ^! <<< "$line")" ]; then
echo "$line"
fi
done < <(tail -F -n +1 "$filepath")
Then, I rearranged the code by moving the comparison expression into the process substitution to make the code more concise:
while read -r line; do
echo "$line"
done < <(tail -F -n +1 "$filepath" | grep '^!')
Sadly, it doesn't work as expected; nothing is printed to the terminal (stdout).
I prefer to write grep ^\! after tail. Why doesn't the second code snippet work? Why does putting the pipeline inside the process substitution make things different?
PS1. This is how I manually produce the gradually growing file by randomly executing one of the following commands:
echo ' something' >> "$filepath"
echo '!something' >> "$filepath"
PS2. Test under GNU bash, version 4.3.48(1)-release and tail (GNU coreutils) 8.25.
grep is not line-buffered when its stdout isn't connected to a tty. So it's trying to process a block (usually 4 KiB or 8 KiB or so) before generating some output.
You need to tell grep to buffer its output by line. If you're using GNU grep, this works:
done < <(tail -F -n +1 "$filepath" | grep '^!' --line-buffered)
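If your grep lacks --line-buffered, GNU coreutils' stdbuf can force line buffering on any filter instead; a sketch with finite input so the pipeline terminates (with tail -F the buffering behaves the same way):

```shell
# stdbuf -oL makes grep flush each line even when stdout is a pipe.
printf ' something\n!something\n' | stdbuf -oL grep '^!'
# prints: !something
```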

Bash: if statement always succeeding

I have the following if statement to check if a service, newrelic-daemon in this case, is running...
if [ $(ps -ef | grep -v grep | grep newrelic-daemon | wc -l) > 0 ]; then
echo "New Relic is already running."
The problem is it's always returning as true, i.e. "New Relic is already running". Even though when I run the if condition separately...
ps -ef | grep -v grep | grep newrelic-daemon | wc -l
... it returns 0. I expect the if to do nothing here, as the value returned is 0 and my condition requires >0.
Am I overlooking something here?
You are trying to do a numeric comparison in [...] with >. That doesn't work: inside [...], > is an ordinary redirection (here it creates a file named 0 in the current directory), leaving a one-argument test that succeeds whenever the command output is non-empty, which the output of wc -l always is. To compare values as numbers, use -gt instead:
if [ "$(ps -ef | grep -v grep | grep -c newrelic-daemon)" -gt 0 ]; then
The quotation marks around the command expansion prevent a syntax error if something goes horribly wrong (e.g. $PATH set wrong and the shell can't find grep). Since you tagged this bash specifically, you could also just use [[...]] instead of [...] and do without the quotes.
As another Bash-specific option, you could use ((...)) instead of either form of square brackets. This version is more likely to generate a syntax error if anything goes wrong (as the arithmetic expression syntax really wants all arguments to be numbers), but it lets you use the more natural comparison operators:
if (( "$(ps -ef | grep -v grep | grep -c newrelic-daemon)" > 0 )); then
In both cases I used grep -c instead of grep | wc -l; that way I avoided an extra process and a bunch of interprocess I/O just so wc can count lines that grep is already enumerating.
But since you're just checking to see if there are any matches at all, you don't need to do either of those; the last grep will exit with a true status if it finds anything and false if it doesn't, so you can just do this:
if ps -ef | grep -v grep | grep -q newrelic-daemon; then
(The -q keeps grep from actually printing out the matching lines.)
Also, if the process name you're looking for is a literal string instead of a variable, my favorite trick for this task is to modify that string like this, instead of piping through an extra grep -v grep:
if ps -ef | grep -q 'newrelic[-]daemon'; then
You can pick any character to put the square brackets around; the point is to create a regular expression pattern that matches the target process name but doesn't match the pattern itself, so the grep process doesn't find its own ps line.
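You can watch the bracket trick work in isolation; the sleep process below is just a stand-in target:

```shell
# 'sleep[ ]30' matches the sleep process's command line, but not the
# grep process's own ps entry (which contains the literal brackets).
sleep 30 &
pid=$!
if ps -ef | grep -q 'sleep[ ]30'; then
    echo "running"
fi
kill "$pid"
```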
Finally, since you tagged this linux, note that most Linux distros ship with a combination ps + grep command called pgrep, which does this for you without your having to build a pipeline:
if pgrep newrelic-daemon >/dev/null; then
(The MacOS/BSD version of pgrep accepts a -q option like grep, which would let you do without the >/dev/null redirect, but the versions I've found on Linux systems don't seem to have that option.)
There's also pidof; I haven't yet encountered a system that had pidof without pgrep, but should you come across one, you can use it the same way:
if pidof newrelic-daemon >/dev/null; then
Other answers have given you more details. I would do what you are trying to do with:
if pidof newrelic-daemon >/dev/null; then
echo "New Relic is already running."
fi
or even
pidof newrelic-daemon >/dev/null && echo "New Relic is already running."
If you want to compare integers with test you have to use the -gt option. See:
man test
or
man [
@Stephen: Try changing [ to [[ in your code, and close the if block with fi (note that > inside [[ ... ]] compares strings lexicographically; it happens to work here, but -gt is the safer numeric test):
if [[ $(ps -ef | grep -v grep | grep newrelic-daemon | wc -l) > 0 ]]; then
echo "New Relic is already running."
fi

Bash - Command call ported to variable with another variable inside

I believe this is a simple syntax issue on my part, but I have been unable to find another example similar to what I'm trying to do. I have a variable taking in a specific disk location, and I need to use that location in an hdparm/grep command to pull out the max LBA:
targetDrive=$1 #/dev/sdb
maxLBA=$(hdparm -I /dev/sdb |grep LBA48 |grep -P -o '(?<=:\s)[^\s]*') #this works perfect
maxLBA=$(hdparm -I $1 |grep LBA48 |grep -P -o '(?<=:\s)[^\s]*') #this fails
I have also tried
maxLBA=$(hdparm -I 1 |grep LBA48 |grep -P -o '(?<=:\s)[^\s]*')
maxLBA=$(hdparm -I "$1" |grep LBA48 |grep -P -o '(?<=:\s)[^\s]*')
Thanks for the help
So I think here is the solution to your problem. I did basically the same as you, but changed the way I pipe the results into one another:
grep with a regular expression to find the line containing LBA48
cut to retrieve the second field when the resulting string is split on the colon ":"
tr to strip the spaces from the result
Here is my resulting bash script.
#!/bin/bash
target_drive=$1
max_lba=$(sudo hdparm -I "$target_drive" | grep -P -o ".+LBA48.+:.+(\d+)" | cut -d: -f2 | tr -d ' ')
echo "Drive: $target_drive MAX LBA48: $max_lba"
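To check that the variable expansion itself is fine, you can swap hdparm for a printf that emits a fabricated sample line (the sector count below is made up, not real hdparm output; -P needs GNU grep built with PCRE support):

```shell
#!/bin/bash
# Stand-in for `hdparm -I "$target_drive"` so the extraction pipeline
# can be tested without a disk or root access.
target_drive="$1"
sample='LBA48 user addressable sectors: 976773168'
max_lba=$(printf '%s\n' "$sample" | grep LBA48 | grep -P -o '(?<=:\s)[^\s]*')
echo "Drive: $target_drive MAX LBA48: $max_lba"
```

If this prints the number, the quoting pattern is right, and any remaining failure lies in how $1 reaches the script (e.g. testing interactively where $1 is unset) or in hdparm needing root.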

How to get the command line args passed to a running process on unix/linux systems?

On SunOS there is pargs command that prints the command line arguments passed to the running process.
Is there is any similar command on other Unix environments?
There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without the xargs, there will be no spaces between the arguments, because they have been converted to NULs.
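You can try it on the current shell itself, since $$ expands to the shell's own PID:

```shell
# Prints this shell's own command line, NUL separators shown as spaces.
xargs -0 < /proc/$$/cmdline
```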
Full commandline
For Linux & Unix System you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want to get full command line, you can use
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become super user.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find any similar command for Linux, but I would use the following command to get similar output:
tr '\0' '\n' < /proc/<pid>/cmdline
You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference with any of the other responses: it works on CygWin, so you can use it to obtain the full command line of any process running under Windows (execute as elevated if you want data about any elevated/admin process). Any other method for doing this on Windows is more awkward ( for example ).
Furthermore: in my tests, the pgrep way has been the only system that worked to obtain the full path for scripts running inside CygWin's python.
On Linux
cat /proc/<pid>/cmdline
outputs the command line of the process <pid> (command plus args), with each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[@]}:" "${args[@]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release
Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/ /g' && echo
In this way cat prints NUL characters as ^@, and then you replace them with a space using sed; echo prints a newline.
Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline
ps -eo pid,args prints the PID and the full command line.
You can simply use:
ps -o args= -f -p ProcessPid
In addition to all the above ways to convert the text, if you simply use strings, it will put each argument on its own line by default, with the added benefit that it may also keep any characters that could scramble your terminal from appearing.
Both output in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is... is there a way to see the real command line of a process in Linux that has been altered so that the cmdline contains the altered text instead of the actual command that was run.
On Solaris
ps -eo pid,comm
similar can be used on unix like systems.
On Linux, with bash, to output as quoted args so you can edit the command and rerun it
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## setup initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[@]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## setup initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[@]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
If you want to get a long-as-possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux & OSX:
ps -ww -o pid,command [-p <pid> ... ]
Try ps -ef in a Linux terminal. This will show:
1. All running processes, with their command lines and their PIDs
2. The program that initiated each process
Afterwards you will know which process to kill.
