How to pipe all the output of "ps" into a shell script for further processing? - linux

When I run this command:
ps aux|awk {'print $1,$2,$3,$11'}
I get a listing of the user, PID, CPU% and the actual command.
I want to pipe all those listings into a shell script, check the CPU%, and if it is greater than, say, 5, kill the process via its PID.
I tried piping it to a simple shell script, i.e.
ps aux|awk {'print $1,$2,$3,$11'} | ./myscript
where the content of my script is:
#!/bin/bash
# testing using positional parameters
echo "$1 $2 $3 $4"
But I get a blank output. Any idea how to do this?
Many thanks!

If you use awk, you don't need an additional bash script. Also, it is a good idea to reduce the output of the ps command so you don't have to deal with extra information:
ps acxho user,pid,%cpu,cmd | awk '$3 > 5 {system("echo kill " $2)}'
Explanation
The extra ps flags I use:
c: command only, no extra arguments
h: no header, good for scripting
o: output format. In this case, only output the user, PID, %CPU, and command
The awk command compares the %CPU, which is the third column, with a threshold (5). If it is over the threshold, it issues the system command to kill that process.
Note the echo in the command. Once you are certain the script works the way you like, remove the word echo from the command to execute the kill for real.
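If you would rather keep this in a small script so the threshold is easy to change, the same pipeline can be wrapped like so (a sketch; the wrapper script and its default threshold of 5 are assumptions, not part of the original answer):
#!/bin/bash
# Sketch: same pipeline as above, with the threshold as an optional argument
threshold=${1:-5}    # %CPU threshold, defaults to 5

# c = command name only, h = no header, o = chosen output columns
# Remove the "echo" once you trust the output.
ps acxho user,pid,%cpu,cmd | awk -v thr="$threshold" '$3 > thr {system("echo kill " $2)}'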

Your script needs to read its input
#!/bin/bash
while read a b c d; do
echo $a $b
done
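To make that skeleton do what the question asks, the loop can compare the third field (%CPU) against the threshold and act on the PID. A sketch, assuming the same four-column ps | awk pipeline feeds it, with an echo left in as a dry run:
#!/bin/bash
# Reads "user pid cpu command" lines from standard input
while read -r user pid cpu cmd; do
    # %CPU is a float, so compare with awk instead of bash integer arithmetic
    if awk -v c="$cpu" 'BEGIN { exit !(c > 5) }'; then
        echo "would kill $pid ($cmd, $cpu% CPU, owned by $user)"
        # kill "$pid"    # uncomment once the output looks right
    fi
done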

I think you can get it using xargs command to pass the AWK output to your script as arguments:
ps aux|awk {'print $1,$2,$3,$11'} | xargs ./myscript
Some extra info about xargs: http://en.wikipedia.org/wiki/Xargs
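One caveat: by default xargs passes everything to a single invocation of the script. With -n 4, the script runs once per process and receives the four fields as $1..$4. A sketch of what ./myscript could look like under that assumption (the header line of ps aux will also come through, so real code should skip it):
#!/bin/bash
# Invoked by: ps aux | awk '{print $1,$2,$3,$11}' | xargs -n 4 ./myscript
# Arguments: user pid cpu command
echo "user=$1 pid=$2 cpu=$3 command=$4"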

When piping one process into another on Linux (or other POSIX-compliant systems), the output is not given as arguments to the receiving process. Instead, the standard output of the first process is piped into the standard input of the other process.
Because of this, your script cannot work as written. $1...$n access the arguments that were passed to the script; as there are none, the echo produces only a blank line. Instead, you have to read standard input into variables with the read command (as pointed out by William).

The pipe '|' redirects the standard output of the left to the standard input of the right. In this case, the output of the ps goes to the input of awk, then the output of awk goes to the stdin of the script.
Therefore your script needs to read its STDIN.
#!/bin/bash
read var1 var2 var3 ...
Then you can do whatever you want with those variables.
For more info, type in bash: help read
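For example, read splits one line of standard input on whitespace into the named variables, with the last variable taking the rest of the line (a quick demonstration; the sample values are made up):
echo "root 1 0.0 /sbin/init --system" | {
    read -r user pid cpu cmd
    echo "user=$user pid=$pid cpu=$cpu cmd=$cmd"
}
# prints: user=root pid=1 cpu=0.0 cmd=/sbin/init --system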

If I understood your problem correctly, you want to kill every process that exceeds X% CPU (using ps aux).
Here is the solution using AWK:
ps aux | grep -v "%CPU" | awk '{if ($3 > XXX) { print "Killing process with PID "$2", called "$11", consuming "$3"% and launched by "$1; system( "kill -9 " $2 );}}'
Where XXX is your threshold (% of CPU).
It also prints some info about the killed process; if that is not desired, just remove the print statement.
You can add more filters, e.g. do not kill root's processes...
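Instead of editing XXX into the command, the threshold can be passed in with awk -v, and the header row can be skipped with NR > 1 rather than grep -v. A sketch along those lines, again with an echo guard:
threshold=5
ps aux | awk -v thr="$threshold" 'NR > 1 && $3 > thr {
    print "Killing process with PID " $2 " (" $11 "), consuming " $3 "% and launched by " $1
    system("echo kill -9 " $2)    # drop the echo to kill for real
}'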

Try putting myscript in front like this:
./myscript `ps aux|awk {'print $1,$2,$3,$11'}`
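With that approach every field arrives as a positional parameter, so myscript has to walk them four at a time; a sketch (the ps header and commands containing spaces will still throw off the grouping):
#!/bin/bash
# All fields arrive as arguments: user pid cpu cmd user pid cpu cmd ...
while [ "$#" -ge 4 ]; do
    echo "user=$1 pid=$2 cpu=$3 cmd=$4"
    shift 4
done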

Related

Store result of "ps -ax" for later iterating through it

When I do
ps -ax|grep myApp
I get the one line with PID and stuff of my app.
Now, I'd like to process the whole result of ps -ax (without grep, so the full output):
Either store it in a variable and grep from it later
Or go through the results in a for loop, e.g. like that:
for a in $(ps -ax)
do
echo $a
done
Unfortunately, this splits on every space, not on newlines the way |grep does.
Any ideas, how I can accomplish one or the other (grep from variable or for loop)?
Important: No bash please, only POSIX, so #!/bin/sh
Thanks in advance
As stated above, a while loop can be helpful here.
One more useful thing is the --no-headers argument, which makes ps skip the header.
Or, even better, specify the exact columns you need to process, like ps --no-headers -o pid,command ax
The overall code would look like
processes=$(ps --no-headers -o pid,command ax)
echo "$processes" | while read pid command; do
    echo "we have process with pid $pid and command line $command"
done
The only downside to this approach is that the commands inside the while loop are executed in a subshell, so if you need to export some variable to the parent process you'll have to do it using some form of inter-process communication.
I usually dump the results into a temp file created before the while loop and read them after the loop has finished.
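That temp-file workaround looks roughly like this in plain POSIX sh (a sketch; it assumes mktemp is available, and the counter is just a placeholder for whatever state you need to carry out of the loop):
#!/bin/sh
tmp=$(mktemp) || exit 1
ps --no-headers -o pid,command ax > "$tmp"

count=0
while read -r pid command; do
    echo "we have process with pid $pid and command line $command"
    count=$((count + 1))
done < "$tmp"

# $count is still visible here because the loop ran in the current shell
echo "processed $count processes"
rm -f "$tmp"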
I found a solution by replacing the spaces while executing the command:
result=$(ps -aux|sed 's/ /_/g')
You can also make it more filter-friendly by squeezing repeated spaces first:
result=$(ps -aux| tr -s ' '|sed 's/ /_/g')
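If you then iterate over $result, the underscores can be turned back into spaces per line, e.g. (a sketch; note that underscores that were part of the original command lines get converted too):
for line in $result; do
    echo "$line" | tr '_' ' '
done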

How to show only sleeping processes

How can I display only Sleeping (S) processes from /proc?
I want to display only the processes which are sleeping, using the /proc/[pid]/status files.
I tried using egrep but it doesn't seem to work properly.
There is a file in /proc/$PID/ called status; you can grep it like so:
status=sleeping
for pid in /proc/[0-9]*; {
    state=$(grep $status $pid/status)
    [[ $state ]] && echo ${pid//'/proc/'/}
}
Or using parameter expansion:
pids=( $(grep -l $status /proc/*/status) ); echo ${pids[@]//[!0-9]/}
To list the state and PID of all processes, use:
ps -eo s,pid
It shows the process state and the PID.
To filter out only sleeping processes you can use awk:
ps h -eo s,pid | awk '{ if ($1 == "S") print $2; }'
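If you also want to see what those sleeping processes are, the same idea extends to another column (a sketch):
ps h -eo s,pid,comm | awk '$1 == "S" { print $2, $3 }'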
If you don't want the full file path, but exactly and only the PIDs, try awk.
$: awk '/sleeping/{ $0=FILENAME; gsub(/[^0-9]/, ""); print $0 }' /proc/[0-9]*/status
It's a single, efficient process that runs across all the files and outputs only a set of PIDs suitable for capture into an array.
Of course, you could also get the full path this way if you wanted that -
$: awk '/sleeping/{ print FILENAME }' /proc/[0-9]*/status
Or use sed
$: sed -n '/sleeping/F' /proc/[0-9]*/status
But these basically just do the same thing as KamilCuk suggested with grep -l sleeping /proc/[0-9]*/status.
If you just really wanted a reasonably efficient bash-only version, here's a retool of Ivan's:
$: for proc in /proc/[0-9]*/status
do case "$(<$proc)" in
*sleeping*) echo "${proc//[^0-9]/}" ;;
esac
done
How can I display only Sleeping (S) processes from /proc?
Read more about proc(5). Every utility that queries processes goes through /proc/ (because there is no other way to ask the kernel about process state).
However, you should consider using pgrep(1). You want to run:
pgrep --runstates S
but you might need to compile procps-ng from source code (because you need version 3.3.16)
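With a new enough pgrep, its output can be fed back to ps if you also want the command names (a sketch; pgrep -d sets the delimiter so the PIDs come out comma-separated for ps -p):
ps -o pid,comm -p "$(pgrep -d, --runstates S)"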

referencing stdout in a command that has been piped into

I want to make a simple dmenu command that reads a file of commands and names, displays the names with dmenu, then takes dmenu's selection and runs the associated command by looking it up in the file again.
I got to the point where dmenu displays the names, but I don't really know where to go from there. Learning bash is a really daunting task to me and I don't really know where to start with this seemingly simple script/command.
here is the file:
Pushbullet
google-chrome-stable --app=https://www.pushbullet.com
Steam
steam
Chrome
google-chrome-stable
Libre Office
libreoffice
Transmission
transmission-qt
Audio Control Panel
sudo pavucontrol & bluberry
and here is what I have so far for my command:
awk 'NR % 2 != 0' /home/rocco/programlist | dmenu | ??(grep -l "stdout" /home/rocco/programlist....)
My thinking was that I could somehow pipe the name of the application into grep or awk, get its line number, add one, and pipe that command into sh.
Thanks
I have no experience with dmenu, but if I understand how it works correctly, this should do what you want. Wrapping a command in $(…) captures its output, which we can store in a variable and pass on to another command.
#!/bin/bash
plist="/home/rocco/programlist"
# pipe every second line to dmenu
selected=$(awk 'NR % 2 != 0' "$plist" | dmenu)
# search for the selected item, get the command after it
cmd=$(grep -A1 "$selected" "$plist" | tail -n 1)
# run the command
$cmd
Worth mentioning a mistake in your question: dmenu writes to stdout, or standard output, but the next program in line would be reading stdin, or standard input. In any case, grep expects its search pattern as an argument rather than on standard input, which is why I've saved dmenu's output to a variable instead of trying to pipe it somewhere.
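One refinement worth considering: grep treats $selected as a regex and matches substrings, so a name that happens to occur inside one of the command lines (or that contains regex characters) could pick the wrong entry. Matching the whole line literally avoids that; a sketch of the changed line, keeping the rest of the script the same:
# -F: literal string, -x: match the whole line, --: end of options
cmd=$(grep -F -x -A1 -- "$selected" "$plist" | tail -n 1)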
Assuming you have programlist.txt in the working directory you can use:
awk 'NR%2 !=0' programlist.txt |dmenu |awk '{system("grep --no-group-separator -A 1 '"'"'"$0"'"'"' programlist.txt");}' |awk '{if(NR==2){system($0);}}'
Note the quoting of the $0 in the first awk invocation. This is necessary to handle names with spaces in them, like "Libre Office".

Redirecting linux cout to a variable and the screen in a script

I am currently trying to make a script file that runs multiple other script files on a server. I would like to display the output of these scripts to the screen IN ADDITION to passing it into grep so I can do error testing. Currently I have written this:
status=$(SOMEPROCESS | grep -i "SOMEPROCESS started completed correctly")
I do further error handling below this using the variable status, so I would like to display SOMEPROCESS's output to the screen for error reference. This is a read-only server and I cannot save the output to a log file.
You need to use the tee command. It will be slightly fiddly, since tee writes to a file handle; however, you could create a file descriptor using a pipe.
Or (simpler) for your use case:
Start the script without grep and pipe it through tee, i.e. SOMEPROCESS | tee /my/safely/generated/filename. Then run tail -f /my/safely/generated/filename | grep -i "my grep pattern" separately.
You can use process substitution together with tee:
SOMEPROCESS | tee >(grep ...)
This will use an anonymous pipe and pass /dev/fd/... as file name to tee (or a named pipe on platforms that don't support /dev/fd/...).
Because SOMEPROCESS is likely to buffer its output when not talking to a terminal, you might see significant lag in screen output.
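If the goal is specifically to both watch the output and keep the grep result in a variable, tee can also write a copy straight to the terminal while the end of the pipeline is captured (a sketch; it assumes the script runs attached to a terminal so /dev/tty exists):
status=$(SOMEPROCESS | tee /dev/tty | grep -i "SOMEPROCESS started completed correctly")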
I'm not sure whether I understood your question exactly.
I think you want to get the output of SOMEPROCESS, test it, and print it out when there are errors. If so, the code below may help you:
s=$(SOMEPROCESS)
grep -q 'SOMEPROCESS started completed correctly' <<< "$s"
if [[ $? -ne 0 ]]; then
    # specified string not found in the output, meaning SOMEPROCESS did not start correctly
    echo "$s"
fi
But this code stores all of the output in memory; if the output is big enough, there is an OOM risk.

tail not providing output in bash script

I have written a bash script that filters 'tail' output. The entire command
tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]}'
works fine from the CLI but not when placed in the bash script:
#!/bin/bash
phonenumber=$(tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]}')
echo "$phonenumber" >> test.log
which doesn't output anything (2135551234 is the expected output string). I have tried writing to the log file and writing just to stdout, but neither works.
I have tried the script using 'cat' instead of 'tail' and that works fine, but I don't want to dump the output of the entire file, hence the use of 'tail'.
I have also tried using 'tee', but to no avail.
The end goal of this script will be to send the phone number, as it comes into the PBX, to a serial device on another system to be used as the CID.
Thanks for all your help in advance.
Try this:
phonenumber=$(tail -f /var/log/asterisk/messages | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) {print a[1]; exit}')
Your version doesn't work because tail -f never exits, so the pipeline runs forever and the command substitution never completes. Adding exit to the awk script terminates the loop when the first phone number is found: awk exits immediately and its output is put into the variable, and tail -f gets a SIGPIPE signal when it tries to write the next line to the pipe, which causes it to exit.
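If the end goal is to forward every number as it arrives rather than just the first one, a read loop avoids the variable entirely; a sketch (the fflush() keeps gawk from buffering its output inside the pipeline):
#!/bin/bash
tail -f /var/log/asterisk/messages \
    | awk 'match($12, /[^0-9]91([0-9]{10})#default/, a) { print a[1]; fflush() }' \
    | while read -r phonenumber; do
          echo "$phonenumber" >> test.log
          # send "$phonenumber" on to the serial device here
      done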
