Bash script to get specific user(s) id and processes count - linux

I need a bash script to count the processes of SPECIFIC users or of all users. Zero, one or more arguments can be entered. For example
./myScript.sh root daemon
should execute like this:
root 92
daemon 8
2 users has total processes: 100
If nothing is entered as parameter, then all users should be listed:
uuidd 1
awkd 2
daemon 1
root 210
kklmn 6
5 users has total processes: 220
What I have so far is a script for all users, and it works fine (with some warnings). I just need the part where arguments are handled (some kind of filter on the results). Here is the script for all users:
cntp=0 #process counter
cntu=0 #user counter
ps aux |
awk 'NR>1{tot[$1]++; cntp++}
END{for(id in tot){printf "%s\t%4d\n",id,tot[id]; cntu++}
printf "%4d users has total processes:%4d\n", cntu, cntp}'

#!/bin/bash
users=$@
args=()
if [ $# -eq 0 ]; then
# all processes
args+=(ax)
else
# user processes, comma-separated list of users
args+=(-u${users// /,})
fi
# print the user field without header
args+=(-ouser=)
ps "${args[#]}" | awk '
{ tot[$1]++ }
END{ for(id in tot){ printf "%s\t%4d\n", id, tot[id]; cntu++ }
printf "%4d users has total processes:%4d\n", cntu, NR}'
The ps arguments are stored in the array args and select either all processes with ax or the given users' processes in the form -uuser1,user2;
-ouser= prints only the user field, without a header.
In the awk script I only removed the NR>1 test and the variable cntp, which can be replaced by NR.
Possible invocations:
./myScript.sh
./myScript.sh root daemon
./myScript.sh root,daemon
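As a quick sanity check, a hypothetical debug line (not part of the answer) placed just before the ps call shows which command the args array produces:
printf 'ps'; printf ' %q' "${args[@]}"; printf '\n'
# ./myScript.sh              ->  ps ax -ouser=
# ./myScript.sh root daemon  ->  ps -uroot,daemon -ouser=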

The following seems to work:
ps axo user |
awk -v args="$(IFS=,; echo "$*")" '
BEGIN {
# split args on comma
split(args, users, ",");
# associative array with user as indexes
for (i in users) {
enabled[users[i]] = 1
}
}
NR > 1 {
tot[$1]++;
cntp++;
}
END {
for(id in tot) {
# if we passed some arguments
# and its disabled
if (length(args) && enabled[id] == 0) {
continue
}
printf "%s\t%4d\n", id, tot[id];
cntu++;
}
printf "%4d users has total processes:%4d\n", cntu, cntp
}
'
Tested in repl.
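The $(IFS=,; echo "$*") part is what joins the positional parameters with commas; a quick standalone check of that idiom (outside the script):
set -- root daemon uuidd
echo "$(IFS=,; echo "$*")"   # prints: root,daemon,uuidd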

Related

Display users with at least 10 running processes on Linux terminal

How do I write the user names and number of processes to standard output on a Linux terminal, showing only users with at least 10 running processes?
ps -eo user,cmd | awk '{ usr[$1]+=1;prc[$1][$2]="" } END { for (i in usr) { if (usr[i]>=10) { print i" - "usr[i];for (j in prc[i]) { print i" - "j } } } }'
This can be achieved by using the -o option of ps to remove any "noise" and then piping the output through to awk.
Track the number of processes per user by creating an array usr, and track the processes for each user by creating a two-dimensional array prc (the prc[$1][$2] arrays-of-arrays syntax needs GNU awk 4.0+). At the end, loop through the arrays, printing the count for each user whose process count is greater than or equal to 10, followed by the actual processes.
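If GNU awk is not available, a sketch of the same idea using the classic SUBSEP trick to simulate the two-dimensional array (not part of the original answer, so treat it as an assumption-based rewrite):
ps -eo user,cmd | awk '
{ usr[$1]+=1; prc[$1,$2]="" }   # count per user; remember each user,command pair
END {
  for (i in usr)
    if (usr[i]>=10) {
      print i" - "usr[i]
      for (k in prc) {          # walk all user,command keys
        split(k, a, SUBSEP)
        if (a[1]==i) print i" - "a[2]
      }
    }
}'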

fastest way to sum the file sizes by owner in a directory

I'm using the below command using an alias to print the sum of all file sizes by owner in a directory
ls -l $dir | awk ' NF>3 { file[$3]+=$5 } \
END { for( i in file) { ss=file[i]; \
if(ss >=1024*1024*1024 ) {size=ss/1024/1024/1024; unit="G"} else \
if(ss>=1024*1024) {size=ss/1024/1024; unit="M"} else {size=ss/1024; unit="K"}; \
format="%.2f%s"; res=sprintf(format,size,unit); \
printf "%-8s %12d\t%s\n",res,file[i],i }}' | sort -k2 -nr
but it doesn't seem to be fast all the time.
Is it possible to get the same output in some other way, but faster?
Another perl one, that displays total sizes sorted by user:
#!/usr/bin/perl
use warnings;
use strict;
use autodie;
use feature qw/say/;
use File::Spec;
use Fcntl qw/:mode/;
my $dir = shift;
my %users;
opendir(my $d, $dir);
while (my $file = readdir $d) {
my $filename = File::Spec->catfile($dir, $file);
my ($mode, $uid, $size) = (stat $filename)[2, 4, 7];
$users{$uid} += $size if S_ISREG($mode);
}
closedir $d;
my @sizes = sort { $a->[0] cmp $b->[0] }
map { [ getpwuid($_) // $_, $users{$_} ] } keys %users;
local $, = "\t";
say @$_ for @sizes;
Get a listing, add up sizes, and sort it by owner (with Perl)
perl -wE'
chdir (shift // ".");
for (glob ".* *") {
next if not -f;
($owner_id, $size) = (stat)[4,7]
or do { warn "Trouble stat for: $_"; next };
$rept{$owner_id} += $size
}
say (getpwuid($_)//$_, " => $rept{$_} bytes") for sort keys %rept
'
I didn't get to benchmark it, and it'd be worth trying it out against an approach where the directory is iterated over, as opposed to glob-ed (while I found glob much faster in a related problem).
I expect good runtimes in comparison with ls, which slows down dramatically as the file list in a single directory gets long. This is due to the system, so Perl will be affected as well, but as far as I recall it handles it far better. However, I've seen a dramatic slowdown only once entries get to half a million or so, not a few thousand, so I am not sure why it runs slow on your system.
If this need be recursive in directories it finds then use File::Find. For example
perl -MFile::Find -wE'
$dir = shift // ".";
find( sub {
return if not -f;
($owner_id, $size) = (stat)[4,7]
or do { warn "Trouble stat for: $_"; return };
$rept{$owner_id} += $size
}, $dir );
say (getpwuid($_)//$_, "$_ => $rept{$_} bytes") for keys %rept
'
This scans a directory with 2.4 GB of mostly small files, spread over a hierarchy of subdirectories, in a little over 2 seconds. du -sh took around 5 seconds (the first time round).
It is reasonable to bring these two into one script:
use warnings;
use strict;
use feature 'say';
use File::Find;
use Getopt::Long;
my %rept;
sub get_sizes {
return if not -f;
my ($owner_id, $size) = (stat)[4,7]
or do { warn "Trouble stat for: $_"; return };
$rept{$owner_id} += $size
}
my ($dir, $recurse) = ('.', '');
GetOptions('recursive|r!' => \$recurse, 'directory|d=s' => \$dir)
or die "Usage: $0 [--recursive] [--directory dirname]\n";
($recurse)
? find( { wanted => \&get_sizes }, $dir )
: find( { wanted => \&get_sizes,
preprocess => sub { return grep { -f } @_ } }, $dir );
say (getpwuid($_)//$_, " => $rept{$_} bytes") for keys %rept;
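Possible invocations (the script name sizes.pl is hypothetical; the options follow from the GetOptions spec above):
perl sizes.pl                            # current directory, non-recursive
perl sizes.pl -d /some/dir               # given directory, non-recursive
perl sizes.pl --directory /some/dir -r   # given directory, recursive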
I find this to perform about the same as the one-dir-only code above, when run non-recursively (default as it stands).
Note that the File::Find::Rule interface has many conveniences but is slower in some important use cases, which clearly matters here. (That analysis should be redone since it's a few years old.)
Parsing output from ls - bad idea.
How about using find instead?
start in directory ${dir}
limit to that directory level (-maxdepth 1)
limit to files (-type f)
print a line with user name and file size in bytes (-printf "%u %s\n")
run the results through a perl filter
split each line (-a)
add to a hash under key (field 0) the size (field 1)
at the end (END {...}) print out the hash contents, sorted by key, i.e. user name
$ find ${dir} -maxdepth 1 -type f -printf "%u %s\n" | \
perl -ane '$s{$F[0]} += $F[1]; END { print "$_ $s{$_}\n" foreach (sort keys %s); }'
stefanb 263305714
A solution using Perl:
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
use File::Spec;
my %users;
foreach my $dir (@ARGV) {
opendir(my $dh, $dir);
# files in this directory
while (my $entry = readdir($dh)) {
my $file = File::Spec->catfile($dir, $entry);
# only files
if (-f $file) {
my($uid, $size) = (stat($file))[4, 7];
$users{$uid} += $size
}
}
closedir($dh);
}
print "$_ $users{$_}\n" foreach (sort keys %users);
exit 0;
Test run:
$ perl dummy.pl .
1000 263618544
Interesting difference. The Perl solution discovers 3 more files in my test directory than the find solution. I have to ponder why that is...
Did I see some awk in the OP? Here is one in GNU awk using the filefuncs extension:
$ cat bar.awk
#load "filefuncs"
BEGIN {
FS=":" # passwd field sep
passwd="/etc/passwd" # get usernames from passwd
while ((getline < passwd)>0)
users[$3]=$1
close(passwd) # close passwd
if(path="") # set path with -v path=...
path="." # default path is cwd
pathlist[1]=path # path from the command line
# you could have several paths
fts(pathlist,FTS_PHYSICAL,filedata) # don't follow symlinks (vs. FTS_LOGICAL)
for(p in filedata) # p for paths
for(f in filedata[p]) # f for files
if(filedata[p][f]["stat"]["type"]=="file") # mind files only
size[filedata[p][f]["stat"]["uid"]]+=filedata[p][f]["stat"]["size"]
for(i in size)
print (users[i]?users[i]:i),size[i] # print username if found else uid
exit
}
Sample outputs:
$ ls -l
total 3623
drwxr-xr-x 2 james james 3690496 Mar 21 21:32 100kfiles/
-rw-r--r-- 1 root root 4 Mar 21 18:52 bar
-rw-r--r-- 1 james james 424 Mar 21 21:33 bar.awk
-rw-r--r-- 1 james james 546 Mar 21 21:19 bar.awk~
-rw-r--r-- 1 james james 315 Mar 21 19:14 foo.awk
-rw-r--r-- 1 james james 125 Mar 21 18:53 foo.awk~
$ awk -v path=. -f bar.awk
root 4
james 1410
Another:
$ time awk -v path=100kfiles -f bar.awk
root 4
james 342439926
real 0m1.289s
user 0m0.852s
sys 0m0.440s
Yet another test with a million empty files:
$ time awk -v path=../million_files -f bar.awk
real 0m5.057s
user 0m4.000s
sys 0m1.056s
Not sure why the question is tagged perl when awk is being used.
Here's a simple perl version:
#!/usr/bin/perl
chdir($ARGV[0]) or die("Usage: $0 dir\n");
map {
if ( ! m/^[.][.]?$/o ) {
($s,$u) = (stat)[7,4];
$h{$u} += $s;
}
} glob ".* *";
map {
$s = $h{$_};
$u = !( $s >>10) ? ""
: !(($s>>=10)>>10) ? "k"
: !(($s>>=10)>>10) ? "M"
: !(($s>>=10)>>10) ? "G"
: ($s>>=10) ? "T"
: undef
;
printf "%-8s %12d\t%s\n", $s.$u, $h{$_}, getpwuid($_)//$_;
} keys %h;
glob gets our file list
m// discards . and ..
stat the size and uid
accumulate sizes in %h
compute the unit by bitshifting (>>10 is integer division by 1024; e.g. a 3,000,000-byte total comes out as 2M)
map uid to username (// provides fallback)
print results (unsorted)
NOTE: unlike some other answers, this code doesn't recurse into subdirectories
To exclude symlinks, subdirectories, etc., change the if to appropriate -X tests (e.g. (-f $_), or (!-d $_ and !-l $_)). See the perl docs on the _ filehandle optimisation for caching stat results.
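For instance, a minimal sketch of that change, assuming the same %h, $s and $u variables as above; lstat fills the stat cache, so the later -f _ and stat _ reuse it instead of stat-ing the entry again:
map {
    if ( ! m/^[.][.]?$/ ) {
        lstat;                        # stat the entry itself and fill the "_" cache
        if ( -f _ ) {                 # plain files only; symlinks are not -f after lstat
            ($s,$u) = (stat _)[7,4];  # reuse the cached stat results
            $h{$u} += $s;
        }
    }
} glob ".* *";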
Using datamash (and Stefan Becker's find code):
find ${dir} -maxdepth 1 -type f -printf "%u\t%s\n" | datamash -sg 1 sum 2
Here datamash sorts the input (-s), groups it by field 1, the user name (-g 1), and prints the sum of field 2, the size in bytes.

Bash, print 0 in terminal each time a non recognised argument is input

I have a bash program which extracts marks from a file that looks like this:
Jack ex1=5 ex2=3 quiz1=9 quiz2=10 exam=50
I want the code to execute such that when I input into terminal:
./program -ex1 -ex2 -ex3
Jack does not have an ex3 in his data, so an output of 0 will be returned:
Jack 5 3 0
How do I code my program to output 0 for each unrecognized argument?
If I understand what you are trying to do, it isn't that difficult. What you need to do is read each line into a name and the remainder into marks. (input is read from stdin)
Then for each argument given on the command line, check if the first part matches the beginning of any grade in marks (the left side of the = sign). If it does, then save the grade (right side of the = sign) and set the found flag to 1.
After checking all marks against the first argument, if the found flag is 1, output the grade, otherwise output 0. Repeat for all command line arguments. (and then for all students in file) Let me know if you have questions:
#!/bin/bash
declare -i found=0 # initialize variables
declare -i grade=0
while read -r name marks; do # read each line into name & marks
printf "%s" "$name" # print student name
for i in "$#"; do # for each command line argument
found=0 # reset found (flag) 0
for j in $marks; do # for each set of marks check for match
[ $i = -${j%=*} ] && { found=1; grade=${j#*=}; } # if match save grade
done
[ $found -eq 1 ] && printf " %d" $grade || printf " 0" # print grade or 0
done
printf "\n" # print newline
done
exit 0
Output
$ bash marks_check.sh -ex1 -ex2 -ex3 < dat/marks.txt
Jack 5 3 0

Get a variable from an array in perl

I am running the following command and getting 4 lines of output.
userid#server:/home/userid# ps -ef|grep process
This is the output for the command.
userid 10117 9931 0 06:25 pts/0 00:00:00 grep process
userid 15329 1 0 Jul11 ? 00:03:40 process APP1
userid 15334 15329 1 Jul11 ? 2-00:40:53 process1 APP1
userid 15390 15334 0 Jul11 ? 05:19:31 process2 APP1
I want to save the value APP1 to a variable using perl. So I want an output like $APP = APP1.
Try this (your output is in this case in the file in.txt):
perl -ne 'print "$1\n" if /(APP\d+)/;' in.txt
Prints:
APP1
APP1
APP1
Perhaps using an array for the captured APP1s would be helpful:
use strict;
use warnings;
my @apps;
while (<DATA>) {
push @apps, $1 if /process\d*\s+(.+)/;
}
print "$_\n" for @apps;
__DATA__
userid 10117 9931 0 06:25 pts/0 00:00:00 grep process
userid 15329 1 0 Jul11 ? 00:03:40 process APP1
userid 15334 15329 1 Jul11 ? 2-00:40:53 process1 APP1
userid 15390 15334 0 Jul11 ? 05:19:31 process2 APP1
Output:
APP1
APP1
APP1
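If only a single scalar is needed (the $APP = APP1 from the question), a minimal sketch along the same lines, reading the same ps output on stdin and stopping at the first match:
use strict;
use warnings;
my $app;
while (<>) {
    next unless /\bprocess\w*\s+(\S+)\s*$/;   # the "grep process" line has nothing after "process", so it is skipped
    $app = $1;                                # e.g. APP1
    last;
}
print "$app\n" if defined $app;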
Is APP1 the last entry on the command line? Or, is it the second word after the process* command?
If it's the last word on the line, you could use this:
use strict;
use warnings;
use autodie;
open my $command_output, "-|", "pgrep -fl process";
while ( my $command = <$command_output> ) {
    $command =~ /(\w+)$/;
    my $app = $1; #The last word on the line...
    print "$app\n"; # hypothetical use of $app
}
Otherwise, things get a bit more tricky. I am using pgrep instead of ps -ef | grep. The ps command returns a header, plus lots of fields. You need to split them, and parse them all. Plus, it even shows you the grep command you used to get the processes you're interested in.
The pgrep command with the -f and -l parameters returns no header and returns just the process ID followed by the full command. This makes it much easier to parse with a regular expression. (If you don't know about regular expressions, you need to learn about them.)
open my $command_output, "-|", "pgrep -fl process";
while ( my $command = <$command_output> ) {
    if ( not $command =~ /^\d+\s+process\w+\s+(\w+)/ ) {
        next;
    }
    my $app = $1; #The second word in the returned command...
    print "$app\n"; # hypothetical use of $app
}
There's no need to split or mess about. There's no header to skip. The regular expression matches the numeric process ID, the process command, and then selects the second word. I even check that the output of pgrep matches what I expect; otherwise, I move on to the next line.
I used a single-line command to get the required result.
#!/usr/bin/perl
use strict;
use warnings;
my $app1 = ( split /\s+/, `pgrep -fl process1` )[-1]; # -l so the full command line, not just the PID, is returned
print "$app1\n";

grep lines before and after in aix/ksh shell

I want to extract lines before and after a matched pattern.
eg: if the file contents are as follows
absbasdakjkglksagjgj
sajlkgsgjlskjlasj
hello
lkgjkdsfjlkjsgklks
klgdsgklsdgkldskgdsg
I need to find hello and display the line before and after 'hello'.
The output should be:
sajlkgsgjlskjlasj
hello
lkgjkdsfjlkjsgklks
This is possible with GNU grep, but I need a method that works in an AIX / ksh shell where no GNU tools are installed.
sed -n '/hello/{x;G;N;p;};h' filename
The trailing h copies every line into the hold space; when a line matches hello, x swaps the pattern and hold spaces (so the previous line is now in the pattern space), G appends the matched line back from the hold space, N appends the following line, and p prints the resulting three lines.
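If sed feels too cryptic, a small awk sketch (not from the original answer, but plain POSIX awk, so it should also work on stock AIX) prints one line of context on each side; note that a match on the very first line prints an empty line before it:
awk '/hello/ { print prev; print; hit=1; next }   # on a match, print the previous and current lines
     hit     { print; hit=0 }                     # print the single line after the match
             { prev=$0 }                          # remember the current line for the next cycle
' filename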
I've found it is generally less frustrating to build the GNU coreutils once, and benefit from many more features: http://www.gnu.org/software/coreutils/
Since you'll have Perl on the machine, you could use the following code, but you'd probably do better to install the GNU utilities. This has options -b n1 for lines before and -f n2 for lines following the match. It works with Perl regex matches (so if you want case-insensitive matching, embed (?i) in the pattern rather than using a -i option). I haven't implemented -v or -l; I didn't need those.
#!/usr/bin/env perl
#
# @(#)$Id: sgrep.pl,v 1.7 2013/01/28 02:07:18 jleffler Exp $
#
# Perl-based SGREP (special grep) command
#
# Print lines around the line that matches (by default, 3 before and 3 after).
# By default, include file names if more than one file to search.
#
# Options:
# -b n1 Print n1 lines before match
# -f n2 Print n2 lines following match
# -n Print line numbers
# -h Do not print file names
# -H Do print file names
use warnings;
use strict;
use constant debug => 0;
use Getopt::Std;
my(%opts);
sub usage
{
print STDERR "Usage: $0 [-hnH] [-b n1] [-f n2] pattern [file ...]\n";
exit 1;
}
usage unless getopts('hnf:b:H', \%opts);
usage unless @ARGV >= 1;
if ($opts{h} && $opts{H})
{
print STDERR "$0: mutually exclusive options -h and -H specified\n";
exit 1;
}
my $op = shift;
print "# regex = $op\n" if debug;
# print file names if -h omitted and more than one argument
$opts{F} = (defined $opts{H} || (!defined $opts{h} and scalar @ARGV > 1)) ? 1 : 0;
$opts{n} = 0 unless defined $opts{n};
my $before = (defined $opts{b}) ? $opts{b} + 0 : 3;
my $after = (defined $opts{f}) ? $opts{f} + 0 : 3;
print "# before = $before; after = $after\n" if debug;
my @lines = (); # Accumulated lines
my $tail = 0; # Line number of last line in list
my $tbp_1 = 0; # First line to be printed
my $tbp_2 = 0; # Last line to be printed
# Print lines from @lines in the range $tbp_1 .. $tbp_2,
# leaving $leave lines in the array for future use.
sub print_leaving
{
my ($leave) = @_;
while (scalar(@lines) > $leave)
{
my $line = shift @lines;
my $curr = $tail - scalar(@lines);
if ($tbp_1 <= $curr && $curr <= $tbp_2)
{
print "$ARGV:" if $opts{F};
print "$curr:" if $opts{n};
print $line;
}
}
}
# General logic:
# Accumulate each line at end of @lines.
# ** If current line matches, record range that needs printing
# ** When the line array contains enough lines, pop line off front and,
# if it needs printing, print it.
# At end of file, empty line array, printing requisite accumulated lines.
while (<>)
{
# Add this line to the accumulated lines
push @lines, $_;
$tail = $.;
printf "# array: N = %d, last = $tail: %s", scalar(@lines), $_ if debug > 1;
if (m/$op/o)
{
# This line matches - set range to be printed
my $lo = $. - $before;
$tbp_1 = $lo if ($lo > $tbp_2);
$tbp_2 = $. + $after;
print "# $. MATCH: print range $tbp_1 .. $tbp_2\n" if debug;
}
# Print out any accumulated lines that need printing
# Leave $before lines in array.
print_leaving($before);
}
continue
{
if (eof)
{
# Print out any accumulated lines that need printing
print_leaving(0);
# Reset for next file
close ARGV;
$tbp_1 = 0;
$tbp_2 = 0;
$tail = 0;
@lines = ();
}
}
I had a situation where I was stuck with a slow telnet session on a tablet, believe it or not, and I couldn't write a Perl script very easily with that keyboard. I came up with this hacky maneuver that worked in a pinch for me with AIX's limited grep. This won't work well if your grep returns hundreds of lines, but if you just need one line and one or two above/below it, this could do it. First I ran this:
cat -n filename |grep criteria
By including the -n flag, I see the line number of the data I'm seeking, like this:
2543 my crucial data
Since cat gives the line number 2 spaces before and 1 space after, I could grep for the line number right before it like this:
cat -n filename |grep " 2542 "
I ran this a couple of times to give me lines 2542 and 2544 that bookended line 2543. Like I said, it's definitely fallible, like if you have reams of data that might have " 2542 " all over the place, but just to grab a couple of quick lines, it worked well.
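The same idea can be wrapped into two commands; a sketch that assumes the pattern matches exactly one line and that the match is not on the first line (grep -n, cut and sed line ranges are all available on stock AIX):
n=$(grep -n 'criteria' filename | cut -d: -f1)   # line number of the (single) match
sed -n "$((n-1)),$((n+1))p" filename             # print that line plus one before and after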
