I'm trying to build a script in Linux (Debian 10) that shows the net usage (%) of a process passed as an argument.
This is the code, but there isn't any output:
ProcessName=$1
(nethogs -t ens33 | awk '/$ProcessName/{print $3}') &> output.txt
When using trace mode (nethogs -t), the first field of the output is the program, and it can consist of an irregular number of arguments.
In the case of brave:
/usr/lib/brave-bin/brave --type=utility --utility-sub-type=network.mojom.NetworkService --field-trial-handle=18208005703828410459,4915436466583499460,131072 --enable-features=AutoupgradeMixedContent,DnsOverHttps,LegacyTLSEnforced,PasswordImport,PrefetchPrivacyChanges,ReducedReferrerGranularity,SafetyTip,WebUIDarkMode --disable-features=AutofillEnableAccountWalletStorage,AutofillServerCommunication,DirectSockets,EnableProfilePickerOnStartup,IdleDetection,LangClientHintHeader,NetworkTimeServiceQuerying,NotificationTriggers,SafeBrowsingEnhancedProtection,SafeBrowsingEnhancedProtectionMessageInInterstitials,SharingQRCodeGenerator,SignedExchangePrefetchCacheForNavigations,SignedExchangeSubresourcePrefetch,SubresourceWebBundles,TabHoverCards,TextFragmentAnchor,WebOTP --lang=en-US --service-sandbox-type=none --shared-files=v8_context_snapshot_data:100/930/1000 0.0554687 0.0554687
so $3 will no longer be what you expect; you need to get the last column of the output using $(NF), as follows:
... | awk /$ProcessName/'{print $(NF)}'
For the second-to-last column:
... | awk /$ProcessName/'{print $(NF - 1)}'
What am I doing wrong?
What you're doing wrong is single-quoting $ProcessName while wanting this to be parameter-expanded. To get the expansion, just don't quote there, e.g.:
… awk /$ProcessName/'{print $3}' …
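Putting both answers together, here is a minimal sketch of the whole script (assuming the ens33 interface from the question and that the two trailing numeric fields in the sample output are the rates you want; awk -v is used instead of an unquoted /$ProcessName/, though the name is still treated as a regular expression in the $0 ~ p match):

#!/bin/bash
# Sketch only: print the last two fields (rates) of lines matching the process name.
ProcessName=$1
(nethogs -t ens33 | awk -v p="$ProcessName" '$0 ~ p {print $(NF-1), $NF}') &> output.txt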
I have a 20GB log file that contains lots of fields; column 2 contains numbers. I use the command below to print only column 2:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' | nawk -F "=" '{print $2}'
The result of this command is:
"93711994166", Key
Since I want only the number, I append the following to my original command to clean the output:
| awk -F, '{print $1}' | sed 's/"//g'
The result is:
93711994166
My final goal is to print only numbers whose length is not 11 digits, so I append the following to my final command:
| grep -vE '^.{11}$'
So my final command is:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' | nawk -F "=" '{print $2}' | awk -F, '{print $1}' | sed 's/"//g' | grep -vE '^.{11}$' >/tmp/$file
This command takes a long time to execute and causes high CPU usage. I want to achieve the following:
Print all numbers with length not equal to 11 digits.
Print all numbers that do not start with 93 (regardless of their length).
A clean, effective command that is not CPU- or memory-intensive.
I have another requirement, which is to also print the numbers that do not start with 93.
Note:
The log file contains lots of different lines, but I use awk '/Read:ROP/' to work on the output below and extract the numbers:
Read:ROP (CustomerId="93700001865", Key=1, ActiveEndDate=2025-01-19 20:12:22, FirstCallDate=2018-01-08 12:30:30, IsFirstCallPassed=true, IsLocked=false, LTH={Data=["1|MOC|07.07.2020 09:18:58|48000.0|119||OnPeakAccountID|480|19250||", "1|RECHARGE|04.07.2020 10:18:32|-4500.0|0|0", "1|RECHARGE|04.07.2020 10:18:59|-4500.0|0|0"], Index=0}, LanguageID=2, LastKnownPeriod="Active", LastRechargeAmount=4500, LastRechargeDate=2020-07-04 10:18:59, VoucherRchFraudCounter=0, c_BlockPAYG=true, s_PackageKeyCounter=13, s_OfferId="xyz", OnPeakAccountID_FU={Balance=18850});
20GB log file [...] zcat
Using zcat on 20GB log files is quite expensive. Check top when running your command line above.
It might be worth keeping the data from the first filtering step:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' > filter_data.out
and work with the filtered data. I assume here that this awk step can remove the majority of the data.
Bonus points: this step can be parallelized by running the zcat [...] | awk [...] pipe file-by-file, and you only need to do this once per file.
The other steps don't look particularly expensive unless there are a lot of data lines left even after filtering.
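As a sketch of that file-by-file approach using plain background jobs (here /path/to/logs is a hypothetical stand-in for the placeholder path in the question, and $date is the variable from the original command):

# Sketch only: run the expensive zcat | awk filter once per archive, in parallel.
for f in /path/to/logs/"$date"*/logfile_*.dat.zip; do
    zcat "$f" | awk '/Read:ROP/' > "$(basename "$f").filtered" &
done
wait                                 # wait for all background filters to finish
cat ./*.filtered > filter_data.out   # combine into the filtered file used above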
sed '/.*Read:ROP.*([^=]*="\([^"]*\)".*/!d; s//\1/'
/.../ - match regex
.*Read:ROP.* - match Read:ROP with anything before and after it, i.e. the awk '/Read:ROP/' step
([^=]*=" - match a (, followed by any number of characters other than =, then =, then a ", i.e. the nawk -F "=" '{print $2}' step
\([^"]*\) - match everything inside the quotes and capture it. I guess [0-9] would be fine also
".* - match the closing " and the rest of the line (which is dropped by the substitution)
! - if the line doesn't match the regex
d - remove the line
s - substitute
// - reuse the regex in /.../
\1 - substitute for first backreference, i.e. for \([^"]*\)
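For the full set of requirements (keep numbers whose length is not 11 digits, or that do not start with 93), a single awk can also do the extraction and filtering in one pass. This is only a sketch, assuming the number is always the first double-quoted value on a Read:ROP line as in the sample above; /path/to/logs is a hypothetical stand-in for the placeholder path, and $date and $file are the variables from the original command:

zcat /path/to/logs/"$date"*/logfile_*.dat.zip |
  awk -F'"' '/Read:ROP/ { n = $2; if (length(n) != 11 || n !~ /^93/) print n }' > /tmp/"$file"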
So I am trying to do some homework and am having a lot of issues. This is the first script I have written in UNIX/Linux. Right now I am trying to print out a few columns from the free command and use awk to just grab the columns I need. Every time I try to do this, I get a "bash: print command not found" error.
Command I am trying to use:
free | awk `{print $3}`
I have also tried without the {}.
So when it is finished, I would like it to say, for example: Total Free Memory: xx MB
with the xx filled in of course with the result.
Any help is appreciated. Thanks.
UPDATE:
Okay, so the fix is to not use the backticks. Now I would like the whole command to end up saying:
Total Free Memory: xx MB
The command I have so far is:
echo "Current Free Memory: " | free -m | awk '{print $3}'
I only want that one column/row intersection.
You are saying
free | awk `{print $3}`
Whereas you need to say awk '{...}': use single quotes instead of backticks. (With backticks, the shell performs command substitution and tries to execute the text between them as a command, which is where the "command not found" error comes from.) That is:
free | awk '{print $3}'
^ ^
Note this is explained in man awk:
Syntax
awk <options> 'Program' Input-File1 Input-File2 ...
awk -f PROGRAM-FILE <options> Input-File1 Input-File2 ...
You should go for field 4.
I suggest using awk and sed, but it is up to you.
Here is my suggestion :
freem="$(free -m | awk '{print $4}' | sed -n '2p')"
echo "Total Free Memory: $freem MB"
Where $4 is the column you want and 2p is the line you want.
Before writing the script, check your result on the command line.
free -m | awk '/Mem:/ {print "Current Free Memory: " $4 " MB"}'
I am writing a small script to map all the memory currently being used by services running on a server. However, I am facing a problem doing that. My script is quite simple. I'm using pmap to find out the memory being used and trying to add it up across all the PIDs of a running service.
#!/bin/bash
result=`$pgrep java`
wc=`$pmap -x $result | wc -l`
gawk=`$pmap -x $result | gawk 'NR==$wc{print $3}'`
echo "$gawk"
Now, my problem is that gawk uses single quotes around the pattern it searches for (gawk 'NR==$wc{print $3}'), but the shell script gives me an error because a single quote means something different in the shell than in gawk.
Based on your comment, it looks like you're trying to do this:
pmap -x "$(pgrep java)" | awk '{s=$3}END{print s}'
This prints the third column of the last line of the output of pmap -x, with the PID of the running java process. In some versions of awk, you can simply do 'END{print $3}' but this isn't guaranteed to work.
pmap -x $result | gawk 'NR==$wc{print $3}' is not doing what you think it is. (I have replaced your $pmap with pmap, but my analysis is only of the gawk command, so if that is incorrect it should be irrelevant.) The shell is going to pass the literal string NR==$wc{print $3} to awk, but it appears that you want awk to see the value of the shell variable $wc rather than the literal string $wc. When awk sees $wc, it treats wc as an uninitialized variable, so $wc becomes equivalent to $0, and the condition NR==$0 selects lines whose content, interpreted as a number, equals the line number. The standard way to pass a shell variable into awk is:
pmap -x $result | gawk 'NR==w{print $3}' w=$wc
This assigns the value of the shell variable wc to the awk variable w, and the program will print the third column of that line.
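A more common way to pass a shell variable into awk nowadays is the -v option (standard in POSIX awk and gawk); a quick sketch using the same variables:

pmap -x "$result" | gawk -v w="$wc" 'NR == w {print $3}'

With -v the variable is set before the program starts, so it is also visible in a BEGIN block, unlike the var=value form placed after the program.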
Note that there are a number of issues with this shell script, but this seems to be the core confusion.
I'm trying to parse out certain information from a bash script on Ubuntu.
I have a bash script executing every x seconds which runs:
forever list
The response from that command looks like this:
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] _1b2 /usr/bin/nodejs /home/ubuntu/node/server.js 28968 28970 /root/.forever/_1b2.log 0:0:17:17.233
I want to parse out the location of the logfile /root/.forever/_1b2.log
Any ideas how to accomplish this with bash?
Two of the many awk variations to solve this issue:
# most basic
command | awk 'NR==3{ print $8 }'
# a little bit more robust:
command | awk '$1=="data:" && $2=="[0]" { print $8 }'
# ^^^^^^^^^
# here I filter on the "[0]" text, but depending on your needs
# you might want to use $3=="_1b2" or $4=="/usr/bin/nodejs"
You could try the GNU and basic sed commands below:
$ command | sed -nr 's/^.* (\S+) [0-9]+:[0-9]+\S+$/\1/p'
/root/.forever/_1b2.log
$ command | sed -n 's/^.* \(\S\+\) [0-9]\+:[0-9]\+\S\+$/\1/p'
/root/.forever/_1b2.log
It prints only the non-space characters that are followed by a space, one or more digits, a : symbol, and one or more digits.
You could grep the line(s) you're looking for and pipe the output to awk:
forever list | grep "\.log" | awk '{print $8}'
This will find all processes with a .log file, and print the log file's location (the 8th whitespace-separated column in the sample output above).
Assuming the logfile is in the /root/.forever directory, you can use grep with this regex:
forever list | grep -o "/root/\.forever/.*\.log"
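One more variation, not from the answers above but a sketch based on the sample output, where the logfile appears to be the second-to-last field of the data: line:

forever list | awk '/^data:/ && /\.log/ { print $(NF-1) }'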
I want to do this:
run a command
capture the output
select a line
select a column of that line
Just as an example, let's say I want to get the command name from a $PID (please note this is just an example, I'm not suggesting this is the easiest way to get a command name from a process id - my real problem is with another command whose output format I can't control).
If I run ps I get:
PID TTY TIME CMD
11383 pts/1 00:00:00 bash
11771 pts/1 00:00:00 ps
Now I do ps | egrep 11383 and get
11383 pts/1 00:00:00 bash
Next step: ps | egrep 11383 | cut -d" " -f 4. Output is:
<absolutely nothing/>
The problem is that cut splits the output on single spaces, and as ps adds some spaces between the 2nd and 3rd columns to keep some resemblance of a table, cut picks an empty string. Of course, I could use cut to select the 7th and not the 4th field, but how can I know, especially when the output is variable and not known beforehand?
One easy way is to add a pass of tr to squeeze any repeated field separators out:
$ ps | egrep 11383 | tr -s ' ' | cut -d ' ' -f 4
I think the simplest way is to use awk. Example:
$ echo "11383 pts/1 00:00:00 bash" | awk '{ print $4; }'
bash
Please note that tr -s ' ' will not remove a single leading space. If your column is right-aligned (as with ps pid)...
$ ps h -o pid,user -C ssh,sshd | tr -s " "
1543 root
19645 root
19731 root
Then cutting the first column will result in a blank line for some of those rows:
$ <previous command> | cut -d ' ' -f1
19645
19731
Unless you precede it with a space, obviously
$ <command> | sed -e "s/.*/ &/" | tr -s " "
Now, for this particular case of pid numbers (not names), there is a dedicated command called pgrep:
$ pgrep ssh
Shell functions
However, in general it is actually still possible to use shell functions in a concise manner, because there is a neat thing about the read command:
$ <command> | while read a b; do echo $a; done
The first parameter to read, a, selects the first column, and if there is more, everything else will be put in b. As a result, you never need more variables than the number of your column +1.
So,
while read a b c d; do echo $c; done
will then output the 3rd column. As indicated in my comment...
A piped read will be executed in an environment that does not pass variables to the calling script.
out=$(ps whatever | { read a b c d; echo $c; })
arr=($(ps whatever | { read a b c d; echo $c $b; }))
echo ${arr[1]} # will output 'b'
The Array Solution
So we then end up with the answer by @frayser, which is to use the shell variable IFS, which defaults to a space, to split the string into an array. It only works in Bash though; Dash and Ash do not support it. I have had a really hard time splitting a string into components in a Busybox thing. It is easy enough to get a single component (e.g. using awk) and then to repeat that for every parameter you need. But then you end up repeatedly calling awk on the same line, or repeatedly using a read block with echo on the same line, which is not efficient or pretty. So you end up splitting using ${name%% *} and so on. It makes you yearn for some Python skills, because in fact shell scripting is not a lot of fun anymore if half or more of the features you are accustomed to are gone. But you can assume that even Python would not be installed on such a system, and it wasn't ;-).
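A minimal sketch of that ${name%% *} style for shells without arrays (the example line and variable names are purely illustrative, and it assumes single spaces between fields):

line='11383 pts/1 00:00:00 bash'
first=${line%% *}     # everything before the first space -> 11383
rest=${line#* }       # drop the first field and its separator
second=${rest%% *}    # -> pts/1
echo "$first $second"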
try (note that |& and read -p here are Korn shell coprocess syntax; they mean something different in bash)
ps |&
while read -p first second third fourth etc ; do
if [[ $first == '11383' ]]
then
echo got: $fourth
fi
done
Your command
ps | egrep 11383 | cut -d" " -f 4
misses a tr -s to squeeze spaces, as unwind explains in his answer.
However, you may want to use awk, since it handles all of these actions in a single command:
ps | awk '/11383/ {print $4}'
This prints the 4th column in those lines containing 11383. If you want this to match 11383 if it appears in the beginning of the line, then you can say ps | awk '/^11383/ {print $4}'.
Using array variables
set $(ps | egrep "^11383 "); echo $4
or
A=( $(ps | egrep "^11383 ") ) ; echo ${A[3]}
Similar to brianegge's awk solution, here is the Perl equivalent:
ps | egrep 11383 | perl -lane 'print $F[3]'
-a enables autosplit mode, which populates the @F array with the column data.
Use -F, (that is, -F with a comma) if your data is comma-delimited rather than space-delimited.
Index 3 prints the 4th field, since Perl starts counting from 0 rather than 1.
Getting the correct line (line 6 in this example) is done with head and tail, and the correct word (word 4) can then be captured with awk:
command|head -n 6|tail -n 1|awk '{print $4}'
Instead of doing all these greps and stuff, I'd advise you to use ps's ability to change its output format.
ps -o cmd= -p 12345
You get the command line of the process with the specified PID, and nothing else.
This is POSIX-conformant and can thus be considered portable.
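A small usage sketch of that, capturing the result in a variable (the PID is just the one from the example above):

pid=11383
cmd=$(ps -o cmd= -p "$pid")   # the trailing = gives an empty heading, so no header line is printed
echo "$cmd"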
Bash's set will parse all output into positional parameters.
For instance, after running set $(free -h), echo $7 will show "Mem:".
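A minimal sketch of that idea, using set -- so the command's output cannot be mistaken for options to set:

set -- $(free -h)   # every word of the output becomes a positional parameter
echo "$7"           # "Mem:", as described above
echo "${10}"        # parameters past $9 need braces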