Trying to use awk to print a column, but it says "print not found" - Linux

So I am trying to do some homework and am having a lot of issues. This is the first script I have written in UNIX/Linux. Right now I am trying to print out a few columns from the free command and use awk to grab just the columns I need. Every time I try to do this, I get bash: print: command not found.
Command I am trying to use:
free | awk `{print $3}`
I have also tried without the {}.
So when it is finished I would like it to say, for example: Total Free Memory: xx MB
with the xx, of course, filled in with the result.
Any help is appreciated. Thanks.
UPDATE:
Okay, so don't use the backticks; that got it to work. So now I would like the whole command to end up saying:
Total Free Memory: xx MB
The command I have so far is:
echo "Current Free Memory: " | free -m | awk '{print $3}'
I only want that one column/row intersection.

You are saying
free | awk `{print $3}`
Whereas you need to say awk '{...}'. That is, use single quotes instead of backticks:
free | awk '{print $3}'
           ^          ^
Note this is explained in man awk:
Syntax
awk <options> 'Program' Input-File1 Input-File2 ...
awk -f PROGRAM-FILE <options> Input-File1 Input-File2 ...
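As an aside (my addition, not part of the original answer): backticks are command substitution, so bash first runs the text between them as a command and splices its output into the command line. That is exactly where the error message comes from, since print is not a bash command:
$ `print $3`
bash: print: command not found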

You should go for field 4.
I suggest using awk and sed, but it is up to you.
Here is my suggestion :
freem="$(free -m | awk '{print $4}' | sed -n '2p')"
echo "Total Free Memory: $freem MB"
where $4 is the column you want and 2p is the line you want.
Before writing the script, check your result on the command line.
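For example (my illustration of that advice, assuming the usual free -m layout where line 2 is the Mem: row):
free -m | awk '{print $4}'                   # column 4 of every row
free -m | awk '{print $4}' | sed -n '2p'     # keep only row 2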

free -m | awk '/Mem:/ {print "Current Free Memory: " $4 " MB"}'

Net Usage (%) of a Process in Linux

I'm trying to build a script in Linux (Debian 10) that shows the net usage (%) of a process passed as an argument.
This is the code, but there isn't any output:
ProcessName=$1
(nethogs -t ens33 | awk '/$ProcessName/{print $3}') &> output.txt
What am I doing wrong?

What you're doing wrong is single-quoting $ProcessName while wanting it to be parameter-expanded. To get the expansion, just don't quote that part, e.g.:
… awk /$ProcessName/'{print $3}' …

Separately, when using trace mode (nethogs -t), the first field of the output is the program, and it can consist of an irregular number of arguments.
In the case of brave:
/usr/lib/brave-bin/brave --type=utility --utility-sub-type=network.mojom.NetworkService --field-trial-handle=18208005703828410459,4915436466583499460,131072 --enable-features=AutoupgradeMixedContent,DnsOverHttps,LegacyTLSEnforced,PasswordImport,PrefetchPrivacyChanges,ReducedReferrerGranularity,SafetyTip,WebUIDarkMode --disable-features=AutofillEnableAccountWalletStorage,AutofillServerCommunication,DirectSockets,EnableProfilePickerOnStartup,IdleDetection,LangClientHintHeader,NetworkTimeServiceQuerying,NotificationTriggers,SafeBrowsingEnhancedProtection,SafeBrowsingEnhancedProtectionMessageInInterstitials,SharingQRCodeGenerator,SignedExchangePrefetchCacheForNavigations,SignedExchangeSubresourcePrefetch,SubresourceWebBundles,TabHoverCards,TextFragmentAnchor,WebOTP --lang=en-US --service-sandbox-type=none --shared-files=v8_context_snapshot_data:100/930/1000 0.0554687 0.0554687
so $3 will no longer be as expected; you need to get the last column of the output using $NF, as follows:
... | awk /$ProcessName/'{print $NF}'
and for the second-to-last column:
... | awk /$ProcessName/'{print $(NF - 1)}'
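A hedged sketch combining both points (my addition; it assumes the interface name ens33 from the question and the trace format shown above, and passes the name in via awk -v instead of splicing it into the program text):
ProcessName=$1
# -v makes the shell variable visible inside awk without quoting tricks;
# $NF picks the last column, which carries the usage value in trace mode.
(nethogs -t ens33 | awk -v proc="$ProcessName" '$0 ~ proc {print $NF}') &> output.txt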

How to clean the output and print the desired information with less CPU usage

I have a 20 GB log file which contains lots of fields; field (column) number 2 contains numbers. I use the below command to print only column 2:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' | nawk -F "=" '{print $2}'
The result of this command is:
"93711994166", Key
Since I want only the number, I append the below command to my original command to clean the output:
| awk -F, '{print $1}' | sed 's/"//g'
The result is:
93711994166
My final purpose is to print only numbers whose length is not 11 digits; therefore, I append the following to my final command:
-vE '^.{11}$'
so my final command is:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' | nawk -F "=" '{print $2}' | awk -F, '{print $1}' | sed 's/"//g' | grep -vE '^.{11}$' >/tmp/$file
This command takes a long time to execute and causes high CPU usage. I want to achieve the following:
print all numbers with length not equal to 11 digits;
print all numbers that do not start with 93 (regardless of their length);
a clean, effective command that is not CPU- or memory-costly.
Note:
The log file contains lots of different lines, but I use awk '/Read:ROP/' to work on the output below and extract the numbers:
Read:ROP (CustomerId="93700001865", Key=1, ActiveEndDate=2025-01-19 20:12:22, FirstCallDate=2018-01-08 12:30:30, IsFirstCallPassed=true, IsLocked=false, LTH={Data=["1|MOC|07.07.2020 09:18:58|48000.0|119||OnPeakAccountID|480|19250||", "1|RECHARGE|04.07.2020 10:18:32|-4500.0|0|0", "1|RECHARGE|04.07.2020 10:18:59|-4500.0|0|0"], Index=0}, LanguageID=2, LastKnownPeriod="Active", LastRechargeAmount=4500, LastRechargeDate=2020-07-04 10:18:59, VoucherRchFraudCounter=0, c_BlockPAYG=true, s_PackageKeyCounter=13, s_OfferId="xyz", OnPeakAccountID_FU={Balance=18850});
20GB log file [...] zcat
Using zcat on 20GB log files is quite expensive. Check top when running your command line above.
It might be worth keeping the data from the first filtering step:
zcat /path to file location/$date*/logfile_*.dat.zip | awk '/Read:ROP/' > filter_data.out
and work with the filtered data. I assume here that this awk step can remove the majority of the data.
Bonus points: this step can be parallelized by running the zcat [...] | awk [...] pipe file-by-file, and you only need to do this once for each file.
The other steps don't look particularly expensive unless there are a lot of data lines left even after filtering.
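As a sketch of that file-by-file parallelization (my addition; it assumes plain background jobs and keeps the placeholder path from the question):
for f in "/path to file location/$date"*/logfile_*.dat.zip; do
    zcat "$f" | awk '/Read:ROP/' > "$f.filtered" &   # one zcat | awk pipe per file
done
wait    # block until all background pipes have finished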
sed '/.*Read:ROP.*([^=]*="\([^"]*\)".*/!d; s//\1/'
/.../ - match regex
.*Read:ROP.* - match Read:ROP followed by anything with anything in front, ie. awk '/Read:ROP/'
([^=]*=" - match a (, followed by anything except =, then a =, then a ", ie. nawk -F "=" '{print $2}'
\([^"]*\) - match everythjing inside qoutes. I guess [0-9] would be fine also
".* - delete rest of line
! - if the line doesn't match the regex
d - remove the line
s - substitute
// - reuse the regex in /.../
\1 - substitute for first backreference, ie. for \([^"]*\)
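If you would rather fold the length and prefix requirements into the same pass, here is a hedged one-awk sketch (my addition; it assumes the CustomerId is the first quoted value on each Read:ROP line, as in the sample above):
zcat "/path to file location/$date"*/logfile_*.dat.zip |
awk -F'"' '/Read:ROP/ {
    n = $2                                   # the text between the first pair of quotes
    if (length(n) != 11 || n !~ /^93/) print n
}' > /tmp/$file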

bash: awk print within print

I need to grep some pattern, and further I need to print some output within that. Currently I am using the below command, which is working fine. But I would like to eliminate the multiple pipes and use a single awk command to achieve the same output. Is there a way to do it using awk?
root@Server1 # cat file
Jenny:Mon,Tue,Wed:Morning
David:Thu,Fri,Sat:Evening
root@Server1 # awk '/Jenny/ {print $0}' file | awk -F ":" '{ print $2 }' | awk -F "," '{ print $1 }'
Mon
I want to get this output using single awk command. Any help?
You can try something like:
awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
Try this
awk -F'[:,]+' '/Jenny/{print $2}' file.txt
It is using multiple -F (field separator) characters inside the [ ].
The + means one or more, since the separator is treated as a regex.
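For the sample file above, that separator splits Jenny's line like this (my illustration):
Jenny:Mon,Tue,Wed:Morning
$1=Jenny, $2=Mon, $3=Tue, $4=Wed, $5=Morning
so printing $2 gives Mon.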
For this particular job, I find grep to be slightly more robust.
Unless your company has a policy not to hire people named Eve.
(Try it out if you don't understand.)
grep -oP '^[^:]*Jenny[^:]*:\K[^,:]+' file
Or to do a whole-word match:
grep -oP '^[^:]*\bJenny\b[^:]*:\K[^,:]+' file
Or when you are confident that "Jenny" is the full name:
grep -oP '^Jenny:\K[^,:]+' file
Output:
Mon
Explanation:
The stuff up until \K speaks for itself: it selects the line(s) with the desired name.
[^,:]+ captures the day of week (in this case Mon).
\K cuts off everything preceding Mon.
-o cuts off anything following Mon.

cut or awk command to print first field of first row

I am trying to print the first field of the first row of an output. Here is the case: I just need to print only SUSE from this output.
# cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
I tried cat /etc/*release | awk {'print $1}' but that prints the first string of every row:
SUSE
VERSION
PATCHLEVEL
Specify NR if you want to capture output from selected rows:
awk 'NR==1{print $1}' /etc/*release
An alternative (ugly) way of achieving the same would be:
awk '{print $1; exit}'
An efficient way of getting the first string from a specific line, say line 42, in the output would be:
awk 'NR==42{print $1; exit}'
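A side note (my addition): the exit is what makes this efficient on large inputs, because without it awk keeps scanning to end-of-file even though no later line can match:
seq 1000000 | awk 'NR==1{print $1}'          # still reads all million lines
seq 1000000 | awk 'NR==1{print $1; exit}'    # stops after the first line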
Specify the Line Number using NR built-in variable.
awk 'NR==1{print $1}' /etc/*release
Try this:
head -1 /etc/*release | awk '{print $1}'
df -h | head -4 | tail -1 | awk '{ print $2 }'
Change the numbers to tweak it to your liking.
Or use a while loop, but that's probably a bad way to do it.
You could use head instead of cat:
head -n1 /etc/*release | awk '{print $1}'
sed -n 1p /etc/*release |cut -d " " -f1
if tab delimited:
sed -n 1p /etc/*release |cut -f1
Try
sed 'NUMq;d' /etc/*release | awk {'print $1}'
where NUM is the line number, e.g.:
sed '1q;d' /etc/*release | awk {'print $1}'
awk, sed, pipe, that's heavy
set `cat /etc/*release`; echo $1
The most code-golfy way I could think of to print the first line only in awk:
awk '_{exit}--_'
(You can even skip the quotes and make it just awk _{exit}--_ if you're feeling adventurous.)
On the first pass, the pattern of the exit block, "_", is an undefined variable, so it is false and the block is skipped for row 1. Decrementing that same counter then makes it "true" in awk's eyes (anything that is not an empty string or numeric zero counts as true in its agile boolean sense), and a true pattern with no action triggers the default action of printing row 1. (Incrementing or decrementing is the same thing, merely with direction and sign inverted.) Finally, at the start of row 2, the counter satisfies the criteria to enter the action block, which instructs awk to instantly exit. This performs essentially the same function as
awk '{ print; exit }'
in a slightly less verbose manner. For a single-line print, it's not even worth setting FS to skip the field-splitting part.
Using that concept to print just the 1st field of the 1st row:
awk '_{exit} NF=++_'
awk '_++{exit} NF=_'
awk 'NR==1&&NF=1' file          # on record 1, NF=1 truncates to the first field, and the truthy assignment prints it
grep -om1 '^[^ ]\+' file        # -o prints only the match, -m1 stops after the first matching line
# multiple files
awk 'FNR==1&&NF=1' file1 file2  # FNR restarts per file, so this prints the first field of each file

Calculate percentage free swap space with `free` and `awk`

I'm trying to calculate the percentage of free swap space available.
Using something like this:
free | grep 'Swap' | awk '{t = $2; f = $4; print ($f/$t)}'
but awk is throwing:
awk: program limit exceeded: maximum number of fields size=32767
And I don't really understand why; my program is quite simple. Is it possible I'm hitting some weird range error?
Try this one:
free | grep 'Swap' | awk '{t = $2; f = $4; print (f/t)}'
In your code you are trying to print $f and $t, which makes awk treat the values of f and t as field numbers: with a few GB of swap in total, that asks for something like field $400000, which is over the maximum number of fields awk supports in its standard configuration (32767, per the error). Apart from the easier attempt with meminfo, just print f/t, which refers to the variables themselves, and you get your answer.
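A quick illustration of the variable/field mix-up (my addition):
$ echo a b c d | awk '{f = 4; print f, $f}'
4 d
Here f is the variable's value, while $f is the field that value points at.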
Note that it might be easier/more robust to read the info by using /proc/meminfo's SwapFree line.
Something like:
$ grep SwapFree /proc/meminfo | awk '{print $2}'
You do not need the variables. You can use plain
awk '{ print $4/$2 }'
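Putting it together as the requested percentage (a sketch of my own; note that it divides by zero if the swap total is 0):
free | awk '/^Swap:/ { printf "%.1f%% free\n", $4/$2*100 }'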
Read it from /proc/meminfo:
lennart@trololol:~$ grep SwapFree /proc/meminfo | awk '{print $2}'
0
I realise that the question is about using "free" and "awk", but if you have SAR running, then this will give you the most recently recorded percentage value:
sar -S | tail -2 | head -1 | awk '{print $5}'
