How to include the awk command/filter in a watch command? - linux

My command runs fine:
free -h -w | awk '{print $7}'
but how can I run it using watch?
watch -n.6 "free -h -w | awk '{print $7}'"
(that returns an error)
PS:
Yes, I only want the available column from free.

You can see the problem if you try what you posted. Look at the watch output. I see something like this:
Every 2.0s: free -h -w | awk '{print }'
The $7 is being resolved to nothing before watch gets to it.
You need to escape the $7 somehow.
This seems to work for me:
watch "free -h -w | awk '{print \$7}'"

How about putting \ in front of $7? i.e.,
watch -n.6 "free -h -w | awk '{print \$7}'";
The code without \ does not work because $7 is expanded in the double quotes. The shell tries to replace $7 (a positional parameter) with its content, which is likely to be blank. So in effect, you are running something like ... awk '{print }'.
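The expansion is visible without running watch at all; a minimal sketch (echo stands in for watch) showing what the shell actually hands over in each case:

```shell
# Inside double quotes the shell expands $7 (a positional parameter,
# normally empty) immediately; a backslash defers it so awk sees it.
unescaped="free -h -w | awk '{print $7}'"
escaped="free -h -w | awk '{print \$7}'"
echo "$unescaped"   # free -h -w | awk '{print }'
echo "$escaped"     # free -h -w | awk '{print $7}'
```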

What error are you receiving?
You are almost there, you just need a backslash in front of the $7
watch -n.6 "free -h -w | awk '{print \$7}'";

Escape the $ by using a backslash: \
watch -n.6 "free -h -w | awk '{print \$7}'";
That said, you still won't get the desired output, since not all the values are in the 7th column. Consider a /proc/meminfo alternative:
awk '/MemFree/ { printf "%.3f \n", $2/1024/1024 }' /proc/meminfo
Using watch:
watch "awk '/MemFree/ { printf \"%.3f \n\", \$2/1024/1024 }' /proc/meminfo"
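Since watch hands its argument to sh -c, you can sanity-check the quoting without watch at all; a sketch using a BEGIN block with made-up numbers in place of /proc/meminfo:

```shell
# Simulate what watch does: run the command string through sh -c,
# which strips the \" escapes before awk sees the program text.
cmd='awk "BEGIN { printf \"%.3f\n\", 2048/1024 }"'
sh -c "$cmd"   # prints 2.000
```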

Related

How do I get just the filename from lsof?

I want to use lsof to create a list of open files in a folder. I'm using macOS, but understand this is common to *nix.
In man lsof, it seems the -Fn option is closest to what I want, but I can't get further than that. I'm piping to cut to get just the filename. So this is the command I'm using:
$ lsof -Fn dir1 | cut -c2-
I'm expecting
dir1/file1.txt
I'm getting
32783
34
dir1/file1.txt
32785
3
dir1/.DS_Store
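Those extra numbers are lsof's other fields: with -F, every output line is one field tagged by a letter (p = PID, f = file descriptor, n = name), and cut -c2- strips the tag but keeps every line. A sketch that keeps only the n lines, run here against simulated lsof output since the real files aren't available:

```shell
# Keep only the name (n) fields from lsof -Fn style output;
# printf simulates the tagged lines lsof would emit.
printf 'p32783\nf34\nndir1/file1.txt\n' |
  awk '/^n/ { print substr($0, 2) }'
# prints dir1/file1.txt
```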
Could you please try the following (since no samples were posted, this is tested only on my test box):
lsof | awk 'match($0,/\/([^ ])*|\/([^$])*/){print substr($0,RSTART,RLENGTH)}'
EDIT: Since the OP does not want files with a DOT in their names (basically hidden ones, I think), one could try the following:
lsof |
awk '
  FNR>1 && match($0,/\/.*/){
    val=substr($0,RSTART,RLENGTH)
    if(val !~ /\/\./){
      print val
    }
    val=""
  }'
You can use this command:
lsof your_dir | awk '{print $9}'
Update:
lsof your_dir | awk '{for(i=9;i<=NF;++i)print $i}'
This will print columns 9 through the last field; note that each field is printed on its own line, so a filename containing spaces will be split across lines.
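A sketch with one simulated lsof line (the fields are invented) showing what the loop emits:

```shell
# Print field 9 through the last field, one per line.
printf 'cmd 123 user 9u REG 1,4 0 99 dir1/file1.txt\n' |
  awk '{ for (i = 9; i <= NF; ++i) print $i }'
# prints dir1/file1.txt
```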

Take output from AWK command and display line by line based on white space

I am running the following command in a bash script:
echo `netstat -plten | grep -i autossh | awk '{print $4}'` >> /root/logs/autossh.txt
The output displays in a single line:
127.0.0.1:25001 127.0.0.1:15501 127.0.0.1:10001 127.0.0.1:20501 127.0.0.1:15001 127.0.0.1:5501 127.0.0.1:20001
I would like each IP to display line by line. What do I need to do with the awk command to make the output display line by line?
Just remove the echo and subshell:
netstat -plten | grep -i autossh | awk '{print $4}' >> /root/logs/autossh.txt
awk is already printing them one per line, but when you pass them to echo it parses its arguments and prints them each with a space between them. Every line of awk output then becomes a separate argument to echo so you lose your line endings.
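You can see that flattening with any multi-line value; a minimal sketch (printf stands in for the netstat pipeline):

```shell
# Unquoted, the value is word-split into separate echo arguments,
# which echo rejoins with single spaces; quoting preserves newlines.
lines=$(printf '127.0.0.1:25001\n127.0.0.1:15501\n')
echo $lines     # prints both addresses on one line
echo "$lines"   # prints one address per line
```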
Of course, awk can do pattern matching too, so no real need for grep:
netstat -plten | awk '/autossh/ {print $4}' >> /root/logs/autossh.txt
with gawk at least you can have it ignore case too
netstat -plten | awk 'BEGIN {IGNORECASE=1} /autossh/ {print $4}' >> /root/logs/autossh.txt
or as Ed Morton pointed out, with any awk you could do
netstat -plten | awk 'tolower($0) ~ /autossh/ {print $4}' >> /root/logs/autossh.txt
You can just quote the result of command substitution to prevent the shell from performing word splitting.
You can modify it as follows to achieve what you want.
echo "`netstat -plten | grep -i autossh | awk '{print $4}'`" >> /root/logs/autossh.txt

Sum out of grep -c

I am trying to find the number of times an event occurred in my log file.
Command:
grep -Eo "2016-08-30" applciationLog.log* -c
Output:
applciationLog.log.1:0
applciationLog.log.2:0
applciationLog.log.3:0
applciationLog.log.4:0
applciationLog.log.5:7684
applciationLog.log.6:9142
applciationLog.log.7:8699
applciationLog.log.8:0
What I actually need is the sum of all these values: 7684 + 9142 + 8699 = 25525. Any suggestion how I can do it? Anything I can append to the grep to achieve it?
Any help or pointers are welcome and appreciated.
If you want to keep your grep command, pipe its output to awk; the quick and dirty way is down here:
grep -Eo "aaa" -c aaa.txt bbb.txt | awk 'BEGIN {cnt=0;FS=":"}; {cnt+=$2;}; END {print cnt;}'
Or use awk's regex directly:
awk 'BEGIN {cnt=0}; {if(/aaa/) {cnt+=1;}}; END {print cnt;}' aaa.txt bbb.txt
As an addition to the already given answer by ghoti:
You can avoid awk -F: by using grep -h:
grep -c -h -F "2016-08-30" applicationLog.log* | awk '{n+=$0} END {print n}'
This means no filenames and only the counts are printed by grep and we can use the first field for the addition in awk.
See if this works for you:
grep -Eo "2016-08-30" applciationLog.log* -c | awk -F':' 'BEGIN {sum = 0;} {sum += $2;} END {print sum;}'
We use awk to split each line up with a delimiter of :, sum up the numbers for each line, and print the result at the end.
The grep command doesn't do arithmetic, it just finds lines that match regular expressions.
To count the output you already have, I'd use awk.
grep -c -F "2016-08-30" applciationLog.log* | awk -F: '{n+=$2} END {print n}'
Note that your grep options didn't make sense -- -E tells the command to use Extended regular expressions, but you're just looking for a fixed string (the date). So I swapped in the -F option instead. And -o tells grep to print the matched text, which you've overridden with -c, so I dropped it.
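The summing idiom is easy to check against the numbers from the question; here simulated file:count lines stand in for the real grep -c output:

```shell
# Sum the counts after the colon in grep -c style "file:count" lines.
printf 'a.log:7684\nb.log:9142\nc.log:8699\n' |
  awk -F: '{n+=$2} END {print n}'
# prints 25525
```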
An alternative using for-loop and arithmetic expansion could be:
x=0
for i in $(grep -hc "2016-08-30" applciationLog.log*); do
  x=$((x+i))
done
echo "$x"
An easy alternative is to merge all the files before grep sees them:
cat applciationLog.log* | grep -Eo "2016-08-30" -c
In my directory I have hundreds of files; each file contains a lot of text along with lines similar to this:
Job_1-Run.log:[08/27/20 01:28:40] Total Jobs Cancelled for Job_1_set0 = 10
I do
grep '^Total Jobs Cancelled' ./*
to get that above line.
Then I pipe that to
| awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
so my final command is:
grep '^Total Jobs Cancelled' ./* | awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
and the result is the sum, e.g.:
900
I am using Cmder # https://cmder.net/
Thanks to the answers by @alagner and @john above.

awk not working as expected

I have what I am assuming to be a very simple problem. I have used the following command in scripts many times before without issues.
display=/some/file/with/data
sed -i "s/$(more ${display} | grep 110 | gawk '{ print $3 }')/"replacestring"/g" ${display}
Never had an issue with this command.
I created a script that simplified everything that was needed and have the following code:
displayText=$(more ${display} | grep 110 | gawk '{ print $3 }')
I keep receiving
line 2: syntax error: unexpected end of file.
I am very familiar with Linux, so the problem has been quite irritating, as the answer has escaped me. Please assist.
Not sure why you are using more.
You also have backquotes around the awk script where you should have single quotes.
displayText=$(cat ${display} | grep 110 | gawk '{ print $3 }')
You will be told that this is a useless-use-of-cat (the cat command isn't needed since grep can read a file). You can just do:
displayText=$(grep 110 ${display} | gawk '{ print $3 }')
As awk can also search, this can be simplified further into:
displayText=$(gawk '/110/ {print $3}' ${display})
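A sketch of that final form with a throwaway input file (plain awk is used here in place of gawk, and the sample lines are invented):

```shell
# Collapse more | grep | gawk into a single awk invocation.
tmp=$(mktemp)
printf 'a 110 foo\nb 200 bar\n' > "$tmp"
displayText=$(awk '/110/ { print $3 }' "$tmp")
echo "$displayText"   # foo
rm -f "$tmp"
```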

Pipe 'tail -f' into awk without hanging

Something like this will hang:
tail -f textfile | awk '{print $0}'
while grep won't hang when used instead of awk.
My actual intention is to add color to some log output using merely standard commands; however it seems that piping tail -f into awk won't work. I don't know if it's a buffer problem, but I tried some approaches that haven't worked, like:
awk '{print $0;fflush()}'
and also How to pipe tail -f into awk
Any ideas?
I ran into almost exactly the same problem with mawk. I think it is due to the way mawk flushes its buffer; the problem went away when I switched to gawk. Hope this helps (a bit late, I know).
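For reference, the fflush() idiom from the question does push each record through a pipe as soon as it is printed; a sketch where a finite printf replaces tail -f so the pipeline terminates:

```shell
# fflush() flushes awk's output after every record; when stdout is a
# pipe, mawk would otherwise hold the output in a block buffer.
printf 'one\ntwo\n' | awk '{print $0; fflush()}'
# prints one, then two, each on its own line
```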
I tried this command :
tail -f test | awk '{print $0;}'
And it doesn't hang. Awk will print the new values each time I add something to the test file:
echo "test" >> test
I think you just forgot a quote in your command, because you wrote (edit: well, before your post was edited):
tail -f textfile | awk {print $0}'
Instead of :
tail -f textfile | awk '{print $0}'
