Take output from AWK command and display line by line based on white space - linux

I am running the following command in a bash script:
echo `netstat -plten | grep -i autossh | awk '{print $4}'` >> /root/logs/autossh.txt
The output displays in a single line:
127.0.0.1:25001 127.0.0.1:15501 127.0.0.1:10001 127.0.0.1:20501 127.0.0.1:15001 127.0.0.1:5501 127.0.0.1:20001
I would like each IP to be displayed on its own line. What do I need to do with the awk command to make the output display line by line?

Just remove the echo and subshell:
netstat -plten | grep -i autossh | awk '{print $4}' >> /root/logs/autossh.txt
awk is already printing them one per line, but the unquoted command substitution splits that output into words, and echo prints its arguments separated by single spaces. Every line of awk output becomes a separate argument to echo, so you lose your line endings.
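A minimal demonstration of the effect, with printf standing in for the awk output:
printf 'one\ntwo\nthree\n'            # three separate lines
echo `printf 'one\ntwo\nthree\n'`     # word splitting collapses them into: one two three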
Of course, awk can do pattern matching too, so no real need for grep:
netstat -plten | awk '/autossh/ {print $4}' >> /root/logs/autossh.txt
with gawk at least you can have it ignore case too
netstat -plten | awk 'BEGIN {IGNORECASE=1} /autossh/ {print $4}' >> /root/logs/autossh.txt
or as Ed Morton pointed out, with any awk you could do
netstat -plten | awk 'tolower($0) ~ /autossh/ {print $4}' >> /root/logs/autossh.txt

Alternatively, you can quote the command substitution to prevent the shell from performing word splitting.
You can modify your command as follows to achieve what you want:
echo "`netstat -plten | grep -i autossh | awk '{print $4}'`" >> /root/logs/autossh.txt

Related

Sum out of grep -c

I am trying to find the number of times an event occurred in my log files.
Command:
grep -Eo "2016-08-30" applciationLog.log* -c
Output:
applicationLog.log.1:0
applicationLog.log.2:0
applicationLog.log.3:0
applicationLog.log.4:0
applicationLog.log.5:7684
applicationLog.log.6:9142
applicationLog.log.7:8699
applicationLog.log.8:0
What I actually need is the sum of all these values: 7684 + 9142 + 8699 = 25525. Any suggestion how I can do it? Is there anything I can append to the grep to achieve this?
Any help or pointers are welcome and appreciated.
If you want to keep your grep command, pipe its output to awk; the quick and dirty way is shown below:
grep -c "aaa" aaa.txt bbb.txt | awk 'BEGIN {cnt=0;FS=":"}; {cnt+=$2;}; END {print cnt;}'
Or use awk's regex matching directly:
awk 'BEGIN {cnt=0}; {if(/aaa/) {cnt+=1;}}; END {print cnt;}' aaa.txt bbb.txt
As an addition to the answer already given by ghoti:
You can avoid awk -F: by using grep -h:
grep -c -h -F "2016-08-30" applicationLog.log* | awk '{n+=$0} END {print n}'
This means grep prints only the counts, without filenames, so we can use the first field for the addition in awk.
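With the sample files from the question, the intermediate grep output would look something like this (one count per file, no filename prefix):
grep -c -h -F "2016-08-30" applicationLog.log*
0
0
0
0
7684
9142
8699
0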
See if this works for you:
grep -Eo "2016-08-30" applciationLog.log* -c | awk -F':' 'BEGIN {sum = 0;} {sum += $2;} END {print sum;}'
We use awk to split each line up with a delimeter of :, sum up the numbers for each line, and print the result at the end.
The grep command doesn't do arithmetic, it just finds lines that match regular expressions.
To count the output you already have, I'd use awk.
grep -c -F "2016-08-30" applciationLog.log* | awk -F: '{n+=$2} END {print n}'
Note that your grep options didn't make sense: -E tells the command to use extended regular expressions, but you're just looking for a fixed string (the date), so I swapped in the -F option instead. And -o tells grep to print the matched text, which you've overridden with -c, so I dropped it.
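A side note on that last point: -c counts matching lines, not matches. If the date could occur more than once on a line and every occurrence should be counted, a sketch like this would do it instead:
grep -o -F "2016-08-30" applicationLog.log* | wc -l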
An alternative using for-loop and arithmetic expansion could be:
x=0
for i in $(grep -hc "2016-08-30" applicationLog.log*); do
x=$((x+i))
done
echo "$x"
An easy alternative is to merge all the files before grep sees them:
cat applicationLog.log* | grep -c "2016-08-30"
In my directory I have hundreds of files; each file contains a lot of text along with lines similar to this:
Job_1-Run.log:[08/27/20 01:28:40] Total Jobs Cancelled for Job_1_set0 = 10
I do
grep '^Total Jobs Cancelled' ./*
to get the line above.
Then I do a pipe
| awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
so my final command is:
grep '^Total Jobs Cancelled' ./* | awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;};END {print cnt;}'
and the result is the sum, e.g.:
900
I am using Cmder (https://cmder.net/).
Thanks to the answers by @alagner and @john above.

Bash tries to execute commands in heredoc

I am trying to write a simple bash script that will print multiline output to another file. I am doing it with a heredoc:
#!/bin/sh
echo "Hello!"
cat <<EOF > ~/Desktop/what.txt
a=`echo $1 | awk -F. '{print $NF}'`
b=`echo $2 | tr '[:upper:]' '[:lower:]'`
EOF
I was expecting to see a file in my desktop with these contents:
a=`echo $1 | awk -F. '{print $NF}'`
b=`echo $2 | tr '[:upper:]' '[:lower:]'`
But instead, I am seeing these as the contents of my what.txt file:
a=
b=
Somehow, even though it is part of a heredoc, bash is trying to execute it line by line. How do I prevent this, and print the contents to the file as it is?
Quote the EOF delimiter so that bash takes the input literally:
cat <<'EOF' > what.txt
a=`echo $1 | awk -F. '{print $NF}'`
b=`echo $2 | tr '[:upper:]' '[:lower:]'`
EOF
Also, start using $(...) for command substitution instead of the old and problematic backticks.
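For example, the two assignments from the question written with $(...), for when you do want them executed:
a=$(echo "$1" | awk -F. '{print $NF}')
b=$(echo "$2" | tr '[:upper:]' '[:lower:]')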

cut or awk command to print first field of first row

I am trying to print the first field of the first row of some output. Here is the case: I need to print only SUSE from this output.
# cat /etc/*release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
I tried cat /etc/*release | awk '{print $1}' but that prints the first string of every row:
SUSE
VERSION
PATCHLEVEL
Specify NR if you want to capture output from selected rows:
awk 'NR==1{print $1}' /etc/*release
An alternative (ugly) way of achieving the same would be:
awk '{print $1; exit}'
An efficient way of getting the first string from a specific line, say line 42, in the output would be:
awk 'NR==42{print $1; exit}'
Specify the line number using the NR built-in variable.
awk 'NR==1{print $1}' /etc/*release
try this:
head -1 /etc/*release | awk '{print $1}'
df -h | head -4 | tail -1 | awk '{ print $2 }'
Change the numbers to tweak it to your liking.
Or use a while loop, but that's probably a bad way to do it.
You could use the head instead of cat:
head -n1 /etc/*release | awk '{print $1}'
sed -n 1p /etc/*release | cut -d " " -f1
if tab delimited:
sed -n 1p /etc/*release | cut -f1
Try
sed 'NUMq;d' /etc/*release | awk '{print $1}'
where NUM is the line number, e.g.:
sed '1q;d' /etc/*release | awk '{print $1}'
awk, sed, pipe, that's heavy
set `cat /etc/*release`; echo $1
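Note that set overwrites the shell's positional parameters; as a slightly safer sketch, adding -- keeps a value that begins with a dash from being taken as an option:
set -- $(cat /etc/*release); echo "$1"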
The most code-golfy way I could think of to print only the first line in awk:
awk '_{exit}--_'   # or, if you're feeling adventurous, skip the quotes: awk _{exit}--_
On row 1 the first block's test fails and exit is skipped, because "_" is undefined. Decrementing that same counter then makes it "TRUE" in awk's eyes (anything that is neither an empty string nor numeric zero is considered true in its agile boolean sense), and a true pattern with no action triggers the default action of printing, so row 1 is printed. Incrementing and decrementing are the same thing here; merely the direction and sign are inverted.
Then, at the start of row 2, the counter meets the criterion to enter the action block, which instructs awk to exit instantly, performing essentially the same function as
awk '{ print; exit }'
in a slightly less verbose manner. For a single-line print, it's not even worth setting FS to skip the field-splitting step.
Using that concept to print just the 1st row's 1st field:
awk '_{exit} NF=++_'
awk '_++{exit} NF=_'
awk 'NR==1&&NF=1' file
grep -om1 '^[^ ]\+' file
# multiple files
awk 'FNR==1&&NF=1' file1 file2

How to run grep inside awk?

Suppose I have a file input.txt with a few columns and a few rows, where the first column is the key, and a directory dir with files which contain some of these keys. I want to find all lines in the files in dir which contain these keywords. At first I tried to run the command
cat input.txt | awk '{print $1}' | xargs grep dir
This doesn't work because it thinks the keys are paths on my file system. Next I tried something like
cat input.txt | awk '{system("grep -rn dir $1")}'
But this didn't work either; eventually I had to admit that even this doesn't work:
cat input.txt | awk '{system("echo $1")}'
After trying to use \ to escape the whitespace and the $ sign, I came here to ask for your advice. Any ideas?
Of course I can do something like
for x in `cat input.txt` ; do grep -rn $x dir ; done
This is not good enough, because it takes two commands, but I want only one. It also shows why xargs doesn't work: the parameter is not the last argument.
You don't need grep with awk, and you don't need cat to open files:
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
Nor do you need xargs, or shell loops or anything else - just one simple awk command does it all.
If input.txt is not a file, then tweak the above to:
real_input_generating_command |
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' - dir/*
All it's doing is creating an array of keys from the first file (or input stream) and then looking for each key from that array in every file in the dir directory.
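The same one-liner laid out with comments, purely for readability:
awk '
    NR==FNR { keys[$1]; next }        # first file: store each key as an array index
    {                                 # remaining files:
        for (key in keys)             #   try each key as a regex against the line
            if ($0 ~ key) { print FILENAME, $0; next }
    }
' input.txt dir/*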
Try following
awk '{print $1}' input.txt | xargs -n 1 -I pattern grep -rn pattern dir
The first thing you should do is research this.
Next... you don't need grep inside awk. That's completely redundant. It's like... stuffing your turkey with... a turkey.
Awk can process input and do "grep"-like things itself, without the need to launch the grep command. But you don't even need to do this. Adapting your first example:
awk '{print $1}' input.txt | xargs -n 1 -I % grep % dir
This uses xargs' -I option to put xargs' input into a different place on the command line it runs. In FreeBSD or OSX, you would use a -J option instead.
But I prefer your for loop idea, converted into a while loop:
while read key junk; do grep -rn "$key" dir ; done < input.txt
Use process substitution to create a keyword "file" that you can pass to grep via the -f option:
grep -f <(awk '{print $1}' input.txt) dir/*
This will search each file in dir for lines containing keywords printed by the awk command. It's equivalent to
awk '{print $1}' input.txt > tmp.txt
grep -f tmp.txt dir/*
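One caveat: grep -f treats each line of the keyword file as a regular expression. If the keys should match literally, adding -F avoids surprises with regex metacharacters:
grep -F -f <(awk '{print $1}' input.txt) dir/*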
grep requires parameters in order: [what to search] [where to search]. You need to merge keys received from awk and pass them to grep using the \| regexp operator.
For example:
arturcz@szczaw:/tmp/s$ cat words.txt
foo
bar
fubar
foobaz
arturcz@szczaw:/tmp/s$ grep 'foo\|baz' words.txt
foo
foobaz
Finally, you will finish with:
grep `commands|to|prepare|a|keywords|list` directory
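A concrete sketch of that idea, joining the first column of input.txt into an alternation with paste (assuming the keys contain no regex metacharacters):
grep -Ern "$(awk '{print $1}' input.txt | paste -sd'|' -)" dir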
In case you still want to use grep inside awk, make sure $1, $2, etc. are outside the quotes.
E.g. this works perfectly:
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'
(note the space after grep and before the file name)
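As an aside, building the command in a variable makes the quoting easier to follow, and wrapping $1 in single quotes (\047 in an awk string) protects keys that contain shell metacharacters, assuming the keys themselves contain no single quotes:
awk '{ cmd = "grep \047" $1 "\047 file_to_be_greped"; system(cmd) }' file_having_query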

Is there any equivalent command to grep -nP "\t" some_file, using sed or awk?

I am trying to find the occurrences of tabs in a file some_file and print those lines with leading line numbers.
grep -nP "\t" some_file works well for me, but I want an equivalent sed or awk command for the same.
To emulate: grep -nP "\t" file.txt
Here's one way using GNU awk:
awk '/\t/ { print NR ":" $0 }' file.txt
Here's one way using GNU sed:
< file.txt sed -n '/\t/{ =;p }' | sed '{ N;s/\n/:/ }'
Well, you can always do it in sed:
cat -n test.txt | sed -n "/\t/p"
Unfortunately, sed can only print line numbers to stdout followed by a newline, so in any case more than one command is necessary. A more lengthy (unnecessarily so) version of the above, but one using only sed, would be:
sed = test.txt | sed -n "N;s/\n/ /;/\t/p"
but I like the one with cat more. CATS ARE NICE.
