Pipe to AWK: ignore lines containing "Unknown host" - linux

How can I tell AWK to ignore a line that contains "Unknown host"?
user1@ubuntu:~$ gethostip -d blabla | awk '{print $1;exit}'
blabla: Unknown host
user1@ubuntu:~$
Essentially, I want it to return nothing if "Unknown host" is contained in the line.

Use a condition before the block to test the contents of the line:
gethostip -d blabla | awk '!/Unknown host/ {print $1;exit}'
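You can sanity-check the pattern by feeding the offending line in by hand (printf stands in for the piped command here):
$ printf 'blabla: Unknown host\n' | awk '!/Unknown host/ {print $1;exit}'
$
Nothing is printed, which is the desired behaviour.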

The error message goes to standard error, not standard output. If you want to discard it, redirect it to nowhere:
gethostip -d blabla 2>/dev/null | awk '{print $1;exit}'
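For a name that does resolve, the address still comes through on standard output (assuming localhost resolves to 127.0.0.1 on your machine):
$ gethostip -d localhost 2>/dev/null | awk '{print $1;exit}'
127.0.0.1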
If you want to process it, you can redirect stderr to stdout so awk sees both:
gethostip -d blabla 2>&1 | awk '(/Unknown host/){print "Error";exit}{print $1}'
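To try this without gethostip installed, you can fake its two output streams (a stand-in, not the real command):
$ { echo 'blabla: Unknown host' >&2; } 2>&1 | awk '(/Unknown host/){print "Error";exit}{print $1}'
Error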

You should be able to simply do:
awk '$0 !~ /Unknown host/'
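Note that this prints whole lines (every line not matching), rather than just the first field; for the example input it prints nothing, which is what was asked for:
$ echo 'blabla: Unknown host' | awk '$0 !~ /Unknown host/'
$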

Related

"awk" is not processing shell variables as expected

I am working on filtering specific text from a log file. The problem is that awk is not processing a shell variable; it works fine on a filename.
I am storing each new log entry from the log file in a shell variable, using new_log=`tail -n5 alerts.log` in a loop whenever a new log arrives; then:
Level_no=`awk '{FS="Rule: "}{print $2}' "$new_log" | sed '/^$/d' | awk '{FS=" "}{print $3}' | sed 's/)//g'`
Output:
awk: fatal: cannot open file `** Alert 1564460779.1380: mail - ossec,syscheck
* New Log starts from ** Alert 1564460779.1380: mail - ossec,syscheck *`
The above command works well when I run it in the terminal using a filename instead of the shell variable, as follows:
awk '{FS="Rule: "}{print $2}' logs_mining | sed '/^$/d' | awk '{FS=" "}{print $3}' | sed 's/)//g'
But it's a performance issue if I store the new log entry in another file and process it from there.
So I researched further and came across awk variables; here is my shell script:
Level_no=`awk -v var="$new_log" '{FS="Rule: "}{print $2}' var | sed '/^$/d' | awk '{FS=" "}{print $3}' | sed 's/)//g'`
Then output says
awk: fatal: cannot open file `var' for reading (No such file or directory)
The expected result is successful execution of the awk script.
If new_log contains the data you want to process, not a filename, you need to pipe it to awk. You can do this with a here-string.
Level_no=`awk '{FS="Rule: "}{print $2}' <<<"$new_log" | sed '/^$/d' | awk '{FS=" "}{print $3}' | sed 's/)//g'`
It's also not necessary to pipe the output to sed and another awk; you can do it all in the first script:
Level_no=$(awk -F'Rule: ' '$2 != "" {split($2, a, " "); gsub(/\)/, "", a[3]); print a[3]}' <<<"$new_log")
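As a quick sanity check, assuming an OSSEC-style alert line of the shape the question hints at (the exact line format is an assumption):
$ new_log='Rule: 550 (level 7) -> "Integrity checksum changed."'
$ awk -F'Rule: ' '$2 != "" {split($2, a, " "); gsub(/\)/, "", a[3]); print a[3]}' <<<"$new_log"
7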
You probably don't need the variable at all, though; just pipe the output of the command straight to awk:
Level_no=$(tail -n5 alerts.log | awk -F'Rule: ' '$2 != "" {split($2, a, " "); gsub(/\)/, "", a[3]); print a[3]}')

Optimize Multiline Pipe to Awk in Bash Function

I have this function:
field_get() {
    while read data; do
        echo $data | awk -F ';' -v number=$1 '{print $number}'
    done
}
which can be used like this:
cat filename | field_get 1
in order to extract the first field from some piped-in input. This works, but it iterates over each line and is slower than expected.
Does anybody know how to avoid this iteration?
I tried to use:
stdin=$(cat)
echo $stdin | awk -F ';' -v number=$1 '{print $number}'
but the line breaks get lost and it treats all the stdin as a single line.
IMPORTANT: I need to pipe in the input because in general I am NOT just cat-ing a file. Assume the input is multiline; that is actually the problem. I know I can use "awk something filename", but that won't help me.
Just lose the while. Awk is a while loop in itself:
field_get() {
    awk -F ';' -v number=$1 '{print $number}'
}
$ echo 1\;2\;3 | field_get 2
2
Update:
Not sure what you mean by your comment on multiline pipe and file but:
$ cat foo
1;2
3;4
$ cat foo | field_get 1
1
3
Use either stdin or a file:
field_get() {
    awk -F ';' -v number="$1" '{print $number}' "${2:-/dev/stdin}"
}
Test Results:
$ field_get() {
    awk -F ';' -v number="$1" '{print $number}' "${2:-/dev/stdin}"
}
$ echo '1;2;3;4' >testfile
$ field_get 3 testfile
3
$ echo '1;2;3;4' | field_get 2
2
No need to use a while loop and then awk; awk itself can read the input file. Here $1 is the argument passed to your script.
cat script.ksh
awk -F ';' -v field="$1" '{print $field}' Input_file
./script.ksh 1
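Since the question's input arrives on a pipe, the /dev/stdin fallback from the earlier answer can be combined with this one (a minimal sketch, not the answer's original form):
cat script.ksh
awk -F ';' -v field="$1" '{print $field}' "${2:-/dev/stdin}"
$ echo '1;2;3' | ./script.ksh 2
2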
This is a job for the cut command:
cut -d';' -f1 somefile
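If you want the same ergonomics as the field_get function above, cut slots straight in (a minimal sketch; the field number is passed through unchanged):
field_get() {
    cut -d';' -f"$1" "${2:-/dev/stdin}"
}
$ echo '1;2;3' | field_get 2
2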

Using awk to modify output

I have a command that is giving me the output:
/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611
I need the output to be:
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml
The closest I got was:
$ echo /home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611 | awk '{ printf "%s", $1 }; END { printf "\n" }'
/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08
I'm not familiar with awk, but I believe this is the command I want to use. Anyone have any ideas?
Or just a sed one-liner:
echo /home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611 \
| sed -E 's/.*:(.*\.xml).*/\1/'
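Given the example input, this prints:
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml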
$ echo "/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611" |
cut -d: -f2 |
cut -d. -f1-2
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml
Note that this relies on the dot . being present as in counted-file.xml.
$ awk -F'[:.]' -v OFS="." '{print $2,$3}' <<< "/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611"
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml
not sure if this is ok for you:
sed 's/^.*:\(.*\)\.[^.]*$/\1/'
with your example:
kent$ echo "/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611"|sed 's/^.*:\(.*\)\.[^.]*$/\1/'
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml
this grep line works too:
grep -Po ':\K.*(?=\..*?$)'
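Applied to the example input:
$ echo "/home/konnor/md5sums:ea66574ff0daad6d0406f67e4571ee08 counted-file.xml.20131003-083611" | grep -Po ':\K.*(?=\..*?$)'
ea66574ff0daad6d0406f67e4571ee08 counted-file.xml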

Linux Command line Help | extract specific parts from file

Suppose I have a log which has data in the format given below
Time number status
2013-5-10 19:18:43.430 123456 success
2013-5-10 19:28:13.430 134324 fail
2013-5-10 19:58:33.430 456456 success
I want to extract the numbers having success status.
Is there any way in Linux, using the command line (grep, sed), to extract the data as mentioned?
Thanks all.
grep-only solution:
grep -Po '\d+(?= success)' file
or with awk only:
awk '$4=="success"&&$0=$3' input
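The trick: the assignment $0=$3 replaces the whole record with the third field, and the assigned value doubles as the pattern, so matching lines are printed as just the number. Against the sample log:
$ awk '$4=="success"&&$0=$3' input
123456
456456
(One caveat: a record whose third field is 0 or empty would be silently dropped, since the assignment would then be falsy.)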
This prints the numbers that have success status:
awk '$4 ~ /success/ {print $3}' logfile
You could do
(grep 'success' | cut -d ' ' -f 3) < "$file"
cat file | grep success | awk '{print $3}'
Using perl (on modern Perls split no longer populates @_ in scalar context, so take the field from the list split returns):
perl -ne '/success/ && print((split)[2], "\n")' inputfile

Linux awk command doesn't print integers correctly?

Can someone explain why this command doesn't print out a list of PID without the newline?
I want output like:
1234 5678 123 456
I tried all these, and none of them work
ps -eww --no-headers -o pid,args | grep 'usr' | awk '{ printf "%d ", $1 }'
ps -eww --no-headers -o pid,args | grep 'usr' | awk '{ printf "%s ", $1 }'
ps -eww --no-headers -o pid,args | grep 'usr' | awk '{ print $1 }' | tr '\n' ''
ps -eww --no-headers -o pid,args | grep 'usr' | awk '{ print $1 }' | tr -d '\n'
I just found out that it works fine in bash, but not in zsh, in my case.
zsh has a feature that lets the user know that the last output line was partial (i.e. there was no final newline). For more details you can look up PROMPT_CR, PROMPT_SP and PROMPT_EOL_MARK in man zshoptions.
You can add PROMPT_EOL_MARK='' to your ~/.zshrc to make the partial line indicator empty, but I would advise against it: now we know that it's just a feature, and sometimes we can notice a problem with our data if we leave it enabled. On a reasonably powerful terminal, the percent sign (the default when PROMPT_EOL_MARK is unset) is output bold and inverted, so it can't be confused with a piece of actual output.
Your command's output is a list of pids exactly as you desired. Adding a final newline makes it also look right with zsh:
ps -eww --no-headers -o pid,args | awk '/usr/ { printf "%d ", $1 } END {print""}'
(This also uses another answer's idea of dropping grep and letting awk do the matching.)
It works for me like this:
ps -eww --no-headers -o pid,args | awk '/usr/{printf "%d ",$1}'
I.e. awk can search for strings matching regular expressions, so you don't really need grep when using awk.
