Here is a simplified version of my problem.
if (echo "AA BB CC" | awk '{ print $1 $2 }' | grep -q "B"); then
echo $2
fi
I would like to make $2 available in bash, so I can use it elsewhere in the script.
Can that be done?
Update
I realized that I had simplified the problem too much. The awk expression should have been awk '{ print $1 $2 }' instead of just awk '{ print $2 }' which I originally posted.
You can use set:
set -- `echo "AA BB CC" | awk '{print $2}'`
case $1 in *B*) echo $1;; esac
... or if you used the awk just to split the output, let set do that part as well:
set -- `echo "AA BB CC"`
case $2 in *B*) echo $2;; esac
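One caveat: set -- overwrites the script's own positional parameters. If you still need those, copy the field into a named variable right away; a minimal sketch, assuming the same sample string:
set -- `echo "AA BB CC"`
second=$2                                  # keep a named copy before $@ is reused
case $second in *B*) echo "$second";; esac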
Store the output of awk in a variable, test it against the pattern, and print it:
output=$( echo "AA BB CC" | awk '{ print $2 }' )
if grep -q B <<< "$output" ; then echo "$output" ; fi
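For the updated question, the same idea works if awk prints the two fields with a space between them (so $2 can still be recovered afterwards); a rough sketch, assuming the sample string "AA BB CC":
fields=$( echo "AA BB CC" | awk '{ print $1, $2 }' )
second=${fields##* }                 # strip everything up to the last space, leaving $2
if grep -q B <<< "$fields" ; then echo "$second" ; fi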
You can capture stdout into a variable by using the backtick operator, e.g.
a=`echo foo`
echo $a
For your example, it would be something like:
a=`echo "AA BB CC" | awk '{ print $2 }' | grep -q "B"`
echo $a
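The same capture works with the more readable $( ) form, and quoting the expansion preserves any whitespace in the result; a minimal sketch with the same sample string:
a=$(echo "AA BB CC" | awk '{ print $2 }' | grep "B")
echo "$a"    # prints BB when the second field contains a B, empty otherwise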
Related
I'm monitoring a file that is actively being written to:
My current solution is:
ws_trans=0
sc_trans=0
tail -F /var/log/file.log | \
while read LINE
do
    echo $LINE | grep -q -e "enterpriseID:"
    if [ $? = 0 ]
    then
        ((ws_trans++))
    fi
    echo $LINE | grep -q -e "sc_ID:"
    if [ $? = 0 ]
    then
        ((sc_trans++))
    fi
    printf "\r WSTRANS: $ws_trans \t\t SCTRANS: $sc_trans"
done
However, when attempting to do this with AWK I don't get the output; $ws_trans and $sc_trans remain 0:
ws_trans=0
sc_trans=0
tail -F /var/log/file.log | \
while read LINE
do
    echo $LINE | awk '/enterpriseID:/ {++ws_trans} END {print | ws_trans}'
    echo $LINE | awk '/sc_ID:/ {++sc_trans} END {print | sc_trans}'
    printf "\r WSTRANS: $ws_trans \t\t SCTRANS: $sc_trans"
done
I'm attempting to do this to reduce load. I understand that AWK doesn't deal with bash variables directly, and it can get quite confusing, but the only reference I found was for a non-tail application of AWK.
How can I assign the AWK variables to the bash ws_trans and sc_trans? Is there a better solution? (There are other search terms being monitored.)
You need to pass the variables using the option -v, for example:
$ var=0
$ printf %d\\n {1..10} | awk -v awk_var=${var} '{++awk_var} {print awk_var}'
To set the variable "back" you could use declare, for example:
$ declare $(printf %d\\n {1..10} | awk -v awk_var=${var} '{++awk_var} END {print "var=" awk_var}')
$ echo $var
10
Your script could be rewritten like this:
ws_trans=0
sc_trans=0
tail -F /var/log/system.log |
while read LINE
do
    declare $(echo $LINE | awk -v ws=${ws_trans} '/enterpriseID:/ {++ws} END {print "ws_trans="ws}')
    declare $(echo $LINE | awk -v sc=${sc_trans} '/sc_ID:/ {++sc} END {print "sc_trans="sc}')
    printf "\r WSTRANS: $ws_trans \t\t SCTRANS: $sc_trans"
done
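Since the goal is to reduce load, an alternative worth considering is to let a single long-running awk process do all the counting, so bash never has to re-launch awk or grep per line. This is only a sketch under the same assumptions (same log file and search strings), not a drop-in replacement for the declare approach above:
tail -F /var/log/file.log | awk '
    /enterpriseID:/ { ++ws_trans }                      # awk variables here, not bash ones
    /sc_ID:/        { ++sc_trans }
    { printf "\r WSTRANS: %d \t\t SCTRANS: %d", ws_trans, sc_trans }
'
The counters live inside awk in this version; if bash needs them afterwards, they would still have to be printed and captured as shown above.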
I am facing an issue when I read the 3rd word (a hex string) of each line in a text file and compare it with a hex number. Can someone please help me with it?
#!/bin/bash
A=$1
cat $A | while read a; do
    a1=$(echo \""$a"\" | awk '{ print $3 }')
    #echo $a > cut -d " " -f 3
    echo $a1
    (("$a1" == 0x10F7))
    echo $?
done
But when I use the below, the comparison happens correctly:
a1=0xADCAFE
(( "$a1" == 0x10F7 ))
echo $?
Then why is it showing an issue when I read it like below?
a1=$(echo \""$a"\" | awk '{ print $3 }')
or
a1=$(echo $a | awk '{ print $3 }')
echo $a prints the intended hex value, but the comparison does not happen.
Regards,
Running Awk inside a while read loop is an antipattern. Just do the loop in Awk; it's good at that.
awk '$3 == 4343' "$1"
If you want to compare against a string whose value is "0x10F7" then it's
awk '$3 == "0x10F7"' "$1"
If you want to match either, case insensitively etc, a regex is a good way to do that.
awk '$3 ~ /^(0x10[Ff]7|4343)$/' "$1"
Notice how the $1 in double quotes is handled by the shell, and gets replaced by a (properly quoted!) copy of the script's first command-line argument before Awk runs, while the Awk script in single quotes has its own namespace, so $3 is an Awk variable which refers to the third field in the current input line.
Either way, avoid the useless use of cat and always always always quote variables which contain file names with double quotes.
That's literal double quotes. You seem to have tried both a dangerous bare $a and a doubly double-quoted "\"$a\"" where the simple "$a" would be what you actually want.
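If the third field should be compared numerically, so that different spellings of the same hex value (0x10F7, 0X10f7, ...) all match, GNU awk's strtonum() can do the conversion; this is a gawk-specific sketch, not portable to every awk:
gawk 'strtonum($3) == 4343' "$1"     # 4343 is the decimal value of 0x10F7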
Thank you all for your responses. Now my script is working fine. I was trying to match two files; the script below does the job:
#!/bin/bash
A=$1
B=$2
dos2unix -f "$A"
dos2unix -f "$B"
rm search_match.txt search_data_match.txt search_nomatch.txt search_data_nomatch.txt
while read line; do
    search_word=$(echo $line | awk '{ print $1 }')
    grep "$search_word" $B >> temp_file.txt
    while read var; do
        file1_hex=$(echo $line | awk '{ print $2 }')
        file2_hex=$(echo $var | awk '{ print $3 }')
        (("$file1_hex" == "$file2_hex"))
        zero=$(echo $?)
        if [ "$zero" -eq 0 ] ; then
            echo $line >> search_match.txt
            echo $var >> search_data_match.txt
        else
            echo $line >> search_nomatch.txt
            echo $var >> search_data_nomatch.txt
        fi
    done < "temp_file.txt"
    rm temp_file.txt
done < "$A"
I hope you can help me.
I'm trying to separate a string:
#!/bin/bash
file=$(<sample.txt)
echo "$file"
The file itself contains values like this:
(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]
What I need is a way to extract the values between the [ ] and set them as variables, for example:
$FF=4
$GM=1
$SZ=19
and so on
However, some files do not contain all the values, so in some cases there is no FF[*]. In that case the program should use the value 99.
How can I do this?
Thank you so much for your help.
Greetings
Chris
It may be a bit overcomplicated, but here comes another way:
grep -Po '[-a-zA-Z0-9]*' file | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}' | bash
Step by step:
Filter the file by printing only the needed blocks:
$ grep -Po '[-a-zA-Z0-9]*' a
FF
4
GM
1
SZ
19
CA
UTF-8
SO
sometext
BC
cn
WC
ja
Reformat so that it specifies the declaration:
$ grep -Po '[-a-zA-Z0-9]*' a | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a,$0; next} {a=$0}'
declare FF="4";
declare GM="1";
declare SZ="19";
declare CA="UTF-8";
declare SO="sometext";
declare BC="cn";
declare WC="ja";
And finally pipe to bash so that it is executed.
Note the 2nd step could also be rewritten as
xargs -n2 | awk '{print "declare "$1"=\""$2"\";"}'
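One caveat: piping the generated declare statements to bash executes them in a child process, so the variables will not exist in the calling shell. To have them land in the current shell, the generated code can be sourced instead; a minimal sketch of the same pipeline:
source <(grep -Po '[-a-zA-Z0-9]*' file | awk '!(NR%2) {printf "declare %s=\"%s\";\n", a, $0; next} {a=$0}')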
I'd write this, using ;, [ or ] as awk's field separators:
$ line='(;FF[4]GM[1]SZ[19]CA[UTF-8]SO[sometext]BC[cn]WC[ja]'
$ awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line"
FF="4" GM="1" SZ="19" CA="UTF-8" SO="sometext" BC="cn" WC="ja"
Then, to evaluate the output in your current shell:
$ source <(!!)
source <(awk -F '[][;]' '{for (i=2; i<NF; i+=2) {printf "%s=\"%s\" ", $i, $(i+1)}; print ""}' <<<"$line")
$ echo $SO
sometext
To handle the default FF value:
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]")
$ echo $FF
99
$ source <(awk -F '[][;]' '{
print "FF=99"
for (i=2; i<NF; i+=2) printf "%s=\"%s\" ", $i, $(i+1)
print ""
}' <<< "(;A[1]B[2]FF[3]")
$ echo $FF
3
Per your request:
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
eval "$A=\$B"
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
Although using an associative array would be better:
declare -A VALUES
while IFS=\[ read -r A B; do
[[ -z $B ]] && B=99
VALUES[$A]=$B
done < <(exec grep -oE '[[:alpha:]]+\[[^]]*' sample.txt)
That way you can access both the keys ("${!VALUES[@]}") and the values ("${VALUES['FF']}").
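The FF-defaults-to-99 requirement then becomes a one-liner with the ${var:-default} expansion; a small usage sketch:
FF=${VALUES[FF]:-99}
echo "FF is $FF"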
I would probably do something like this:
set $(sed -e 's/^(;//' sample.txt | tr '[][]' ' ')
while (( $# >= 2 ))
do
    varname=${1}
    varvalue=${2}
    # do something to test varname and varvalue to make sure they're sane/safe
    declare "${varname}=${varvalue}"
    shift 2
done
If I have a string like "sn":"1$$$$12056597.3,2595585.69$$", how can I use awk to split on "1$$$$"?
I tried
cat $filename | awk -F "\"1\$\$\$\$" '{ print $2 }'
cat $filename | awk -F "\"1$$$$" '{ print $2 }'
but both failed.
For any number of $, use
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }'
For exactly 4, use
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]{4}' '{ print $2 }'
To help debug problems with escape characters in the shell, you can use the built-in shell command set (as set -x below), which will print the arguments that are being passed to awk after the shell has interpreted any escape characters and expanded shell variables.
In this case the shell first interprets \$ as an escape for a plain $:
set -x
echo '"1$$$$12056597.3,2595585.69$$"'|awk -F "\"1\$\$\$\$" '{ print $2 }'
+ echo '"1$$$$12056597.3,2595585.69$$"'
+ awk -F '"1$$$$' '{ print $2 }'
You can use \\$ so that \$ gets through to awk, but \$ is interpreted in awk regular expressions as a plain $ anyway. At least awk is nice enough to warn you:
echo '"1$$$$12056597.3,2595585.69$$"'|awk -F "\"1\\$\\$\\$\\$" '{ print $2 }'
+ echo '"1$$$$12056597.3,2595585.69$$"'
+ awk -F '"1\$\$\$\$' '{ print $2 }'
awk: warning: escape sequence `\$' treated as plain `$'
Turn off debugging with
set +x
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//'
Or, if you want to split out each of the two floating-point values:
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//' |awk 'BEGIN {FS=","};{print $1}'
And
echo '"1$$$$12056597.3,2595585.69$$"' | awk -F '"1[$]+' '{ print $2 }' |sed 's/.\{3\}$//' |awk 'BEGIN {FS=","};{print $2}'
I am trying to search for a pattern and, from the results, extract just the second column. The command works well on the command line but not inside a bash script.
#!/bin/bash
set a = grep 'NM_033356' test.txt | awk '{ print $2 }'
echo $a
It doesn't print any output at all.
Input
NM_033356 2
NM_033356 5
NM_033356 7
Your code:
#!/bin/bash
set a = grep 'NM_033356' test.txt | awk '{ print $2 }'
echo $a
Change it to:
#!/bin/bash
a="$(awk '$1=="NM_033356"{ print $2 }' test.txt)"
echo "$a"
Code changes are based on your sample input.
Or, matching the pattern anywhere on the line:
a="$(awk '/NM_033356/ { print $2 }' test.txt)"
Try this:
a=`grep 'NM_033356' test.txt | awk '{ print $2 }'`
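Equivalently, with the $( ) form and a quoted expansion so the three matching values stay on separate lines; a sketch assuming the same test.txt:
a=$(grep 'NM_033356' test.txt | awk '{ print $2 }')
echo "$a"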