Parsing Command Output in Bash Script - linux

I want to run a command that gives the following output and parse it:
[VDB VIEW]
[VDB] vhctest
[BACKEND] domain.computername: ENABLED:RW:CONSISTENT
[BACKEND] domain.computername: ENABLED:RW:CONSISTENT
...
I'm only interested in some key words, such as 'ENABLED' etc. I can't just search the whole output for ENABLED, as I need to parse it one line at a time.
This is my first script; can anyone help me?
EDIT:
I now have:
cmdout=`mycommand`
while read -r line
do
#check for key words in $line
done < $cmdout
I thought this did what I wanted, but it always seems to output the following right before the command output.
./myscript.sh: 29: cannot open ... : No such file
I don't want to write to a file to have to achieve this.
Here is the pseudocode:
cmdout=`mycommand`
loop each line in $cmdout
    if line contains $1
        if line contains $2
            output 1
        else
            output 0

The reason for the error is that
done < $cmdout
treats the contents of $cmdout as a filename.
You can either do:
done <<< $cmdout
or
done <<EOF
$cmdout
EOF
or
done < <(mycommand) # without using the variable at all
or
done <<< $(mycommand)
or
done <<EOF
$(mycommand)
EOF
or
mycommand | while
...
done
However, the last one creates a subshell and any variables set in the loop will be lost when the loop exits.
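A quick way to see the difference (a sketch; printf stands in for mycommand):
count=0
printf 'a\nb\n' | while read -r line; do
    count=$((count + 1))      # runs in a subshell created by the pipe
done
echo "$count"                 # prints 0: the subshell's count is gone

count=0
while read -r line; do
    count=$((count + 1))      # runs in the current shell
done < <(printf 'a\nb\n')     # process substitution: no subshell for the loop
echo "$count"                 # prints 2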

"How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?"
"I set variables in a loop. Why do they suddenly disappear after the loop terminates? Or, why can't I pipe data to read?"

$ cat test.sh
#!/bin/bash
while read -r line ; do
    if [ "$(echo "$line" | grep -c "$1")" != 0 ]; then
        if [ "$(echo "$line" | grep -c "$2")" != 0 ]; then
            echo "output 1"
        else
            echo "output 0"
        fi
    fi
done
USAGE
$ cat in.txt | ./test.sh ENABLED RW
output 1
output 1
This isn't the best solution, but it's a word-by-word translation of what you want. It should give you something to start with, and you can add your own logic.

Related

How do I use for to loop over potentially-empty lines of output from egrep?

I'm trying to print out the blank lines in a text file, but I also want it to print out a count so I can see how many blank lines the egrep returned. I'm using:
for x in $(egrep '$^' txtfile); do echo '$x'; done
but this doesn't echo or return anything. Is there any way to find out how many blank lines the egrep command returned?
for is the wrong tool for this job; the right one (if you don't want to use grep -c but really do want to read each line of output) is a while read loop, as discussed in BashFAQ #1:
#!/usr/bin/env bash
# ^^^^- important: bash, not sh
count=0
while IFS= read -r x; do
echo "Read a line: <$x>" >&2
(( ++count ))
done < <(egrep '^$' txtfile)
echo "Read $count lines" >&2

Calling a function that decodes in base64 in bash

#!/bin/bash
# if there are no args supplied exit with 1
if [ "$#" -eq 0 ]; then
    echo "Unfortunately you have not passed any parameter"
    exit 1
fi
# loop over each argument
for arg in "$@"
do
    if [ -f arg ]; then
        echo "$arg is a file."
        # iterates over the files stated in arguments and reads them
        cat $arg | while read line;
        do
            # should access only first line of the file
            if [ head -n 1 "$arg" ]; then
                process line
                echo "Script has ran successfully!"
                exit 0
            # should access only last line of the file
            elif [ tail -n 1 "$arg" ]; then
                process line
                echo "Script has ran successfully!"
                exit 0
            # if it accesses any other line of the file
            else
                echo "We only process the first and the last line of the file."
            fi
        done
    else
        exit 2
    fi
done
# function to process the passed string and decode it in base64
process() {
    string_to_decode = "$1"
    echo "$string_to_decode = " | base64 --decode
}
Basically what I want this script to do is to loop over the arguments passed to the script, and if an argument is a file, call the function that decodes in base64, but only on the first and the last line of that file. Unfortunately, when I run it, even with a valid file, it does nothing. I think it might be encountering problems with the if [ head -n 1 "$arg" ]; then part of the code. Any ideas?
EDIT: So I understood that I am actually just extracting the first line over and over again, without really comparing it to anything. So I tried changing the if conditional of the code to this:
first_line = $(head -n 1 "$arg")
last_line = $(tail -n 1 "$arg")
if [ first_line == line ]; then
    process line
    echo "Script has ran successfully!"
    exit 0
# should access only last line of the file
elif [ last_line == line ]; then
    process line
    echo "Script has ran successfully!"
    exit 0
My goal is to iterate through files; for example, one looks like this:
MTAxLmdvdi51awo=
MTBkb3duaW5nc3RyZWV0Lmdvdi51awo=
MXZhbGUuZ292LnVrCg==
And to decode the first and the last line of each file.
To decode the first and last line of each file given to your script, use this:
#! /bin/bash
for file in "$#"; do
[ -f "$file" ] || exit 2
head -n1 "$file" | base64 --decode
tail -n2 "$file" | base64 --decode
done
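For example, with your sample lines saved in a file (the script and file names here are illustrative), it would print the decoded first and last lines:
$ ./decode.sh codes.txt
101.gov.uk
1vale.gov.uk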
Yeah, as the others already said, the true goal of the script isn't really clear. That said, I imagine every variation of what you may have wanted to do would be covered by something like:
#!/bin/bash
process() {
    encoded="$1";
    decoded="$( echo "${encoded}" | base64 --decode )";
    echo " Value ${encoded} was decoded into ${decoded}";
}

(( $# )) || {
    echo "Unfortunately you have not passed any parameter";
    exit 1;
};

while (( $# )) ; do
    arg="$1"; shift;
    if [[ -f "${arg}" ]] ; then
        echo "${arg} is a file.";
    else
        exit 2;
    fi;
    content_of_first_line="$( head -n 1 "${arg}" )";
    echo "Content of first line: ${content_of_first_line}";
    process "${content_of_first_line}";
    content_of_last_line="$( tail -n 1 "${arg}" )";
    echo "Content of last line: ${content_of_last_line}";
    process "${content_of_last_line}";
    line=""; linenumber=0;
    while IFS="" read -r line; do
        (( linenumber++ ));
        echo "Iterating over all lines. Line ${linenumber}: ${line}";
        process "${line}";
    done < "${arg}";
done;
some additions you may find useful:
If the script is invoked with multiple filenames, let's say 4 different filenames, and the second file does not exist (but the others do),
do you really want the script to process the first file, then notice that the second file doesn't exist, and exit at that point, without processing the (potentially valid) third and fourth files?
replacing the line:
exit 2;
with
continue;
would make it skip any invalid filenames, and still process valid ones that come after.
Also, within your process function, directly after the line:
decoded="$( echo "${encoded}" | base64 --decode )";
you could check whether the decoding was successful before echoing whatever the resulting garbage may be if the line wasn't valid base64.
if [[ "$?" -eq 0 ]] ; then
echo " Value ${encoded} was decoded into ${decoded}";
else
echo " Garbage.";
fi;
--
To answer your followup question about the IFS/read-construct, it is a mixture of a few components:
read -r line
reads a single line from the input (-r tells it not to do any funky backslash escaping magic).
while ... ; do ... done ;
This while loop surrounds the read statement, so that we keep repeating the process of reading one line, until we run out.
< "${arg}";
This feeds the content of filename $arg into the entire block of code as input (so this becomes the source that the read statement reads from)
IFS=""
This tells the read statement to use an empty value instead of the real built-in IFS value (the internal field separator). It's generally a good idea to do this for every read statement, unless you have a use case that requires splitting the line into multiple fields.
If instead of
IFS="" read -r line
you were to use
IFS=":" read -r username _ uid gid _ homedir shell
and read from /etc/passwd which has lines such as:
root:x:0:0:root:/root:/bin/bash
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
then that IFS value would allow it to load those values into the right variables (in other words, it would split on ":")
The default value for IFS is inherited from your shell, and it consists of the space, the TAB character, and the newline. When you only read into one single variable ($line, in your case), field splitting doesn't really come into play (though leading and trailing IFS whitespace is still trimmed); but if you ever change the read statement and add another variable, word splitting starts taking effect, and the lack of a local IFS= value will make the exact same script behave very differently in different situations. As such it tends to be a good habit to control it at all times.
The same goes for quoting your variables, like "$arg" or "${arg}" instead of $arg. It doesn't matter when ARG="hello"; but once the value starts containing spaces, suddenly all sorts of things can behave differently; surprises are never a good thing.
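As a runnable sketch of the field-splitting variant (assuming a standard /etc/passwd):
#!/bin/bash
# Split each /etc/passwd line on ":" and print a few of the fields.
while IFS=":" read -r username _ uid gid _ homedir shell; do
    echo "${username} (uid ${uid}): home=${homedir}, shell=${shell}"
done < /etc/passwd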

Shell script that filters command output and saves it in Json formated list

Never worked with shell scripts before, but I need to in my current task.
So i have to run a command that returns output like this:
awd54a7w6ds54awd47awd refs/heads/SomeInfo1
awdafawe23413f13a3r3r refs/heads/SomeInfo2
a8wd5a8w5da78d6asawd7 refs/heads/SomeInfo3
g9reh9wrg69egs7ef987e refs/heads/SomeInfo4
And I need to loop over every line of output, get only the "SomeInfo" part, and write it to a file in a format like this:
["SomeInfo1","SomeInfo2","SomeInfo3"]
I've tried things like this:
for i in $(some command); do
echo $i | cut -f2 -d"heads/" >> text.txt
done
But I don't know how to format it into an array without using a temporary file.
Sorry if the question is dumb and probably too easy; I'm sure I could figure it out on my own, but I just don't have the time for it, because it's just an extra convenience feature that I personally want to implement.
Try this
#!/bin/bash
# json_encoder.sh
arr=()
while read -r line; do
    arr+=("\"$(basename "$line")\"")
done
printf "[%s]" "$(IFS=,; echo "${arr[*]}")"
And then invoke
./your_command | ./json_encoder.sh
PS. I personally do this kind of data massaging with Vim.
Using Perl one-liner
$ cat petar.txt
awd54a7w6ds54awd47awd refs/heads/SomeInfo1
awdafawe23413f13a3r3r refs/heads/SomeInfo2
a8wd5a8w5da78d6asawd7 refs/heads/SomeInfo3
g9reh9wrg69egs7ef987e refs/heads/SomeInfo4
$ perl -ne ' { /.*\/(.*)/ and push(@res,"\"$1\"") } END { print "[".join(",",@res)."]\n" }' petar.txt
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]
While you should rarely ever use a script to format json, in your case you are simply parsing output into a comma-separated line with added end-caps of [...]. You can use bash parameter expansion to avoid spawning any additional subshells to obtain the last field of information in each line as follows:
#!/bin/bash
[ -z "$1" -o ! -r "$1" ] && { ## validate file given as argument
printf "error: file doesn't exist or not readable.\n" >&2
exit 1
}
c=0 ## simple flag variable
while read -r line; do ## read each line
if [ "$c" -eq '0' ]; then ## is flag 0?
printf "[\"%s\"" "${line##*/}" ## output ["last"
else ## otherwise
printf ",\"%s\"" "${line##*/}" ## output ,"last"
fi
c=1 ## set flag 1
done < file ## redirect file to loop
echo "]" ## append closing ]
Example Use/Output
Using your given data as the input file, you would get the following:
$ bash script.sh file
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]
Look things over and let me know if you have any questions.
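If the ${line##*/} expansion is unfamiliar: it removes the longest prefix matching the pattern */, i.e. everything up to and including the last slash. A quick illustration with one of your sample lines:
line="awd54a7w6ds54awd47awd refs/heads/SomeInfo1"
echo "${line##*/}"    # prints: SomeInfo1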
You can also use awk without any loops I guess:
cat prev_output | awk -v ORS=',' -F'/' '{print "\042"$3"\042"}' | \
sed 's/^/[/g ; s/,$/]\n/g' > new_output
cat new_output
["SomeInfo1","SomeInfo2","SomeInfo3","SomeInfo4"]

Why is this Bash variable empty? [duplicate]

I have a Bash script where I want to count how many things were done when looping through a file. The count seems to work within the loop, but after it the variable seems to be reset.
nKeys=0
cat afile | while read -r line
do
    #...do stuff
    let nKeys=nKeys+1
    # this will print 1,2,..., etc as expected
    echo Done entry $nKeys
done
# PROBLEM: this always prints "... 0 keys"
echo Finished writing $destFile, $nKeys keys
The output of the above is something along the lines of:
Done entry 1
Done entry 2
Finished writing /blah, 0 keys
The output I want is:
Done entry 1
Done entry 2
Finished writing /blah, 2 keys
I am not quite sure why nKeys is 0 after the loop :( I assume it's something basic but damned if I can spot it despite looking at http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html and other resources.
Fingers crossed someone else can look at it and go "well duh! You have to ..."!
In the just-released Bash 4.2, you can do this to prevent creating a subshell:
shopt -s lastpipe
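A minimal sketch of the counting loop with lastpipe (it only takes effect when job control is off, which is the default in a non-interactive script):
#!/bin/bash
shopt -s lastpipe   # bash >= 4.2

nKeys=0
cat afile | while read -r line; do
    let nKeys=nKeys+1
done                # the loop ran in the current shell, not a subshell
echo "$nKeys keys"  # prints the real count, not 0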
Also, as you'll probably see at the link Ignacio provided, you have a Useless Use of cat.
while read -r line
do
    ...
done < afile
As mentioned in the accepted answer, this happens because pipes spawn separate subprocesses. To avoid this, command grouping has been the best option for me. That is, doing everything after the pipe in a subshell.
nKeys=0
cat afile |
{
    while read -r line
    do
        #...do stuff
        let nKeys=nKeys+1
        # this will print 1,2,..., etc as expected
        echo Done entry $nKeys
    done
    # inside the group, $nKeys still holds the loop's count
    echo Finished writing $destFile, $nKeys keys
}
Now it will report the value of $nKeys "correctly" (i.e. what you wish).
I arrived at the desired result in the following way, without using pipes or here documents:
#!/bin/bash
counter=0
string="apple orange mango egg indian"
str_len=${#string}
while [ $str_len -ne 0 ]
do
    c=${string:0:1}             # first character of the remaining string
    if [[ "$c" = [aeiou] ]]
    then
        echo -n "vowel : "
        echo "- $c"
        counter=$(( counter + 1 ))
    fi
    string=${string:1}          # drop the first character
    str_len=${#string}
done
printf "The number of vowels in the given string is: %s\n" "$counter"

Linux script to search for string in a file

I am a newbie to shell scripting. I have a requirement to read a file line by line and match each line against a specific string. If it matches, print X, and if it doesn't match, print Y.
Here is what I am trying, but I am getting unexpected results: 700 lines of output where my /tmp/l1.txt has only 10 lines. Somewhere, I am going wrong in the loop. I appreciate your help.
for line in `cat /tmp/l3.txt`
do
    if echo $line | grep "abc.log" ; then
        echo "X" >>/tmp/l4.txt
    else
        echo "Y" >>/tmp/l4.txt
    fi
done
I don't understand the urge to do looping ...
awk '{if($0 ~ /abc\.log/){print "x"}else{print "y"}}' /tmp/l3.txt > /tmp/l4.txt
EDIT after inquiry ...
Of course, your spec wasn't overly precise, and I'm jumping to conclusions regarding your lines' format ... we basically take the whole line that matched abc.log, replace everything up to the directory abc and from /logs to the end of line with nothing, which leaves us with clusterX/xyz.
awk '{if($0 ~ /abc\.log/){print gensub(/.+\/abc\/(.+)\/logs/, "\\1", 1)}else{print "y"}}' /tmp/l3.txt > /tmp/l4.txt
cat /tmp/l3.txt | while read line           # read the entire line into the variable "line"
do
    if [ -n "$(echo "$line" | grep "abc.log")" ]   # if grep produced output ("-n": non-empty)
    then
        echo "X" >> /tmp/l4.txt             # echo "X" into l4.txt
    else
        echo "Y" >> /tmp/l4.txt             # if empty, echo "Y" into l4.txt
    fi
done
The while read statement will read the entire line if only one variable is given (in this case "line"); if you have a fixed number of fields, you can specify a variable for each field, i.e. "| while read field1 field2" etc. The -n test checks whether there is a (non-empty) value; -z tests whether it's empty.
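A quick illustration of the two tests (illustrative values):
v="hello"
[ -n "$v" ] && echo "non-empty"   # prints: non-empty
[ -z "" ] && echo "empty"         # prints: empty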
Why worry about cat and the rest before grep? You can simply test the return of grep and append all matching lines to /tmp/l4.txt, or append "Y":
[ -f "/tmpfile.tmp" ] && :> /tmpfile.tmp # test for existing tmpfile & truncate
if grep "abc.log" /tmp/13.txt >>tmpfile.tmp ; then # write all matching lines to tmpfile
cat tmpfile.tmp /tmp/14.txt # if grep matched append to /tmp/14.txt
else
echo "Y" >> /tmp/14.txt # write "Y" to /tmp/14.txt
fi
rm tmpfile.tmp # cleanup
Note: if you don't want the matching lines themselves appended to /tmp/l4.txt, then just replace cat tmpfile.tmp >> /tmp/l4.txt with echo "X" >> /tmp/l4.txt, and you can remove the 1st and last lines.
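With that simplification the temp file disappears entirely; a minimal sketch:
if grep -q "abc.log" /tmp/l3.txt ; then   # -q: only the exit status matters
    echo "X" >> /tmp/l4.txt
else
    echo "Y" >> /tmp/l4.txt
fi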
I think the "awk" answer above is better. However, if you really need to interact using a bash loop, you can use:
PATTERN="abc.log"
OUTPUTFILE=/tmp/l4.txt
INPUTFILE=/tmp/l3.txt

while read -r line
do
    grep -q "$PATTERN" <<< "$line" && echo X || echo Y
done < "$INPUTFILE" >> "$OUTPUTFILE"
