for loop in bash to get more than 1 variable to use in one command [closed] - linux

I have one text file and I need to get two variables from the same text and put them into one command, like:
for i in `cat TEXT | grep -i UID | awk '{print($2)}'` &&
x in `cat TEXT | grep -i LOGICAL | awk '{print($4)}'`
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn $x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048" >> OUTPUT
done
Is there any possible way to accomplish that? I'm a storage admin and need to run tons of commands, so I need this script to do it.

You could make use of file descriptors. Moreover, your cat | grep | awk pipeline could be combined into a single awk command (note that IGNORECASE is specific to GNU awk):
exec 5< <(awk 'BEGIN{IGNORECASE=1} /UID/ {print $2}' TEXT)
exec 6< <(awk 'BEGIN{IGNORECASE=1} /LOGICAL/ {print $4}' TEXT)
while read -r i <&5 && read -r x <&6
do
echo command $i $x # Do something with i and x here!
done
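In the loop body you can substitute $i and $x straight into the naviseccli command from the question, and once the loop is finished the two descriptors can be closed again:
exec 5<&- 6<&- # close file descriptors 5 and 6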

Perform the cat operations first and save the result into two arrays. Then you can iterate with an index over one array and use the same index to also access the other one.
See http://tldp.org/LDP/abs/html/arrays.html about arrays in bash. Especially see the section "Example 27-5. Loading the contents of a script into an array".
With that resource you should be able to both populate your arrays and then also process them.
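For instance, a minimal sketch using bash 4's mapfile (the array names uids and wwns are just illustrative):
mapfile -t uids < <(grep -i UID TEXT | awk '{print $2}')
mapfile -t wwns < <(grep -i LOGICAL TEXT | awk '{print $4}')
for idx in "${!uids[@]}"; do
echo command "${uids[idx]}" "${wwns[idx]}" # the same index picks the matching pair
done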

Since your words don't have spaces in them (by virtue of the use of awk), you could use:
paste <(grep -i UID TEXT | awk '{print($2)}') \
<(grep -i LOGICAL TEXT | awk '{print($4)}') |
while read i x
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn" \
"$x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048" >> OUTPUT
done
This uses Process Substitution to give paste two files which it pieces together. Each line will have two fields on it, which are read into i and x for use in the body of the loop.
Note that there is no need to use cat; you were eligible for a UUOC (Useless Use of cat) award.

Imma take a wild guess that UID and LOGICAL must be on the same line in your incoming TEXT, in which case this might actually make some sense and work:
cat TEXT | awk '/LOGICAL/ && /UID/ { print $2, $4 }' | while read i x
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn" \
"$x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048"
done

Related

Bash - Piping output of command into while loop

I'm writing a Bash script where I need to look through the output of a command and do certain actions based on that output. For clarity, this command will output a few million lines of text and it may take roughly an hour or so to do so.
Currently, I'm executing the command and piping it into a while loop that reads a line at a time, then looks for certain criteria. If that criterion exists, I update a .dat file and reprint the screen. Below is a snippet of the script.
eval "$command"| while read line ; do
if grep -Fq "Specific :: Criterion"; then
#pull the sixth word from the line which will have the data I need
temp=$(echo "$line" | awk '{ printf $6 }')
#sanity check the data
echo "\$line = $line"
echo "\$temp = $temp"
#then push $temp through a case statement that does what I need it to do.
fi
done
So here's the problem, the sanity check on the data is showing weird results. It is printing lines that don't contain the grep criteria.
To make sure that my grep statement is working properly, I grep the log file that contains a record of the text that is output by the command and it outputs only the lines that contain the specified criteria.
I'm still fairly new to Bash so I'm not sure what's going on. Could it be that the command is force feeding the while loop a new $line before it can process the $line that met the grep criteria?
Any ideas would be much appreciated!
How does grep know what $line looks like? As written, grep reads from the loop's stdin itself and silently consumes the lines that read was supposed to get. Pipe the line to it explicitly:
if ( printf '%s\n' "$line" | grep -Fq "Specific :: Criterion"); then
But I can't help feeling that you are overcomplicating this a lot.
function process() {
echo "I can do anything I want"
echo " per element $1"
echo " that I want here"
}
export -f process
$command | grep -F "Specific :: Criterion" | awk '{print $6}' | xargs -I % -n 1 bash -c "process %";
Run the command, filter only the matching lines, and pull out the sixth field. Then, if you need to run arbitrary code on it, send it to a function via xargs (the export -f makes the function visible in subprocesses).
What are you applying the grep on?
Modify
if grep -Fq "Specific :: Criterion"; then
as below
if ( echo "$line" | grep -Fq "Specific :: Criterion" ); then

How can I assign output to two bash array variables

There are vaguely similar answers here but nothing that could really answer my question. I am at a point in my bash script where I have to fill two arrays from an output that looks like the following:
part-of-the-file1:line_32
part-of-the-file1:line_97
part-of-the-file2:line_88
part-of-the-file2:line_93
What I need to do is pull out the files and line numbers into their own separate arrays. So far I have:
read FILES LINES <<<($(echo $INPUTFILES | xargs grep -n $id | cut -f1,2 -d':' | awk '{print $1 " " $2}'))
with a modified IFS=':' but this doesn't work. I'm sure there are better solutions since I am NOT a scripting or tools wizard.
read cannot populate two arrays at a time like it can populate two regular parameters using word-splitting. It's best to just iterate over the output of your pipeline to build the arrays.
while IFS=: read fname lineno therest; do
files+=("$fname")
lines+=("$lineno")
done < <(echo $INPUTFILES | xargs grep -n "$id")
Here, I'm letting read do the splitting and the discarding of the tail end of grep's output, in place of using cut and awk.
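You can then walk the two parallel arrays with a shared index, for example:
for i in "${!files[@]}"; do
echo "${files[i]}: line ${lines[i]}"
done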

is it possible to run a linux command on the output of a previous command, ASSUMING that the previous command comes first?

I know that I can use `` to get the output of a command, for example:
echo `ls`
but is there a way for me to use the ls command first and then run echo on it? For example: ls <some special redirection> echo? I tried ls > echo and it does not do what I want.
The reason I am asking is that sometimes I write complicated commands to get certain output, for example: bjobs -u username01 | grep normal | awk '{print $1}' is a simple "complicated" command (sometimes there are 6 or 7 chained together). Now, I am currently having to do
Mycommand `(complicated string of commands)`
but I would much rather just do
(complicated string of commands) <some special redirection> Mycommand
is this possible?
You may use xargs
ls | xargs echo
When you need to perform more actions on the results, you can parse them line by line. Parsing ls should be avoided; it is only used here to mirror the example:
ls | while read file; do
echo I found ${file}
done
This construction can be useful for more difficult parsing:
echo "red ford 2012
blue mustang 1998" | while read color car year; do
echo "My ${color} ${car} is from the year ${year}"
done
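Applied to the pipeline from the question (assuming Mycommand accepts the job IDs as command-line arguments):
bjobs -u username01 | grep normal | awk '{print $1}' | xargs Mycommand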

Scripting with unix to get the processes run by users [closed]

If I find out I have two users (UserA and UserB) logged in to the system right now, how do I find out the processes run by those two users? The trick here is that the script is to be run as an unattended batch job, without any input from the keyboard other than being invoked.
I know the first part of the script would be
who | awk '{print $1}'
the output of this would be
UserA
UserB
What I would like to know is, how can I use this output and shove it with some ps command automatically and get the required result.
I finally figured out the one-liner I was searching for, with the help of the other answers (updated for the case where no users are logged in - see comments).
ps -fU "`who | cut -d' ' -f1 | uniq | xargs echo`" 2> /dev/null
The thing inside the backticks is executed and "inserted at the spot". It works as follows:
who : you know what that does
cut -d' ' : split strings into fields, using ' ' as separator
-f1 : and return only field 1
uniq : return only unique entries
xargs echo : take each of the values piped in, and send them through echo: this strips the \n
2> /dev/null : if there are any error messages (sent to 2: stderr), redirect those to /dev/null - i.e. "dump them, never to be seen again"
The output of all that is
user1 user2 user3
...however many there are. You then call ps with the -fU flags, requesting all processes for these users in full format (you can of course change these flags to get the formatting you want; just keep the -U in there, right before the backquoted expression):
ps -fU user1 user2 user3
Get a list of users (using who), save it to a file, then list all processes and grep that (using the file you just created):
tempfile=/tmp/wholist.$$
who | cut -f1 -d' '|sort -u > $tempfile
ps -ef |grep -f $tempfile
rm $tempfile
LOGGED_IN=$( who | awk '{print $1}' | sort -u | xargs echo )
[ "$LOGGED_IN" ] && ps -fU "$LOGGED_IN"
The standard switch -U restricts output to those processes whose real user ID matches one of those given as its argument. (E.g., ps -f -U "UserA UserB".)
Not sure if I'm understanding your question correctly, but you can pipe the output of ps through grep to get the processes run by a particular user, like so:
ps -ef | grep '^xxxxx '
where xxxxx is the user.
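To run that unattended for whoever is logged in, the user list can come from who, as in the other answers; a sketch:
for u in $(who | awk '{print $1}' | sort -u); do
ps -ef | grep "^$u " # keep lines whose first column is this user
done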

how to egrep after using egrep -o [duplicate]

I have a file called random.html with the following line (not the only line):
blahblahblahblah random="whatever h45" blahblahblahblah
I want to specifically get only whatever. So far I used the following:
egrep -o 'random="([a-z]*[A-Z]*[0-9]*[ ]*)+'
This gives me random="whatever h45
I can't use just egrep -o '="([a-z]*[A-Z]*[0-9]*[ ]*)+' to begin with, because this is not my only line and there will be unwanted lines; the random keyword is important for distinction purposes. I tried to do a double egrep -o such as:
egrep -o 'random="([a-z]*[A-Z]*[0-9]*[ ]*)+' | egrep -o '="([a-z]*[A-Z]*[0-9]*[ ]*)+'
Where it would just display ="whatever h45, but that doesn't work. Am I doing something wrong, or is this illegal? I don't want to use anything fancy or use cut. This is supposed to be very "basic".
You can do this in bash alone as well:
while read -r; do
[[ $REPLY =~ random=\"([a-zA-Z0-9]+) ]] || continue
echo ${BASH_REMATCH[1]}
done < file.txt
If your version of grep supports Perl regexes, you can use lookbehind assertions to match only text that follows random=".
grep -P -o '(?<=random=\")([a-zA-Z0-9]+)' file.txt
You're just using the wrong tool, this is trivial in awk. There's various solutions, here's one:
$ cat file
blahblahblahblah random="whatever h45" blahblahblahblah
$ awk 'match($0,/random="([a-z]*[A-Z]*[0-9]*[ ]*)+/) { print substr($0,RSTART+8,RLENGTH-8) }' file
whatever h45
It wasn't clear from your question if you wanted whatever or whatever h45 or ="whatever h45 or some other part of the string printed, so I just picked the one I thought most likely. Whichever it is, it's trivial...
By the way, your regexp doesn't seem to make sense; I just copied it from your question to ease the contrast between what you had and the awk solution. If you tell us in words what it's meant to represent, we can write it correctly for you, but I THINK the most likely thing is that it should just be "anything but a double quote", e.g.:
$ awk 'match($0,/random="[^"]+/) { print substr($0,RSTART+8,RLENGTH-8) }' file
whatever h45
Perl solution for completeness.
#% perl -n -e 'print $1, "\n" if m!random="(\S+)!' tt
gives
whatever
whatever
where tt is
#% cat tt
blahblahblahblah random="whatever h45" blahblahblahblah
blahblahblahblah random="whatever h45" blahblahblahblah
