Redirecting output of bash for loop - linux

I have a simple BASH command that looks like
for i in `seq 2`; do echo $i; done; > out.dat
When this runs, the output of seq 2 goes to the terminal and nothing is written to the data file (out.dat).
I expected standard output to be redirected to out.dat, as it is when simply running the command seq 2 > out.dat.

Remove the semicolon after done. It terminates the loop command, so > out.dat becomes a separate, empty command that merely truncates the file.
for i in `seq 2`; do echo "$i"; done > out.dat
SUGGESTIONS
Also, as suggested by Fredrik Pihl, try not to use external binaries when they are not needed, or at least when it is practical not to:
for i in {1..2}; do echo "$i"; done > out.dat
for (( i = 1; i <= 2; ++i )); do echo "$i"; done > out.dat
for i in 1 2; do echo "$i"; done > out.dat
Also, be careful with unquoted expansions whose words may undergo pathname expansion.
for a in $(echo '*'); do echo "$a"; done
would list your files instead of printing a literal *.
$() is also recommended as a clearer syntax for command substitution in Bash and POSIX shells than backticks (`), and it supports nesting.
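The nesting difference is easy to see in a small sketch (the basename/pwd pairing here is purely illustrative):

```shell
# $() nests without any escaping:
outer=$(basename "$(pwd)")
echo "$outer"

# The backtick equivalent needs escaped inner backticks, which is hard to read:
# outer=`basename \`pwd\``
```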
Cleaner solutions for reading command output into variables are
while read var; do
...
done < <(do something)
And
read ... < <(do something) ## Could be done on a loop or with readarray.
for a in "${array[@]}"; do
:
done
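As a concrete sketch of the process-substitution form (readarray, also known as mapfile, requires Bash 4+):

```shell
# Collect each line of the command's output into one array element,
# without the subshell that a pipe into `while read` would create.
readarray -t nums < <(seq 2)
for n in "${nums[@]}"; do
    echo "$n"
done
```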
Using printf can also be an easier alternative with respect to the intended function:
printf '%s\n' {1..2} > out.dat

Another possibility, for the sake of completeness: you can move the output inside the loop, using >> to append to the file (creating it if necessary).
for i in `seq 2`; do echo "$i" >> out.dat; done
Which one is better certainly depends on the use case. Writing the file in one go is certainly better than appending to it a thousand times. Also, if the loop contains multiple echo statements, all of which shall go to the file, doing done > out.dat is probably more readable and easier to maintain. The advantage of this solution, of course, is that it gives more flexibility.
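The difference is that the first form opens the file once, while the second reopens it on every iteration. A sketch (file names illustrative; both produce identical contents):

```shell
# One redirect: out-once.dat is opened a single time for the whole loop
for i in {1..1000}; do echo "$i"; done > out-once.dat

rm -f out-many.dat    # start fresh, since >> appends
# Redirect per iteration: out-many.dat is opened and closed 1000 times
for i in {1..1000}; do echo "$i" >> out-many.dat; done
```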

Try:
(for i in `seq 2`; do echo "$i"; done) > out.dat

Related

How can I create a file which containing 10 random numbers in Linux Shell?

I want to create a Linux command which creates a file containing 10 random numbers.
This is a solution to generate 10 random numbers:
RANDOM=$$
for i in `seq 10`
do
echo $RANDOM
done
It works for generating random numbers, but how can I combine this with the touch command? Should I create a loop?
Using touch? Like this?
touch file.txt && RANDOM=$$
for i in `seq 10`
do
echo $RANDOM
done >> file.txt
Not sure why you need touch, though; this will also work:
for i in `seq 10`; do echo $RANDOM; done > file.txt
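$RANDOM yields integers in 0..32767; if a smaller range is wanted, arithmetic expansion can clamp it (0..99 here, just as an illustration):

```shell
# Ten random numbers in 0..99, one per line, written in a single redirect
for i in {1..10}; do
    echo $(( RANDOM % 100 ))
done > file.txt
```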
Use >> to append to the file; $1 is the first argument passed to your .sh script.
FILE=$1
RANDOM=$$
for i in `seq 10`
do
echo $RANDOM >> "$FILE"
echo >> "$FILE"    # a bare echo prints the newline; echo "\n" would print a literal \n
done
You could use head(1) and od(1) or GNU gawk with random(4).
For example, perhaps
head -c 20 /dev/random | od -s > /tmp/tenrandomnumbers.txt
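A variant that prints exactly ten numbers, one per line (the od options shown are POSIX, but treat this as a sketch):

```shell
# 40 bytes -> ten 4-byte unsigned integers.
# -An drops od's offset column; tr/grep reflow the values one per line.
head -c 40 /dev/urandom | od -An -tu4 | tr -s ' ' '\n' | grep . > /tmp/tenrandomnumbers.txt
```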

Speed up dig -x in bash script

I have to run as an exercise at my university a bash script to reverse lookup all their DNS entries for a B class network block they own.
This is the fastest I have gotten it, but it still takes forever. Any help optimizing this code?
#!/bin/bash
network="a.b"
CMD=/usr/bin/dig
for i in $(seq 1 254); do
  for y in $(seq 1 254); do
    answer=`$CMD -x $network.$i.$y +short`
    echo $network.$i.$y ' resolves to ' $answer >> hosts_a_b.txt
  done
done
Using GNU xargs to run 64 processes at a time might look like:
#!/usr/bin/env bash
lookupArgs() {
  for arg; do
    # echo the entire line at once so each result is a single write
    echo "$arg resolves to $(dig -x "$arg" +short)"
  done
}
export -f lookupArgs
network="a.b"
for (( x=1; x<=254; x++ )); do
  for (( y=1; y<=254; y++ )); do
    printf '%s.%s.%s\0' "$network" "$x" "$y"
  done
done | xargs -0 -P64 bash -c 'lookupArgs "$@"' _ >hosts_a_b.txt
Note that this doesn't guarantee order of output (and relies on the lookupArgs function doing one write() syscall per result) -- but output is sortable so you should be able to reorder. Otherwise, one could get ordered output (and ensure atomicity of results) by switching to GNU parallel -- a large perl script, vs GNU xargs' small, simple, relatively low-feature implementation.
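Since every output line begins with the dotted address, the order can be restored afterwards. GNU sort's version sort (-V) orders dotted numeric fields naturally (this assumes GNU coreutils is available):

```shell
# A plain lexicographic sort would put a.b.1.10 before a.b.1.2;
# version sort compares the numeric segments instead.
printf 'a.b.1.10 resolves to x\na.b.1.2 resolves to y\n' | sort -V
```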

For loop with multiple variables (statements)

I'm looking to automate dns-add by creating two for loop variables. I'm not sure how this is possible. I know my code below is wrong. I'm having difficulties understanding how to create two variables in a one-liner.
for i in `cat list.csv`;
for g in `cat list2.csv`; do
echo $i;
echo $g;
dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
done;
done
The only thing I thought might work was this, but I doubt it'll work. Does anyone have any hints?
for i in `cat list.csv` && \
for g in `cat list2.csv ; do
echo $i && $g;
dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
done;
done
A for loop is the wrong construct for iterating over any file (see Bash FAQ 001), let alone two files. Use a while loop with the read command instead.
while read -u 3 i && read -u 4 g; do
  echo "$i"
  echo "$g"
  dns-add-record --zone=impl.wd2.wd --record="$i" --type=CNAME --record-value="$g"
done 3< list.csv 4< list2.csv
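If the two files pair up line by line, another sketch is to glue them together with paste and read both fields at once (this assumes the fields themselves contain no tabs; the sample data below stands in for the real list.csv and list2.csv):

```shell
# Sample data standing in for the real files
printf 'hosta\nhostb\n'   > list.csv
printf 'cname1\ncname2\n' > list2.csv

# paste joins corresponding lines with a tab; read splits them back apart
paste list.csv list2.csv | while IFS=$'\t' read -r i g; do
    echo "record=$i value=$g"    # stand-in for the dns-add-record call
done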
I think you are missing one do:
for i in `cat list.csv`; do
  for g in `cat list2.csv`; do
    echo $i
    echo $g
    dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
  done
done

Running many copies of a program with redirected input and sleep

I would like to do the following:
while read input
do
  echo "$input"
  sleep 1
done < input.txt | program $1 $2
but run many copies of the program in the background. I was thinking of something with a for loop and an &, but I can't get it to work well. Does anyone know how?
Like so
for (( i=1; i<=3; i++ ))
do
  while read input
  do
    echo "$input"
    sleep 1
  done < input.txt | program $1 $2 &
done
Or would it be better to have a different bash script call this bash script using &?
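The for/& approach does work; the missing piece is usually a wait so the parent script doesn't exit while copies are still running. A sketch, with cat standing in for program "$1" "$2" and a generated input.txt:

```shell
printf 'line1\nline2\n' > input.txt    # sample input, illustrative

for (( i=1; i<=3; i++ )); do
    while read -r input; do
        echo "$input"
        sleep 1
    done < input.txt | cat &           # cat stands in for: program "$1" "$2"
done
wait    # block until every background pipeline has finished
rm -f input.txt
```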

How to properly handle wildcard expansion in a bash shell script?

#!/bin/bash
hello()
{
  SRC=$1
  DEST=$2
  for IP in `cat /opt/ankit/configs/machine.configs` ; do
    echo $SRC | grep '*' > /dev/null
    if test `echo $?` -eq 0 ; then
      for STAR in $SRC ; do
        echo -en "$IP"
        echo -en "\n\t ARG1=$STAR ARG2=$2\n\n"
      done
    else
      echo -en "$IP"
      echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
    fi
  done
}
hello $1 $2
The above is the shell script to which I provide a source (SRC) and destination (DEST) path. It worked fine when the SRC path contained no wildcard '*'. When I run this shell script and give '*.pdf' as follows:
root@ankit1:~/as_prac# ./test.sh /home/dev/Examples/*.pdf /ankit_test/as
I get the following output:
192.168.1.6
ARG1=/home/dev/Examples/case_Contact.pdf ARG2=/home/dev/Examples/case_howard_county_library.pdf
The DEST is /ankit_test/as, but DEST also gets mangled because of the '*'. The expected answer is
ARG1=/home/dev/Examples/case_Contact.pdf ARG2=/ankit_test/as
So, if you understand what I am trying to do, please help me solve this bug. Thanks in advance!
I need to know exactly how to process the '*.pdf' files one by one without disturbing DEST.
Your script needs more work.
Even after escaping the wildcard, you won't get your expected answer. You will get:
ARG1=/home/dev/Examples/*.pdf ARG2=/ankit__test/as
Try the following instead:
for IP in `cat /opt/ankit/configs/machine.configs`
do
  for i in $SRC
  do
    echo -en "$IP"
    echo -en "\n\t ARG1=$i ARG2=$DEST\n\n"
  done
done
Run it like this:
root@ankit1:~/as_prac# ./test.sh "/home/dev/Examples/*.pdf" /ankit__test/as
The shell will expand wildcards unless you escape them, so for example if you have
$ ls
one.pdf two.pdf three.pdf
and run your script as
./test.sh *.pdf /ankit__test/as
it will be the same as
./test.sh one.pdf two.pdf three.pdf /ankit__test/as
which is not what you expect. Doing
./test.sh \*.pdf /ankit__test/as
should work.
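The effect is easy to demonstrate in a scratch directory (paths and names below are illustrative):

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch one.pdf two.pdf three.pdf

printf '%s\n' *.pdf      # expanded by the shell: three separate arguments
printf '%s\n' '*.pdf'    # quoted: a single literal argument
```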
If you can, change the order of the parameters passed to your shell script as follows:
./test.sh /ankit_test/as /home/dev/Examples/*.pdf
That would make your life a lot easier since the variable part moves to the end of the line. Then, the following script will do what you want:
#!/bin/bash
hello()
{
  SRC=$1
  DEST=$2
  for IP in `cat /opt/ankit/configs/machine.configs` ; do
    echo -en "$IP"
    echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
  done
}
arg2=$1
shift
while [[ "$1" != "" ]] ; do
  hello $1 $arg2
  shift
done
You are also missing a final "done" to close your outer for loop.
OK, this appears to do what you want:
#!/bin/bash
hello() {
  SRC=$1
  DEST=$2
  while read IP ; do
    for FILE in $SRC; do
      echo -e "$IP"
      echo -e "\tARG1=$FILE ARG2=$DEST\n"
    done
  done < /tmp/machine.configs
}
hello "$1" $2
You still need to escape any wildcard characters when you invoke the script.
The double quotes are necessary when you invoke the hello function; otherwise the mere act of evaluating $1 would expand the wildcard, and we don't want that to happen until $SRC is assigned inside the function.
Here's what I came up with:
#!/bin/bash
hello()
{
  # DEST will contain the last argument
  eval DEST=\$$#
  while [ "$1" != "$DEST" ]; do
    SRC=$1
    for IP in `cat /opt/ankit/configs/machine.configs`; do
      echo -en "$IP"
      echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
    done
    shift || break
  done
}
hello $*
Instead of passing only two parameters to the hello() function, we'll pass in all the arguments that the script got.
Inside the hello() function, we first assign the final argument to the DEST var. Then we loop through all of the arguments, assigning each one to SRC, and run whatever commands we want using the SRC and DEST arguments. Note that you may want to put quotation marks around $SRC and $DEST in case they contain spaces. We stop looping when SRC is the same as DEST because that means we've hit the final argument (the destination).
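In Bash specifically, the eval can be avoided: ${!#} is indirect expansion of the last positional parameter, and "${@:1:$#-1}" slices off everything before it. A sketch (the sample arguments are illustrative):

```shell
hello() {
    local DEST=${!#}                 # the last argument
    local SRC
    for SRC in "${@:1:$#-1}"; do     # every argument except the last
        echo "ARG1=$SRC ARG2=$DEST"
    done
}

hello a.pdf b.pdf /ankit_test/as
```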
For multiple input files using a wildcard such as *.txt, I found this to work perfectly, no escaping required. It behaves just like a native tool such as ls or rm. This was documented just about nowhere, so since I spent the better part of 3 days figuring it out, I decided to post it for future readers.
Directory contains the following files (output of ls)
file1.txt file2.txt file3.txt
Run script like
$ ./script.sh *.txt
Or even like
$ ./script.sh file{1..3}.txt
The script
#!/bin/bash
# store the default IFS; we need to change it temporarily
sfi=$IFS
# set IFS to $'\n' - newline
IFS=$'\n'
if [[ $# -eq 0 ]]
then
  echo "Error: Missing required argument"
  echo
  exit 1
fi
# Put the expanded file arguments into an array ("$@", not "$#")
file=("$@")
# Now loop through them
for (( i=0 ; i < ${#file[*]} ; i++ )); do
  if [ -w "${file[$i]}" ]; then
    echo "${file[$i]} writable"
  else
    echo "${file[$i]} NOT writable"
  fi
done
# Reset IFS to its default value
IFS=$sfi
The output
file1.txt writable
file2.txt writable
file3.txt writable
The key was switching the IFS (Internal Field Separator) temporarily. You have to be sure to store this before switching and then switch it back when you are done with it as demonstrated above.
Now you have a list of expanded filenames (spaces preserved) in the file[] array, which you can then loop through. I like this solution the best: easiest to program for and easiest for the users.
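For comparison, the IFS juggling can be skipped entirely by quoting "$@": the quoted expansion preserves each argument as one word, spaces and all. A sketch (the function name is made up):

```shell
check_writable() {
    local f
    for f in "$@"; do            # quoted "$@": one iteration per argument, spaces intact
        if [ -w "$f" ]; then
            echo "$f writable"
        else
            echo "$f NOT writable"
        fi
    done
}

# Usage from a script invoked as ./script.sh *.txt :
# check_writable "$@"
```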
There's no need to spawn a shell to look at the $? variable, you can evaluate it directly.
It should just be:
if [ $? -eq 0 ]; then
You're running
./test.sh /home/dev/Examples/*.pdf /ankit_test/as
and your interactive shell is expanding the wildcard before the script gets it. You just need to quote the first argument when you launch it, as in
./test.sh "/home/dev/Examples/*.pdf" /ankit_test/as
and then, in your script, quote "$SRC" anywhere where you literally want the things with wildcards (ie, when you do echo $SRC, instead use echo "$SRC") and leave it unquoted when you want the wildcards expanded. Basically, always put quotes around things which might contain shell metacharacters unless you want the metacharacters interpreted. :)
