Shell script that writes a shell script - linux

Two questions: how can I write a shell variable from this script into its child script?
Are there any easier ways to do this?
If you can't follow what I'm doing, I'm:
1) starting with a list of directories whose names will be stored as values taken by $i
2) cd'ing to every value of $i and ls'ing its contents
3) echoing its contents into a new script with the name of the directory via cat
4) using echo and cat to write a new script that contains the ls'd values of $i and sends them all to a blogging email address called $i@tumblr.com
#!/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
cd $SPECIALPATH/$i
echo ("#/bin/sh \n read -d '' directives <<EOF \n") | cat >> $i.sh
ls | cat >> $i.sh
echo ("EOF \n for q in $directives \n do \n uuencode $q $q | sendmail $i \n done \n") | cat >> $i.sh
# NB -- I am asking the script to write the shell variable $i into the new
# script, called $i.sh, as the email address specified, in the middle of an
# echo statement... I am well aware that it doesn't work as is
chmod +x $i.sh
./$i.sh
done

You are abusing felines a lot - you should simply redirect with >>, rather than piping into cat which then appends.
You can also avoid the intermediary $i.sh file altogether by bundling all the output that would go into the file into a single I/O redirection that pipes directly into a shell - then there is no intermediate file to clean up (you didn't show that happening) and no chmod operation.
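In other words, the two cat pipelines can just be appends (only a sketch of those two lines, not the whole fix):
echo "#!/bin/sh" >> $i.sh    # instead of: echo (...) | cat >> $i.sh
ls >> $i.sh                  # instead of: ls | cat >> $i.sh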
I would have done this using braces:
{
echo "..."
ls
echo "..."
} | sh
However, when I looked at the script in that form, I realized that wasn't necessary. I've left the initial part of your script unchanged, but the loop is vastly simpler like this:
#!/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I'm assuming the sendmail command works - it isn't the way I'd try sending email. I'd probably use mailx or something similar, and I'd avoid using uuencode too (I'd use a base-64 encoding, left to my own devices):
do uuencode $q $q | mailx -s "File $q" $i@tumblr.com
The script also uses parentheses around the cd command. This means that the cd and what follows run in a sub-shell, so the parent script does not change directory. In this case, with an absolute pathname in $SPECIALPATH, it would not matter much. But as a general rule, it often makes life easier if you isolate directory changes like that.
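A quick illustration of the effect (the directory names here are just placeholders):
cd /some/start/dir
( cd /tmp && ls > /dev/null )   # the cd happens only inside the subshell
pwd                             # still prints /some/start/dir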
I'd probably simplify it still further for general reuse (though I'd need to add something to ensure that SPECIALPATH is set appropriately):
#!/bin/sh
for i in "$@"
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I can then invoke it with:
script-name $(<list-of-dirs)
That means that without editing the script, it can be reused for any list of directories.
Intermediate step 1:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
    echo "read -d '' directives <<EOF"
    ls
    echo "EOF"
    echo "for q in \$directives"
    echo "do"
    echo "  uuencode \$q \$q | sendmail $i"
    echo "done"
    } |
    sh
    )
done
Personally, I find it easier to read the generated script if the code that generates it keeps the generated script clear - hence the multiple echo commands. This includes indenting the code.
Intermediate Step 2:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
    echo "ls |"
    echo "while read q"
    echo "do"
    echo "  uuencode \$q \$q | sendmail $i"
    echo "done"
    } |
    sh
    )
done
I don't need to read the data into a variable in order to step through each item in the list once - simply read each line in turn. The while read mechanism is often useful for splitting up a line into multiple variables too: while read var1 var2 var3 junk will read the first field into $var1, the second into $var2, the third into $var3, and if there's anything left over, it goes into $junk. If you've generated the data accurately, there won't be any junk; but sometimes you have to deal with other people's data.
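For example (the variable names here are arbitrary):
echo "alpha beta gamma delta epsilon" |
while read var1 var2 var3 junk
do
    echo "1=$var1 2=$var2 3=$var3 junk=$junk"
done
# prints: 1=alpha 2=beta 3=gamma junk=delta epsilon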

If the generated script is meant to be temporary, I would not use files. Besides, chmod-ing them executable sounds unsafe. When I needed to parallelize my scripting, I used a bash script to build a set of commands (in an array, split the array in two, then imploded the array) into a single \n-separated string, and then passed that to a new bash instance.
Basically, in bash:
for orig in "$@"
do
    commands="$commands echo \"echoing stuff here for arguments $orig\" \n"
done
echo -e "$commands" | bash
And a small tip: if the script doesn't need supervising, throw in a & after the piped bash so your first script quits and the rest of the work runs forked in the background.
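Roughly, assuming the $commands string built above:
echo -e "$commands" | bash &   # forked into the background; the calling script can carry on and exit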

If you export a variable
export VAR1=FOO
it'll be present in any child processes.
If you take a look at the init scripts, /etc/init.d/*, you'll notice that many of them source another file full of "external" definitions. You could set up a file like that and have your child script source it.
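A minimal sketch of that pattern (the /etc/myapp/defs.sh path is made up - use whatever location suits you):
# /etc/myapp/defs.sh -- shared definitions (path is just an example)
VAR1=FOO
VAR2=BAR

# child script
. /etc/myapp/defs.sh    # source the definitions into this shell
echo "$VAR1 $VAR2"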

Related

Why is a part of the code inside a (False) if statement executed?

I wrote a small script which:
prints the content of a file (generated by another application) on paper with a matrix printer
prints the same line into a backup file
removes the original file.
The script runs every minute from a cronjob and works fine as long as there are files to print. If there are no files to print, it prints an empty line on the matrix printer and in the backup file. I don't understand why this happens, as I implemented an if statement which checks whether there is a file to print before the print command is executed. This behaviour only occurs when the script is executed by cron, not when I execute it manually with ./script.sh. What is the reason for this, and how can I solve it?
Something I noticed on the side is that if I place an echo "hi" command in the script, it's printed to the matrix printer and the backup file. I expected it to be printed to the console when it has no >> something behind it. How does this work?
The script:
#!/bin/bash
# Make sure the backup directory exists
if [ ! -d /home/user/backup_logprint ]
then
mkdir /home/user/backup_logprint
fi
# Print the records if there are any
date=`date +%Y-%m-%d`
filename='_logprint_backup'
printer_path="/dev/usb/lp0"
if [ `ls /tmp/ | grep logprint | wc -l` -gt 0 ]
then
    for f in `ls /tmp | grep logprint`
    do
        echo `cat /tmp/$f` >> "/home/user/backup_logprint/$date$filename"
        echo `cat /tmp/$f` >> $printer_path
        rm "/tmp/$f"
    done
fi
There's no need for ls or an if statement. Just use a proper glob in the for loop, and if no files match, the loop won't be entered.
#!/bin/bash
# Don't check first; just let mkdir decide if
# anything actually needs to be created.
d=/home/user/backup_logprint
mkdir -p "$d"
filename=$(date +"$d/%Y-%m-%d_logprint_backup")
printer_path="/dev/usb/lp0"
# Cause non-matching globs to expand to an empty
# sequence instead of being treated literally.
shopt -s nullglob
for f in /tmp/*logprint*; do
cat "$f" > "$printer_path" && mv "$f" "$d"
done
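To see what nullglob changes, compare a pattern that matches nothing with and without it (a sketch you can run in an interactive bash; the pattern is deliberately one that matches no files):
shopt -u nullglob
for f in /tmp/*no_such_suffix*; do echo "got: $f"; done   # prints the literal pattern
shopt -s nullglob
for f in /tmp/*no_such_suffix*; do echo "got: $f"; done   # loop body never runs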

bash script loop breaks [duplicate]

I have the following shell script. The purpose is to loop thru each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only with the very first line in the target file and stops after that line has been processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
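For instance, inside do_work.sh the ssh call would gain -n (the host and remote command here are only illustrative):
#!/bin/sh
# do_work.sh -- sketch; $1 is the line passed in by the outer loop
ssh -n "user@$1" uptime   # -n: ssh takes stdin from /dev/null instead of the loop's input file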
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and a numbered file descriptor on the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
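For example, two nested loops can each own a descriptor (the file names are placeholders):
# outer_list.txt and inner_list.txt are placeholder file names
while read -u 9 outer; do
    while read -u 8 inner; do
        echo "$outer / $inner"
    done 8< inner_list.txt
done 9< outer_list.txt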
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping its output to another program, so using /dev/null as stdin is preferred there.
#!/bin/bash
while read ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    if [ ${PIPESTATUS[0]} -ne 0 ] ; then
        echo "aborting loop"
        exit ${PIPESTATUS[0]}
    fi
done < input_list.txt
This was happening to me because I had set -e and a grep in a loop was returning with no output (which gives a non-zero error code).
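If you hit that, the usual fix is to stop a non-matching grep from aborting the script (a sketch; the pattern and file names are placeholders):
set -e
while read -r line; do
    # grep exits 1 when nothing matches, which kills the script under set -e;
    # the "|| true" keeps the loop going on non-matching lines
    matches=$(grep "$line" lookup.txt || true)
    echo "$line -> $matches"
done < input.txt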

Bash Variable Maths Not Working

I have a simple bash script, which forms part of an in house web app that I've developed.
Its purpose is to automate deletion of thumbnails of images when the original image has been deleted by the user.
The script logs some basic status info to a file /var/log/images.log
#!/bin/bash
cd $thumbpath
filecount=0
# Purge extraneous thumbs
find . -type f | while read file
do
    if [ ! -f "$imagepath/$file" ]
    then
        filecount=$[$filecount+1]
        rm -f "$file"
    fi
done
echo `date`: $filecount extraneous thumbs removed>>/var/log/images.log
Whilst the script correctly deletes thumbs, it doesn't correctly output the number of thumbs being purged; it always shows 0.
For example, having just manually created some orphaned thumbnails, and then running my script, the manually generated orphaned thumbs are deleted, but the log shows:
Thu Jun 9 23:30:12 BST 2011: 0 extraneous thumbs removed
What am I doing wrong that stops $filecount from showing a number other than zero when files are being deleted?
I've created the following bash script to test this, and this works perfectly, outputting 0 then 1:
#!/bin/bash
count=0
echo $count
count=$[$count+1]
echo $count
Edit:
Thanks for the answers, but why does the following work
$ x=3
$ x=$[$x+1]
$ echo $x
4
...and also the second example works, yet it doesn't work in the first script?
Second Edit:
This works
count=0
echo Initial Value $count
for i in `seq 1 5`
do
count=$[$count+1]
echo $count
done
echo Final Value $count
Initial Value 0
1
2
3
4
5
Final Value 5
as does replacing count=$[$count+1] with count=$((count+1)), but not in my initial script.
You're using the wrong operator. Try using $(( ... )) instead, e.g.:
$ x=4
$ y=$((x + 1))
$ echo $y
5
$
EDIT
The other problem you're bumping into is down to the pipe. I've bumped into this one before (with ksh, but it wouldn't surprise me to find that other shells have the same problem). The pipe forks another bash process, so when you do the increment, filecount is getting incremented in the subshell that was forked for the pipe. This value isn't passed back to the calling shell, as the subshell has its own independent environment (environment variables are inherited by called processes, but a called process cannot modify the environment of the calling process).
As an example, this demonstrates that filecount gets incremented okay:
#!/bin/bash
filecount=0
ls /bin | while read x
do
filecount=$((filecount + 1))
echo $filecount
done
echo $filecount
...so you should see filecount increase in the loop, but the final filecount will be zero because the last echo belongs to the main shell, while the increments happened in the forked subshell (which consists purely of the while loop).
One way you can get the value back is like this...
#!/bin/bash
filecount=0
filecount=`ls /bin | while read x
do
filecount=$((filecount + 1))
echo $filecount
done | tail -1`
echo $filecount
This will only work if you don't care about any other stdout output in the loop, as this throws it all away apart from the last line we output (the final value of filecount). It works because the command substitution captures the loop's stdout and feeds the data back to the parent shell.
Depending on your viewpoint this is either a nasty hack or a nifty bit of shell jiggery-pokery. I'll leave you to decide what you think it is :-)
If you remove the pipeline into the while construct, you remove bash's need to create a subshell.
Change this:
filecount=0
find . -type f | while read file; do
    if [ ! -f "$imagepath/$file" ]; then
        filecount=$[$filecount+1]
        rm -f "$file"
    fi
done
echo $filecount
to this:
filecount=0
while read file; do
    if [ ! -f "$imagepath/$file" ]; then
        rm -f "$file" && (( filecount++ ))
    fi
done < <(find . -type f)
echo $filecount
That is harder to read because the find command is hidden at the end. Another possibility is:
files=$( find . -type f )
while ...; do
:
done <<< "$files"
Chris J is quite right that you are using the wrong operator, and POSIX subshell variable scoping means you can't get a final count that way.
As a side note, when doing math operations you could also consider using the let shell builtin like this:
$ filecount=4
$ let filecount=$filecount+1
$ echo $filecount
5
Also if you want scoping to just work like you expected it to in spite of that pipeline, you could use zsh instead of bash. In this case it should be a drop in replacement and work as expected.
