Get output of FOR with EOF in bash - linux

I've created a bash script to temporarily help me send some files to an FTP server based on the id of a commit: I get the last commit, collect the files it touched, and send them as listed below.
#!/bin/bash
commit_hash=$(git log --format="%H" -n 1)
[[ -z "$1" ]] || commit_hash=$1
files=$(git diff-tree --no-commit-id --name-only -r "$commit_hash")
git log -1 "$commit_hash" --pretty=format:'%h - %an, %ar : %s'
printf "\n"
HOST=
USER=
PASS=
for file in $files; do
ftp -nv $HOST << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done
Of course this isn't the best approach, but it automates some things I'm currently working on.
Is it possible to capture the ftp output of the heredoc and apply some filters, with pipelines for example? I only want to know whether the transfer completed successfully.

Is it possible to capture the ftp output of the heredoc and apply some filters, with pipelines for example? I only want to know whether the transfer completed successfully.
I presume you mean you want to capture the output of the ftp command whose input is redirected from the heredoc; the heredoc itself does not produce output in any sense that anything other than the associated command can see.
But you can redirect the command's output. The thing to remember is that the heredoc begins on the next line, not immediately after the associated redirection operator, so you can add a pipeline to another command right after the heredoc operator. For example:
$ cat << EOF | grep flag
flag this line
not this line
or this line
flag this
last flag
EOF
Output:
flag this line
flag this
last flag

Do not use a for loop to iterate over the command's output; it word-splits and glob-expands the file names. See Bash FAQ 001.
commit_hash=${1:-$(git log --format="%H" -n 1)}
while IFS= read -r file; do
ftp -nv "$HOST" << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")
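To get at the "did the transfer succeed" part: because the heredoc body starts on the line after the operator, the pipeline can sit on the same line as the redirection. Here is a rough sketch, assuming your server announces a successful upload with the conventional "226 Transfer complete" reply (check your client's actual output; the 2>&1 catches clients that print replies to stderr):
while IFS= read -r file; do
    # the heredoc body begins on the next line; grep -q sets the exit status
    if ftp -nv "$HOST" <<EOF 2>&1 | grep -q '^226'; then
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
        echo "OK: $file"
    else
        echo "FAILED: $file" >&2
    fi
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")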

How to create a shell script file using tee [duplicate]

How can I write a here document to a file in Bash script?
Read Chapter 19 (Here Documents) of the Advanced Bash-Scripting Guide.
Here's an example which will write the contents to a file at /tmp/yourfilehere
cat << EOF > /tmp/yourfilehere
These contents will be written to the file.
This line is indented.
EOF
Note that the final 'EOF' (the limit string) must not have any whitespace in front of it, otherwise it will not be recognized as the terminator.
In a shell script, you may want to indent the code for readability, but that indentation also ends up inside your here document. In this case, use <<- (with a trailing dash) to strip leading tabs. (To test this you will need to replace the leading whitespace with an actual tab character, since actual tabs cannot be printed here.)
#!/usr/bin/env bash
if true ; then
cat <<- EOF > /tmp/yourfilehere
The leading tab is ignored.
EOF
fi
If you don't want to interpret variables in the text, then use single quotes:
cat << 'EOF' > /tmp/yourfilehere
The variable $FOO will not be interpreted.
EOF
To pipe the heredoc through a command pipeline:
cat <<'EOF' | sed 's/a/b/'
foo
bar
baz
EOF
Output:
foo
bbr
bbz
... or to write the heredoc to a file using sudo:
cat <<'EOF' | sed 's/a/b/' | sudo tee /etc/config_file.conf
foo
bar
baz
EOF
Instead of cat and I/O redirection, it might be useful to use tee:
tee newfile <<EOF
line 1
line 2
line 3
EOF
It's more concise, plus unlike the redirect operator it can be combined with sudo if you need to write to files with root permissions.
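For instance, a short sketch of the sudo combination (the target path here is just an illustration):
sudo tee /etc/example.conf > /dev/null <<'EOF'
# written with root permissions; tee's copy to stdout is silenced
setting = value
EOF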
Note:
This answer condenses and organizes the other answers in this thread, especially the excellent work of Stefan Lasiewski and Serge Stroobandt.
Lasiewski and I recommend Ch 19 (Here Documents) in the Advanced Bash-Scripting Guide.
The question (how to write a here document (aka heredoc) to a file in a bash script?) has (at least) 3 main independent dimensions or subquestions:
Do you want to overwrite an existing file, append to an existing file, or write to a new file?
Does your user or another user (e.g., root) own the file?
Do you want to write the contents of your heredoc literally, or to have bash interpret variable references inside your heredoc?
(There are other dimensions/subquestions which I don't consider important. Consider editing this answer to add them!) Here are some of the more important combinations of the dimensions listed above, with various delimiting identifiers. There's nothing sacred about EOF; just make sure the string you use as your delimiting identifier does not occur inside your heredoc:
To overwrite an existing file (or write to a new file) that you own, substituting variable references inside the heredoc:
cat << EOF > /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
EOF
To append to an existing file (or write to a new file) that you own, substituting variable references inside the heredoc:
cat << FOE >> /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
FOE
To overwrite an existing file (or write to a new file) that you own, with the literal contents of the heredoc:
cat << 'END_OF_FILE' > /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
END_OF_FILE
To append to an existing file (or write to a new file) that you own, with the literal contents of the heredoc:
cat << 'eof' >> /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
eof
To overwrite an existing file (or write to a new file) owned by root, substituting variable references inside the heredoc:
cat << until_it_ends | sudo tee /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
until_it_ends
To append to an existing file (or write to a new file) owned by user=foo, with the literal contents of the heredoc:
cat << 'Screw_you_Foo' | sudo -u foo tee -a /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
Screw_you_Foo
To build on @Livven's answer, here are some useful combinations.
variable substitution, leading tab retained, overwrite file, echo to stdout
tee /path/to/file <<EOF
${variable}
EOF
no variable substitution, leading tab retained, overwrite file, echo to stdout
tee /path/to/file <<'EOF'
${variable}
EOF
variable substitution, leading tab removed, overwrite file, echo to stdout
tee /path/to/file <<-EOF
${variable}
EOF
variable substitution, leading tab retained, append to file, echo to stdout
tee -a /path/to/file <<EOF
${variable}
EOF
variable substitution, leading tab retained, overwrite file, no echo to stdout
tee /path/to/file <<EOF >/dev/null
${variable}
EOF
the above can be combined with sudo as well
sudo -u USER tee /path/to/file <<EOF
${variable}
EOF
When root permissions are required
When root permissions are required for the destination file, use | sudo tee instead of >:
cat << 'EOF' | sudo tee /tmp/yourprotectedfilehere
The variable $FOO will *not* be interpreted.
EOF
cat << EOF | sudo tee /tmp/yourprotectedfilehere
The variable $FOO *will* be interpreted.
EOF
For future readers who hit this issue, the following format worked:
(cat <<- _EOF_
LogFile /var/log/clamd.log
LogTime yes
DatabaseDirectory /var/lib/clamav
LocalSocket /tmp/clamd.socket
TCPAddr 127.0.0.1
SelfCheck 1020
ScanPDF yes
_EOF_
) > /etc/clamd.conf
For those looking for a pure bash solution (or a need for speed), here's a simple solution without cat:
# here-doc tab indented
{ read -r -d '' || printf >file '%s' "$REPLY"; } <<-EOF
foo bar
EOF
or, as an easy "mycat" function (which also avoids leaving REPLY set in the calling shell):
mycat() {
local REPLY
read -r -d '' || printf '%s' "$REPLY"
}
mycat >file <<-EOF
foo bar
EOF
Quick speed comparison of "mycat" vs the OS cat (1000 loops writing to /dev/null on my OS X laptop):
mycat:
real 0m1.507s
user 0m0.108s
sys 0m0.488s
OS cat:
real 0m4.082s
user 0m0.716s
sys 0m1.808s
NOTE: mycat doesn't handle file arguments; it just handles the problem "write a heredoc to a file".
For instance, you could use it like this.
First (copying files with scp):
while read pass port user ip files directs; do
sshpass -p$pass scp -o 'StrictHostKeyChecking no' -P $port $files $user@$ip:$directs
done <<____HERE
PASS PORT USER IP FILES DIRECTS
. . . . . .
. . . . . .
. . . . . .
PASS PORT USER IP FILES DIRECTS
____HERE
Second (executing commands):
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
Third (executing commands stored in a variable):
Script=$'
#Your commands
'
while read pass port user ip; do
sshpass -p$pass ssh -o 'StrictHostKeyChecking no' -p $port $user@$ip "$Script"
done <<___HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
___HERE
Fourth (using variables):
while read pass port user ip fileoutput; do
sshpass -p$pass ssh -o 'StrictHostKeyChecking no' -p $port $user@$ip fileoutput=$fileoutput 'bash -s' <<ENDSSH1
#Your command > $fileoutput
#Your command > $fileoutput
ENDSSH1
done <<____HERE
PASS PORT USER IP FILE-OUTPUT
. . . . .
. . . . .
. . . . .
PASS PORT USER IP FILE-OUTPUT
____HERE
If you want to keep the heredoc indented for readability:
$ perl -pe 's/^\s*//' << EOF
line 1
line 2
EOF
The built-in method for supporting indented heredoc in Bash only supports leading tabs, not spaces.
Perl can be replaced with awk to save a few characters, but the Perl one is probably easier to remember if you know basic regular expressions.
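For reference, here is one possible awk version (a sketch; it strips the same leading whitespace, though it is not actually shorter than the Perl):
awk '{ sub(/^[[:space:]]+/, "") } 1' << EOF
    line 1
    line 2
EOF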
In addition, if you're writing to a file, it can be a good idea to check whether the write succeeded or failed. For example:
if ! echo "contents" > ./file ; then
echo "ERROR: failed to write to file" >& 2
exit 1
fi
To do the same with heredoc, there are two possible approaches.
1)
if ! cat > ./file << EOF
contents
EOF
then
echo "ERROR: failed to write to file" >& 2
exit 1
fi
2)
if ! cat > ./file ; then
echo "ERROR: failed to write to file" >& 2
exit 1
fi << EOF
contents
EOF
You can test the error case in the above code by replacing the destination file ./file with /file (assuming you're not running as root).
I like the following method of basic redirection for its concision, readability and presentation in an indented script:
<<-End_of_file >file
→ foo bar
End_of_file
Where → is a tab character.
This is standard Bourne shell redirection without forking any cat or tee process.
But it does not work with current bash, even when bash is invoked as /bin/sh.
It has kept working with /bin/zsh for more than 20 years.

Loop ends prematurely when executing a command via SSH in a Bash function [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. It seems to work only on the very first line of the target file, and it stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
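For example, a minimal sketch of the fix (the host and command names are hypothetical; assume do_work.sh calls ssh directly):
# inside do_work.sh: -n makes ssh read stdin from /dev/null
ssh -n user@remotehost "process $1"
# for commands without such a flag, redirect stdin explicitly
some_command "$1" < /dev/null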
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and the redirection operator for < $FILENAME.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
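For instance, a sketch of two nested loops on separate descriptors (the file names are placeholders):
while read -u 9 outer; do
    while read -u 8 inner; do
        echo "$outer : $inner"
    done 8< inner_list.txt
done 9< outer_list.txt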
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
Using the ssh -n option prevents checking the exit status of ssh when a heredoc supplies its input and the output is piped to another program.
So using /dev/null as stdin is preferred.
#!/bin/bash
while read ONELINE ; do
ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
rc=${PIPESTATUS[0]}   # capture PIPESTATUS before the next command overwrites it
if [ "$rc" -ne 0 ] ; then
echo "aborting loop"
exit "$rc"
fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop was returning no output (which yields a non-zero exit status).


Looping inside ftp command

I'm wondering if I can fetch files 1-10 in one go. Let's say I have a shell script that gets video1.jpg, and I want it to count up to video10.jpg and then stop. The script currently gets only video1.jpg; I want it to keep adding +1, up to 10.
#!/bin/bash
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
get video1.jpg
bye
EOT
You can try this:
#!/bin/bash
for((i=1;i<=10;i++))
do
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
get video${i}.jpg
bye
EOT
done
I haven't had a chance to try it right now, but this should do:
#!/bin/bash
for i in {1..10}; do
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
get video$i.jpg
bye
EOT
done
EDIT #2: I was wrong, my answer is almost identical to Shravan Yadav's as you can't loop inside an ftp transaction. You can pick either one as the correct answer, but you should prefer his as he came first I guess :)
As other answers have already pointed out, the here document you pass to the ftp program is just text, and does not have a facility for looping. However, generating ten ftp sessions to get ten files is wasteful -- you can generate a single here document which instructs ftp to fetch them all in one session.
#!/bin/bash
(
printf '%s\n' ascii "user $USER $PASSWD" prompt
printf 'get video%i.jpg\n' {1..10}
printf 'bye\n'
) | ftp -n -v "$HOST"
There will be no facility for proper error handling, though. The real solution is to use a properly scriptable ftp program (lftp, ncftp, what have you) or write your own simple client.
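For instance, a rough lftp sketch (assuming lftp is installed; it reads its command script from stdin and expands the wildcard on the remote side):
lftp -u "$USER,$PASSWD" "$HOST" <<'EOT'
mget video*.jpg
bye
EOT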
Here is an approach with ftp inside the loop:
#!/bin/bash
HOST='hostname'
USER='username'
PASSWD='password'
# Local directory where the files are stored.
cd "/local/directory/from where to upload files/"
# To get all the files added today only.
TODAYSFILES=$(find . -maxdepth 1 -type f -mtime -1)
# remote server directory to upload backup
REMOTEDIR="/directory on remote ftp computer/"
for FILENAME in ${TODAYSFILES[@]}; do
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd $REMOTEDIR
put $FILENAME
bye
EOT
done

scp: how to find out that copying was finished

I'm using scp command to copy file from one Linux host to another.
I run the scp command on host1 and copy a file from host1 to host2. The file is quite big and takes some time to copy.
On host2, the file appears as soon as copying starts, and I can do anything with it even while the copy is still in progress.
Is there any reliable way to find out on host2 whether the copy has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
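A quick sketch of that (ssh-agent and ssh-add are standard OpenSSH tools):
eval "$(ssh-agent)"   # start an agent and export its variables into this shell
ssh-add               # cache your default key once
scp bigfile user@host:
scp tinyfile user@host: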
On sending side (host1) use script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT -eq 0 ]; then
echo 'transfer successful'
touch successful
scp successful USER@DST_SERVER:DST_PATH
else
echo 'transfer failed'
fi
On receiving side (host2) make script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
((CNT++))
sleep "$SLEEP_TIME"
done
if [[ -e successful ]]; then
echo 'successful'
rm successful
# do somethning with FILE
fi
With CNT and MAX_CNT you avoid an endless loop (in case the file "successful" never arrives).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In this example, the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you whether they're identical.
For situations where you don't have SSH access to the remote system -- like an FTP server -- you can download the file after it's uploaded and compare checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum $LOCALFILE | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
-o $MD5TESTFILE \
-sS \
-u $FTPUSER:$FTPPASS \
ftp://$FTPHOST/$REMOTEFILE
MD5DST=$(md5sum $MD5TESTFILE | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
echo "+Local and Remote files match!"
else
echo "-Local and Remote files don't match"
fi
If you use inotify-tools, the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation and discussion of the problem on other forums, I found one more solution. Maybe it can help somebody.
There is a command, lsof, that lists open files. While the copy is in progress the file is held open, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop that waits until lsof returns nothing, and then proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=`lsof | grep $f | wc -l`
while [ $lsofresult -ne 0 ]; do
echo still copying file $f...
sleep 5
lsofresult=`lsof | grep $f | wc -l`
done; echo copying file $f is finished: `ls $f`
For the duplicate question How to check if file has been scp 100% to the remote location, which was about an expect script: to know whether a file has transferred completely, we can add expect 100%, i.e. something like this:
expect -c "
set timeout 1
spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
expect yes/no { send yes\r ; exp_continue }
expect password: { send $SCP_PASSWORD\r }
expect 100%
sleep 1
exit
"
if [ -f "/home/my.file" ]; then
echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
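On the remote end, the wait can then be a simple poll for the marker file, e.g.:
# on the receiving host: block until the marker appears
while [ ! -e complete ]; do
    sleep 1
done
echo "bigfile transfer finished"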
