Linux sftp: file transfer error

I have a small problem regarding "sftp".
I have a script that simply transfers a file to a remote sftp server, but when the script runs, the sftp step fails and the whole script fails.
I then have to transfer the file manually, using the same command as in the script, and it works fine.
So my problem is that the sftp command runs smoothly when I run it manually, but fails when the same command is run from the script.
This is the code I'm using:
sftp -v -b sftp_input.txt UserId@aa.bb.cc.dd
if (($? > 0 ));
then
echo "sftp error. Exiting.."
exit
fi
where sftp_input.txt contains the command that puts the file on the remote server.
Please advise.

The script can't work if it is malformed, for example if the parts of the if statement aren't separated or the closing fi is missing. Here's the correct form for your script:
sftp -v -b sftp_input.txt UserId@aa.bb.cc.dd
if (($? > 0 )); then
echo "sftp error. Exiting.."
exit
fi
If you want it all in one line, then:
sftp -v -b sftp_input.txt UserId@aa.bb.cc.dd; if (($? > 0 )); then echo "sftp error. Exiting.."; exit; fi
But as you can see, it's a bad idea; better to write readable, well-indented code.
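For completeness, here is a minimal, self-contained sketch of the whole script; the batch-file contents shown in the comment are an assumption, not from the original question. With -b, sftp aborts on the first failing batch command and exits non-zero, which is what the status check relies on:
#!/bin/bash
# Assumed contents of sftp_input.txt (adjust the path to your file):
#   put /path/to/local/file
#   bye
sftp -v -b sftp_input.txt UserId@aa.bb.cc.dd
status=$?
if (( status > 0 )); then
    echo "sftp error (exit code $status). Exiting.." >&2
    exit "$status"
fi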

Related

Commands in bash script don't work properly

I have this script:
#!/bin/bash
./process-list $1
det=$?
echo $det
if [ $det -eq 1 ]
then
echo "!!!"
ssh -n -f 192.0.2.1 "/usr/local/bin/sshfs -r 192.0.2.2:/home/sth/rootcheck_redhat /home/ossl7/r"
rk=$(ssh -n -f 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck >&2; echo $?' 2>res)
if [ $rk -eq 0 ]
then
echo "not!"
fi
fi
exit;
I ssh to system 192.0.2.1 and run the sshfs command on it. Actually, I want to mount a directory of system 192.0.2.2 on system 192.0.2.1 and then run a program (which is located in that directory) on system 192.0.2.1. All of these ssh and sshfs commands work properly when I run them manually, and the output of ossec-rootcheck is written to the file res; but when I run this script, the mount is done but no output is written to res. I guess ossec-rootcheck runs, but I don't know why the output isn't written!
This script used to work properly; I don't know what suddenly happened!
As far as I understand the program, the remote command redirects stdout to stderr (>&2), but how do you get that back to the local machine where the command substitution is evaluated?
The closing ' on the rk= line means the 2>res happens locally (and there is no error from ssh; the remote error, if any, is lost when ssh completes successfully). You could try >res instead: it will capture whatever ssh prints out, unfortunately including non-errors.
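To make the difference concrete, here is a hedged sketch of both placements; the remote output path is hypothetical, and dropping -f is a suggestion, not part of the original answer:
# Redirect on the remote side: the report lands in a file on 192.0.2.1
ssh -n 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck > /tmp/rootcheck.out 2>&1; echo $?'

# Redirect locally: without -f, ssh stays in the foreground, and its
# stderr (which carries the remote stderr) is still connected when the
# command substitution finishes, so res is written on the local machine
rk=$(ssh -n 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck >&2; echo $?' 2>res)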

Read command in bash script not waiting for user input when piped to bash?

Here is what I'm entering in Terminal:
curl --silent https://raw.githubusercontent.com/githubUser/repoName/master/installer.sh | bash
The WordPress-installing bash script contains a "read password" command that is supposed to wait for the user to input their MySQL password. But for some reason that doesn't happen when I run it via "curl githubURL | bash". When I download the script via wget and run it via "sh installer.sh", it works fine.
What could be the cause of this? Any help is appreciated!
If you want to run a script from a remote server without saving it locally, you can try this.
#!/bin/bash
RunThis=$(lynx -dump http://127.0.0.1/example.sh)
if [ $? = 0 ] ; then
bash -c "$RunThis"
else
echo "There was a problem downloading the script"
exit 1
fi
In order to test it, I wrote an example.sh:
#!/bin/bash
# File /var/www/example.sh
echo "Example read:"
read line
echo "You typed: $line"
When I run Script.sh, the output looks like this.
$ ./Script.sh
Example read:
Hello World!
You typed: Hello World!
Unless you absolutely trust the remote script, examine it before executing it this way.
It won't stop for read:
When you pipe into bash, the script runs in a child process whose standard input is the pipe, i.e. the script text itself, not your terminal, so read has nothing to wait on.
You also cannot pass values back to the parent (modify the parent's environment) from the child; throughout this process your interactive shell remains the parent process.
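A common workaround, offered here as a suggestion rather than something from the answers above, is to make read take its input from the controlling terminal explicitly; that works even when the script itself arrives on stdin:
# Read from the terminal, not from stdin (which is the piped script)
read -r -s -p "MySQL password: " password < /dev/tty
echo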

Notify via email if something goes wrong in the shell script

fileexist=0
mv /data/Finished-HADOOP_EXPORT_&Date#.done /data/clv/daily/archieve-wip/
fileexist=1
--some other script below
Above is the shell script in which, inside a for loop, I am moving some files. I want to notify myself via email if something goes wrong in the moving process. As I am running this script on a Hadoop cluster, it is possible that the cluster goes down while it is running, etc. How can I build a better error handling mechanism into this shell script? Any thoughts?
Well, at least you need to know what you are expecting to go wrong; based on that, you can do this:
mv ..... 2> err.log
if [ $? -ne 0 ]
then
cat ./err.log | mailx -s "Error report" admin@abc.com
rm ./err.log
fi
Or, as William Pursell suggested, use:
trap 'rm -f err.log' 0; mv ... 2> err.log || < err.log mailx ...
mv returns a non-zero exit code on error, and $? holds that code. If the entire server goes down, then unfortunately this script won't run either, so that case is better left to more advanced monitoring tools, such as Foglight, running on a different monitoring server. For more basic checks, you can use the method above.
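Putting both ideas together, a minimal sketch of the loop could look like this; SRC_DIR, DEST_DIR, and ADMIN are placeholders I'm assuming, not names from the original question:
# Move each finished file; on failure, mail the captured stderr
for f in "$SRC_DIR"/*.done; do
    if ! mv "$f" "$DEST_DIR"/ 2>err.log; then
        mailx -s "Error report: failed to move $f" "$ADMIN" < err.log
    fi
done
rm -f err.log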

Loop until connected to SSH

Sometimes when connecting to a remote SSH server I get Connection Closed By *IP*; Couldn't read packet: Connection reset by peer. But after trying one or two more times it connects properly.
This presents a problem with a few bash scripts I use to automatically upload my archived backups to the SSH server, like so;
export SSHPASS=$sshpassword
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.sql.gz
bye
!
How can I have this part loop until it actually properly connects?
UPDATE: (Solution)
RETVAL=1
while [ $RETVAL -ne 0 ]
do
export SSHPASS=$sshpassword
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.tgz
bye
!
RETVAL=$?
[ $RETVAL -eq 0 ] && echo Success
[ $RETVAL -ne 0 ] && echo Failure
done
Try something like this:
export SSHPASS=$sshpassword
sshpassFunc() {
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.sql.gz
bye
!
}
until sshpassFunc; do
sleep 1
done
(not tested)
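If you'd rather not retry forever, a bounded variant of the same loop (my assumption, not part of the answer above) could give up after a fixed number of attempts:
# Stop after 10 failed attempts instead of looping indefinitely
tries=0
until sshpassFunc; do
    (( ++tries >= 10 )) && { echo "Giving up after $tries attempts" >&2; exit 1; }
    sleep 1
done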
I am not a shell scripting expert, but I would check the return value of sshpass when it exits.
From man ssh:
ssh exits with the exit status of the remote command or with 255 if an error occurred.
From man sshpass:
Return Values
As with any other program, sshpass returns 0 on success. In case of failure, the following return codes are used:
1 - Invalid command line argument
2 - Conflicting arguments given
3 - General runtime error
4 - Unrecognized response from ssh (parse error)
5 - Invalid/incorrect password
6 - Host public key is unknown. sshpass exits without confirming the new key.
In addition, ssh might be complaining about a man in the middle
attack. This complaint does not go to the tty. In other words, even
with sshpass, the error message from ssh is printed to standard error.
In such a case ssh's return code is reported back. This is typically
an unimaginative (and non-informative) "255" for all error cases.
So try to run the command, and check its return value. If the return value was not 0 (for SUCCESS) then try again. Repeat using a while loop until you succeed.
Sidenote: why are you using sshpass instead of public-key (passwordless) authentication? It is more secure (you don't have to write down your password) and makes logging in via regular ssh as easy as ssh username@host.
There's even an easy tool to set it up: ssh-copy-id.
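For reference, the one-time setup is roughly this (it assumes a key pair already exists; run ssh-keygen first if it doesn't):
# Copy your public key into the server's authorized_keys
ssh-copy-id root@$sshaddress
# From then on, ssh/sftp log in without prompting for a password
sftp root@$sshaddress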

How to re-run the curl command automatically when an error occurs

Sometimes when I execute a bash script with the curl command to upload some files to my ftp server, it returns an error like:
56 response reading failed
and I have to find the failed line and re-run it manually, and then it is OK.
I'm wondering whether it could be re-run automatically when the error occurs.
My scripts is like this:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do curl -T $files ftp.myserver.com --user ID:pw ;
done
But sometimes A, B, and C are uploaded successfully while only D is left with an "error 56", so I have to rerun the curl command manually. Besides, as Will Bickford said, I prefer that no confirmation be required, because I'm always asleep at the time the script is running. :)
Here's a bash snippet I use to perform exponential back-off:
# Retries a command a configurable number of times with backoff.
#
# The retry count is given by ATTEMPTS (default 5), the initial backoff
# timeout is given by TIMEOUT in seconds (default 1.)
#
# Successive backoffs double the timeout.
function with_backoff {
local max_attempts=${ATTEMPTS-5}
local timeout=${TIMEOUT-1}
local attempt=1
local exitCode=0
while (( attempt <= max_attempts ))
do
if "$#"
then
return 0
else
exitCode=$?
fi
echo "Failure! Retrying in $timeout.." 1>&2
sleep $timeout
attempt=$(( attempt + 1 ))
timeout=$(( timeout * 2 ))
done
if [[ $exitCode != 0 ]]
then
echo "You've failed me for the last time! ($#)" 1>&2
fi
return $exitCode
}
Then use it in conjunction with any command that properly sets a failing exit code:
with_backoff curl 'http://monkeyfeathers.example.com/'
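The defaults can be overridden per call through the ATTEMPTS and TIMEOUT variables that the function reads, for example:
# 3 attempts, starting with a 2-second backoff (doubling each retry)
ATTEMPTS=3 TIMEOUT=2 with_backoff curl 'http://monkeyfeathers.example.com/'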
Perhaps this will help. It will try the command, and if it fails, it will tell you and pause, giving you a chance to fix run-my-script.sh.
COMMAND=./run-my-script.sh
until $COMMAND; do
read -p "command failed, fix and hit enter to try again."
done
I have faced a similar problem, needing to contact servers with curl while they were still in the process of starting up, or while services were temporarily unavailable for whatever reason. The scripting was getting out of hand, so I made a dedicated retry tool that will retry a command until it succeeds:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do retry curl -f -T $files ftp.myserver.com --user ID:pw ;
done
The curl command has the -f option, which makes curl return exit code 22 when the server reports an error, instead of exiting with success.
By default, the retry tool will run the curl command over and over until it returns status zero, backing off for 10 seconds between retries. In addition, retry reads from stdin once and only once, writes to stdout once and only once, and writes all stdout to stderr if the command fails.
Retry is available from here: https://github.com/minfrin/retry
