Linux FTP put success output

I have a bash script that creates backups, incremental (daily) and full (on Mondays). Every 7 days the script combines the week of backups (full and incremental) and sends them off to an FTP server. The problem I am having is that I want to delete the files from my backup directory after the FTP upload is finished, but I can't do that until I know the file was successfully uploaded. I need to figure out how to capture the '226 Transfer complete' response so I can use it in an 'if' statement to delete the backup files. Any help is greatly appreciated. Also, here is the FTP portion of my script:
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST << EOF
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
I could use whatever means I needed, I suppose; FTP was something already set up for another backup function for something else, thanks
2nd EDIT: Ahmed, rsync works great in tests from the command line, and it's a lot faster than FTP. The server is on the local network, so SSH is not that big a deal, but it's nice to have for the added security. I will finish implementing it in my script tomorrow, thanks again.

FTP OPTION
The simple solution would be to do something like this:
ftp -inv $HOST >ftp-results-$YEAR-$MONTH-$DAY.out 2>&1 <<-EOF
user $USER $PASS
cd /baks
bin
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
Also, a note on your here-document syntax: I added a - after << because your script indents the here-document (it's tabbed in for the if / fi block), and with plain << the closing delimiter (EOF in your case) is only recognized at the very start of a line. The <<- form strips leading tabs from the body and from the delimiter, so the indentation works; that's why the [-] is required.
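For example, a minimal sketch of the behavior (the indentation shown here must be actual tab characters; <<- strips leading tabs, not spaces):
if true; then
	cat <<-EOF
	both this body line and the closing EOF are tab-indented
	EOF
fi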
Now when you do this, you can parse the output file to look for a successful put of the file. Note that servers typically reply "226 Transfer complete." with a capital T, so it is safer to match case-insensitively. For example:
if grep -qi '226 .*complete' ftp-results-$YEAR-$MONTH-$DAY.out; then
echo "It seems that FTP transfer completed fine, we can schedule a delete"
echo "rm -f $WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" >> scheduled_cleanup.sh
fi
Then just run scheduled_cleanup.sh from cron at a given time; this way you have some margin before the files are cleaned up.
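For example, a crontab entry like the following (the path is hypothetical) would run the cleanup every day at 03:00:
0 3 * * * sh /path/to/scheduled_cleanup.sh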
If your remote FTP server has good SITE or PROXY options, you may be able to get the remote server to run a checksum on the uploaded file after a successful upload and return the result.
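For example, some servers implement a nonstandard XCRC (CRC checksum) command that you could send with quote from the ftp client; this is purely illustrative and only works if your particular server supports it:
quote XCRC weekending-$YEAR-$MONTH-$DAY.tar.gz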
SCP / RSYNC OPTION
Using FTP is clunky and dangerous; you should really try to see if you can get scp or ssh access to the remote system.
If you can, then generate an ssh key (if you don't have one) using ssh-keygen:
ssh-keygen -N "" -t rsa -f ftp-rsa
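If ssh-copy-id is available, it can install the key on the server for you (a sketch using the same $USER and $HOST as above):
ssh-copy-id -i ftp-rsa.pub $USER@$HOST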
Alternatively, append ftp-rsa.pub to /home/$USER/.ssh/authorized_keys on $HOST yourself. Either way, you now have a much nicer method for uploading files:
if scp -B -C weekending-$YEAR-$MONTH-$DAY.tar.gz $USER@$HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
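Tying this back to the original question, a minimal sketch that deletes the local archive only after scp reports success (same variables as in the original script):
ARCHIVE=weekending-$YEAR-$MONTH-$DAY.tar.gz
if scp -B -C "$WKBKDIR/$ARCHIVE" $USER@$HOST:/baks/ ; then
rm -f "$WKBKDIR/$ARCHIVE"
else
echo "Upload failed, keeping $ARCHIVE" 1>&2
fi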
Or better yet using rsync:
if rsync --progress -a weekending-$YEAR-$MONTH-$DAY.tar.gz $HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
Et voilà, you are done. Since rsync works over ssh, you are happy and secure.
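In fact, rsync can handle the cleanup itself: its --remove-source-files option deletes each source file after it has been successfully transferred, which is exactly what the original question asks for:
rsync --progress --remove-source-files -a weekending-$YEAR-$MONTH-$DAY.tar.gz $HOST:/baks/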

Try the following:
#!/bin/bash
runifok() { echo "will run this when the transfer is OK"; }
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST <<EOF | grep -qi '226 .*complete' && runifok
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
Test it, and when it runs OK, replace the echo in the runifok function with the commands you want to execute after the upload is successful.
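For the original question, runifok could simply delete the uploaded archive (same variables as in the question's script):
runifok() { rm -f "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz"; }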

Related

Accessing and transferring files from an FTP server to my FTP server through a cron file

I have an FTP server that uses cron to automate tasks, and I would like to use it to access another FTP server, get a file that starts with 26 and has the extension .csv, transfer it to my FTP server (the one running the cron), and delete the file on the origin FTP server, every Friday of the week. Can somebody help me with the script code?
What I have right now is this:
#!/bin/bash -x
filename="dir/*.csv"
hostname="files.test"
username="testuser"
password="testpassword"
ftp -in $hostname <<EOF
quote USER $username
quote PASS $password
binary
get $filename
quit
EOF
Please help.
#!/bin/bash
USER=user
PASS=password
URL=myIP
PLACE=tmp
#
ftp -v -n > /tmp/xftpb.log <<EOF
open $URL
user $USER $PASS
binary
prompt
cd $PLACE
mget 26*.csv
mdel 26*.csv
quit
EOF
(The prompt command turns off interactive prompting, so mget and mdel do not ask for confirmation on every file.) To run it every Friday at 8:00, add this to the crontab (fields: minute hour day-of-month month day-of-week; 5 = Friday):
0 8 * * 5 /path/mybash.sh

How to connect input/output to SSH session

What is a good way to be able to directly send to STDIN and receive from STDOUT of a process? I'm specifically interested in SSH, as I want to do the following:
[ssh into a remote server]
[run remote commands]
[run local commands]
[run remote commands]
etc...
For example, let's say I have a local script "localScript" that will output the next command I want to run remotely, depending on the output of "remoteScript". I could do something like:
output=$(ssh myServer "./remoteScript")
nextCommand=$(./localScript $output)
ssh myServer "$nextCommand"
But it would be nice to do this without closing/reopening the SSH connection at every step.
You can redirect SSH input and output to FIFOs (named pipes) and then use these for two-way communication.
For example local.sh:
#!/bin/sh
SSH_SERVER="myServer"
# Redirect SSH input and output to temporary named pipes (FIFOs)
SSH_IN=$(mktemp -u)
SSH_OUT=$(mktemp -u)
mkfifo "$SSH_IN" "$SSH_OUT"
ssh "$SSH_SERVER" "./remote.sh" < "$SSH_IN" > "$SSH_OUT" &
# Open the FIFOs and clean up the files
exec 3>"$SSH_IN"
exec 4<"$SSH_OUT"
rm -f "$SSH_IN" "$SSH_OUT"
# Read and write
counter=0
echo "PING${counter}" >&3
cat <&4 | while read line; do
echo "Remote responded: $line"
sleep 1
counter=$((counter+1))
echo "PING${counter}" >&3
done
And a simple remote.sh:
#!/bin/sh
while read line; do
echo "$line PONG"
done
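Running ./local.sh should then produce output along these lines, one exchange per second (assuming remote.sh sits executable in the remote user's home directory):
$ ./local.sh
Remote responded: PING0 PONG
Remote responded: PING1 PONG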
The method you are using works, but I don't think you can reuse the same connection every time. You can, however, do this using screen, tmux, or nohup, but that would greatly increase the complexity of your script, because you would have to emulate keypresses/shortcuts. I'm not even sure you can do that directly in bash; to emulate keypresses you would have to run the script in a new X terminal and use xdotool.
Another method is to delegate the whole script to the SSH server by just running the script on the remote server itself:
ssh root@MachineB 'bash -s' < local_script.sh
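If local_script.sh needs arguments, they can go after -s (arg1 and arg2 are placeholders):
ssh root@MachineB 'bash -s -- arg1 arg2' < local_script.sh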

FTP status check using a variable - Linux

I am doing an FTP transfer and I want to check its status. I don't want to use '$?', as it mostly returns 0 (success) for ftp even when the transfer didn't actually go through.
I know I can check the log file and grep it for "Transfer complete" (status 226). That works fine, but I don't want to do it that way because I have many different reports doing FTP, and creating multiple log files for all of them is what I want to avoid.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv ${HOST})
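You can then test the captured transcript inside the script itself, for example (a successful upload is reported with a 226 reply; matched case-insensitively here):
if echo "${OUTPUT}" | grep -qi '226 .*complete'; then
echo "FTP transfer completed"
else
echo "FTP transfer failed" 1>&2
fi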

Save password between bash script execution

I want my script to prompt for a password, but I only want it to do so once per session (let's say half an hour). Is it possible to save the login credentials of a user between script executions securely? This needs to be a bash script, because it has to run on several different types of UNIX, on which I am not authorized to install anything.
I was thinking of encrypting a text file to which I would write the login credentials, but where would I keep the password to that file? Seems like I just re-create the problem.
I know about utilities which run an encrypted script, and I am very much against using them, because I do not like the idea of keeping a master password inside a script that people might need to debug later on.
EDIT: This is not a server logon script, but authenticates with a web server that I have no control over.
EDIT 2: Edited session duration
Depending on what the "multiple invocations" of the script are doing, you could do this using 2 scripts, a server and a client, using a named pipe to communicate. Warning: this may be unportable.
Script 1 "server":
#!/bin/bash
trigger_file=/tmp/trigger
read -s -p "Enter password: " password
echo
echo "Starting service"
mknod $trigger_file p
cmd=
while [ "$cmd" != "exit" ]; do
read cmd < $trigger_file
echo "received command: $cmd"
curl -u username:$password http://www.example.com/
done
rm $trigger_file
Script 2 "client":
#!/bin/bash
trigger_file=/tmp/trigger
cmd=$1
echo "sending command: $cmd"
echo $cmd > $trigger_file
Running:
$ ./server
Enter password: .....
Starting service
received command: go
In another window:
$ ./client go
sending command: go
EDIT:
Here is a unified self-starting server/client version.
#!/bin/bash
trigger_file=/tmp/trigger
cmd=$1
if [ -z "$cmd" ]; then
echo "usage: $0 cmd"
exit 1
fi
if [ "$cmd" = "server" ]; then
read -s password
echo "Starting service"
mknod $trigger_file p
cmd=
while [ "$cmd" != "exit" ]; do
read cmd < $trigger_file
echo "($$) received command $cmd (pass: $password)"
curl -u username:$password http://www.example.com/
done
echo exiting
rm $trigger_file
exit
elif [ ! -e $trigger_file ]; then
read -s -p "Enter password: " password
echo
echo $password | $0 server &
while [ ! -e $trigger_file ]; do
sleep 1
done
fi
echo "sending command: $cmd"
echo $cmd > $trigger_file
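Usage is the same as before, except that the first invocation prompts for the password and spawns the server in the background; subsequent invocations just send commands, and "exit" shuts the server down. Output along these lines:
$ ./script go
Enter password:
sending command: go
$ ./script exit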
You are correct that saving the password anywhere accessible re-creates the problem. Also, asking for credentials once per day instead of on each run is, from the point of view of system security, essentially the same as having no authentication system at all. Keeping the password anywhere easily readable (whether as plain text or encrypted with a plain-text key) eliminates any security you gained by having a password, at least against anyone with decent system knowledge/scanning tools.
The traditional way of solving this problem (and one of the more secure mechanisms) is to use SSH keys in lieu of passwords. Once a user has the key they don't need to ever re-enter their authentication manually. For even better security you can make the SSH key login as a user who only has execute privileges to the script/executable. Thus they wouldn't be able to change what the script does nor reproduce it by reading the file. Then the actual owner of the file can easily edit/run the script with no authentication required while keeping other users in a restricted use mode.
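For the restricted-execution part, the authorized_keys file itself can pin a key to a single command; a sketch (the script path and key material are placeholders):
command="/usr/local/bin/report.sh",no-port-forwarding,no-X11-forwarding ssh-rsa AAAA...key... user@client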
Usually, passwords are not stored, for security reasons; rather, a password hash is stored, and every time the user enters the password the hash is compared for authentication. However, your requirement is more like a 'remember password' feature (as in a web browser, or Windows apps). In that case there is no way around storing the password in a flat file and then using something like gpg to encrypt the file, but then you end up needing a key for the encryption.
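A sketch of that approach with gpg's symmetric mode, which shows the circularity: decrypting still requires a passphrase from somewhere.
# Encrypt the credentials (gpg prompts for a passphrase)
echo "$password" | gpg --symmetric --cipher-algo AES256 -o cred.gpg
# Decrypt later -- gpg prompts for the same passphrase again
password=$(gpg --quiet --decrypt cred.gpg)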
The entire design of asking the user for credentials once per day is as good as not asking for any credentials at all. A tightly secured system should have appropriate timeouts to log the user off after inactivity, especially for back-end server operations.

Linux script - password step cuts the flow

Let's assume the script I want to write ssh's to 1.2.3.4 and then invokes ls.
The problem is that when the line "ssh 1.2.3.4" is invoked, a password is required, so the flow stops; even after I fill in the password, the script won't continue.
How can I make the script continue after the password is given?
Thx!
You want to do public key authentication. Here are some resources which should get you going.
http://magicmonster.com/kb/net/ssh/auto_login.html
http://www.cs.rpi.edu/research/groups/vision/doc/auto/ssh/ssh_public_key_authentication.html
I would post a couple more links, but I don't have enough reputation points. ;) Just google on "SSH automated login" or "SSH public key authentication" if you need more help.
Actually you're trying to run ls locally, but you have an ssh session opened, so ls won't run until that session is closed. If you want to run ls remotely, you should use
ssh username@host COMMAND
where COMMAND is the command you want to run. The ssh session will finish as soon as the command has run, and you can capture its output normally.
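For example, to capture a remote directory listing into a local variable (the host name is a placeholder):
output=$(ssh username@host ls)
echo "$output"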
I would suggest you use RSA (public key) authentication for scripts that need ssh.
I just tried this script:
#!/bin/sh
ssh vps1 ls
mkdir temp
cd temp
echo test > file.txt
And it works. I can connect to my server and list my home directory. Then, locally, it creates the temp dir, cds into it, and creates file.txt with 'test' inside.
Write a simple login bash script named login_to and give it exec permissions (chmod 744 login_to):
#!/bin/bash
if [ "$1" = 'srv1' ]; then
echo 'srv1-pass' | pbcopy
ssh root@11.11.11.11
fi
if [ "$1" = 'foo' ]; then
echo 'barbaz' | pbcopy
ssh -t dux@22.22.22.22 'cd ~/somedir/someotherdir; bash'
fi
Now use it like this:
login_to srv1
login_to foo
When asked for the password, just paste (Ctrl+V or Command+V) and you will be logged in.
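Note that pbcopy is macOS-only; on Linux, xclip can play the same role (assuming it is installed):
echo 'srv1-pass' | xclip -selection clipboard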
