I'm using a simple script to automate FTP. The script looks like this:
ftp -nv $FTP_HOST<<END_FTP
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
But I would like to pipe STDERR to syslog and STDOUT to a logfile. Normally I would do something like this: ftp -nv $FTP_HOST 1>>ftp.log | logger <<END_FTP, but in this case that won't work because of the <<END_FTP. How should I do it properly to make the script work? Note that I want to redirect only the output of the ftp command inside my script, not the output of the whole script.
This works without using a temp file for the error output. The 2>&1 sends the error output to where standard output is going — which is the pipe. The >> changes where standard output is going — which is now the file — without changing where standard error is going. So, the errors go to logger and the output to ftp.log.
ftp -nv $FTP_HOST <<END_FTP 2>&1 >> ftp.log | logger
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
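The ordering matters because redirections are processed left to right after the pipe is set up. Here is a minimal stand-in demo you can actually run; the demo function replaces ftp, and cat replaces logger (out.log and piped.log are illustrative names):

```shell
#!/bin/bash
# demo() stands in for ftp: one line to stdout, one to stderr.
demo() { echo "normal output"; echo "error output" >&2; }

# Same shape as the ftp command line:
#   stdout -> pipe (set up first by the shell),
#   2>&1   -> stderr now follows stdout into the pipe,
#   >> out.log -> stdout alone is then re-pointed at the file.
demo 2>&1 >> out.log | cat > piped.log
```

Afterwards out.log holds the stdout line and piped.log holds the stderr line, mirroring ftp.log and logger respectively.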
How about:
exec > mylogfile; exec 2> >(logger -t myftpscript)
in front of your ftp script.
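A runnable sketch of that approach, with cat > errs.txt standing in for logger -t myftpscript (the subshell is only there so the redirections don't leak into the calling shell; the filenames are illustrative):

```shell
#!/bin/bash
(
  # From here on, the subshell's stdout goes to the logfile...
  exec > mylogfile
  # ...and its stderr goes to a command, via process substitution.
  exec 2> >(cat > errs.txt)   # stand-in for: exec 2> >(logger -t myftpscript)
  echo "this line lands in mylogfile"
  echo "this line lands in errs.txt" >&2
)
sleep 1   # give the process substitution a moment to flush its output
```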
Another way of doing this I/O redirection is with the { ... } operations, thus:
{
ftp -nv $FTP_HOST <<END_FTP >> ftp.log
user $FTP_USER $FTP_PASS
binary
mkdir $REMOTE_DIR
cd $REMOTE_DIR
lcd $LOCAL
put $FILE
bye
END_FTP
# Optionally other commands here...stderr will go to logger too
} 2>&1 | logger
This is often the best mechanism when more than one command, but not all of them, needs the same I/O redirection.
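A self-contained illustration of the grouping, with tr standing in for logger: both streams from every command inside the braces travel down the one pipe (grouped.log is an illustrative name):

```shell
#!/bin/bash
{
  echo "stdout from command one"
  echo "stderr from command one" >&2
  echo "stdout from command two"
} 2>&1 | tr 'a-z' 'A-Z' > grouped.log
```

All three lines come out uppercased, showing that stderr joined stdout in the pipe.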
In context, though, I think this solution is the best (but that's someone else's answer, not mine):
ftp -nv $FTP_HOST <<END_FTP 2>&1 >> ftp.log | logger
...
END_FTP
Why not create a netrc file and let it do the login and put the file for you?
The netrc file handles the login and lets you define an init macro that creates the needed directory and puts the file you want over there. Most ftp clients let you specify which netrc file to use, so you could keep different netrc files for different purposes.
Here's an example netrc file called my_netrc:
machine ftp_host
login ftp_user
password swordfish
macdef init
binary
mkdir my_dir
cd my_dir
put my_file
bye

Note that the keyword is macdef (not macrodef), the username token in a netrc file is login, and the blank line after bye is required: it terminates the macro definition.
Then, you could do this:
$ ftp -v -N my_netrc $FTP_HOST 2>&1 >> ftp.log | logger
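As a sketch, here is how such a file could be generated from a script. The quoted here-doc keeps the contents literal; the blank line before the delimiter terminates the macdef macro, and chmod 600 matters because many ftp clients refuse a netrc containing a password unless it is readable only by the owner:

```shell
#!/bin/bash
netrc_file=my_netrc
cat > "$netrc_file" <<'EOF'
machine ftp_host
login ftp_user
password swordfish
macdef init
binary
mkdir my_dir
cd my_dir
put my_file
bye

EOF
chmod 600 "$netrc_file"   # owner-only: clients reject world-readable password files
```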
Related
In my script I want to open a specific (device driver) file as FD 3.
exec 3< works fine for this in regular cases.
However the device driver file is only readable as root, so I'm looking for a way to open the FD as root using sudo.
-> How can I open a file (descriptor) with sudo rights?
Unfortunately I have to keep the file open for the runtime of the script, so tricks like piping in or out do not work.
Also I don't want to run the whole script under sudo rights.
If sudo + exec is not possible at all, an alternative would be to call a program in the background, like sudo tail -f, but this poses another set of problems:
how to determine whether the program call was successful
how to get error messages if the call was not successful
how to "kill" the program at the end of execution.
EDIT:
To clarify what I want to achieve:
open /dev/tpm0 which requires root permissions
execute my commands with user permissions
close /dev/tpm0
The reason behind this is that opening /dev/tpm0 blocks other commands from accessing the tpm which is critical in my situation.
Thanks for your help
Can you just do something like the following?
# open the file with root privileges for reading
exec 3< <(sudo cat /dev/tpm0)
# read three characters from open file descriptor
read -n3 somechars <&3
# read a line from the open file descriptor
read line <&3
# close the file descriptor
exec 3<&-
In order to detect a failed open, you could do something like this:
exec 3< <(sudo cat /dev/tpm0 || echo FAILEDCODE)
Then when you first read from fd 3, see if you get FAILEDCODE. Or you could do something like this:
rm -f /tmp/itfailed
exec 3< <(sudo cat /dev/tpm0 || touch /tmp/itfailed)
Then check for /tmp/itfailed; if it exists, the sudo command failed.
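The sentinel pattern can be tried without sudo or /dev/tpm0. In this sketch, a deliberately missing file (a made-up path) triggers the fallback, so the first read from fd 3 sees the sentinel:

```shell
#!/bin/bash
# /no/such/file stands in for an unreadable /dev/tpm0
exec 3< <(cat /no/such/file 2>/dev/null || echo FAILEDCODE)
read -r line <&3           # first read: sentinel only if the open/cat failed
exec 3<&-                  # close the file descriptor
if [ "$line" = FAILEDCODE ]; then
  echo "open failed"
fi
```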
I am doing an FTP transfer and I want to check its status. I don't want to use '$?', as it mostly returns 0 (success) for ftp even when the transfer didn't actually go through.
I know I can check the log file and grep it for "Transfer complete" (the 226 status). That works fine, but I don't want to do it that way: I have many different reports doing FTP, and creating a separate log file for each of them is what I want to avoid.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example of how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv ${HOST})
I'm writing a bash script to grab an archive (specifically, the gcc-4.9.1 source) from ftp.gnu.org, which it does using a here document. I'd also like to direct ftp's output to a log file, in order to make the script's output cleaner while keeping any information that might be needed. I'd also like to use bash's || operator to run an error-catching function that prints some output and exits if ftp returns unsuccessfully.
#get the archive
echo "getting the archive from ftp://${HOST}/${FTPPATH}${ARCHIVE}"
/usr/bin/ftp -inv $HOST > ${VERNAME}_logs/ftp.log || errorcatcher "failed to connect to ftp server, check ${VERNAME}_logs/ftp.log" <<FTPSCRIPT
user anonymous
cd $FTPPATH
binary
get $ARCHIVE
FTPSCRIPT
The problem is, this hangs, and ftp.log looks like this:
Trying 208.118.235.20...
Connected to ftp.gnu.org (208.118.235.20).
220 GNU FTP server ready.
ftp>
ftp>
ftp>
The commands are clearly not getting passed to the ftp client, and I imagine either the output redirection or the || is causing this, since without both of them the script successfully gets the archive.
Is there any syntax that allows me to pass interactive commands to ftp while still allowing output redirection and conditional execution following the return?
You are feeding the wrong command with the here document.
/usr/bin/ftp [ ...args... ] || errorcatcher [...] <<FTPSCRIPT
should be
/usr/bin/ftp [ ...args... ] <<FTPSCRIPT || errorcatcher [...]
The contents of the here document do not begin until the line after the one containing the <<. You could even write
/usr/bin/ftp -inv $HOST > ${VERNAME}_logs/ftp.log <<FTPSCRIPT ||
user anonymous
cd $FTPPATH
binary
get $ARCHIVE
FTPSCRIPT
errorcatcher "failed to connect to ftp server, check ${VERNAME}_logs/ftp.log"
if you find that more readable (I'm not sure that I do, but it's an option), or also
/usr/bin/ftp -inv $HOST > ${VERNAME}_logs/ftp.log <<FTPSCRIPT \
|| errorcatcher "failed to connect to ftp server, check ${VERNAME}_logs/ftp.log"
user anonymous
cd $FTPPATH
binary
get $ARCHIVE
FTPSCRIPT
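The fix is easy to verify with cat in place of ftp: the here-doc feeds the command before the ||, and the fallback still fires on failure. Here errorcatcher, got.txt, and the caught flag are illustrative stand-ins:

```shell
#!/bin/bash
errorcatcher() { caught=yes; echo "ERROR: $1"; }

# Here-doc correctly attached to cat; || only consumes the exit status.
cat <<DOC > got.txt || errorcatcher "cat failed"
user anonymous
binary
DOC

# A failing command with the same shape triggers the fallback.
false <<DOC || errorcatcher "transfer failed"
ignored
DOC
```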
I have a bash script that creates backups, incremental (daily) and full (on Mondays). Every 7 days the script combines the week of backups (full and incremental) and sends them off to an FTP server. The problem I am having is that I want to delete the files from my backup directory after the FTP upload is finished, but I can't do that until I know the file was successfully uploaded. I need to figure out how to capture the '226 Transfer complete' message so I can use it in an if statement to delete the backup files. Any help is greatly appreciated. Also, here is the FTP portion of my script:
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST << EOF
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
I could use whatever means I needed, I suppose; FTP was just something already set up for another backup function. Thanks.
2nd EDIT: Ahmed, the rsync approach works great in tests from the command line, and it's a lot faster than FTP. The server is on the local network, so SSH is not that big of a deal, but it's nice to have for the added security. I will finish implementing it in my script tomorrow. Thanks again.
FTP OPTION
The simple solution would be to do something like this:
ftp -inv $HOST >ftp-results-$YEAR-$MONTH-$DAY.out 2>&1 <<-EOF
user $USER $PASS
cd /baks
bin
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
There is also an issue with your here-document syntax: the closing delimiter (EOF in your case) must appear at the very start of a line, but yours is indented along with the if / fi block. I added a - (making it <<-) so that leading tabs are stripped from the document and from the line containing the closing delimiter, which lets you keep the block tabbed in.
Now when you do this; you can parse the output file to look for the successful put of the file. For example:
if grep -qi '226 transfer complete' ftp-results-$YEAR-$MONTH-$DAY.out; then
echo "It seems that FTP transfer completed fine, we can schedule a delete"
echo "rm -f $PWD/weekending-$YEAR-$MONTH-$DAY.tar.gz" >> scheduled_cleanup.sh
fi
and just run scheduled_cleanup.sh from cron at a given time; this way you will have some margin before the files are cleaned up.
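End to end, with a fabricated results file in place of a real transfer (the filenames here are demo stand-ins), the pattern looks like this:

```shell
#!/bin/bash
out=ftp-results-demo.out
# Fabricated server transcript standing in for the real ftp output file.
printf '200 PORT command successful.\n226 Transfer complete.\n' > "$out"

if grep -qi '226 transfer complete' "$out"; then
  echo "It seems that FTP transfer completed fine, we can schedule a delete"
  echo "rm -f $PWD/weekending-demo.tar.gz" >> scheduled_cleanup.sh
fi
```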
If your remote FTP server has good SITE or PROXY options you may be able to get the remote FTP to run a checksum on the uploaded file after successful upload and return the result.
SCP / RSYNC OPTION
Using FTP is clunky and dangerous; you should really see whether you can get scp or ssh access to the remote system.
If you can, then generate an ssh key (if you don't have one) using ssh-keygen:
ssh-keygen -N "" -t rsa -f ftp-rsa
append the contents of ftp-rsa.pub to /home/$USER/.ssh/authorized_keys on $HOST and you have a much nicer method for uploading files:
if scp -B -C weekending-$YEAR-$MONTH-$DAY.tar.gz $USER@$HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
Or better yet using rsync:
if rsync --progress -a weekending-$YEAR-$MONTH-$DAY.tar.gz $USER@$HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
et voilà, you are done; since rsync works over ssh, you are happy and secure.
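The delete-only-on-success logic is the same whichever transport you pick. In this local sketch, cp plays the role of scp/rsync so the success branch can be exercised without a server (paths come from mktemp):

```shell
#!/bin/bash
archive=$(mktemp)      # stands in for weekending-$YEAR-$MONTH-$DAY.tar.gz
dest=$(mktemp -d)      # stands in for $HOST:/baks/

# cp plays the role of scp/rsync: delete the local copy only on success.
if cp "$archive" "$dest"/; then
  echo "Upload successful" 1>&2
  rm -f "$archive"
else
  echo "Upload failed" 1>&2
fi
```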
Try the following:
#!/bin/bash
runifok() { echo "will run this when the transfer is OK"; }
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST <<EOF | grep -qi '226 transfer complete' && runifok
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
Test it, and when it runs OK, replace the echo in the runifok function with the commands you want to execute after a successful upload.
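You can check the control flow of that pipeline without a server: here cat stands in for ftp -inv $HOST, and the here-doc plays a canned server transcript (the ran flag is a demo stand-in):

```shell
#!/bin/bash
runifok() { ran=yes; echo "will run this when the transfer is OK"; }

# cat stands in for ftp; grep's exit status drives the && branch.
cat <<EOF | grep -qi '226 transfer complete' && runifok
230 Login successful.
226 Transfer complete.
EOF
```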
Let's assume the script I want to write ssh'es to 1.2.3.4 and then invokes ls.
The problem is that when the line "ssh 1.2.3.4" is invoked, a password is required, so the flow stops; even after I fill in the password, the script won't continue.
How can I make the script continue after the password is given?
Thx!
You want to do public key authentication. Here are some resources which should get you going.
http://magicmonster.com/kb/net/ssh/auto_login.html
http://www.cs.rpi.edu/research/groups/vision/doc/auto/ssh/ssh_public_key_authentication.html
I would post a couple more links, but I don't have enough reputation points. ;) Just google on "SSH automated login" or "SSH public key authentication" if you need more help.
Actually you're trying to run ls locally, but you have an ssh session open, so ls won't run until that session is closed. If you want to run ls remotely, you should use
ssh username@host COMMAND
where COMMAND is the command you want to run. The ssh session will finish as soon as the command completes, and you can capture its output normally.
I would suggest using RSA (public key) authentication for any script that needs ssh.
I just tried this script:
#!/bin/sh
ssh vps1 ls
mkdir temp
cd temp
echo test > file.txt
And it works: it connects to my server and lists my home directory. Then, locally, it creates the temp dir, cds into it, and creates file.txt with 'test' inside.
Write a simple login bash script named login_to and give it exec permissions (chmod 744 login_to):
#!/bin/bash
if [ "$1" = 'srv1' ]; then
echo 'srv1-pass' | pbcopy
ssh root@11.11.11.11
fi
if [ "$1" = 'foo' ]; then
echo 'barbaz' | pbcopy
ssh -t dux@22.22.22.22 'cd ~/somedir/someotherdir; bash'
fi
Now use it like this:
login_to srv1
login_to foo
When asked for the password, just paste (ctrl+v or command+v) and you will be logged in.