I am trying to move files from one directory to another on a remote SFTP server through a shell script, after downloading the files locally. I understand that there is no wildcard move function in sftp, so it seems my only option is to rename files individually.
Can someone help me improve the code below, or suggest a better way of writing it?
All I am trying to do is move files to an archive directory on the SFTP server once they are downloaded to my local directory.
I know there are other ways to do it with Python/Perl scripts, but I am restricted to shell scripting on a Linux box.
#!/usr/bin/ksh
#LOGGING
LOGFILE="/tmp/test.log"
#SFTP INFO
FTP_SERVER="test.rebex.net"
FTP_USER="demo"
FTP_PWD="password"
FTP_PORT=22
FTP_PICKUP_DIR="/"
LOCAL_DIR="/"
#-------DOWNLOAD FILES
date >> $LOGFILE
expect <<END #> $LOGFILE
spawn sftp -P $FTP_PORT $FTP_USER@$FTP_SERVER
expect "*password: "
send "$FTP_PWD\r"
expect "sftp> "
send "mget *.ext\r"
expect "sftp> "
send "exit\r"
expect eof
END
#--------- MOVE FILES TO ARCHIVE ON SERVER
cd /home/ravi/Files
for fl in *.ext
do
date >> $LOGFILE
expect <<END #> $LOGFILE
spawn sftp -P $FTP_PORT $FTP_USER@$FTP_SERVER
expect "*password: "
send "$FTP_PWD\r"
expect "sftp> "
send "rename $fl /ARCHIVE/$fl\r"
expect "sftp> "
send "exit\r"
expect eof
END
done
#For Loop End
You can use lftp, which has an mmv command:
mmv [-O directory] file(s) directory
Move the specified files to a target directory. The target directory can be specified after the -O option or as the last argument.
-O <dir> specifies the target directory where the files should be placed.
Example usage
lftp -u $FTP_USER,$FTP_PWD sftp://$FTP_SERVER:22 <<EOF
mmv /path/of/files/*.ext /path/of/target/dir
EOF
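Applied to the original question, the whole workflow (download, then archive on the server) fits in one lftp session instead of one sftp connection per file. A minimal sketch, assuming the remote files sit in the login directory and an /ARCHIVE directory already exists on the server; the command list is built in a function so it can be inspected or logged before being piped to lftp:

```shell
#!/bin/sh
# Credentials taken from the question; adjust to your setup.
FTP_SERVER="test.rebex.net"
FTP_USER="demo"
FTP_PWD="password"

# Build the lftp command list separately so it can be checked before running.
build_lftp_script() {
    echo "mget *.ext"            # download matching files to the local cwd
    echo "mmv *.ext /ARCHIVE"    # then move them into the archive dir on the server
    echo "bye"
}

build_lftp_script
# To actually run it (commented out here; requires network access):
# build_lftp_script | lftp -u "$FTP_USER,$FTP_PWD" "sftp://$FTP_SERVER:22"
```

This replaces the per-file reconnect loop with a single session and a single wildcard move.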
I am new to Unix. My client has a requirement to sftp some data files to a legacy system server, where files with the same names already exist. What we want is that when we sftp a file to the server, its data is appended to the existing file.
Let's say on the legacy server there is a file A.dat with 10 rows. Now I sftp a file A.dat with 5 rows. After the sftp, A.dat on the legacy server should have 15 rows. Also, if the file doesn't exist on the legacy system, the script should place the file.
Any quick response is highly appreciated. My current sftp script looks like below.
#!/usr/bin/expect -d
set timeout -1
spawn sftp user@server
expect "sftp>"
send "cd /destinationpath\n"
expect "sftp>"
send "lcd /sourcepath\n"
expect "sftp>"
send "put A.dat\n"
expect "sftp>"
send "exit\n"
interact
Try adding -a to put, or use reput. See the manual for your version of sftp; the flag might not be supported, or might differ.
If it's just one file I would use ssh directly and append to the file: ssh user@server "cat >> /destinationpath/A.dat" < /sourcepath/A.dat
Make sure to use >>, as > would overwrite your file.
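The append semantics are exactly those of a local cat >>, so the 10 + 5 = 15 rows example from the question can be checked locally before wrapping the cat >> in ssh. A minimal local sketch:

```shell
#!/bin/sh
# Simulate the legacy server's file (10 rows) and the incoming upload (5 rows).
workdir=$(mktemp -d)
seq 1 10 > "$workdir/A.dat"        # existing file on the "server"
seq 1 5  > "$workdir/upload.dat"   # file being sent

# >> appends the new rows; it also creates A.dat if it doesn't exist yet,
# which covers the "place the file if missing" requirement.
cat "$workdir/upload.dat" >> "$workdir/A.dat"

wc -l < "$workdir/A.dat"   # 15
rm -r "$workdir"
```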
I am building an FTP bash script to generate a .csv file and transfer it from a Linux machine to another server, but I have a problem: it triggers an error and the file is not transferred to the 2nd server. What could be the problem?
This is the error:
TEST: A file or directory in the path name does not exist.
Filename invalid
And it doesn't matter if I put a / before TEST, it triggers the same issue.
This is my script:
HOST='ipadress'
USER='user'
PASSWD=''
TARGET='TEST'
#Parameters
set -x
DATE=`date +%Y%m%d%H%M`
SQL=/home/sql_statement.sql
QUERYCMD=/home/report.sh
CSV=/home/csv/test_$DATE.csv
#Run the SQL query and put the output in the folder
$QUERYCMD ${SQL} ${CSV}
#Send the .csv file in the target folder
cd /home/csv
ftp -n $HOST <<EOF
quote USER $USER
quote PASS $PASSWD
lcd $TARGET
put $CSV $TARGET
quit
EOF
exit 0
Does the symbol TARGET refer to a dir on the remote host?
The ftp command lcd changes directory on the local (client) side, while cd changes directory on the remote (server) side. Also, for the remote side there's usually a designated ftp root dir; adjust any path in relation to that starting point. To confirm directory contents, you might add the ftp commands ls and !ls on separate lines right after the PASS line.
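Assuming TEST is a directory on the remote host, the fix is to cd into it remotely and give put only the file. A minimal sketch (the CSV filename is an example stand-in for the date-built one in the question; the command list is built in a function so it can be checked before piping it to ftp):

```shell
#!/bin/sh
HOST='ipadress'                        # placeholder host from the question
CSV=/home/csv/test_202401010000.csv    # example name; the real script builds it from date
TARGET='TEST'

# cd (not lcd) moves into the remote TEST dir, and put is given the
# local path plus the basename, so the remote file keeps just its name.
build_ftp_script() {
    echo "cd $TARGET"
    echo "put $CSV $(basename "$CSV")"
    echo "quit"
}

build_ftp_script
# To run it for real (needs the USER/PASS quote lines and network access):
# ftp -n "$HOST" <<EOF
# quote USER user
# quote PASS pass
# $(build_ftp_script)
# EOF
```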
I'm writing a shell script to automate the task of extracting data from a remote Oracle database and temporarily storing it on the local machine in a folder, say Csvfolder. I used the expect command to copy all files in this Csvfolder to the FTP server.
These csv files in the Csvfolder have specific names (in the example below, the timestamp is two system timestamps joined):
For example, data extracted from Oracle Table1 will have the name ABCD.(timestamp1).csv; after the script runs again 15 minutes later, the Csvfolder might have another file extracted from Table1 named ABCD.(timestamp2).csv.
Data extracted from Oracle Table2 will have a name like XYZ.(timestamp10).csv; after the script runs again 15 minutes later, the Csvfolder might have another file extracted from Table2 named XYZ.(timestamp11).csv.
Right now my script just uploads all the *.csv files in Csvfolder to the FTP server using expect.
But I have a mapping of csv file names to some specific directories on the FTP server that the csv file should go to-
From the above example, I want all ABCD.(timestampxx).csv files to be copied from my local machine to the FTP folder HOME_FOLDER/FOLDER_NAME1 (unique folder name).
I want all XYZ.(timestamp).csv files to be copied from my local machine to the FTP folder HOME_FOLDER/FOLDER_NAME2 (unique folder name). In this example I will have the mapping:
ABCD files -> should go to HOME_FOLDER/FOLDER_NAME1 of FTP server
XYZ files -> should go to HOME_FOLDER/FOLDER_NAME2 of FTP server
I'm using expect with mput in my shell script right now:
expect -c '
set username harry
set ip 100.132.123.44
set password sally
set directory /home/harry
set csvfilesFolder /Csvfolder/
spawn sftp $username@$ip
#Password
expect "*password:"
send "$password\n"
expect "sftp>"
#Enter the directory you want to transfer the files to
send "cd $directory\n"
expect "sftp>"
send -r "mput $csvfilesFolder*\n"
expect "sftp>"
send "exit\n"
interact'
Any suggestions on how to go about copying those csv files from the local machine to specific directories on the FTP server using the mapping?
Instead of:
send -r "mput $csvfilesFolder*\n"
You want
# handle ABCD files
send "cd $directory/FOLDER_NAME1\r"
expect "sftp>"
send "mput $csvfilesFolder/ABCD.*.csv\r"
expect "sftp>"
# handle XYZ files
send "cd $directory/FOLDER_NAME2\r"
expect "sftp>"
send "mput $csvfilesFolder/XYZ.*.csv\r"
expect "sftp>"
Normally, one uses \r to simulate hitting Enter, but \n appears to work too.
Use expect eof instead of interact to end the expect script.
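If the prefix-to-folder mapping grows beyond two entries, it can be factored into a small shell function and used to build the sftp command list before spawning expect. A sketch; FOLDER_NAME1/FOLDER_NAME2 are the hypothetical folder names from the question:

```shell
#!/bin/sh
# Map a csv filename's prefix to its target folder on the server.
target_folder() {
    case "$(basename "$1")" in
        ABCD.*.csv) echo "FOLDER_NAME1" ;;
        XYZ.*.csv)  echo "FOLDER_NAME2" ;;
        *)          echo "UNMAPPED" ;;   # fall-through for unexpected names
    esac
}

target_folder /Csvfolder/ABCD.20240101.csv   # FOLDER_NAME1
target_folder /Csvfolder/XYZ.20240101.csv    # FOLDER_NAME2
```

Each file can then be sent with a `cd $directory/$(target_folder "$f")` followed by a `put`, instead of hard-coding one mput per prefix.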
Files are being transferred to a directory on my machine by the FTP protocol. I need to SCP these files to another machine and delete them on completion.
How can I detect that a file transfer by FTP has finished and the file is safe to SCP?
There's no reliable way to detect completion of the transfer. Some clients send the ALLO command and pass the size of the file before actually uploading it, but this is not a definite rule, so you can't rely on it. All in all, it's possible that the client streams the data and there's no definite "end" of file on its side.
If the client is under your control, you can make it upload files with extension A and rename them to extension B after the upload completes. Then you transfer only files with extension B.
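A minimal sketch of that pattern on the receiving side, using a hypothetical .done extension for completed uploads. The transfer command is passed as a parameter so the function can be exercised locally with cp; in production it would be scp:

```shell
#!/bin/sh
# Forward every *.done file in $src to $dest with the given command,
# deleting the local copy only if the transfer succeeded.
forward_done_files() {
    src=$1; dest=$2; shift 2
    for f in "$src"/*.done; do
        [ -e "$f" ] || continue        # no matches: the glob stays literal
        if "$@" "$f" "$dest"; then
            rm -- "$f"
        fi
    done
}

# Local demonstration with cp standing in for scp:
d=$(mktemp -d); mkdir "$d/in" "$d/out"
touch "$d/in/a.done" "$d/in/b.part"    # b.part is still uploading; it is skipped
forward_done_files "$d/in" "$d/out" cp
ls "$d/out"   # a.done
ls "$d/in"    # b.part
rm -r "$d"
# Real usage: forward_done_files /ftp/incoming user@host:/dest scp
```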
You can do a script like this:
#!/bin/bash
EXPECTED_ARGS=1
E_BADARGS=65
#Argument check
if [ $# -lt $EXPECTED_ARGS ]
then
echo "Usage: `basename $0` <folder_update_1> <folder_update_2> <folder_update_3> ..."
exit $E_BADARGS
fi
folders=( "$@" )
for folder in "${folders[@]}"
do
#Send folder or file to the new machine
time rsync --update -avrt -e ssh "/local/path/of/$folder/" "user@192.168.0.10:/remote/path/of/$folder/" &&
#Delete the local file or folder only if the transfer succeeded
rm -r "/local/path/of/$folder/"
done
It is configured to send folders. If you want files, you need to make small changes to the script:
time rsync --update -avrt -e ssh "/local/path/of/$file" "user@192.168.0.10:/remote/path/of/$file"
rm "/local/path/of/$file"
rsync is similar to scp. I prefer to use rsync, but you can change it.
Sorry if it's too simple a question, but I am a Java developer with no idea of shell scripting.
I googled, but couldn't find exactly what I am looking for.
My requirement:
Connect to the remote server using sftp (authentication based on pub/priv keys), with a variable to point to the private key file.
Transfer files with a specific extension (.log) to a local server folder, with variables to set the remote server path and the local folder.
Rename the transferred files on the remote server.
Log all the transferred files in a .txt file.
Can anyone give me a shell script for this?
This is what I have framed so far from the suggestions.
Still some questions left on my side ;)
export PRIVKEY=${private_key_path}
export RMTHOST=user@remotehost
export RMTDIR='/logs/*.log'
export LOCDIR=/downloaded/logs/
export LOG=success.txt
scp -i "$PRIVKEY" "$RMTHOST:$RMTDIR" "$LOCDIR"
for i in "$LOCDIR"/*.log
do
echo "$i" >> "$LOG"
done
ssh -i "$PRIVKEY" "$RMTHOST" "for i in $RMTDIR; do mv \"\$i\" \"\$i.transferred\"; done"
What about this approach?
Connect to remote server using Sftp [authentication based on pub/pri keys]. A variable to point to private key file
Transfer files with specific extension [.log] to local server folder. Variable to set remote server path and local folder
scp your_user#server:/dir/of/file/*.log /your/local/dir
Log all the transferred files in a .txt file
for file in /your/local/dir/*.log
do
echo "$file" >> "$your_txt"
done
Rename the transferred files on the remote server (the loop is single-quoted so that $file expands on the remote side, and the new name is derived from the old one):
ssh your_user@server 'for file in /dir/of/file/*.log; do mv "$file" "$file.transferred"; done'
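The rename loop can be exercised locally before wrapping it in ssh. A sketch, assuming a hypothetical .transferred suffix marks files as already downloaded:

```shell
#!/bin/sh
# Rename every *.log in the given dir by appending .transferred,
# marking it as already downloaded.
mark_transferred() {
    for f in "$1"/*.log; do
        [ -e "$f" ] || continue   # no matches: skip the literal glob
        mv -- "$f" "$f.transferred"
    done
}

d=$(mktemp -d)
touch "$d/a.log" "$d/b.log"
mark_transferred "$d"
ls "$d"    # a.log.transferred  b.log.transferred
rm -r "$d"
# Remotely: ssh user@host 'for f in /logs/*.log; do mv "$f" "$f.transferred"; done'
```

Because the suffix changes the extension, already-renamed files no longer match *.log, so running the loop twice is harmless.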
Use the scp (secure copy) command to transfer the file. You may also want to add the -C switch, which compresses the file; that can speed things up a bit. For example, to copy file1 from server1 to server2, on server1:
#!/bin/sh
scp -C /home/user/file1 root@server2.com:/home/user
Edit: with a public/private key file:
#!/bin/sh
scp -i {path/to/pub/pri/key/file} /home/user/file1 root@server2.com:/home/user