SFTP and append data of a file to an already existing file - Linux

I am new to Unix. My client has a requirement to SFTP some data files to a legacy server. On the legacy server, files with the same names already exist. What we want is that when we SFTP a file to the server, its data gets appended to the existing file.
Let's say the legacy server has a file A.dat with 10 rows, and I SFTP a file A.dat with 5 rows. After the transfer, A.dat on the legacy server should have 15 rows. Also, if the file doesn't exist on the legacy system, the script should simply place the file there.
Any quick response is highly appreciated. My current SFTP script looks like below.
#!/usr/bin/expect -d
set timeout -1
spawn sftp user@server
expect "sftp>"
send "cd /destinationpath\n"
expect "sftp>"
send "lcd /sourcepath\n"
expect "sftp>"
send "put A.dat\n"
expect "sftp>"
send "exit\n"
interact

Try adding -a to put, or use reput; see the manual for your version of sftp, as the option might not be supported or might behave differently.
If it's just one file, I would use ssh directly and append to the file:
ssh user@server "cat >> /destinationpath/A.dat" < /sourcepath/A.dat
Make sure to use >>, as > would overwrite your file.
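Wrapped in a script, that one-liner covers both of your cases, since >> appends when A.dat already exists and creates it when it does not; a minimal sketch, assuming key-based ssh access to the legacy server:
#!/bin/sh
# Append the local file to the remote one over ssh; ">>" creates the
# remote file if it is missing, so no existence check is needed.
SRC=/sourcepath/A.dat
DEST=/destinationpath/A.dat
ssh user@server "cat >> $DEST" < "$SRC"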

Related

Shell script to move files on remote SFTP Server

I am trying to move files from one directory to another on a remote SFTP server through a shell script, after downloading the files locally. I understand that there is no wildcard move function, so it seems my only option is to rename files individually.
Could someone help me improve the code below if there is a better way of writing it?
All I am trying to do is move files to an archive directory on the SFTP server once they have been downloaded to my local directory.
I know there are other ways to do this with Python/Perl scripts, but I am restricted to shell scripting on a Linux box.
#!/usr/bin/ksh
#LOGGING
LOGFILE="/tmp/test.log"
#SFTP INFO
FTP_SERVER="test.rebex.net"
FTP_USER="demo"
FTP_PWD="password"
FTP_PORT=22
FTP_PICKUP_DIR="/"
LOCAL_DIR="/"
#-------DOWNLOAD FILES
expect <<END #> $LOGFILE
    send_user "$(date)\n"
    spawn sftp $FTP_USER@$FTP_SERVER
    expect "*password: "
    send "$FTP_PWD\r"
    expect "sftp> "
    send "mget *.ext\r"
    expect "sftp> "
    send "exit\r"
    expect eof
END
#--------- MOVE FILES TO ARCHIVE ON SERVER
cd /home/ravi/Files
for fl in *.ext
do
    expect <<END #> $LOGFILE
        send_user "$(date)\n"
        spawn sftp $FTP_USER@$FTP_SERVER
        expect "*password: "
        send "$FTP_PWD\r"
        expect "sftp> "
        send "rename $fl /ARCHIVE/$fl\r"
        expect "sftp> "
        send "exit\r"
        expect eof
END
done
#For Loop End
#For Loop End
You can use lftp with its mmv command:
mmv [-O directory] file(s) directory
Move the specified files to a target directory. The target directory can be specified after the -O option or as the last argument.
-O <dir> specifies the target directory where files should be placed
(See the lftp manual for the full reference.)
Example usage:
lftp -u $FTP_USER,$FTP_PWD sftp://$FTP_SERVER:22 <<EOF
mmv dir/to/path /dir/to/renamed/path
EOF
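For this question specifically, a single lftp session could both download the files and archive them on the server, replacing the two expect blocks entirely; a minimal sketch, assuming a reasonably recent lftp (mmv needs it) and reusing the variables from the question:
#!/usr/bin/ksh
# One session: fetch the .ext files locally, then move the remote
# copies into the /ARCHIVE directory on the server.
lftp -u "$FTP_USER,$FTP_PWD" -p "$FTP_PORT" "sftp://$FTP_SERVER" <<EOF
lcd $LOCAL_DIR
mget *.ext
mmv *.ext /ARCHIVE
bye
EOF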

shell ftp expect to intelligently copy files from local to ftp server

I'm writing a shell script to automate the task of extracting data from a remote Oracle database and temporarily storing it on the local machine in a folder, say Csvfolder. I used the 'expect' command to copy all files in this Csvfolder to the FTP server.
The CSV files in Csvfolder have specific names (in the example below, the timestamp part is two system timestamps joined):
For example, data extracted from Oracle Table1 will have the name ABCD.(timestamp1).csv; after the script runs again 15 minutes later, Csvfolder might have another file extracted from Table1 named ABCD.(timestamp2).csv.
Data extracted from Oracle Table2 will have a name like XYZ.(timestamp10).csv; after the script runs again 15 minutes later, Csvfolder might have another file extracted from Table2 named XYZ.(timestamp11).csv.
Right now my script just uploads all the *.csv files in Csvfolder to the FTP server using expect.
But I have a mapping of CSV file names to the specific directories on the FTP server that each file should go to:
From the above example, I want all ABCD.(timestampxx).csv files to be copied from my local machine to the FTP folder HOME_FOLDER/FOLDER_NAME1 (a unique folder name), and all XYZ.(timestampxx).csv files to be copied to the FTP folder HOME_FOLDER/FOLDER_NAME2 (a unique folder name). In this example I will have the mapping:
ABCD files -> should go to HOME_FOLDER/FOLDER_NAME1 of the FTP server
XYZ files -> should go to HOME_FOLDER/FOLDER_NAME2 of the FTP server
I'm using expect with mput in my shell script right now:
expect -c '
set username harry
set ip 100.132.123.44
set password sally
set directory /home/harry
set csvfilesFolder /Csvfolder/
spawn sftp $username@$ip
#Password
expect "*password:"
send "$password\n"
expect "sftp>"
#Enter the directory you want to transfer the files to
send "cd $directory\n"
expect "sftp>"
send -r "mput $csvfilesFolder*\n"
expect "sftp>"
send "exit\n"
interact'
Any suggestions on how to go about copying those CSV files from the local machine to the specific directories on the FTP server using the mapping?
Instead of:
send -r "mput $csvfilesFolder*\n"
You want
# handle ABCD files
send "cd $directory/FOLDER_NAME1\r"
expect "sftp>"
send "mput $csvfilesFolder/ABCD.*.csv\r"
expect "sftp>"
# handle XYZ files
send "cd $directory/FOLDER_NAME2\r"
expect "sftp>"
send "mput $csvfilesFolder/XYZ.*.csv\r"
expect "sftp>"
Normally, one uses \r to simulate "hitting enter", but I guess \n works.
Use expect eof instead of interact to end the expect script.
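Putting those pieces together, a minimal sketch of the full corrected session might look like this (FOLDER_NAME1 and FOLDER_NAME2 stand in for the real target directories):
expect -c '
    set username harry
    set ip 100.132.123.44
    set password sally
    set directory /home/harry
    set csvfilesFolder /Csvfolder
    spawn sftp $username@$ip
    expect "*password:"
    send "$password\r"
    expect "sftp>"
    # ABCD files -> FOLDER_NAME1
    send "cd $directory/FOLDER_NAME1\r"
    expect "sftp>"
    send "mput $csvfilesFolder/ABCD.*.csv\r"
    expect "sftp>"
    # XYZ files -> FOLDER_NAME2 (absolute path, so the earlier cd does not matter)
    send "cd $directory/FOLDER_NAME2\r"
    expect "sftp>"
    send "mput $csvfilesFolder/XYZ.*.csv\r"
    expect "sftp>"
    send "exit\r"
    expect eof'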

Sending file by reference to a remote server?

Trying to either send a file by reference or somehow append to a file on a remote server.
I'm using scp, which just sends a copy of the file. My code doesn't make sense as written, since I'm sending a copy of the file but can't do anything with it, because changing the copy doesn't change the original file.
while read p; do
    scp sample_file.txt "$p:/home/user"
    # Log onto remote server
    # Get last entry of directory and put it in sample_file.txt
done < usernames_list.txt
Basically I want list_of_stuff.txt to look like
entry1
entry2
entry3
entry4
...etc
Does anyone know how to send the actual file (instead of scp, which just sends a secure copy) to a remote server in Unix? Or does anyone know how to append to a file on a remote server?
From the sound of things, you don't need to copy the file to the remote server at all! You only need the output of running some command on that server, and you can append that to the local file by piping the output from ssh:
while read p; do
    ssh "$p" "get_last_entry_of_directory" >> sample_file.txt
done < usernames_list.txt
I've dummied out the command get_last_entry_of_directory because I'm not sure what's supposed to be involved there.
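For illustration only, if "last entry of the directory" means the last name in a sorted listing of the remote home directory, the loop could look like the following; the ls | tail pipeline is purely an assumption, so replace it with whatever command actually produces the entry:
while read p; do
    # "ls -1 | tail -n 1" is a guess at what "last entry" means
    ssh "$p" 'ls -1 /home/user | tail -n 1' >> sample_file.txt
done < usernames_list.txt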

Sftp files from Remote server to local server

Sorry if it's too simple a question, but I am a Java developer with no idea of shell scripting.
I googled, but couldn't find exactly what I am looking for.
My requirements:
1. Connect to the remote server using SFTP (authentication based on public/private keys), with a variable to point to the private key file.
2. Transfer files with a specific extension (.log) to a local server folder, with variables to set the remote server path and the local folder.
3. Rename the transferred files on the remote server.
4. Log all the transferred files in a .txt file.
Can anyone give me a shell script for this?
This is what I have framed so far from the suggestions.
Still some questions left on my side ;)
export PRIVKEY=${private_key_path}
export RMTHOST=user@remotehost
export RMTDIR=/logs
export LOCDIR=/downloaded/logs
export LOG=success.txt
scp -i "$PRIVKEY" "$RMTHOST:$RMTDIR/*.log" "$LOCDIR"
for i in "$LOCDIR"/*.log
do
    echo "$i" >> "$LOG"
done
ssh -i "$PRIVKEY" "$RMTHOST" "for i in $RMTDIR/*.log; do mv \"\$i\" \"\$i.transferred\"; done"
What about this approach?
Connect to the remote server using SFTP (authentication based on public/private keys) and transfer the files with the .log extension to a local folder, with variables for the remote server path and the local folder:
scp your_user@server:/dir/of/file/*.log /your/local/dir
Log all the transferred files in a .txt file:
for file in /your/local/dir/*.log
do
    echo "$file" >> "$your_txt"
done
Rename the transferred files on the remote server:
ssh your_user@server 'cd /dir/of/file && for file in *.log; do mv "$file" "new_name_based_on_$file"; done'
Use the scp (secure copy) command to transfer files. You may also want to add the -C switch, which compresses the file; that can speed things up a bit. For example, to copy file1 from server1 to server2, run this on server1:
#!/bin/sh
scp -C /home/user/file1 root@server2.com:/home/user
Edit: to authenticate with a public/private key pair, point scp at the key file with -i:
#!/bin/sh
scp -i {path/to/pub/pri/key/file} /home/user/file1 root@server2.com:/home/user
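Putting the pieces together, a minimal end-to-end sketch covering all four requirements might look like this; the concrete paths and the .transferred suffix are assumptions taken from the question's draft, so adjust them to your environment:
#!/bin/sh
PRIVKEY=/home/user/.ssh/id_rsa      # variable pointing to the private key
RMTHOST=your_user@server
RMTDIR=/logs                        # remote path holding the .log files
LOCDIR=/downloaded/logs             # local target folder
LOG=transferred_files.txt

# Requirements 1 and 2: key-based copy of every remote .log file
scp -i "$PRIVKEY" "$RMTHOST:$RMTDIR/*.log" "$LOCDIR"

# Requirement 4: log every file that arrived locally
for f in "$LOCDIR"/*.log; do
    echo "$f" >> "$LOG"
done

# Requirement 3: mark the transferred files on the remote server
ssh -i "$PRIVKEY" "$RMTHOST" "for f in $RMTDIR/*.log; do mv \"\$f\" \"\$f.transferred\"; done"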

FTP specific files

Can we FTP specific files from a directory, where the specific files that need to be transferred are listed in a config file?
Can we use a for loop once logged into FTP (in a script) for this purpose?
Will normal FTP work when transferring files from Unix to a Windows FTP server?
Thanks,
Ravi
You can use straight shell. This assumes your login directory is /home/ravi.
Try this one time only:
echo "machine serverB user ravi password ravipasswd" > /home/ravi/.netrc
chmod 600 /home/ravi/.netrc
Test that .netrc works: ftp serverB should log you straight in.
Shell script that reads config.file, which is just a list of files to send (note: the EOF terminator must start in column 1 of the script file):
while read fname
do
    ftp serverB <<EOF
get $fname
bye
EOF
done < config.file
This gets files from serverB. Change get $fname to put $fname to send files from serverA to serverB.
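If the list is long, a variant that generates all the commands and pipes them into a single ftp session avoids reconnecting for every file; a sketch, assuming the same .netrc auto-login and one file name per line in config.file:
{
    while read fname; do
        echo "put $fname"
    done < config.file
    echo "bye"
} | ftp serverB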
That certainly is possible. You can transfer files listed in a file by implementing a script around an FTP client (built-in, or by calling a CLI client). The protocol is system-independent, so it is possible to transfer files between systems running different operating systems. There is only one catch: remember that MS Windows uses a case-insensitive file system, while other systems differ in that.
