Can we ftp specific files from a directory? The specific files that need to be transferred will be listed in a config file.
Can we use a for loop once logged into ftp (in a script) for this purpose?
Will a normal ftp work when transferring files from Unix to a Windows ftp server?
Thanks,
Ravi
You can use straight shell. This assumes your login directory is /home/ravi
Try this one time only:
echo "machine serverB user ravi password ravipasswd" > /home/ravi/.netrc
chmod 600 /home/ravi/.netrc
Test that .netrc works: ftp serverB should log you straight in.
Shell script that reads config.file, which is just a list of files to send, one per line:
while read fname
do
ftp serverB <<EOF
get $fname
bye
EOF
done < config.file
Note: leave the closing EOF alone in column #1 of the script file; a trailing comment on that line would stop the shell from recognizing the heredoc terminator.
This gets files from serverB. Change get $fname to put $fname to send files from serverA to serverB.
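Opening a new ftp session for every file works but is slow for long lists. A variant (just a sketch, assuming the same .netrc setup and file names without spaces) generates the whole command list first and runs a single session:
# Generate one transfer command per line of config.file,
# then feed them all to a single ftp session.
{
  while read -r fname
  do
    echo "put $fname"   # use "get $fname" to pull instead
  done < config.file
  echo "bye"
} | ftp serverB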
That is certainly possible. You can transfer files listed in some file by implementing a script around an ftp client (built-in or by calling a CLI client). The protocol is system-independent, so it is possible to transfer files between systems running different operating systems. There is only one catch: remember that MS-Windows uses a case-insensitive file system; other systems differ in that respect.
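To make the catch concrete (a hypothetical illustration, not from the original posts): these are two distinct files on Unix, but they map to the same name on a case-insensitive Windows file system, so the second one transferred would overwrite the first.
touch readme.txt README.TXT   # two files on Unix, one file on Windows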
Related
I need to upload a file from a source Unix server to a destination Unix server (which supports sftp). I'm using the simple script below:
cd /usr/bin
sftp userid@destination_server <<EOF
put myfile /
EOF
I get "Host key verification failed. Couldn't read packet: Connection reset by peer".
I know this must have something to do with the public ssh key of my source not being set up correctly on the destination server. But otherwise, is my script correct? Or do you suggest another script for the simple requirement stated above? Please note this doesn't need any password, just a user name, and the remote directory is just the root directory, hence the /.
Simply use an SFTP batch file:
sftp -b batchfile.sftp userid@destination_server
with batchfile.sftp containing exactly this one line (plus whatever further commands you need):
put myfile /
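As for the "Host key verification failed" error from the question: sftp will not connect non-interactively until the remote host's key is in your known_hosts file. One way to pre-seed it (a sketch; run it once, on a network you trust, since it accepts whatever key the server presents):
ssh-keyscan destination_server >> ~/.ssh/known_hosts
sftp -b batchfile.sftp userid@destination_server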
In my work, I use 2 Linux servers.
The first one is used for web crawling and writes its results to a text file.
The other one is used for analyzing the text file from the web crawler.
So the issue is that when a text file is created on the web-crawling server,
it needs to be transferred automatically to the analysis server.
I've followed some tips from shell programming guides
and set up the crawling server so it can execute the scp command without requiring a password (by using the ssh-keygen command and adding the ssh key to the authorized_keys file located in the /root/.ssh directory).
But I cannot figure out how to programmatically transfer the file when it is created.
My job is data analysis (not programming),
so my lack of programming background is my big concern.
If there is a way to trigger the scp to copy the file when it is created, please let me know.
You could use inotifywait to monitor the directory and run a command every time a file is created in it. In this case, you would fire off the scp command. If you have it set up to not prompt for the password, you should be all set.
inotifywait -mrq -e create --format '%w%f' /path/to/dir | while read -r FILE; do scp "$FILE" analysis_server:/path/on/analysis/server/; done
You can find out more about inotifywait at http://techarena51.com/index.php/inotify-tools-example/
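One caveat with the create event: the scp can fire while the crawler is still writing, so you may copy a half-written file. A safer variant (a sketch using the same hypothetical paths) waits for the writer to close the file instead:
# close_write fires only after the writing process has closed the file,
# so the copy never races the crawler.
inotifywait -mrq -e close_write --format '%w%f' /path/to/dir |
while read -r FILE
do
  scp "$FILE" analysis_server:/path/on/analysis/server/
done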
I need to upload a whole folder to an SFTP server. I see only one way: via the sftp prompt. So I execute the command
sftp> put /var/sites/c/public_html/wp-content/uploads/* /wp-content/uploads/
but I get
skipping non-regular file /var/sites/c/public_html/wp-content/uploads/2010
and no files are copied. What do I need to do to achieve my goal of uploading the whole folder (subfolders and files) to the SFTP server?
put is used to upload a single file;
to upload multiple files, use mput.
If that doesn't work, try switching to scp instead of sftp.
put supports the -r switch on my machine (I'm using OpenSSH_6.4p1, OpenSSL 1.0.1e 11 Feb 2013). If your sftp doesn't support -r, you can also use scp. This should work, as both sftp and scp use ssh to push files to the remote side, and scp is able to push files recursively on almost every system I've seen so far.
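For reference, the two recursive forms look like this (a sketch using the paths from the question; user@host is a placeholder, and with sftp the remote target directory may need to exist before the recursive put):
# OpenSSH sftp with -r support: recurse into subdirectories
echo 'put -r /var/sites/c/public_html/wp-content/uploads /wp-content/uploads' | sftp user@host
# scp equivalent, which also recurses and works on older systems
scp -r /var/sites/c/public_html/wp-content/uploads user@host:/wp-content/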
Sorry if this is too simple a question, but I am a Java developer with no idea of shell scripting.
I googled, but couldn't find exactly what I am looking for.
My requirements:
1. Connect to the remote server using SFTP (authentication based on public/private keys); a variable points to the private key file.
2. Transfer files with a specific extension (.log) to a local server folder; variables set the remote server path and the local folder.
3. Rename the transferred files on the remote server.
4. Log all the transferred files in a .txt file.
Can anyone give me a shell script for this?
This is what I have framed so far from the suggestions.
Still some questions left on my side ;)
export PRIVKEY=${private_key_path}
export RMTHOST=user@remotehost
export RMTDIR='/logs/*.log'
export LOCDIR=/downloaded/logs/
export LOG=success.txt
scp -i "$PRIVKEY" "$RMTHOST:$RMTDIR" "$LOCDIR"
# Log every file that arrived locally.
for i in "$LOCDIR"*.log
do
echo "$i" >> "$LOG"
done
# Rename the transferred files on the remote host.
ssh "$RMTHOST" "for i in $RMTDIR; do mv \"\$i\" \"\$i.transferred\"; done"
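A note on that last line: ssh runs whatever follows the host name as a remote command (its -c option selects a cipher, not a command), and \$i is escaped so the loop variable expands on the remote host, while $RMTDIR expands locally into the /logs/*.log pattern before the command is sent.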
What about this approach?
Connect to remote server using SFTP (authentication based on pub/pri keys); a variable to point to the private key file.
Transfer files with a specific extension (.log) to a local server folder; variables to set the remote server path and the local folder.
scp your_user#server:/dir/of/file/*.log /your/local/dir
Log all the transferred files in a .txt file
for file in /your/local/dir/*.log
do
echo "$file" >> $your_txt
done
Rename the transferred files on the remote server.
ssh your_user@server 'for file in /dir/of/file/*.log; do mv "$file" "$file.transferred"; done'
Use the scp (secure copy) command to transfer files. You may also want to add the -C switch, which compresses the file; that can speed things up a bit. E.g., to copy file1 from server1 to server2:
On server1:
#!/bin/sh
scp -C /home/user/file1 root@server2.com:/home/user
Edit: to authenticate with a specific key file instead:
#!/bin/sh
scp -i {path/to/pub/pri/key/file} /home/user/file1 root@server2.com:/home/user
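(Note that scp -i expects the private key file; the matching public key is what belongs in the remote user's ~/.ssh/authorized_keys.)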
I have a Linux server that receives data files via sftp. These files contain data that is immediately imported into an application for use. The directory the files are sent to is constantly read by another process looking for new files to process.
The problem I am having is that the files are getting read before they are completely transferred. Is there a way to hide the files until they have finished transferring?
One thought I had is by leveraging the .filepart concept that many sftp clients use to rename files before they are complete. I don't have control of the clients though, so is there a way to do this on the server side?
Or is there another way to do this by permissions or such?
We have solved a similar problem by creating a staging directory on the same file system as the directory the clients read from, and using inotifywait.
You sftp into the staging directory and have inotifywait watch that staging directory.
Once inotifywait sees the close_write event for a received file, you simply mv the file to the directory the client reads from.
#!/bin/bash
# close_write fires once the uploader has finished and closed the file.
inotifywait -m -e close_write --format '%f' /path/to/tmp | while read -r newfile
do
mv /path/to/tmp/"$newfile" ~/real
done
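A follow-up on the "same file system" detail above: mv is only atomic when source and destination live on one file system, because it is then a single rename() call. Across file systems, mv degrades to copy-then-delete, and the reading process could once again pick up a partial file.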