How to use lftp to transfer segmented files? - linux

I want to transfer a file from my server to another. The network between these servers isn't very good, so I want to use lftp to speed things up. My script looks like this:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget=5 -i data.tar.gz -r -R /data/ /tmp; quit" sftp://**.**.**.**:22
I found that data.tar.gz wasn't being segmented, but when I use the same approach to download a file, segmenting works.
What should I do?

Segmented uploads are not implemented in lftp. If you have ssh access to the destination server, log in there and use lftp to download the file instead. If there were many files, you could also upload different files in parallel using mirror's -P option.
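For example, you could log in to the destination and pull the file with pget (segmented downloads do work), or push many files at once with mirror -R -P. A rough, untested sketch; source.host is a placeholder for the source machine's address:
# on the destination server: pull the file with 5 segments instead of pushing it
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; pget -n 5 /data/data.tar.gz -o /tmp/data.tar.gz; quit" sftp://source.host:22
# or, if there are many files, push them in parallel (whole files, not segments)
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror -R -P 5 /data/ /tmp; quit" sftp://**.**.**.**:22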

Related

Script to transfer a folder through SSH properly

I want to transfer a folder through SSH from a client to a server and from a server to a client as well. The name of the folder is always the same. Before the transfer I need to create a backup of the folder that I am about to overwrite.
So I've created two scripts, one for the download (server to client) and one for the upload (client to server).
down_src.sh
#!/bin/bash
rm -rf ~/Projects/GeoPump/project_bk
mv ~/Projects/GeoPump/project ~/Projects/GeoPump/project_bk
rsync --delete -azvv --rsync-path="mkdir -p ~/Projects/GeoPump/ && rsync" -e ssh pi@25.30.116.202:~/Projects/GeoPump/project ~/Projects/GeoPump/
up_src.sh
#!/bin/bash
ssh pi@25.30.116.202 'rm -rf ~/Projects/GeoPump/project_bk'
ssh pi@25.30.116.202 'mv ~/Projects/GeoPump/project ~/Projects/GeoPump/project_bk'
rsync --delete -azvv --rsync-path="mkdir -p ~/Progetti/GeoPump/ && rsync" -e ssh ~/Projects/GeoPump/project pi@25.30.116.202:~/Projects/GeoPump/
When I run up_src.sh, for example, I need to enter the server password three times:
andrea@andrea-GL552VW:~/Projects/GeoPump$ sudo ./up_src.sh
pi@25.30.116.202's password:
pi@25.30.116.202's password:
opening connection using: ssh -l pi 25.30.116.202 "mkdir -p ~/Projects/GeoPump/ && rsync" --server -vvlogDtprze.iLsfx --delete . "~/Projects/GeoPump/" (10 args)
pi@25.30.116.202's password:
sending incremental file list
Now my questions are:
Is this the correct way to do this kind of transfer?
Can anyone suggest the proper way to write these scripts without having to enter the password multiple times?
For the up_src.sh script to work, you will have to ensure that the user on 25.30.116.202 has the required permissions without resorting to ... && sudo rsync
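As for the repeated prompts, the usual fix is key-based authentication instead of passwords. A minimal sketch on the client (note that if the scripts keep running under sudo, the key must belong to root, or drop the sudo):
# generate a key pair if you don't already have one (accept the defaults)
ssh-keygen -t rsa
# install the public key on the server; you will be asked for the password one last time
ssh-copy-id pi@25.30.116.202
# after that, the ssh and rsync calls in both scripts run without prompting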

How can I copy a file from the local server to a remote one via SSH, creating directories that are absent?

I can copy file via SSH by using SCP like this:
cd /root/dir1/dir2/
scp filename root@192.168.0.19:$PWD/
But if some directories are absent on the remote server, for example if the remote server has only /root/ and doesn't have dir1 and dir2, then I can't do it and I get an error.
How can I copy the file while creating the absent directories via SSH, and what is the easiest way to do it?
By "the easiest way" I mean that the current path should come only from $PWD, i.e. the script must be easily movable without any changes.
This command will do it:
rsync -ahHv --rsync-path="mkdir -p $PWD && rsync" filename -e "ssh -v" root@192.168.0.19:"$PWD/"
Alternatively, I can create the same directories on the remote server and copy the file to them via SSH using SCP like this:
cd /root/dir1/dir2/
ssh -n root@192.168.0.19 "mkdir -p '$PWD'"
scp -p filename root@192.168.0.19:$PWD/

mput Not Transferring All Files During FTP Transfer

I'm having issues with my Unix FTP script...
It's only transferring the first three files in the directory that I'm local cd'ing into during the FTP session.
Here's the bash script that I'm using:
#!/bin/sh
YMD=$(date +%Y%m%d)
HOST='***'
USER='***'
PASSWD=***
FILE=*.png
RUNHR=19
ftp -inv ${HOST} <<EOF
quote USER ${USER}
quote PASS ${PASSWD}
cd /models/rtma/t2m/${YMD}/${RUNHR}/
mkdir /models/rtma/t2m/${YMD}/
mkdir /models/rtma/t2m/${YMD}/${RUNHR}/
lcd /home/aaron/grads/syndicated/rtma/t2m/${YMD}/${RUNHR}Z/
binary
prompt
mput ${FILE}
quit
EOF
exit 0
Any ideas?
I had encountered the same issue: I had to transfer 400K files, but mput * or mput *.pdf would not move all the files in one go.
I tried increasing the timeout: failed.
I tried -r (recursive): failed.
I tried increasing the data/control timeout in IIS: failed.
I tried -i and prompt in the script: failed.
Finally I used portable FileZilla, connected to the source, and transferred all the files that way.
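If installing another tool is an option, a further workaround (a sketch only, reusing the HOST, USER, PASSWD, YMD and RUNHR variables from the script above) is to let lftp mirror the whole directory instead of relying on mput:
# mirror the local directory to the FTP server in one command
lftp -u "${USER},${PASSWD}" "${HOST}" <<EOF
mkdir -p /models/rtma/t2m/${YMD}/${RUNHR}
mirror -R /home/aaron/grads/syndicated/rtma/t2m/${YMD}/${RUNHR}Z/ /models/rtma/t2m/${YMD}/${RUNHR}/
quit
EOF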

Rsync to Amazon Ec2 Instance

I have an EC2 instance running and I am able to SSH into it.
However, when I try to rsync, it gives me the error Permission denied (publickey).
The command I'm using is:
rsync -avL --progress -e ssh -i ~/mykeypair.pem ~/Sites/my_site/* root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
I also tried
rsync -avz ~/Sites/mysite/* -e "ssh -i ~/.ssh/id_rsa.pub" root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
Thanks,
I just received that same error. I had been consistently able to ssh with:
ssh -i ~/path/mykeypair.pem \
ubuntu@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
But the longer rsync construction seemed to cause errors. I ended up wrapping the ssh statement in quotes and using the full path to the key. In your example:
rsync -avL --progress -e "ssh -i /path/to/mykeypair.pem" \
~/Sites/my_site/* \
root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
That seemed to do the trick.
Below is what I used and it worked. Source was ec2 and target was home machine.
sudo rsync -azvv -e "ssh -i /home/ubuntu/key-to-ec2.pem" ec2-user@xx.xxx.xxx.xx:/home/ec2-user/source/ /home/ubuntu/target/
Use rsync to copy files between servers.
Copy a file from the local machine to the server:
rsync -avz -e "ssh -i /path/to/key.pem" /path/to/file.txt <username>@<ip/domain>:/path/to/directory/
Copy a file from the server to the local machine:
rsync -avz -e "ssh -i /path/to/key.pem" <username>@<ip/domain>:/path/to/directory/file.txt /path/to/directory/
Note: run the command with sudo if you are not a root user.
After struggling a little, I believe this will help.
I am using the command below and it has worked without problems:
rsync -av --progress -e ssh /folder1/folder2/* root@xxx.xxx.xxx.xxx:/folder1/folder2
First consideration:
Use the --rsync-path option if needed.
I prefer to wrap the call in a shell script:
#!/bin/bash
RSYNC=/usr/bin/rsync
$RSYNC [options] [source] [destination]
Second consideration:
Create a public key with the command below for communication between the servers in question. It will not be the same as the one provided by Amazon.
ssh-keygen -t rsa
Do not forget to enable key-based logins on the target server in /etc/ssh/sshd_config (Ubuntu and CentOS).
Sync files from one EC2 instance to another
http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html
Use the -v option for verbose output to better identify errors.
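A sketch of that key setup, assuming the Amazon-provided key still gets you into the destination instance (the ec2_sync name and destination-host are placeholders):
# on the source instance: create a dedicated key pair for the sync job
ssh-keygen -t rsa -f ~/.ssh/ec2_sync -N ""
# append its public key to the destination's authorized_keys, using the Amazon key you already have
cat ~/.ssh/ec2_sync.pub | ssh -i /path/to/amazon-key.pem ubuntu@destination-host 'cat >> ~/.ssh/authorized_keys'
# then sync with the new key
rsync -avz -e "ssh -i ~/.ssh/ec2_sync" /source/dir/ ubuntu@destination-host:/target/dir/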
Third consideration:
If both servers are on EC2, restrict access with a security group.
In the destination server's security group, add an inbound rule:
TCP port 22, with the source set to the IP (or security group name) of the source server.
This worked for me:
nohup rsync -zravu --partial --progress -e "ssh -i xxxx.pem" ubuntu@xx.xx.xx.xx:/mnt/data /mnt2/ &
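One more convenience, offered as a sketch: an entry in ~/.ssh/config lets ssh and rsync find the key automatically, so the -e "ssh -i ..." part is no longer needed (myec2 is just an example alias):
# ~/.ssh/config on the local machine
Host myec2
    HostName ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/path/mykeypair.pem
# then simply:
rsync -avL --progress ~/Sites/my_site/ myec2:/var/www/html/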

Remote Linux server to remote linux server dir copy. How? [closed]

What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using an SSH client (like PuTTY). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
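If in doubt, rsync's -n (--dry-run) flag previews where files would land without transferring anything:
rsync -aHvzn /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder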
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>@<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
Use rsync so that you can continue if the connection gets broken. And if something changes you can copy them much faster too!
Rsync works with SSH so your copy operation is secure.
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
I used rdiff-backup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
from the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus compared to the scp -p proposals, as the -p option does not preserve everything (e.g. permissions on directories are set badly)
install on ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, the quick answer would be to take a look at the 'scp' manpage, or perhaps rsync, depending on exactly what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - sourcedir | ssh server tar xf -
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work, you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
In this case it would work if you have already set up passwordless SSH login from host1 to host2.
scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root SSH logins disabled, copying files for multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that still preserves ownership.
tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar"
For slow connections you can add compression: z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
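The same pipe also works in the other direction, pulling the contents of a remote directory down to the local box; a sketch using the same bracketed placeholders:
ssh [username]@[hostname] tar czf - -C [directory on remote box] . | tar xzf - -C [local destination]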
