linux tar command for remote machine

How can I create a .tar archive of a file (say /root/bugzilla) on a remote machine and store it on a local machine? SSH keys are set up (via ssh-keygen), so I can bypass password authentication.
I am looking for something along the lines of:
tar -zcvf Localmachine_bugzilla.tar.gz /root/bugzilla

ssh <host> tar -zcvf - /root/bugzilla > bugzilla.tar.gz
This avoids an intermediate copy on the remote machine.
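If you would rather not have absolute paths in the archive (GNU tar strips the leading / and prints a warning), a variant using -C might look like this (a sketch, assuming GNU tar on the remote host):
ssh <host> tar -zcvf - -C /root bugzilla > bugzilla.tar.gz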
See also this post for a couple of variants: Remote Linux server to remote linux server dir copy. How?

Something like:
ssh <host> tar -zcvf bugzilla.tar.gz /root/bugzilla
scp <host>:bugzilla.tar.gz Localmachine_bugzilla.tar.gz
Or, if you are compressing it just for the sake of transfer, scp's compression option can be useful:
scp -r -C <host>:/root/bugzilla .
This is going to copy the whole /root/bugzilla directory using compression on the wire.
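If rsync is available on both ends, a similar transfer with on-the-wire compression could be sketched as follows (rsync uses ssh as its transport for remote paths):
rsync -az <host>:/root/bugzilla .
This places a bugzilla directory in the current local directory.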

Related

How can I copy a file from a local server to a remote one via SSH, creating any directories that are missing?

I can copy a file via SSH using scp like this:
cd /root/dir1/dir2/
scp filename root@192.168.0.19:$PWD/
But if some directories are absent on the remote server, for example it only has /root/ and not dir1 and dir2, then the copy fails with an error.
How can I copy the file via SSH while creating the missing directories, and what is the easiest way to do it?
By easiest I mean that the current path should come only from $PWD, i.e. the script must be easily movable without any changes.
This command will do it:
rsync -ahHv --rsync-path="mkdir -p $PWD && rsync" filename -e "ssh -v" root@192.168.0.19:"$PWD/"
Alternatively, I can create the same directories on the remote server first and then copy the file via SSH using scp like this:
cd /root/dir1/dir2/
ssh -n root@192.168.0.19 "mkdir -p '$PWD'"
scp -p filename root@192.168.0.19:$PWD/
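Wrapped into a small helper, the two steps above might look like this (a sketch; the function name and argument layout are just illustrative):
scp_mkdirs() {
  # usage: scp_mkdirs <user@host> <file> -- run from the directory you want mirrored remotely
  ssh -n "$1" "mkdir -p '$PWD'" && scp -p "$2" "$1:$PWD/"
}
cd /root/dir1/dir2/ && scp_mkdirs root@192.168.0.19 filename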

How to copy entire folder from Amazon EC2 Linux instance to local Linux machine?

I connected to Amazon's Linux instance over ssh using a private key. I am trying to copy an entire folder from that instance to my local Linux machine.
Can anyone tell me the correct scp command to do this?
Or do I need something more than scp?
Both machines are Ubuntu 10.04 LTS
Another way to do it is:
scp -i "insert key file here" -r "insert ec2 instance here" "your local directory"
One mistake I made was scp -ir. The key has to be after the -i, and the -r after that.
So:
scp -i amazon.pem -r ec2-user@ec2-##-##-##:/source/dir /destination/dir
Call scp from client machine with recursive option:
scp -r user#remote:src_directory dst_directory
scp -i {key path} -r ec2-user@54.159.147.19:{remote path} {local path}
For EC2 Ubuntu, go to the directory containing your .pem file, then:
scp -i "yourkey.pem" -r ec2user#DNS_name:/home/ubuntu/foldername ~/Desktop/localfolder
You could even use rsync.
rsync -aPSHiv remote:directory .
This is how I copied a file from Amazon EC2 to a local Windows PC:
pscp -i "your-key-pair.pem" username@ec2-ip-compute.amazonaws.com:/home/username/file.txt C:\Documents\
For Linux to copy a directory:
scp -i "your-key-pair.pem" -r username#ec2-ip-compute.amazonaws.com:/home/username/dirtocopy /var/www/
Connecting to Amazon requires key-pair authentication.
Note:
The username is most probably ubuntu.
I use sshfs to mount the remote directory on the local machine, and then do whatever I want with it. Commands may vary on your system.
Related to the above answer: copying all files in a local directory to EC2 (from a Unix machine).
Copy the entire local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles ubuntu#ec2.amazonaws.com:/home/dir
Copy only the contents of the local folder (not the folder itself) to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles/* ubuntu@ec2.amazonaws.com:/home/dir
I do not like to use scp for a large number of files, as it does a 'transaction' for each file. The following is much better:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -cf - remote_dir' | tar -xf -
You can add a z flag to tar to compress on the server and uncompress on the client.
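For example, the compressed variant might look like this (assuming GNU tar on both ends):
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -czf - remote_dir' | tar -xzf -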
One way I found on YouTube is to connect a local folder to a shared folder on the EC2 instance; the sharing is instantaneous.

Copying entire contents of a server

I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
I have a migration script to run on the server itself, but it won't run because the disc is full, so I need something I can run remotely which just gets everything.
How about
scp -r root#remotebox:/ your_local_copy
sudo rsync -hxDPavil -H --stats --delete / remote:/backup/
This will copy everything (permissions, owners, timestamps, devices, sockets, hard links etc.). It will also delete anything on the destination that no longer exists in the source. (Note that -x restricts the copy to files within the same mountpoint.)
If you want to preserve owners but the receiving end is not on the same domain, use --numeric-ids
To automate incremental backup w/snapshots, look at rdiff-backup or rsnapshot.
Also, GNU tar is highly underrated:
sudo tar cpf - / | ssh remote 'cd /backup && tar xvpf -'
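If you tar the whole of / like this, you may also want to stay on one filesystem so /proc, /sys and other mounts are skipped, mirroring the -x behaviour of the rsync example above (a sketch, assuming GNU tar):
sudo tar --one-file-system -cpf - / | ssh remote 'cd /backup && tar xvpf -'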

Copying files and dirs on remote server while excluding some of them

Server 1 is connected to Server 2 via SSH.
We know this:
I can execute a command such as:
ssh server2 "cp -rv /var/www /tmp"
which will copy the entire /var/www dir to /tmp. However, inside /var/www we have the following structure (sample ls output below):
$ ls
/web1
/web2
/web3
file1.php
file2.php
file3.php
How can I execute a cp command that will exclude /web1, /web3, file1.php and file3.php? (Obviously just copying web2 and file2.php is not an option, since there are significantly more files than just these six.)
Note: I am using this to backup Server2 prior to RSYNCing from Server1.
The first two posters both have good suggestions about rsync. Here's a more complete outline of the process.
(1) You want to back up server 2 before you sync from server 1, so let's do that with rsync. Here's the command as seen from server 1 (assuming it has access to server 2):
ssh user@server2 "rsync $RSYNC_OPTS /var/www/ /path/to/backup"
(2) With server 2 backed up, let's now sync from server 1 (again, as seen from server 1)
rsync $RSYNC_OPTS /path/to/www/ user@server2:/var/www/
As long as you use sane RSYNC_OPTS, the backup and sync should both be reasonable. Richard had a reasonable suggestion for the options:
RSYNC_OPTS="--exclude-from rsync-exclude.txt --stats -avz --numeric-ids -e ssh"
If you want an accurate reproduction, I'd recommend --delete or --delete-after as well. Be sure to look up details on any options you're unfamiliar with.
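Put together, the whole run from server 1 might be sketched like this (user names, paths and the location of rsync-exclude.txt are assumptions; -e ssh and the exclude file are applied only on the push step, since the backup on server 2 runs locally there):
RSYNC_OPTS="--stats -avz --numeric-ids"
# (1) local backup on server 2, driven over ssh
ssh user@server2 "rsync $RSYNC_OPTS /var/www/ /path/to/backup"
# (2) push from server 1, excluding what is listed in rsync-exclude.txt on server 1
rsync $RSYNC_OPTS -e ssh --exclude-from rsync-exclude.txt /path/to/www/ user@server2:/var/www/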
For this you should really be using rsync.
I tend to use an rsync-exclude.txt file to specify what I don't want, as it's more future-proof.
/public_ftp/.ftpquota
/tmp
/var/local/backups/rsyncs
/backup/rsync
/proc
/dev
So a command could be:
rsync --exclude-from rsync-exclude.txt --stats -avz -e ssh \
--numeric-ids /syncfrom/dir user@example.com:/backup/sync-to-dir
Edit:
In the case of a local server you can still use rsync, however you could also use tar and exclude what you don't want.
(cd dir1;tar --exclude 'web2/*' -cf -) | (cd dir2; tar -xvf -)
or
find dir1 dir2 > exclude-files
(cd dir1; tar --exclude-from exclude-files -cf -) | (cd dir2; tar -xvf -)
I do the same thing when deploying new releases to my webserver. Is it possible for you to use rsync over ssh? rsync allows you to specify an --exclude option and specify either the dirs/files on the command line or via a file. Here's a pretty good writeup that I've used in the past:
http://articles.slicehost.com/2007/10/10/rsync-exclude-files-and-folders
Since what you want to do is "copy all files except those", following your example you could do this:
ssh server2 "cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp"
But remember that this is a very ugly way to back up your server :p
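Note that the !( ) pattern needs bash's extglob shell option, which is usually not enabled for the shell that runs your command over ssh, so the line above may fail with a syntax error. One workaround (a sketch, assuming bash is available on server2) is to enable extglob before the pattern is parsed:
ssh server2 'bash -O extglob -c "cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp"'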

Remote Linux server to remote linux server dir copy. How? [closed]

What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using an SSH client (like PuTTY). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
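To make the difference concrete:
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder   # contents of sfolder land inside dfolder
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder    # creates dfolder/sfolder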
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>#<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user#server:/path/to/copy
Use rsync so that you can resume if the connection gets broken. And if something changes, you can copy it much faster too!
Rsync works with SSH so your copy operation is secure.
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
I used rdiff-backup (http://www.nongnu.org/rdiff-backup/index.html) because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
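For a one-off copy, you could then remove the metadata directory rdiff-backup leaves in the destination (a sketch; the path is whatever you used as dest-dir):
ssh user2@host2 'rm -rf /dest-dir/rdiff-backup-data'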
From the docs: rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, ACLs, and resource forks.
This is a bonus over the scp -p proposals, as the -p option does not preserve everything (e.g. rights on directories are set badly).
Install on Ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, the quick answer would be to take a look at the scp manpage, or perhaps rsync, depending on exactly what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - <directory> | ssh server 'tar xf -'
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work, you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
In this case it would work, provided you have already set up passwordless SSH login from host1 to host2.
scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration of the first remote host. So if you rely on .ssh/config in your local environment and expect your RSA/DSA keys to work, you have to forward your agent to the first remote host.
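A sketch of that, forwarding the local agent with -A so the keys loaded in your local ssh-agent are usable on host1 (host names and paths are placeholders):
ssh -A user@host1 "scp -r /source/dir/ user@host2:/dest/dir/"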
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh login disabled, copying files for multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this, you can create a tar file on the remote box that still preserves ownership.
tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar"
For slow connections you can add compression, z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
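If you later want to unpack one of those saved archives on the remote box while keeping ownership, you would typically extract it as root (a sketch, assuming passwordless sudo on the remote box and the same placeholder paths as above):
ssh [username]@[hostname] 'sudo tar xpf output.tar -C [destination on remote box]'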
