Distributed uncompression of a file archive - linux

A bash scripting question. Suppose we have a calling host H and a remote server S. Is it possible (using an ssh remote invocation from H to S) to uncompress a file archive residing on S (thus using S's computing resources) such that the files and directories of the archive are created only on H?

If your tarball is gzipped, you can remotely gunzip it and locally untar it with
ssh S 'gzip -dc < archive.tar.gz' | tar xvf -
(The quotes matter: without them, the < archive.tar.gz redirection is performed locally on H, so the archive would be read from H rather than S.)
For this to actually be fast you need a very fast network and a very slow workstation.
You can't untar the archive remotely unless you have a shared filesystem (NFS, CIFS, ...).
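The split pipeline is easy to sanity-check locally by running both halves on the same machine, with the pipe standing in for the ssh hop (the archive and directory names here are made up for the demo):

```shell
# Build a sample archive, then decompress and untar it in two separate
# processes, mimicking the remote-gunzip / local-untar split.
set -e
mkdir -p demo && echo "hello" > demo/file.txt
tar czf archive.tar.gz demo
rm -r demo

# "Remote" half: gzip -dc writes the plain tar stream to stdout.
# "Local" half: tar reads that stream from stdin and extracts it.
gzip -dc archive.tar.gz | tar xf -

cat demo/file.txt   # → hello
```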

Related

What options to use with rsync to sync only new files to a remote NTFS drive?

I run a mixed Windows and Linux network with various desktops, notebooks and Raspberry Pis. I am trying to establish an off-site backup between a local Raspberry Pi and a remote Raspberry Pi. Both run DietPi/Raspbian and have an external NTFS HDD to store the backup data. As the data to be backed up is around 800 GB, I already mirrored it onto the external HDD initially, so that only the new files have to be sent to the remote drive via rsync.
I have now tried various combinations of options, including --ignore-existing, --size-only, -u, -c, and of course combinations of other options like -avz etc.
The problem is: none of the above really changes anything; the system tries to upload all of the files (although they already exist remotely), or at least a good number of them.
Could you give me a hint how to solve this?
I do this exact thing. Here is my solution to this task.
rsync -re "ssh -p 1234" -K -L --copy-links --append --size-only --delete pi@remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/
The options I am using are:
-r - recursive
-e - specifies ssh as the method to transmit data; note that my ssh command uses a non-standard port 1234, specified by -p inside the -e string
-K - keep directory links
-L - copy links (transform symlinks into the files they point to)
--copy-links - the long form of -L, so listing both is redundant
--append - this will append data onto smaller files in case of a partial copy
--size-only - this skips files that match in size
--delete - CAREFUL - this will delete local files that are not present on the remote device..
This solution runs on a schedule and "syncs" the files in the target directory with the files from the source directory. To test it out, you can always run the command with --dry-run, which makes no changes at all and only shows you what would be transferred and/or deleted.
All of this info and additional details can be found in the rsync man page (man rsync).
NOTE: I use SSH keys to allow connection/transfer without having to respond to a password prompt between these devices.

How to scp and compress simultaneously without decompressing on the destination machine

I would like an efficient method to scp a huge directory to a machine, while simultaneously compressing the directory. I need only the compressed archive on the destination machine.
Is it possible without having to do this in 2 steps manually?
Use tar:
tar cfz - /path/to/local | ssh user@remotehost 'cd /desired/location; tar xfz -'
The local tar creates/compresses your file structure and writes it to stdout (- as the filename); that stream is piped through ssh to a tar on the remote host, which reads the compressed stream from stdin (- again) and extracts the contents.
If you only want the compressed file written out, then
tar ... | ssh user@remotehost 'cat - > file.tar.gz'
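This compress-in-transit, store-without-extracting variant can be simulated locally, with a plain pipe standing in for the ssh transport and cat for the remote side (payload/ and file.tar.gz are made-up names for the demo):

```shell
# Simulate `tar czf - dir | ssh host 'cat - > file.tar.gz'` without ssh.
set -e
mkdir -p payload && echo "x" > payload/data.txt
tar czf - payload | cat > file.tar.gz

# The archive arrives compressed and was never extracted in transit;
# listing its contents confirms the structure survived.
tar tzf file.tar.gz
```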

Copying entire contents of a server

I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
I have a migration script to run on the server itself, but it won't run because the disc is full, so I need something I can run remotely which just gets everything.
How about
scp -r root@remotebox:/ your_local_copy
sudo rsync -hxDPavil -H --stats --delete / remote:/backup/
This will copy everything (permissions, owners, timestamps, devices, sockets, hard links, etc.). It will also delete stuff that no longer exists in the source. (Note that -x restricts the copy to files within the same mountpoint.)
If you want to preserve owners but the receiving end is not on the same domain, use --numeric-ids
To automate incremental backup w/snapshots, look at rdiff-backup or rsnapshot.
Also, GNU tar is highly underrated:
sudo tar cpf - / | ssh remote 'cd /backup && tar xvpf -'
(Note the - after cpf: without it, tar would treat / as the archive file name rather than writing the archive to stdout.)
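Minus the sudo and the ssh hop, the same two-tar pipeline can be exercised locally (source/ and backup/ are made-up names for the demo):

```shell
# tar writes the archive to stdout (- as the file name) with -p to
# preserve permissions; a subshell cd's to the destination and a second
# tar extracts the stream from stdin.
set -e
mkdir -p source backup
echo "payload" > source/file.txt

tar cpf - source | (cd backup && tar xpf -)

cat backup/source/file.txt   # → payload
```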

linux tar command for remote machine

How can I create a .tar archive of a file (say /root/bugzilla) on a remote machine and store it on a local machine? SSH keys are set up, so I can bypass password authentication.
I am looking for something along the lines of:
tar -zcvf Localmachine_bugzilla.tar.gz /root/bugzilla
ssh <host> tar -zcvf - /root/bugzilla > bugzilla.tar.gz
avoids an intermediary copy.
See also this post for a couple of variants: Remote Linux server to remote linux server dir copy. How?
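The no-intermediate-copy trick is easy to verify locally: tar compresses to stdout and a shell redirection captures the stream into a file, just as the > after the ssh command would (bugzilla/ here is a made-up stand-in for /root/bugzilla):

```shell
# Local stand-in for `ssh <host> tar -zcvf - /root/bugzilla > bugzilla.tar.gz`:
# nothing is ever written on the "remote" side, only streamed.
set -e
mkdir -p bugzilla && echo "bug" > bugzilla/report.txt
tar -zcf - bugzilla > bugzilla.tar.gz

tar -tzf bugzilla.tar.gz   # list the received archive's contents
```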
Something like:
ssh <host> tar -zcvf bugzilla.tar.gz /root/bugzilla
scp <host>:bugzilla.tar.gz Localmachine_bugzilla.tar.gz
Or, if you are compressing it just for the sake of transfer, scp compression option can be useful:
scp -r -C <host>:/root/bugzilla .
This is going to copy the whole /root/bugzilla directory using compression on the wire.

Remote Linux server to remote linux server dir copy. How? [closed]

Closed 11 years ago. This question is off-topic and is not accepting answers.
What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using SSH client (like Putty). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>@<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
Use rsync so that you can continue if the connection gets broken. And if something changes you can copy them much faster too!
Rsync works with SSH so your copy operation is secure.
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
I used rdiff-backup (http://www.nongnu.org/rdiff-backup/index.html) because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
from the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus over the scp -p proposals, as the -p option does not preserve everything (e.g. permissions on directories are set badly).
install on ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, the quick answer would be to take a look at the scp manpage, or perhaps rsync, depending on exactly what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - sourcedir | ssh server tar xf -
(tar needs at least one path to archive; with no source argument it refuses to create an empty archive.)
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(assuming you are on host0 and want to copy from host1 to host2 directly; note that rsync refuses two remote endpoints in a single invocation, so this form will not work)
Instead, you can run rsync on one of the remote hosts:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
The latter works if you have already set up passwordless SSH login from host1 to host2.
scp will do the job, but there is one wrinkle: the connection to the second remote destination uses the configuration of the first remote host, so if you rely on .ssh/config in your local environment and expect your RSA/DSA keys to work, you have to forward your agent to the first remote host.
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
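A cut-down, local version of that find | tar | tar pipeline shows the mechanics without ssh (GNU tar assumed; origin/ and target/ are made-up names, and the GNU-only preservation flags are dropped for brevity):

```shell
# find emits NUL-terminated paths; tar reads them from stdin via
# --files-from=- with --null, archives each entry individually
# (--no-recursion), and streams the archive into a second tar.
set -e
mkdir -p origin/sub target
echo "deep" > origin/sub/file.txt

(cd origin && find . -depth -print0 \
  | tar --create --null --files-from=- --no-recursion --file=-) \
  | (cd target && tar --extract --file=-)

cat target/sub/file.txt   # → deep
```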
scp as mentioned above is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh login disabled, copying files belonging to multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that preserves ownership.
tar cf - [directory] | ssh [username]#[hostname] "cat > output.tar"
For slow connections you can add compression, z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
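The gzip extract-on-arrival variant with -C can be tried end-to-end locally, with a plain pipe in place of ssh (docs/ and remote/ are made-up names for the demo):

```shell
# Compress while sending, decompress while receiving; -C makes the
# receiving tar extract into a chosen destination directory.
set -e
mkdir -p docs remote
echo "text" > docs/notes.txt

tar czf - docs | tar xzf - -C remote

cat remote/docs/notes.txt   # → text
```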
