What's faster? Copy via nfs-mount or via scp? - linux

We have a network of several machines and want to distribute a big directory (ca. 10 GB) to every box.
It is located on an NFS server and is mounted on all machines, so the first approach is to just use normal cp to copy the files from the mounted directory to a local one. This is easy, but unfortunately there is no progress bar, because cp is not really intended for network copies (or is it?).
scp is intended for copying across the network, but it encrypts everything and may therefore be slow.
Should one be faster, and if so, which: cp on nfs-mount or scp?

You can always use rsync: it can show you progress (with the --progress option) and is more lightweight than scp.
You can enable compression manually with the -z option.
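For example, a minimal invocation along these lines (the paths are placeholders for your NFS mount and local target) copies the directory locally and shows progress:

# Copy from the NFS mount to a local directory, showing per-file progress.
# /mnt/nfs/bigdir and /local/bigdir are placeholder paths.
rsync -a --progress /mnt/nfs/bigdir/ /local/bigdir/

Add -z only if the data compresses well; on a fast LAN the compression overhead can outweigh any bandwidth savings.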

Related

Cluster (shared file storage) for a PHP application

I am using a load balancer in AWS and want to sync files in real time. I was trying to do it with rsync, but running it from cron is not real time. I am using the Singapore region, and there is no EFS option there.
There is a daemon called lsyncd, which does exactly what you need.
You can read further about it here
"rsync is an excellent and versatile backup tool, but it does have one drawback: you have to run it manually when you want to back up your data. Sure, you can use cron to create scheduled backups, but even this solution cannot provide seamless live synchronization. If this is what you want, then you need the lsyncd tool, a command-line utility which uses rsync to synchronize (or rather mirror) local directories with a remote machine in real time. To install lsyncd on your machine, download the latest .tar.gz archive from the project's Web site, unpack it, and use the terminal to switch to the resulted directory. Run then the ./configure command followed by make, and make install (the latter command requires root privileges). lsyncd is rather straightforward in use, as it features just one command and a handful of options"

GNU make's install target to push files to a remote host over SSH?

I'm working on a project that needs to be tested on an embedded Linux system. After every little change, I have to scp all the files to the device over an SSH connection. Can you suggest a more convenient way to deploy files to a remote target? For example, some trick with make's install command:
make install INSTALL='scp 192.168.1.100:/'
or something.
If you can use scp, you can probably also use rsync, specifically rsync over ssh. The advantage of rsync is that it builds a delta of the source and destination files and transfers only what is necessary; when very little has changed, that is a considerable benefit. I'd probably invoke it only if the build completes without error, e.g. make ... && upload (where upload is a script covering the details of the transfer).
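A minimal sketch of such an upload script (the build directory and target path are placeholders; the device address is the one from your example):

#!/usr/bin/env bash
# upload: push the build output to the target over rsync/ssh.
# ./build/ and /opt/myapp/ are placeholders for your project layout.
set -e
rsync -avz --delete ./build/ root@192.168.1.100:/opt/myapp/

Then the workflow is simply make && ./upload, so nothing is pushed if the build fails.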
Just for completeness, sshfs is often quite useful. You can mount a remote folder that is visible over ssh onto a folder on your local hard disk. Performance is not great, but it is certainly serviceable enough for a deploy step, and it's transparent to all tools.
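A sketch of that workflow, with a placeholder mount point and target directory:

# Mount the target's filesystem locally over ssh, deploy, then unmount.
mkdir -p ~/target-root
sshfs root@192.168.1.100:/ ~/target-root
cp -r ./build/* ~/target-root/opt/myapp/   # or point your normal install step at the mount
fusermount -u ~/target-root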

rsync -z with remote share mounted locally

I have a local Linux server that I'm using to back up two remote Windows 7 boxes over an IPsec VPN tunnel connection. I have the users' Documents folders shared on the remote PCs and have mounted those shares (CIFS) on my local Linux server.
I'm going to use a cron job to run rsync on my local Linux server to create backups of these folders and am currently considering the -avz args to accomplish this.
My question is this: does the -z arg do anything for me, since the mount is to a remote machine? As I understand it, -z compresses the data before sending it, which definitely makes sense if the job were run from the remote PC. But with my setup it seems like I'm compressing data that has already been pulled through the network, which would just add an unnecessary step and increase the backup time.
What are your thoughts? Should I use -z given my setup?
Thanks!
It won't save you anything. To compress a file, rsync needs to read its contents (in blocks) and then compress them. Since reading the blocks happens over the wire, before any compression, you save no bandwidth and gain a bit of overhead from the compression itself.
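So with the shares already mounted locally, plain -av in the cron job is all you need; something like the following, with placeholder paths:

# The Documents share is already mounted via CIFS at /mnt/win7-docs;
# -z buys nothing because the data crosses the VPN uncompressed either way.
rsync -av /mnt/win7-docs/ /backup/win7-docs/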

Mirror some files for a dev environment - linux

What's a good approach to mirroring a production environment's data for dev? We have one production server in place that mounts many smb shares which several scripts run against routinely.
We now have a separate server for development that we want to keep separate for testing. How do I get sample data from all those smb shares without copying them all? The dev server couldn't hold all that data so I'm looking for something that could routinely run and just copy the first X files out of each directory.
The goal is to have the dev server be "safe" and not mount those same shares during testing.
For a development environment I like to have:
Known good data
Known (constructed) bad data
Random sample of live data
What I mean by "constructed" is data that I have put together in a certain way so I know exactly how it is bad.
In your case I'd have my good and bad data and then write a little Bash script to copy data from the SMB shares to the local dev machine. Maybe run ls -t on each of the shares so you can grab the newest files, use head or some other utility to keep the first N lines (i.e. file names), and copy those files to your dev machine.
A rough bash sketch (all paths and the file count N are placeholders):

#!/usr/bin/env bash
set -e
DATA_DIR=/srv/devdata   # local data directory on the dev server (placeholder)
N=50                    # how many of the newest files to sample per share (placeholder)
rm -rf "${DATA_DIR:?}"/*                    # clear data directory
cp -a /srv/seed/known-good/. "$DATA_DIR"/   # copy known good data from a local directory
cp -a /srv/seed/known-bad/.  "$DATA_DIR"/   # copy known bad data from a local directory
for share in /mnt/smb/*/; do                # for every mounted SMB share
    # list newest first (ls -t), keep the first N names, copy them over
    ls -t "$share" | head -n "$N" | while IFS= read -r f; do
        cp -a "$share$f" "$DATA_DIR"/
    done
done
You could set up cron to execute this little script however often you want.
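For example, a crontab entry like this (script path and schedule are placeholders) would refresh the sample data nightly:

# m h dom mon dow  command
0 2 * * * /usr/local/bin/refresh-devdata.sh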

Programmatically copy files between servers: scp or mount?

I run a process that generates some files on a server and want to copy them to another server. The two servers are on the same network.
What are the pros/cons of using scp or a network share?
I'm not talking about a one-time copy (which I'd do manually with scp), but about programmatically copying the files after they are generated.
rsync is a third possibility, and it is very easily scriptable. Like scp, it uses ssh by default, and if you have already set up key-based authentication, it doesn't get any easier: rsync -avuz /local/dir/ my.example.com:/remote/dir/
Some advantages over scp are the --dry-run and --delete options; the first is self-explanatory, the second deletes anything in the target that is not in the source.
Network shares work great when they work, but when they break it can be a major hassle.
As pst said, scp can also be easily scripted, so if you have to choose between the two options you gave, I'd go with scp, simply because it's more reliable and just as easily scripted as copying from a network share.
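If you go the rsync route, the programmatic part can be as small as a sketch like this (the generator command is a placeholder; the paths are the ones from the example above):

#!/usr/bin/env bash
# Generate the files, then push them to the other server over ssh.
set -e
/opt/myjob/generate-reports    # placeholder for whatever produces the files
rsync -avuz --delete /local/dir/ my.example.com:/remote/dir/   # add --dry-run to test first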
