I'm working on a project that needs to be tested on an embedded Linux system. After every little change, I have to scp all the files to the device over an SSH connection. Can you suggest a more convenient way to deploy files to a remote target? For example, some trick with make's install command:
make install INSTALL='scp 192.168.1.100:/'
or something.
If you can use scp, you can probably also use rsync, specifically rsync over ssh. One advantage of rsync is that it computes a delta between the source and destination files and transfers only what is necessary. When you transfer again after changing very little, this is a considerable benefit. I'd invoke it only if the build completes without error, as in make ... && upload (where upload is a script covering the details of the transfer).
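As a minimal sketch of what that upload script could look like (the build directory, user, and destination path are assumptions; the target address is carried over from the question):

#!/bin/sh
# upload: mirror the build output to the target over ssh, sending only deltas.
# Address, user, and destination path are hypothetical; adjust for your device.
rsync -avz --delete -e ssh ./build/ root@192.168.1.100:/opt/app/

With key-based ssh authentication in place, make ... && ./upload then deploys only what changed.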
Just for completeness, sshfs is often quite useful. You can mount a folder that a remote machine exposes over ssh onto a directory on your local disk. Performance is not great, but it is certainly serviceable for a deploy step, and it's transparent to all tools.
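A rough sketch of that workflow, assuming a hypothetical mount point and a Makefile that honors the conventional DESTDIR variable:

# Mount the device's root filesystem locally over ssh, install into it, unmount.
sshfs root@192.168.1.100:/ /mnt/target
make install DESTDIR=/mnt/target
fusermount -u /mnt/target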
I am using a load balancer in AWS and want to sync files in real time. I was trying to do it with rsync, but that isn't real time since we run it from cron. I want it to happen in real time. I am using the Singapore region, where there is no EFS option.
There is a daemon called lsyncd, which does exactly what you need.
You can read further about it here
"rsync is an excellent and versatile backup tool, but it does have one drawback: you have to run it manually when you want to back up your data. Sure, you can use cron to create scheduled backups, but even this solution cannot provide seamless live synchronization. If this is what you want, then you need the lsyncd tool, a command-line utility which uses rsync to synchronize (or rather mirror) local directories with a remote machine in real time. To install lsyncd on your machine, download the latest .tar.gz archive from the project's Web site, unpack it, and use the terminal to switch to the resulted directory. Run then the ./configure command followed by make, and make install (the latter command requires root privileges). lsyncd is rather straightforward in use, as it features just one command and a handful of options"
I'm only partially familiar with the shell and the command line, but I understand the usage of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload everything with the put command, updating every existing file and adding any files that weren't there before?
I'd imagine that I'd have to somehow
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly
It may also be in my best interest to first clear the directory so that I get a true update, but then there's the problem of resetting all the permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it, and I don't understand the use of the -r recursive option.
Basically, exactly this functionality is what the rsync tool has perfected. And that tool can also be used in a "secure shell" way, as outlined in this tutorial.
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (over ssh) into your local system. It would then be completely transparent to rsync that it is syncing a local and a remote file system; as far as rsync is concerned, you would just be syncing two directories!
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many unix tools, it is extremely powerful, so you have to be diligent when using it. But the thing is: it is a robust, well-tested tool. The time required to learn it will pay off pretty quickly.
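As a sketch of what replaces the whole put */ routine, assuming your site lives in ./site locally and in /var/www/site on the server (paths and hostname are hypothetical):

# -a recurses and preserves permissions; --delete removes remote files gone locally.
rsync -avz --delete -e ssh ./site/ user@example.com:/var/www/site/

The -a archive flag also addresses the permissions concern: file modes are carried over from the local copies, so there is nothing to reset afterwards.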
I have a local Linux server that I'm using to back up two remote Windows 7 boxes over an IPsec VPN tunnel connection. I have the users' Documents folders shared on the remote PCs and have mounted those shares (CIFS) on my local Linux server.
I'm going to use a cron job to run rsync on my local Linux server to create backups of these folders and am currently considering the -avz args to accomplish this.
My question is this: does the -z arg do anything for me, given that the mount is to a remote machine? As I understand it, -z compresses the data before sending it, which definitely makes sense if the job were being run from the remote PC; but given my setup, it seems like I'd be compressing data that has already been pulled across the network (which would increase the backup time by adding an unnecessary step).
What are your thoughts? Should I use -z given my setup?
Thanks!
It won't save you anything. To compress a file, rsync needs to read its contents (in blocks) and then compress them. Since reading the blocks happens over the wire, pre-compression, you save no bandwidth and gain a bit of overhead from the compression itself.
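So a plain archive copy is all you need here. A minimal sketch, with hypothetical mount points and backup path:

# No -z: the data already crossed the network when the CIFS mount was read.
rsync -av /mnt/pc1-documents/ /srv/backups/pc1-documents/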
I run a process that generates some files on a server and want to copy them to another server. The two servers are on the same network.
What are the pros/cons of using scp or a network share?
I'm not talking about a one-time copy (which I'd do manually with scp), but about programmatically copying the files after they are generated.
rsync is a third possibility, and it is very easily scriptable. Like scp, it uses ssh by default, and if you have already set up key-based authentication, it doesn't get any easier: rsync -avuz /local/dir/ my.example.com:/remote/dir/
Some advantages over scp are the --dry-run and --delete options; the first is self-explanatory, and the second deletes anything in the target that's not in the source.
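For example, to preview exactly what a destructive sync would do before committing to it (reusing the paths from above):

# Lists the transfers and deletions without performing any of them.
rsync -avuz --delete --dry-run /local/dir/ my.example.com:/remote/dir/

If the output looks right, drop --dry-run and run it again.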
Network shares work great when they work, but when they break it can be a major hassle.
As pst said, scp can also be easily scripted, so if you have to choose between the two options you gave, I'd say go with scp, simply because it's more reliable and just as easily scripted as copying from a network share.
I'm trying to figure out how to do this with Eclipse. We currently run SVN and everything works great, but I'd really like to cut my SSH requests in half and use Eclipse to modify some files directly on the server. I'm using the build of Eclipse below. How can I do this?
Eclipse for PHP Developers
Build id: 20100218-1602
Update
I have no intention of eliminating SVN from the equation, but when we need to make a hotfix, or run a specific report or function as a one-time thing, I'd much rather use Eclipse than the terminal for that kind of change.
Have a look at How can I use a remote workspace over SSH? on the Eclipse wiki. I'm just quoting the summary below (read the whole section):
Summing up, I would recommend the following (in order of preference):
- VNC or NX, when available remotely, Eclipse can be started remotely, and the network is fast enough (try it out).
- Mounted filesystem (Samba or SSHFS), when possible, the network is fast enough, and the workspace is not too huge.
- rsync, when offline editing is desired, sufficient tooling is available locally, and no merge issues are expected (single-user scenario).
- RSE, on very slow connections or huge workspaces where minimal data transfer is desired.
- EFS, on fast connections when all tooling supports it, and options like VNC, a mounted filesystem, or rsync are not available.
But whatever you experiment with, don't bypass the version control system.
You could use something like SSHFS, but really, it's a better idea to use some kind of source control system instead of editing files directly on the server. If Subversion isn't sufficient, perhaps you might try a DVCS like Git or Mercurial.