Remote backup solution for openSUSE Linux

I'm looking for a backup solution. My requirements are pretty simple:
Source - FTP credentials (ftp://user:pass@server.tld/dir1/dir2)
Destination on the local HDD (/var/backup/server-tld)
Option to pack the result into an archive (tar.gz/zip)
Run this "script" as a cron job at a defined interval (e.g. once a day)
I know all of this can be done with bash scripts, but that seems a little cumbersome.
I can't believe there isn't a simple solution for this.

I've finally found a really "simple-to-use" solution:
ncftp
http://www.cyberciti.biz/tips/linux-download-all-file-from-ftp-server-recursively.html
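Following that article, a minimal sketch of the whole job could look like the script below. It is untested; the host, credentials, and paths are the placeholders from the question, and the plain-text password will be readable by anyone who can see the script or the process list.

  #!/bin/bash
  # ftp-backup.sh -- hypothetical sketch: mirror the FTP directory, then pack it.
  # "user", "pass", "server.tld" and the paths are placeholders from the question above.
  ncftpget -R -u user -p pass server.tld /var/backup/server-tld /dir1/dir2
  # Pack the mirrored tree into a dated tar.gz next to it.
  tar -czf /var/backup/server-tld-$(date +%F).tar.gz -C /var/backup server-tld

A crontab entry such as 0 3 * * * /usr/local/bin/ftp-backup.sh would then run it once a day at 03:00.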

Related

SCP equivalent in Chef

I am trying to scp a file from one server to another, both on Azure. This is the command I want to replace:
scp /tmp/openvpn/EasyRSA-3.0.4/pki/reqs/server.req jkirby29@40.121.47.3:/tmp
I have already tried remote_file; I am not sure of anything else that is even close to what I need. Is this one of those cases where I need to put it in a bash block? I am new to Chef, so excuse my lack of knowledge.
This is not really something Chef supports. The remote_file resource does support SFTP, but it expects to pull from a remote server, not push as this does. In general this kind of approach leads to a lot of complexity when scaling up/out, e.g. when you need to push to a dozen machines instead of just one, you have to work out which IPs to target, and those might be changing, etc. You can use an execute or script resource to do it as written, though, if that's really what you want.
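If you go the execute/script route, the resource essentially just wraps the scp call, and for that to run unattended it needs key-based authentication and no interactive host-key prompt. A hedged sketch of the underlying command (the key path is invented):

  # Hypothetical: authenticate with an SSH key so no password prompt blocks the Chef run.
  scp -i /etc/chef/deploy_key -o StrictHostKeyChecking=no \
    /tmp/openvpn/EasyRSA-3.0.4/pki/reqs/server.req jkirby29@40.121.47.3:/tmp

Disabling strict host-key checking trades a security check for convenience; pre-populating known_hosts is the cleaner option.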

Updating a website through SSH

I'm only partially familiar with the shell and the command line, but I understand the use of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload everything through the put command, updating every existing file and adding the files that weren't previously there?
I'd imagine that I'd have to somehow
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly.
It may also be in my best interest to first clear the directory so I get a true update, but then there's the problem of resetting all permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it and I do not understand the use of the -r recursive option.
Basically, this is exactly the functionality the rsync tool was made for. And rsync can also be used over a secure shell, as outlined in this tutorial.
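For instance, a single command along these lines would push the whole site in one go (the local path, user, and host are made up; --delete removes remote files that no longer exist locally, which covers the "clear the directory first" idea, so test with --dry-run before trusting it):

  # Hypothetical paths; the trailing slashes matter (sync the contents, not the folder itself).
  rsync -avz --delete -e ssh /path/to/local/site/ user@example.com:/var/www/site/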
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (over ssh) into your local system. It would then be completely transparent to rsync that it is syncing a local and a remote file system; as far as rsync is concerned, you would just be syncing two directories!
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many Unix tools it is extremely powerful, so you have to be diligent when using it. But the thing is: it is a robust, well-tested tool. The time required to learn it will pay off pretty quickly.

GNU make's install target to push files to a remote host over SSH?

I'm working on a project that needs to be tested on an embedded Linux system. After every little change, I have to scp all files to the device over an SSH connection. Can you suggest a more convenient way to deploy files to a remote target? For example, some trick with make's install target:
make install INSTALL='scp 192.168.1.100:/'
or something.
If you can use scp, you can probably also use rsync, specifically rsync over ssh. The advantage of rsync is that it computes a delta between the source and destination files and transfers only what is necessary; when you transfer after changing very little, this is a considerable benefit. I'd probably invoke it only if the build completes without error, like make ... && upload (where upload could be a script covering the details of the transfer, as sketched below).
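As a rough sketch (the IP is the one from the question; the directories are invented), upload could be as small as:

  #!/bin/sh
  # upload -- hypothetical helper: push the build output to the target over SSH.
  rsync -avz --delete ./build/ root@192.168.1.100:/opt/app/

so a build-and-deploy cycle becomes make && ./upload.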
Just for completeness, sshfs is often quite useful. You can mount a remote folder, visible over ssh, onto a folder on your local hard disk. Performance is not great, but it is certainly serviceable enough for a deploy step, and it's transparent to all tools.
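A sketch of that idea, assuming the device allows root logins over SSH and your Makefile honours the common DESTDIR convention (both assumptions):

  mkdir -p /mnt/target
  sshfs root@192.168.1.100:/ /mnt/target    # mount the device's filesystem locally
  make install DESTDIR=/mnt/target          # install into the mounted tree
  fusermount -u /mnt/target                 # unmount when done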

How can I tell (in a bash script) if a Clonezilla batch-mode backup succeeded?

This is my first ever post to Stack Overflow, so be gentle please. ;>
OK, I'm using slightly customized Clonezilla Live CDs to back up the drives on four PCs. Each CD is for a specific PC, saving an image of its disk(s) to a box-specific backup folder on a Samba server. That's all pretty much working. But once in a while something goes wrong and the backup isn't completed properly. Things like: the cat bit through a Cat5e cable; I forgot to check whether the Samba server had run out of room; etc. And it is not always readily apparent that a failure happened.
I will admit right now that I am pretty much a noob as far as Linux system administration goes, even though I somehow managed to set up a CentOS 6 box (I wish I'd picked Ubuntu...) with Samba, git, SSH, and Bitnami GitLab back in February.
I've spent days and days trying to figure out whether Clonezilla leaves a simple clue in a backup as to whether it succeeded completely or not, and have come up dry. Looking in the folder for a particular backup job (on the Samba server), I see that the last file written is named "clonezilla-img". It seems to be a console dump that covers the backup itself, but it does not seem to include the verification pass.
Regardless of whether the batch backup task succeeded or failed, I can automatically run a post-process bash script that I place on my Clonezilla CDs. I have this running just fine, though it's not doing a whole lot right now. What I would like this post-process script to do is determine whether the backup job succeeded or not, and then rename (mv) the backup job directory to include a word like "SUCCESS" or "FAILURE". I know how to do the renaming part; it's the test for success or failure that I'm at a loss about.
Thanks for any help!
I know this is old, but I've just started looking into doing something very similar.
For your case, I think you could do what you are looking for with ocs_prerun and ocs_postrun scripts.
For my setup I'm using a pen/flash drive for some test systems, and also PXE with an NFS mount. PXE and NFS are much easier to test and modify quickly.
I haven't tested this yet, but I was thinking that I might be able to search the logs in /var/log/{clonezilla.log,partclone.log} via an ocs_postrun script to validate success or failure. I haven't seen anything indicating that the result is exposed in the environment, so I'm thinking the logs might be the quick and easy method compared with mounting the image or running a CRC check. Clonezilla does have an option to validate the image, and the results of that might also end up in the local logs.
Another option might be to create a custom ocs_live_run script to do something similar. There is an example at this URL: http://clonezilla.org/fine-print-live-doc.php?path=./clonezilla-live/doc/07_Customized_script_with_PXE/00_customized_script_with_PXE.doc#00_customized_script_with_PXE.doc
Maybe the exit code of ocs-sr can be checked in such a script? As I said, I haven't tried any of this; these are just some thoughts.
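To make the log-grepping idea concrete, an ocs_postrun script might look roughly like the sketch below. It is untested; the image repository path (/home/partimag), picking the newest directory as "this run's image", and the exact wording grep looks for are all assumptions you would have to verify against your own logs.

  #!/bin/bash
  # Hypothetical ocs_postrun sketch: rename the newest image directory by result.
  IMG_REPO=/home/partimag                                 # Clonezilla's usual image repository mount point
  IMG_DIR="$IMG_REPO/$(ls -t "$IMG_REPO" | head -n 1)"    # assume the newest entry is this run's image
  if grep -qiE 'fail|error' /var/log/clonezilla.log /var/log/partclone.log 2>/dev/null; then
    mv "$IMG_DIR" "${IMG_DIR}_FAILURE"
  else
    mv "$IMG_DIR" "${IMG_DIR}_SUCCESS"
  fi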
I updated the above to reflect the log location (/var/log). The logs are in the log folder of course. :p
Regards

Linux: Uploading files to a live server - How to automate process?

I'm developing on my local machine (apache2, php, mysql). When I want to upload files to my live server (nginx, mysql, php5-fpm), I first back up my www folder, export the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I have to research what went wrong.
Today I changed only one file and had to go through the entire process just for that file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very, very new to all this, so bear with me! For example, I'm always copying files into my home directory on the server, because scp cannot seem to copy them directly into the /var/www folder?!
There are many utilities that will do this for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. Both let you script local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For more than about 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something else entirely is up to you. Whatever works for you is fine. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken; I want what I had 5 minutes ago"). Any kind of versioning is better than no versioning (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want. You set up a cron job that runs "svn update" every few minutes (or hg pull -u if you use Mercurial; similar with git), as in the sketch below. Another option is to use Dropbox (we use it for our web servers sometimes); it is very easy to set up and share with non-developers (like UI designers)...
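For example, a crontab entry like this would pull the latest revision every five minutes (the checkout path and log file are invented):

  # Hypothetical crontab entry for the web server's user.
  */5 * * * * cd /var/www/site && svn update --non-interactive >> /var/log/svn-deploy.log 2>&1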
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have permission to write to /var/www. Assuming you have root access, change the group (chown) on /var/www (or better yet, a subdirectory of it) to your group, and change the permissions to allow your group write access (chmod g+w).
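In other words, something like the following on the server (the group name and directory are examples; use whatever group your login user actually belongs to):

  sudo chown -R root:developers /var/www/site   # hand the directory to your group
  sudo chmod -R g+w /var/www/site               # let that group write to it

After that, scp can target /var/www/site directly instead of going through your home directory.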
rsync is fairly lightweight, so it should be simple to get going.
