Will rclone --ignore-existing prevent ransomware damage? - security

I am backing up files with rclone multiple times a day, and I would like my backup server to serve as a recovery point against ransomware or any other error.
Am I correct that if I do a
rclone copy --ignore-existing
my backup server is safe from the ransomware? If all of the files on my main server get encrypted, the file names would stay the same, and the encrypted files wouldn't overwrite the files on my backup server because I have --ignore-existing. It would ignore any size/time/checksum changes and not transfer those files, because they already exist on the backup, so it wouldn't transfer the encrypted files over my existing good copies?
I could then wipe my main server, copy everything from the recovery server back over to the main server, and restore everything?

I just read the rclone documentation, and it looks like --ignore-existing is almost tailor-made for preventing ransomware/encryption damage, according to the docs:
--ignore-existing
Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
So I think it will work to prevent that.
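For illustration, here is a minimal sketch of the commands I have in mind (the remote name backup: and the path /srv/data are just placeholders for my real setup):
rclone copy /srv/data backup:recovery --ignore-existing
and, in the worst case, restoring from the backup with:
rclone copy backup:recovery /srv/data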

Related

Updating a website through SSH

I'm only partially familiar with the shell and my command line, but I understand the use of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload every file and directory through the put command, so that every existing file is updated and any files not previously there are placed as well?
I'd imagine that I'd have to somehow
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly
It may also be in my best interest to first clear the directory so that I have a true update, but then there's the problem of resetting all the permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it and I do not understand the use of the -r recursive option.
Basically, exactly this functionality is already perfected in the rsync tool. And that tool can also be used in a "secure shell way", as outlined in this tutorial.
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (using ssh) into your local system. It would then be completely transparent to rsync that it is syncing a local and a remote file system; as far as rsync is concerned, you would just be syncing two different directories!
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many Unix tools, it is extremely powerful, so you have to be diligent when using it. But the thing is: this is a robust, well-tested tool. The time required to learn it will pay off pretty quickly.
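As a rough sketch of what that could look like (the host, user and paths here are made up; read the rsync man page before running anything with --delete):
rsync -avz --delete ./mysite/ user@example.com:/var/www/mysite/
Here -a recurses and preserves permissions and timestamps, -z compresses during transfer, and --delete removes files on the server that no longer exist locally, which gives you the "true update" you mentioned.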

backing up entire linux server on external hard drive or another cluster

We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
And then sync these dump files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
Make sure that you have MySQL installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. In case you need your database to be kept up to date at all times, you can set up MySQL replication. You can follow the guide below to do so.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html
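As a rough sketch (the user name, host and paths are made up; -p will prompt for the MySQL password):
mysqldump -u root -p --all-databases > /var/backups/alldb.sql
rsync -avz /var/backups/alldb.sql backupuser@backuphost:/var/backups/
The first command writes a full SQL dump of all databases; the second syncs that dump file to the backup machine over ssh.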

FTP folders mounted locally used for SVN repository

I would like to create an SVN repository remotely using the FTP protocol.
Is it advisable to do the following steps:
mount the FTP directory as if it were local with curlftpfs
create a repository as if it were local with svnadmin create
use it as in everyday life?
Do you know any issue with that approach?
RESULT AFTER MY ATTEMPT
I did give it a try, but I get an error that looks like a timeout. The real problem is that this approach is too slow. Copying the repository each time looks more feasible, as does a simple script to back up the folder.
It is a dangerous approach; however, if you are working alone (as in "single user"), it would work. The biggest problems are:
You cannot provide exclusive locking mechanics over the network
All users will have direct access to all of the repository's internal files; if somebody deletes a file in revs, your repository is damaged beyond repair
You should set up an Apache server with
SVNAutoversioning on
and then you could mount your repository URL as a WebDAV folder. Each change to these files will result in a single commit, without the need for a working copy.
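A rough sketch of the relevant Apache (mod_dav_svn) configuration, with made-up paths, might look like:
<Location /svn/myrepo>
  DAV svn
  SVNPath /srv/svn/myrepo
  SVNAutoversioning on
</Location>
You could then mount http://yourserver/svn/myrepo as a WebDAV folder; saving a file into it results in a commit automatically.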

Verify Perforce client file copies

I have a large Perforce depot and I believe my client currently has about 2GB of files that are in sync with the server, but what's the best way to verify that my files are complete, in sync, and up to date at a given change level (which is perhaps newer than what a handful of files on the client currently have)?
I see the p4 verify command and its MD5s, but these just seem to be for the server's various revisions of each file. Is there a way to compare the MD5 on the server with the MD5 of the required revision on my client?
I am basically trying to minimize bandwidth and time consumed to achieve a complete verification. I don't want to have to sync -f to a specific revision number. I'd just like a list of any files that are inconsistent with the change level I am attempting to attain. Then I can programmatically force a sync of those few files.
You want "p4 diff -se".
This should compute an MD5 hash of the client's file and compare it to the stored hash on the server.
Perforce is designed to work when you keep it informed about the checked-out status of all your files. If you or other programmers on your team are using Perforce and editing files that are not checked out, then that is the real issue you should fix.
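For example (the depot path, file name and changelist number are made up), something along these lines should list the mismatched files and let you force-sync an individual one:
p4 diff -se //depot/project/...
p4 sync -f //depot/project/src/main.c@12345
The first command lists unopened files whose workspace content differs from the revision you have synced; the second force-syncs one specific file to a given changelist.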
There is p4 clean -n (equivalent to p4 reconcile -w -n),
which would also get you a list of files that p4 would update. Of course, you could also pass a changelist to align to.
You might want to disable the check for local files that it would delete, though!
If you don't have many incoming updates, you might consider an offline local manifest file with the sizes and hashes of all the files in the repository; iterating over it and checking for existence, size and hash yields the missing or changed files.
In our company, with the p4 server on the intranet, checking via a local manifest is actually not much faster than asking p4 clean. But a little! And it uses no bandwidth at all. Over the internet and VPN it is even better!
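For example (the depot path is made up; the -n flag means these are previews and nothing is touched):
p4 clean -n //depot/project/...
p4 clean -n -e -d //depot/project/...
The second form limits the check to edited and deleted files, so it skips looking for extra local files that clean would otherwise remove.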

Linux: Uploading files to a live server - How to automate process?

I'm developing on my local machine (apache2, php, mysql). When I want to upload files to my live server (nginx, mysql, php5-fpm), I first back up my www folder, extract the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I have to research what went wrong.
Today, I changed only one file, and had to go through the entire process just for this file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very very new to all this, so bear with me! For example I'm always copying files into my home directory on the server, because scp cannot seem to copy them directly into the /var/www folder?!
There are many utilities which will do that for you. If you know python, try fabric. If you know ruby, you may prefer capistrano. They allow you to script both local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For over 10 servers, have a look at chef or puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something else entirely is up to you. Whatever works for you is fine. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken, I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want. You set up a cron job that runs "svn update" every few minutes (or hg pull -u if you are using Mercurial, and similarly with git). Another option is to use Dropbox (we use it for our web servers sometimes); it is very easy to set up and to share with non-developers (like UI designers)...
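A sketch of such a cron job (the path is made up) could be:
*/5 * * * * cd /var/www/mysite && svn update -q
which pulls the latest committed changes into the web root every five minutes.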
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have write access to /var/www. Assuming you have root access, change the group (chown) on /var/www (or better yet, a subdirectory of it) to your group and change the permissions to allow your group write access (chmod g+w).
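As a sketch, run as root on the server (the group name www-dev and the path are made up, and your login user needs to be a member of that group):
chown -R :www-dev /var/www/mysite
chmod -R g+w /var/www/mysite
After that, something like scp index.php user@example.com:/var/www/mysite/ should work directly.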
rsync is fairly lightweight, so it should be simple to get going.
