Back up a LaCie 2big NAS on a remote Linux server

I want to back up my LaCie OS 3.x NAS (4 TB) to a remote server using the native web interface.
The best solution for me would be rsync; unfortunately I do not have SSH shell access on the device.
I tried to back up with a "compatible rsync server" but without success:
Going to Backup > New Backup, Network backup, selecting all my shares, Rsync compatible server.
I type working SSH credentials for my Debian backup server (which has rsync 3.0.9), but it does not list any rsync destination, so I cannot continue with the backup schedule.
The web interface also offers a "NetBackup Server" option, but I don't know how to install that on Debian (I'm not even sure it is the Symantec product).
The NAS also provides working SFTP access, but I only want to back up modified files (transferring all 4 TB every time is a bit greedy).
Any solution?

With some help, I finally discovered that rsync can run as a daemon with preconfigured destinations (modules):
On my Debian server, create an /etc/rsyncd.conf containing:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[documents]
path = /home/juan/Documents
comment = The documents folder of Juan
uid = juan
gid = juan
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/255.255.255.0
and an /etc/rsyncd.secrets containing:
rsyncclient:passWord
user:password
Do not forget
chmod 600 /etc/rsyncd.secrets
And then launch
rsync --daemon
After that, I can finally see the rsync destinations when configuring the backup on my NAS.
Source : http://www.jveweb.net/en/archives/2011/01/running-rsync-as-a-daemon.html
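To check that the daemon is reachable before pointing the NAS at it, you can query it from any other machine on the LAN. A minimal sketch, assuming the Debian server is at 192.168.1.10 (a placeholder) and the documents module and rsyncclient user from the configuration above:
# List the modules the daemon exposes (should show "documents")
rsync rsync://192.168.1.10/
# Dry-run a transfer as the configured auth user; rsync prompts for the password stored in /etc/rsyncd.secrets
rsync -av --dry-run rsync://rsyncclient@192.168.1.10/documents/ /tmp/test/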

Related

Rsync without password file without ssh

I would like to set up rsync between a Windows 7 client and a Linux server for file transfer, and I am trying to make this as simple as possible. As the title says, is there any way to use rsync without a password file and without SSH? I have been searching for a few days but only found solutions with either a password file or SSH.
I am using:
Client Env
Windows 7
cwRsync 5.5.0
Server Env
Linux Redhat 6.3 Santiago
rsync 3.1.1
If you want to use rsync without using SSH at all, you can do it with an rsync daemon.
It requires installing rsyncd on the Linux server and setting up an always-running service, but there's a tutorial here (from 1999!) that explains how to do it.
You can set it up to allow access without a username and password, but only do that within a trusted network! Note that, even with a password, there won't be any encryption, so use with caution.
On the source system:
vim /etc/rsyncd.conf
Then add your module:
[your_path_name]
path = /any_directory/your_path_name
comment = My fast rsync server
read only = yes
list = yes
Start the rsync service (on some distributions the unit is named rsyncd rather than rsync):
sudo systemctl start rsync
And on the destination server:
rsync -r --progress rsync://X.X.X.X/your_path_name ./my_directory/
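If you later do want some authentication, but still no SSH and no password file on the client, the rsync daemon can also read the password from the RSYNC_PASSWORD environment variable. A sketch under that assumption, reusing the placeholder host and module above, with auth users and a secrets file configured on the server as in the first answer:
# On the client, export the daemon password instead of using --password-file
# (in cmd.exe on Windows: set RSYNC_PASSWORD=yourpassword)
export RSYNC_PASSWORD='yourpassword'
rsync -r --progress rsync://your_user@X.X.X.X/your_path_name ./my_directory/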

enable rsync to run permanently

I have two machines, both running Linux with CentOS 7.
I have installed the rsync packages on both of them, and I am able to sync a directory from one machine to the other.
Right now I am doing the syncing manually; each time I want to sync I run:
rsync -r /home/stuff root@123.0.0.99:/home
I was wondering if there is a way to configure rsync to do the syncing automatically, every so often, or preferably whenever there is a new file or subdirectory in the home directory?
Thank you; any help would be appreciated.
If you want to rsync every so often, you can use cron jobs, which can be configured to run a specific command at a given interval. If you want to run rsync whenever there is an update or modification, you can use lsyncd; check this article about using lsyncd.
Update:
As links might get outdated, I will add this brief example (feel free to adapt it to what works best for you):
First create an SSH key on the source machine and add the public key to the ~/.ssh/authorized_keys file on the destination machine, for example as in the sketch below.
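A minimal sketch of that key setup, assuming the destination from the question (root@123.0.0.99) and the default key path:
# On the source machine: generate a key pair (press Enter for the defaults)
ssh-keygen -t rsa -b 4096
# Append the public key to the destination machine's ~/.ssh/authorized_keys
ssh-copy-id root@123.0.0.99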
On the source machine, update ~/.ssh/config with the following content:
# ~/.ssh/config
...
Host my.remote.server
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
    HostName 123.0.0.99
    User root
    Port 22
...
Then configure lsyncd with the following and restart the lsyncd service:
-- lsyncd.conf
...
sync {
    default.rsyncssh,
    source      = "/home/stuff",
    host        = "my.remote.server",
    targetdir   = "/home/stuff",
    excludeFrom = "/etc/lsyncd/lsyncd.exclude",
    rsync = {
        archive = true,
    },
}
...
You can set up an hourly cron job to do this (see the sketch below).
rsync itself is quite efficient, in that it only transfers changes.
You can find more info about cron here: cron
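A minimal sketch of such a crontab entry, assuming the paths and host from the question and that passwordless SSH is already set up so the job can run unattended:
# Edit the current user's crontab
crontab -e
# Add a line like this to run the sync at the top of every hour
# (-a preserves times/permissions/ownership, -z compresses during transfer)
0 * * * * rsync -az /home/stuff root@123.0.0.99:/home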

Sharing folder within Linux machine to use it in Database Directory

I need to refresh the database with new dump files, but unfortunately that server machine does not have enough space. So I am now trying to import the same dump files, which are already present on another machine on the same network. Both machines run the same OS (Linux) with the same version.
I am planning to share the source dump folder and create a new directory in the destination database that points to the network folder, but I am not sure how to share a folder in Linux.
Any suggestion will be appreciated.
You probably want to share the directory with NFS. Here is a basic outline of the process.
On the server (where the files are):
yum -y install nfs-utils nfs-utils-lib   # your package manager may vary
vi /etc/exports
# add a line like the one below
/directory/I/am/sharing *(ro,sync)   # can replace * with an IP address
service rpcbind start
service nfs start
chkconfig --levels 235 rpcbind on   # so they auto-start at boot
chkconfig --levels 235 nfs on
(open your firewall, if needed!)
On the client (who wants to see the files):
yum -y install nfs-utils nfs-utils-lib
mkdir -p /the/mount/point   # you choose the name
mount name.of.your.server:/directory/I/am/sharing /the/mount/point
(to make the mount happen at boot, add this line to /etc/fstab):
name.of.your.server:/directory/I/am/sharing /the/mount/point nfs ro 0 0
Notes:
* You may need portmap in place of rpcbind
* ro means read-only, I assumed you wanted 1-way sharing. You may want rw
* There are more detailed instructions all over the 'net -- google them
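Before mounting, you can also confirm the export is visible from the client with showmount, which ships with nfs-utils (the hostname is a placeholder, as above):
# On the client: list the directories the server exports
showmount -e name.of.your.server
# After mounting, confirm the share is attached
df -h /the/mount/point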

rsync local root to remote server as non-root, preserving user:group - is it possible?

I am writing code to create an rsync-based backup.
On server A the code runs as root and sends, with rsync, some system files and all user accounts.
On the backup server the content is stored (via rsync) under a single non-root user account.
I have tried -azhEX --numeric-ids, -azh, and others, but in no case can I keep the user and group IDs for use when making a restore.
Is it possible with rsync, in this scenario, to restore files with their original user:group?
I run the latest version of rsync (3.1.1) on both sides.
rsync alone cannot do this.
A solution very close to your problem is rdiff-backup, which uses librsync internally and stores ownership and other metadata in a separate directory:
http://www.nongnu.org/rdiff-backup/
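A rough sketch of what a backup and restore could look like with rdiff-backup, under the assumption that the backup runs as root on server A while the backup server only offers the unprivileged account (host and paths are placeholders):
# Back up /home to the unprivileged account on the backup server;
# ownership is recorded in rdiff-backup's own metadata directory
rdiff-backup /home user@backup.example.com::/home/user/backups/serverA
# Restore the latest version onto server A (run as root) so the stored
# user:group information can be re-applied
rdiff-backup -r now user@backup.example.com::/home/user/backups/serverA /home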

File permissions changing on save (using root)

Using a fresh installation of CentOS 6.2, when I connect to the server (SFTP mount with Nautilus) and edit files, no matter what permissions the file had before, they are reset to 700: read+write+execute for the owner only.
When SSHing directly into the machine and editing files on the command line, no permissions are changed.
The files I am editing are website scripts sitting in my Apache folders.
Why is this behavior happening? Any suggestions are welcome.
Your SFTP client might be "downloading and re-uploading" your files when you edit them. Change your umask if you want different permissions, or use SSH and a proper editor if you want to keep the permissions...
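For reference, a quick sketch of checking and changing the umask on the server; whether it affects files written through the SFTP subsystem depends on how sshd is configured, so treat this as a starting point (0022 is just a common default, not a value from the question):
# Show the current umask
umask
# A umask of 0022 gives 644 for new files and 755 for new directories
umask 0022
# Make it persistent for the account
echo 'umask 0022' >> ~/.bashrc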
