Using rsync to keep two servers in sync - linux

I have two AWS EC2 instances between which I'm trying to implement a two-way sync: if a file or folder on server1 is created or updated, it should be synced to server2, and if it's a new folder it should be created there. The problem I'm having is that I can't get rsync to create the folders on the 'local' server.
For example, server 1: /rootdir/1/2/3/4, where directories 3 and 4 do not exist on server2. When I run rsync on server2 I want those new directories to be created.
Here is the code I'm trying to use, running from Server2:
$ sudo rsync -avzP -e "ssh -i /home/ec2-user/.ssh/Key.pem" ec2-user@<IP ADDRESS OF SERVER1>:/rootdir/1/2/ /rootdir/1/2
I'm not getting an error but the directories aren't being copied.
I also tried -r but it made no difference.

I finally figured out what I was doing wrong. The servers were configured with a non-standard port and I needed to tell rsync which port to use.
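For example, the port can be passed through the ssh command that rsync invokes (2222 below is only a placeholder for whatever port the servers are actually configured with):
$ sudo rsync -avzP -e "ssh -p 2222 -i /home/ec2-user/.ssh/Key.pem" ec2-user@<IP ADDRESS OF SERVER1>:/rootdir/1/2/ /rootdir/1/2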

Related

Backup files and folders from remote host using Rsync Nodejs

I want to create a backup script using nodejs and a cronjob. I use the npm rsync package to make copies of my files and folders. The code works on the local drive, but I can't connect to the remote source host:
const Rsync = require("rsync"); // the npm rsync package
const rsync = new Rsync()
    .flags("e")
    .source("192.168.1.140:/home/test/YDA")
    .destination("../Desktop/fff/");
How can I provide the username and password for the remote host?

Linux AWS EC2 Permissions with rsync

I am running a default t2.nano EC2 Linux AMI with nothing changed on it. I am trying to rsync my local changes to the server, but there is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory, which is mapped to a staging domain, i.e. technology.staging.com:
/var/www/html/technology
That path is from the root of the filesystem, and the site itself works fine; it's the rsync that is failing.
When I push from my local machine to that directory I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, which is the normal default. Here is where I am tripped up: the var directory is owned by root, while the www directory has its ownership set to nginx so the server can access the files. I believe I need to give the ec2-user access to this directory as well as the nginx user, so that I can rsync my files there and the server will still have access; I'm just unsure of how to do that.
As a test, I created a test directory at this location and it worked successfully:
/home/ec2-user/test
The permissions there are set for the ec2-user, which is why it works, I'm sure.
Here's the command I'm running on my local machine to rsync my files which fails.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error; I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the /var directory?
Side note: what permission level do I set for the chown to reproduce the permissions that are currently in place?
I have server config files in the /etc/nginx/conf.d/ directory that map to the directories I create inside the /var/www/html directory, so I can have multiple sites hosted on the server. So in this example, I have a config file at /etc/nginx/conf.d/technology.conf which maps to the directory at /var/www/html/technology.
Thank you in advance; again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how.
The answer made sense after I spent roughly a day playing around: you have to give access to both the ec2-user and the nginx group. I believe you never want to put a user in a group that involves the server itself; I think things would go south.
After changing the ownership to the ec2-user and the nginx group, it still didn't work exactly the way I wanted it to. The reason was that the nginx group's permissions needed to be updated to what nginx had when it was the owning user.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so I could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to display the pages. Now that I think about it, the nginx group may have only needed read permissions to display things, but this at least solved the problem for now.
Here is the command I ran on the server to update the ownership and the permissions, as well as the output.
modify ownership
sudo chown -R ec2-user:nginx /var/www/html/technology
modify permissions
sudo chmod -R o=rwx,g+rwx,o-w technology
The end result: the permissions match, and the ownership is as we expected. The only thing I have to figure out is that after I rsync new files to the server, I need to run the previous commands to update the permissions again. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
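One possible way to avoid re-running those commands after every transfer (a sketch, assuming the ownership set up above) is to set the setgid bit on the directories so new files inherit the nginx group, and have rsync apply group-friendly permissions as it copies:
# on the server: make new files/directories created under technology inherit the nginx group
sudo find /var/www/html/technology -type d -exec chmod g+s {} +
# from the local machine: copy with group read/write (and execute on directories)
rsync -azP --chmod=Dg+rwx,Fg+rw -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology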

pywatchdog and pyinotify not detecting changes on files inside ftp created directories

I have an application monitoring files sent to an FTP server (proftpd 1.3.5a). I am using pywatchdog to monitor file creation in the FTP server root (the app runs locally), but under one very specific circumstance it does not issue a notification: when I create a new directory through FTP and, after that, create a file under this directory. The file creation/modification events are not caught!
In order to reproduce it in a simple way I've used pyinotify (0.9.6) itself and it looks like the problem comes from there. So, a simple way to reproduce the problem:
Install proftpd and pyinotify (python3) on the server with default settings
On the server, run the following command to monitor the FTP root (recursive and auto-add turned on, assuming the user is "user"):
python3 -m pyinotify -v -r -a /home/user
On the client, create a sample.txt, connect to the FTP server and issue the following commands, in this order:
mkdir dir_a
cd dir_a
put sample.txt
There will be no events related to sample.txt - neither create nor modify!
I've tried to remove the ftp factor from the issue by manually creating and moving directories inside the observed target and creating files inside these directories, but the issue does not happen - it all works smoothly.
Any help will be appreciated!

enable rsync to run permanently

I have two machines, both running Linux with CentOS 7.
I have installed the rsync packages on both of them and I am able to sync a directory from one machine to the other.
Right now I am doing the syncing manually; each time I want to sync I run the following line:
rsync -r /home/stuff root@123.0.0.99:/home
I was wondering if there is a way to configure rsync to do the syncing automatically, either every so often or, preferably, whenever there is a new file or subdirectory in the home directory?
Thank you for your help.
Any help would be appreciated.
If you want to run rsync every so often you can use cron jobs, which can be configured to run a specific command at a given interval, and if you want to run rsync whenever there is an update or modification you can use lsyncd. Check this article about using lsyncd.
Update:
As links might get outdated, I will add this brief example (You are free to modify it with what works best for you):
First create an SSH key on the source machine and then add the public key to the ~/.ssh/authorized_keys file on the destination machine.
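A minimal sketch of that step, using the same id_rsa key and remote address as in the config below:
# on the source machine: generate a key pair (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -f ~/.ssh/id_rsa
# append the public key to ~/.ssh/authorized_keys on the destination machine
ssh-copy-id -i ~/.ssh/id_rsa.pub root@123.0.0.99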
On the source machine, update the ~/.ssh/config file with the following content:
# ~/.ssh/config
...
Host my.remote.server
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
    HostName 123.0.0.99
    User root
    Port 22
...
Then configure your lsyncd with the following and restart the lsyncd service:
# lsyncd.conf
...
sync {
    default.rsyncssh,
    source = "/home/stuff",
    host = "my.remote.server",
    targetdir = "/home/stuff",
    excludeFrom = "/etc/lsyncd/lsyncd.exclude",
    rsync = {
        archive = true,
    }
}
...
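After editing the config, restart the service so the changes are picked up (assuming the systemd unit is named lsyncd, as in the EPEL package for CentOS 7):
sudo systemctl restart lsyncd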
You can set up an hourly cron job to do this.
rsync in itself is quite efficient in that it only transfers changes.
You can find more info about cron in the cron documentation.
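For example, a crontab entry along these lines would do an hourly sync (the hourly schedule is just an illustration; the paths are the ones from the question):
# crontab -e on the source machine: run rsync at the top of every hour
0 * * * * rsync -az /home/stuff root@123.0.0.99:/home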

SCP command not working - need to copy file from Windows localhost to Linux

I need to copy the file admin.zip from C:\wamp\www\jdhemumbai060714\webfiles (Windows) to /var/www/html/ (Linux). I am using the following command:
scp C:\wamp\www\jdhemumbai060714\webfiles\admin.zip username@hostname:/var/www/html/
But it does not work and gives this error:
ssh: Could not resolve hostname C: Temporary failure in name resolution
I am logged in to the Linux server using SSH.
I think that it is a bug in the SCP port.
The only way is to skip "C:" and use only "\wamp\www\jdhemumbai060714\webfiles\admin.zip".
It will work if the current directory is on the same disk as the file to upload.
Or you can use pscp.exe.
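For example, a sketch of that workaround: change into the directory first so no drive letter (and therefore no extra colon) appears in the source path:
cd C:\wamp\www\jdhemumbai060714\webfiles
scp admin.zip username@hostname:/var/www/html/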
Well, firstly, is your DNS server able to resolve the HOSTNAME you're copying to? My advice would be to use the IP address.
scp C:\wamp\www\jdhemumbai060714\webfiles\admin.zip username@192.168.0.2:/var/www/html/
The answer below is applicable only to EC2, or to any host that uses a PEM key.
Open a Windows CMD prompt and type:
scp -i Keypair_Along_with_Path.pem YOUR_FILENAME_ALONG_WITH_PATH.txt USERNAME@PUBLIC-IP:DESTINATION_PATH
Real Example:
scp -i C:\Users\Keypair.pem C:\Users\File.txt ubuntu@1.1.1.1:/tmp/.
You are done.
