Enable rsync to run permanently - Linux

I have two machines, both running CentOS 7 Linux.
I have installed the rsync packages on both of them and I am able to sync a directory from one machine to the other.
Right now I am doing the syncing manually; each time I want to sync I run the following command:
rsync -r /home/stuff root@123.0.0.99:/home
I was wondering if there is a way to configure rsync to do the syncing automatically, either every so often or, preferably, whenever a new file or subdirectory appears in the home directory?
Any help would be appreciated, thank you.

If you want to run rsync every so often, you can use cron jobs, which can be configured to run a specific command at regular intervals. If you want to run rsync whenever there is an update or modification, you can use lsyncd. Check this article about using lsyncd.
Update:
As links might become outdated, I will add this brief example (feel free to adapt it to whatever works best for you):
First, create an SSH key on the source machine and then add the public key to the ~/.ssh/authorized_keys file on the destination machine.
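For example (a minimal sketch; the destination IP is taken from the question, and ssh-copy-id is assumed to be available on the source machine):
ssh-keygen -t rsa -b 4096
ssh-copy-id root@123.0.0.99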
On the source machine, update the file ~/.ssh/config with the following content:
# ~/.ssh/config
...
Host my.remote.server
    HostName 123.0.0.99
    User root
    Port 22
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
...
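At this point it is worth confirming that key-based login works without a password prompt, since lsyncd depends on it:
ssh my.remote.server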
Then configure lsyncd with the following and restart the lsyncd service:
# lsyncd.conf
...
sync {
    default.rsyncssh,
    source = "/home/stuff",
    host = "my.remote.server",
    targetdir = "/home/stuff",
    excludeFrom = "/etc/lsyncd/lsyncd.exclude",
    rsync = {
        archive = true,
    },
}
...
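On CentOS 7, assuming lsyncd was installed as a system service (e.g. from EPEL), restarting it would look like:
systemctl restart lsyncd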

You can set up an hourly cron job to do this.
rsync itself is quite efficient, in that it only transfers changes.
You can find more info about cron here: cron
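For example, an entry like the following in the root crontab (added with crontab -e) would run the sync from the question at the top of every hour; adjust the paths to your setup:
0 * * * * rsync -r /home/stuff root@123.0.0.99:/home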

Related

cPanel cron job, no input file specified?

I've just set up my first cron job to run a stock script every night.
Running it manually works fine.
It's stored in /admin/stock_update.php
The command I'm running is /usr/bin/php -q /admin/stock_update.php
But I'm getting emails saying no input file is specified?
Any ideas?
Cheers
Network services almost never expose actual paths on the server's disk, and even if they did, it isn't behaviour you can rely on. So the fact that your file is located at /admin/stock_update.php on the FTP server says little about its actual location on disk, which is what local command-line utilities expect.
In PHP, you can find the on-disk path of the current file with the __FILE__ magic constant. You can create a test script:
<?php
var_dump(__FILE__);
... upload it to the same FTP location and execute it through the web server. If that's not an option because files in your FTP account are not visible from the web, you can run the file from cron and check the email.
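Once you know the real path, update the cron command to point at it. For example, assuming a hypothetical cPanel home directory (substitute whatever __FILE__ reported):
/usr/bin/php -q /home/cpaneluser/public_html/admin/stock_update.php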
Do you have the CloudLinux kernel installed on that server, with the CageFS filesystem? If yes, try running this:
cagefsctl -w cpaneluser; cagefsctl -m cpaneluser
Then try running the cron job again.

How do I restore CronTab to my WebMin system

I don't know if this was an effect of the Shellshock attack my server fell victim to (or another attack that worked), but it basically enabled the hacker to overwrite my SSH config file when the server rebooted.
This new file used wget to load in a file from a website, then another library of hack functions which I guess he then used to run hacks/DoS from my server. I caught it pretty fast and ideally want to upgrade, but because I have cancer and just had a big operation it is too much effort at the moment.
Therefore I did a lot of housekeeping: changing passwords, removing shell access, reverting back to Dash, replacing the default shell for root and any other users with symbolic links to another folder, restoring the SSH config file, and removing CGI functionality from config files, e.g.:
ScriptAlias /cgi-bin/ /home/searchmysite/cgi-bin/
...
allow from all
I removed AWStats and Webalizer for all Virtualmin sites.
I already had DenyHosts and Fail2Ban installed.
I also blocked in/outbound traffic to the IPs of the sites he was getting the files from.
However, it seems that since this change I have lost the visual cron manager in Webmin.
When I go to the menu item "Scheduled Cron Jobs", it says, "The command crontab for managing user Cron configurations was not found. Maybe Cron is not installed on this system?"
However I can see in the file system it exists.
When I run crontab -l or crontab -e I get "Permission Denied"
whoami shows "root"
I did think at the time of the hack this was all related and he had used SSH and a Cron job to get his hack running.
What I want to know is how I can get the CronTab manager back.
All the cron jobs are still running, such as importing feeds into my websites, sending scheduled emails and so on; what I don't know is how to resolve this without a full rebuild.
If I had the time and energy I would do that but I am totally drained and before this hack everything was just running smoothly and my websites which bring me in money were working fine.
They are currently still working fine, I regularly check my logs for IPs that look odd, I have strong .htaccess rules against XSS/SQL injection/path traversal/file hacks, and I ban whole countries via Cloudflare, which the sites sit behind. So I don't "think" the machine is compromised at the moment, even if it is old - could be wrong though!
Details of the box:
Operating system: Debian Linux 5.0
Virtualmin version: 3.98.gpl GPL
Webmin version: 1.610
Kernel and CPU: Linux 2.6.32.9-rscloud on x86_64
So if anyone can help me get my crontab manager back that would be great.
Thanks
1) Check whether chattr exists; if not, download a new copy of it.
2) Type whereis crontab, then chattr -isa /path/to/crontab (usually /usr/bin/crontab), then chmod crontab back to its original settings.
3) Navigate to /var/spool/ and run:
chattr -isa cron
cd cron
chattr -isa crontabs
4) Remove the rogue cron entry in /etc/cron.weekly.
Look in /etc/cron.weekly for any new entries you don't recognise.
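To see which flags were actually set before clearing them, lsattr is useful. A sketch assuming the usual Debian paths (compare the final permissions against a known-good machine; on Debian the crontab binary is normally setgid to the crontab group):
lsattr /usr/bin/crontab
lsattr -d /var/spool/cron /var/spool/cron/crontabs
chattr -isa /usr/bin/crontab
chown root:crontab /usr/bin/crontab
chmod 2755 /usr/bin/crontab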

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to create an SSH tunnel to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and choose a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privileges on the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right.
5. Now back to your development server
Navigate to the directory where you'd like to work, and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with Digital Ocean. But they shouldn't differ from using Google CE.
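If you want to confirm the key is accepted before fetching, a quick check (assuming the same GitLab host as in step 2) is:
ssh -T git@my-gitlab-server.com
GitLab should answer with a short welcome message rather than asking for the git user's password.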
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
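Note that semanage fcontext only records the labelling rule; to apply it to the already-existing file you would typically also run restorecon:
sudo restorecon -v "/var/opt/gitlab/.ssh/authorized_keys" # Apply the recorded SELinux context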
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
In my situation the git user wasn't set up completely. If you see messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".

Using rsync to keep two servers in sync

I have two AWS EC2 instances between which I'm trying to implement a two-way sync. So if a file or folder on server1 is created or updated, it should be synced to server2. If it's a new folder, it should be created on the other server. The problem I'm having is that I can't get rsync to create the folders on the 'local' server.
For example, server 1: /rootdir/1/2/3/4, where directories 3 and 4 do not exist on server2. When I run rsync on server2 I want those new directories to be created.
Here is the code I'm trying to use, running from Server2:
$ sudo rsync -avzP -e "ssh -i /home/ec2-user/.ssh/Key.pem" ec2-user@IPADDRESS_OF_SERVER1:/rootdir/1/2/ /rootdir/1/2
I'm not getting an error but the directories aren't being copied.
I also tried -r but it made no difference.
I finally figured out what I was doing wrong. The servers were configured with a non-standard port and I needed to tell rsync which port to use.
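For reference, the port can be passed through the ssh transport, along these lines (a sketch using 2222 as a placeholder port):
sudo rsync -avzP -e "ssh -i /home/ec2-user/.ssh/Key.pem -p 2222" ec2-user@IPADDRESS_OF_SERVER1:/rootdir/1/2/ /rootdir/1/2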

Upload files from Linux VPS to web host

I want to somehow automatically upload files every 5 minutes. I want to upload/transfer the files from my Linux VPS to my web host.
What I'm trying to do is upload some log files generated on my VPS to my web host so administrators can access them behind an .htaccess file.
Use wput along with cron to FTP files to your host:
wput [options] [file]... ftp://[username[:password]@]hostname[:port][/[path/][file]]
You will probably have to install the tool, as it's not included by default (at least it hasn't been on most of my installs).
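Combined with cron, a crontab entry could look like this (a sketch with placeholder credentials, host and paths; note that credentials stored in a crontab are visible to anyone who can read it):
*/5 * * * * wput /var/log/myapp/*.log ftp://user:password@example.com/logs/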
You'll want to set up a cron job for this. The Wikipedia page for this has a nice overview of how the crontab file is laid out. However, you should check your distribution's documentation for better information (they could be using a different version or a completely different cron daemon).
The line you'd add to the system crontab (/etc/crontab) would look something like this:
*/5 * * * * <user to run command as> <your command>
(If you use a per-user crontab edited with crontab -e, omit the user field.)
See also: http://www.unixgeeks.org/security/newbie/unix/cron-1.html
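For example, assuming a hypothetical upload script at /usr/local/bin/upload-logs.sh, the system crontab entry could be:
*/5 * * * * root /usr/local/bin/upload-logs.sh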
Hopefully your web host provides SCP or FTP servers to allow you to copy files over. How do you transfer files when you're uploading your web site files?
If it's ftp, use the ftp command:
ftp -u ftp://user:password@host/destination_folder/ sourcefile.txt
If it's scp, use the scp command:
scp foobar.txt username#host:/some/remote/directory
