Cron to read from multiple crontab files for each system - linux

I have a large landscape of servers, some of which fall into logical groupings (clusters). I'd like to find a way to manage crontabs for specific clusters; specifically, I'd like a centralized location where I can edit their crontabs all at once.
Currently I access each server and edit its crontab manually.
Thanks

Maybe you should consider a tool like Ansible, which lets you manage the cron entries or cron files locally and distribute them to your servers.
Ansible is quite easy to learn, and all you need is Ansible, Python, and a working SSH connection.
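As a minimal sketch of what that looks like: if your inventory groups mirror your clusters (the group cluster-a, the job name, and the script path below are placeholders), Ansible's built-in cron module can push the same entry to every host in a cluster with one ad-hoc command:
# /etc/ansible/hosts -- inventory groups model the clusters:
#   [cluster-a]
#   web01.example.com
#   web02.example.com

# Add (or update, keyed on the name) a crontab entry on every host
# in cluster-a. Add "user=..." or "cron_file=..." to manage a file
# under /etc/cron.d instead of a user crontab.
ansible cluster-a -m cron -a \
    "name='rotate app logs' minute=0 hour=3 job='/usr/local/bin/rotate-logs.sh'"
Editing the entry in one place and re-running the command then updates every server in the cluster.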

Related

Should I configure my EC2 using user_data or Ansible?

When launching EC2 instances with Terraform (or CloudFormation), we can configure them by putting some scripts in user_data/remote-exec. Alternatively, we can configure them with Ansible/Chef, etc. What is the difference between configuring EC2 via user_data/remote-exec and doing it with Ansible/Chef? When should I use the former, and when the latter? (I know Ansible/Chef are idempotent.)
In my case, the EC2 instances were originally launched manually and then configured by hand with a lot of Linux commands, and those commands weren't written by me. Now I'm the person automating the whole setup with Terraform and configuring the instances. Using user_data/remote-exec is straightforward: I just need to put all the existing Linux commands into some scripts with minor changes. And if the configuration produced by my script isn't right, I can at least quickly check whether I missed some commands by comparing my script against the original ones. But if I use Ansible/Chef, I have to rewrite all the steps in a different language, and if the configuration isn't what I expected, it's hard to figure out which steps are wrong, because Ansible/Chef syntax and plain Linux commands are totally different.
My question is: in my case, should I use Ansible/Chef or user_data/remote-exec for configuration?
User data is good for the initial configuration of a system. If you need longer-term maintenance, configuration management software like Ansible/Chef/Salt/Puppet is a great option.
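For illustration, user data is usually just a shell script that runs once, as root, at first boot; a minimal sketch, assuming an Amazon Linux 2-style AMI (the package is a placeholder):
#!/bin/bash
# Runs once at first boot; good for one-time setup, not for
# ongoing maintenance of a long-lived server.
yum -y update
yum -y install nginx
systemctl enable --now nginx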
Packer can be used for immutable infrastructure, i.e. infrastructure that doesn't change after creation. You run all the scripts and installs while building the image, so the system is ready as soon as it boots; this is also faster because you don't have to wait for user data to run.
A few questions you have to ask as well: how often are you going to patch these systems? Are you going to update them in place or replace them with new ones? Ansible is great for ongoing configuration, since it's just YAML files.
Blue/green deployments generally replace servers with all-new ones and gradually move traffic over to the new servers.
Those are some more things to consider with your infrastructure as code.

Moving files from multiple Linux servers to a central Windows storage server

I have multiple Linux servers with limited storage space that create very big daily logs. I need to keep these logs, but I can't afford to keep them on the servers for very long before the disks fill up. The plan is to move them to a central Windows server that is mirrored.
I'm looking for suggestions on the best way to do this. What I've considered so far are rsync and writing a script in Python or something similar.
The ideal method would be for the files to be copied from the Linux servers to the Windows server, verified for size/integrity, and then deleted from the Linux servers. Can rsync do that? If not, can anyone suggest a better method?
You may want to look into using rsyslog on the Linux servers to send logs elsewhere. I don't believe you can configure it to delete logged lines with a verification step, and I'm not sure you'd want to either. Instead, you might be best off with an aggressive logrotate schedule plus rsyslog.
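That said, rsync itself can do the verify-then-delete step asked about above: with --remove-source-files it removes each source file only after it has been transferred successfully (rsync checksums every file it sends). A rough sketch, assuming the Windows server exposes SSH or an rsync daemon (for example via cwRsync; host and paths are placeholders):
# Move rotated logs to the central server; each file is deleted
# locally only after a successful, verified transfer.
rsync -av --remove-source-files /var/log/myapp/*.log.1 \
    backup@winserver.example.com:/archive/$(hostname)/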

How to write a shell script to run scripts on several remote machines without ssh?

Can anyone please tell me how to write a bash shell script that executes another script on several remote machines without ssh?
The scenario: I have a couple of scripts that I need to run on 100 Amazon EC2 instances. The naive approach is to write a script that scps both source scripts to all the instances and then sshes into each instance to run them. Is there a better way of doing this?
Thanks in advance.
If you just want to do stuff in parallel, you can use Parallel SSH or Cluster SSH. If you really don't want to use SSH, you can install a task-queue system like Celery. You could even go old school and have a cron job that periodically checks a location in S3 and, if the key exists, downloads the file and runs it, though you have to be careful to run it only once. You can also use tools like Puppet and Chef if you're generally trying to manage a bunch of machines.
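As a sketch of the Parallel SSH route (the hostfile and script name are placeholders; the binaries are called pssh/pscp or parallel-ssh/parallel-scp depending on how your distro packages them):
# Copy the script to every instance in parallel...
parallel-scp -h hosts.txt deploy.sh /tmp/deploy.sh
# ...then run it everywhere, 20 hosts at a time, with no timeout,
# streaming each host's output back inline.
parallel-ssh -h hosts.txt -p 20 -t 0 -i 'bash /tmp/deploy.sh'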
Another option is rsh, but be careful: it has security implications. See rsh vs. ssh.

Linux: Uploading files to a live server - How to automate process?

I'm developing on my local machine (Apache 2, PHP, MySQL). When I want to upload files to my live server (nginx, MySQL, PHP5-FPM), I first back up my www folder, dump the databases, and scp everything to my server (which is tedious, because it's protected with opiekey). Then I log in, copy the files from my home directory on the server to my www directory, and, if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I have to research what went wrong.
Today I changed only one file and had to go through the entire process just for that one file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very, very new to all this, so bear with me! For example, I'm always copying files into my home directory on the server because scp doesn't seem to be able to copy them directly into the /var/www folder?!
There are many utilities that will do this for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. Both let you script local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For more than about 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you; whatever works for you is fine. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken; I want to have now what I had 5 minutes ago"). Any form of versioning is better than none (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want: set up a cron job that runs "svn update" every few minutes (or hg pull -u if you use Mercurial; similar with git). Another option is Dropbox (we use it for our web servers sometimes); it's very easy to set up and to share with non-developers (like UI designers)...
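As a sketch, the crontab entry for that approach is a one-liner (the path and interval are placeholders):
# Pull the latest code every 5 minutes; -q keeps cron from
# mailing output on every run.
*/5 * * * * cd /var/www/mysite && svn update -q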
rsync sends only the changes between your local machine and the remote machine, so it is a good alternative to scp. You can look into how to set it up to do what you need.
You can't copy to /var/www because the credentials you're using for the copy session don't have write access to /var/www. Assuming you have root access, change the group on /var/www (or better yet, on a subdirectory) to your own group (chown/chgrp) and give that group write access (chmod g+w).
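Concretely, that might look like this (the group name is a placeholder):
# Run as root: give your group write access to the web root.
chgrp -R developers /var/www
chmod -R g+w /var/www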
rsync is fairly lightweight, so it should be simple to get going.
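A rough sketch of what a one-command deploy could then look like (paths and host are placeholders; add -n first for a dry run):
# Mirror the local www/ tree to the live server; --delete removes
# remote files that no longer exist locally.
rsync -avz --delete --exclude='.git' ./www/ you@liveserver:/var/www/mysite/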

Automated deployment of files to multiple Macs

We have a set of Mac machines (mostly PPC) that are used for running Java applications for experiments. The applications consist of folders with a bunch of jar files, some documentation, and some shell scripts.
I'd like to be able to push out new versions of our experiments to a directory on one Linux server, and then instruct the Macs to update their copies, or to retrieve an entire experiment if they don't yet have it.
../deployment/
../deployment/experiment1/
../deployment/experiment2/
and so on
I'd like to come up with a way to automate the update process. The Macs are not always on, and their IP addresses are assigned by DHCP, so the server (which has a domain name) can't contact them directly. I imagine I would need some sort of daemon running full-time on the Macs, polling the server every minute or so to find out whether an "experiments have been updated" announcement has been posted.
Can anyone think of an efficient way to manage this? Solutions can involve either existing Mac applications, or shell scripts that I can write.
You might have some success with a simple Subversion setup; if you have the dev tools on your farm of Macs, then they'll already have Subversion installed.
Your script is as simple as running svn up on the deployment directory as often as you want and checking your changes in to the Subversion server from your machine. You can do this without any special setup on the server.
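On the Mac side, a launchd job is the native way to run that svn up periodically on machines that aren't always on; a sketch, assuming the checkout lives in /Users/Shared/deployment (the label, paths, and interval are placeholders):
# Install a system-wide launchd job that runs "svn up" every
# 5 minutes (launchd is the preferred scheduler on 10.4+).
sudo tee /Library/LaunchDaemons/com.example.experiment-update.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>com.example.experiment-update</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/svn</string>
        <string>up</string>
        <string>/Users/Shared/deployment</string>
    </array>
    <key>StartInterval</key><integer>300</integer>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/com.example.experiment-update.plist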
If you don't care about history and a version control system seems too "heavy", the traditional Unix tool for this is called rsync, and there's lots of information on its website.
Perhaps you're looking for a solution that doesn't involve any polling; in that case, you could have a process running on each Mac that registers a local-network Bonjour service. DNS-SD libraries are probably available for your language of choice, and it's then a pretty simple matter to get a list of active machines. I wrote this Ruby script to find local machines running SSH:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'

# Browse for SSH services advertised over Bonjour and print each
# machine's name as replies come in.
handle = DNSSD.browse('_ssh._tcp') do |reply|
  puts "#{reply.name}.#{reply.domain}"
end
sleep 1       # give the browse callback a moment to collect replies
handle.stop
You can use AppleScript remotely if you turn on Remote Apple Events on the client machines. As an example, you can control programs like iTunes remotely.
I'd suggest that you put an update script on your remote machines (AppleScript or otherwise) and then use remote AppleScript to trigger running your update script as needed.
If you update often, then Jim Puls's idea is a great one. If you'd rather have direct control over when the machines start looking for an update, then remote AppleScript is the simplest solution I can think of.
