How to use multiple different puppet masters from one puppet agent?

We need one puppet agent to contact several different puppet masters.
The reason: different groups create different and independent sets of manifests.
Possible groups and their tasks:
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own puppet master - the data (manifests and associated data) should be strictly separated. If possible, one group should not even see or have access to the manifests of the others (we are using MAC on the puppet agent OSes).
Thoughts and ideas that all failed:
using (only) Hiera: not as flexible as needed - we need different manifests, not just different data.
r10k: supports more than one environment, but each environment can only access one set of manifests.
multiple identical puppet servers behind e.g. DNS round robin: this is the other way round - we need different puppet masters, not copies of the same one.
Some ways that might be possible but...
running multiple instances of the puppet agent. That 'feels' strange. Advantage: access rights can be limited as needed (e.g. the application puppet agent can run under the application user).
patching puppet so that it can handle more than one puppet master. Disadvantage: likely a lot of work.
using other mechanisms to split responsibility. Example: use different git repositories and create a single puppet master that pulls all the repositories and serves the combined manifests.
My questions:
Is there a straightforward way to implement this requirement with puppet?
If not, is there some best practice how to do this?

While I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master (and utilizing environments would be effectively the same situation, since different masters just provide different sets of modules/data), this can be achieved with a standard multi-master infrastructure: one CA master for cert signing, plus multiple compile masters whose certs are signed by that CA master and which forward certificate traffic to it, each configured however you need. You then end up having to specify which master you want to check in to on each run (via a cron job or some other approach), and one check-in can change settings set by another, which rather undermines the hardening/security concept.
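For illustration, a minimal agent-side puppet.conf for such a setup might look like this (the hostnames are made up): server selects the compile master, while ca_server keeps all certificate traffic on the shared CA.

# /etc/puppetlabs/puppet/puppet.conf on the agent
[main]
ca_server = puppet-ca.example.com
[agent]
server = compile-ops.example.com

Checking in against a different master is then a per-run override, e.g. puppet agent -t --server compile-security.example.com.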
I would urge you to think more deeply about how the groups can collaborate (e.g. separate git repos for each division's hiera data and modules, with access control) so that a central master can serve your needs, with access to that master being the only way to get data/modules from everywhere.
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet Inc. may even be able to provide consulting to help you get it right.
There are likely other approaches too, just fyi.

I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can instantly test manifest changes - there's no need to commit, push and r10k deploy an environment as there is when you're using directory environments and a single (remote) puppet server.
I've found the best way to do that is to vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all use the apt package {} resource etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
--server=puppet.dev.example.com \
--confdir=/etc/puppetlabs/puppet-dev # Runs against dev puppet server
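For reference, a minimal sketch of what that alternate confdir might hold (the dev hostname is an assumption) - with its own puppet.conf, the --server flag above becomes optional:

# /etc/puppetlabs/puppet-dev/puppet.conf
[main]
server = puppet.dev.example.com

Since each confdir gets its own ssldir underneath it, the agent keeps a separate certificate for each server, which avoids the CA verification problem mentioned above.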

Trigger puppet run on update of manifest / facts

I'm working on a tool which manages WordPress instances using puppet. The flow is the following: the user adds the data for a new WordPress installation in the web interface, and the web interface is then supposed to send a message to the puppet master telling it to deploy to the selected machine.
Currently the setup is done via a manifest file which contains the declarations of all WordPress instances, and that is applied manually via puppet apply on the puppet agent. This brings me to my two questions:
Are manifests the correct way of doing this? If so, is it possible to apply them from the puppet master to a specific node instead of going to the agent?
Is it possible to automatically have a puppet run triggered once the list of instances is altered?
To answer your first question: yes, there's absolutely a way of doing this via a puppet master. What you have at the moment is a masterless setup, which assumes you're distributing your configuration with some kind of version control (like git) or a manual process. This is a totally legitimate way of doing things if you don't want a centralized master.
If you want to use a master, you'll need to drop your manifest into the $modulepath of your master (it varies depending on your version; you can find it with puppet config print modulepath on your master) and then point the puppet agent at the master.
If you want to go down the master route, I'd suggest following the puppet documentation which will help you get started.
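To make the manifest side concrete, here is a minimal sketch of a site.pp on the master (wordpress::instance is a hypothetical defined type wrapping your current manifest logic, and the hostnames are made up):

# site.pp in the master's main manifest directory
node 'web01.example.com' {
  wordpress::instance { 'blog.example.com':
    db_name => 'wp_blog',
    db_user => 'wp',
  }
}

The agent on web01.example.com then applies this on its next run.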
The second question brings me on to a philosophical argument of 'is this really what you want to do?'
Puppet traditionally (in my opinion) is a declarative config management tool that is designed to make your systems look a certain way. You write code to say 'this is how I want it to look' and Puppet will converge the system to make it look that way. What you're looking to do is more of an orchestration task (i.e. when X happens, do Y). There are ways of doing this with Puppet, such as using mcollective (driven by a webhook) to trigger a puppet run, but I think there are better tools for the job.
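For example, assuming the mcollective-puppet-agent plugin is installed, a webhook handler could kick off a run on a single node (hostname made up) with:

$ mco puppet runonce -I web01.example.com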
I'd suggest looking at Ansible, SaltStack or Chef's knife tool to do deploys like this.

Opinion: Multiple puppet master environments

I hope you guys will permit me an opinion question here as well. If not, please let me know and I'll delete the post.
Due to policy I need to have two separate puppet masters, one for dev and one for pre-prod, prod etc.
The best practice route is to use r10k to manage these environments, but I am wary of putting something into production that I cannot comfortably support - so that is the goal I am working towards, but I need an interim solution.
What would the drawbacks and concerns be of implementing the following:
puppetmaster1 - local git repo, contains all the manifests and hieradata for all environments.
puppetmaster_prod - Client of puppetmaster1; its manifest and hieradata directories are served, enforced and managed by puppetmaster1, which uses the file resource to manage those directories recursively (see the sketch below). This will manage all non-dev servers.
puppetmaster_dev - Similar to _prod; this will receive a recursive directory copy from puppetmaster1 enforcing modules, manifests etc.
All actual servers then connect to the appropriate puppetmaster_[prod|dev].
This provides me with source control as well as a single backup point and central configuration point.
It also satisfies the requirement of complete separation of dev and prod variables such as tokens and passwords. Each 'slave' puppetmaster will only contain the manifests and variables it manages, enforced by puppetmaster1.
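A minimal sketch of the enforcing resource on puppetmaster1 (the paths and module name are assumptions):

# declared for the puppetmaster_prod node on puppetmaster1
file { '/etc/puppetlabs/code/environments/production':
  ensure  => directory,
  recurse => true,
  purge   => true,
  source  => 'puppet:///modules/master_config/prod_environment',
}

Setting purge => true makes the copy authoritative: anything added locally on the slave master is removed on the next run.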
This will be a stop-gap only until I am comfortable with r10k, its installation and usage.

puppet configuration help needed

I need your help understanding the best implementation approach for the following requirement:
Suppose my puppet master's name is server.example.com, and I need to configure 500 puppet agent nodes to contact it. One way is to add server=server.example.com to puppet.conf on all the agent nodes; another is to run puppet agent --test --server server.example.com on every node. But both have to be performed either manually or via some kind of automation. Is there a better way?
The second option is to create a CNAME named 'puppet' pointing at the puppet master, so that all agent nodes find it automatically. But if I have multiple puppet masters in the same domain, how do I manage that?
I would highly appreciate it if someone could shed some light on the best practice for achieving this.
Thanks,
Sanjiv
The best practice is to take full advantage of puppet automation by adding server=server.example.com (the address of the master) to each agent's puppet.conf. Since you are dealing with 500 nodes, a manual approach is not encouraged.
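For reference, the corresponding puppet.conf entry on each agent is:

[main]
server = server.example.com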
By default puppet agents check in with the master every 30 minutes. If you want to force agents to check in sooner than that, use parallel ssh or a similar tool to invoke puppet agent --test on them.
If you are considering multiple puppet masters, then you need to ensure that DNS or the proxy server is configured properly in the network and points to the right puppet master at any given time.
This might be helpful: https://docs.puppetlabs.com/guides/scaling_multiple_masters.html
You can manage each client's puppet.conf with a template, where the server value comes from a puppet variable or is read from Hiera. The new server name then propagates to your clients on their next puppet run.
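A hedged sketch of that idea, using the ini_setting resource from the puppetlabs-inifile module (the Hiera key name is an assumption):

ini_setting { 'puppet agent server':
  ensure  => present,
  path    => '/etc/puppetlabs/puppet/puppet.conf',
  section => 'main',
  setting => 'server',
  value   => hiera('puppet_master_address'),
}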

Puppet agent mass deployment

Is there a built-in way to mass deploy the Puppet agent on hundreds of nodes, in an unattended, automated way? (providing user/pass/cert.)
There is no built-in way to do so. But you can always use kickstart/preseed to deploy the puppet agent as part of OS provisioning and then hand the host over to puppet to manage.
Alternatively, you can write a custom shell script to deploy puppet agents on hundreds of machines; I personally use this method to manage puppet. For reference, here is the script.
Also, you may be interested in project Razor, which automatically deploys puppet as part of bare-metal provisioning and hands the node over to puppet for configuration management.
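A minimal sketch of the kickstart idea (the package name, paths and master hostname are assumptions; adjust for your distribution):

%post
yum -y install puppet
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF
chkconfig puppet on
%end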
Basically the only thing you need to do is to install the Puppet Agent on those machines. I assume that you don't install software packages manually for hundreds of nodes, right?
Once you have installed the agent, it will automatically find the puppet master (if puppet.yourdomain.com points to that host) and send a certificate request to the master, which you then need to sign. You can also use Puppet's autosign feature.
Furthermore, Puppet Enterprise and The Foreman are based on Puppet and come with additional provisioning features.
I suggest you use parallel SSH. There are plenty of flavours; I prefer clush, see https://github.com/cea-hpc/clustershell/wiki/clush
You need to create your /etc/clustershell/groups file with groups, e.g.:
all: node[1-2000]
Then you can install the puppet on all the nodes easily with something like this:
clush -bw @all yum -y install puppet

automated deployment on production with puppet

I would like to know how automated deployment to production works with puppet.
Do I need a puppet slave (agent) on my production server? If that's the case, is it insecure, and what rights does puppet get with it?
A use case could be fetching a package from a repository manager and then deploying it to the production server. What are the main steps along this way with puppet?
Puppet can run in solo (masterless) mode, where you apply a set of configurations from a manifest file on the host on which you run it, as long as puppet (the client/agent) is already installed there.
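In solo mode, a run is a single command against a local manifest (the path is an assumption):

$ sudo puppet apply /etc/puppet/manifests/site.pp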
You can also run puppet in client-server mode, where an agent runs on your production server and obtains its configuration from a puppet server (or puppet master).
If you run in client-server mode, how do you ensure security?
Well, in client-server mode, you pre-register a client/agent with a server you nominate, and the two exchange SSL certificates before any actions can be applied on that agent. You would then (on your puppet server or master) associate a set of actions or manifests with the production server running the agent. I suppose that provides sufficient security, assuming you have already taken care of standard OS security for both systems in the first instance.
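The registration step looks roughly like this (the hostnames are made up; on Puppet 6+ you would use puppetserver ca sign instead):

$ sudo puppet agent -t --server puppet.example.com   # on the agent: request a certificate
$ sudo puppet cert sign prod01.example.com           # on the master: sign the pending request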
Additional security can also be provided by the puppet file server, as suggested in the link bagheera posted. If you are even more paranoid than that, you could consider using librarian-puppet with a Puppetfile that is assembled and used at run time.
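For reference, a minimal librarian-puppet Puppetfile might look like this (the module choices and versions are just examples):

forge 'https://forgeapi.puppetlabs.com'

mod 'puppetlabs/stdlib', '4.24.0'
mod 'puppetlabs/apache', '1.11.0'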
In either case, the bigger challenge for you is ensuring that the set of instructions (or manifests) applied has undergone testing (on a test or staging server) before being applied to a production system.
So you need to be sure of what you are doing when you start applying puppet manifests to production servers. I would not recommend just downloading puppet modules and using them without a decent insight into what you are doing and a clear understanding of what each module you intend to use does.
Puppet Labs has great introductory documentation for puppet, and that would be an excellent place to start learning more about it. A good book would also be useful.
