Automated deployment on production with Puppet

I would like to know how automated deployment to production works with puppet.
Do I need a Puppet agent (slave) on my production server? If that's the case, is that insecure, and what rights does Puppet get with that?
A use-case could be to get a package from a repository manager and then deploy it to the production server. What are the main steps along the way with Puppet?

Puppet can run in a standalone (masterless) mode, where you apply a set of configurations from a manifest file on the host on which you run it, as long as Puppet (the client/agent) is already installed there.
You can also run Puppet in a client-server mode, where an agent runs on your production server and obtains configuration details from a Puppet server (or Puppet master).
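For example (the hostname and path are placeholders), a standalone run and an agent run against a master look roughly like this:
sudo puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp    # standalone: apply a local manifest directly
sudo puppet agent --test --server=puppetmaster.example.com    # client-server: the agent pulls its catalog from the master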
If you run in client-server mode, how do you ensure security?
Well, in client-server mode, you pre-register a client/agent with a server you nominate, and the two exchange SSL certificates before any actions can be applied on that agent. Again, you would have to (on your Puppet server or master) associate a set of actions or manifests with the production server running the agent. I suppose that provides sufficient security, assuming you already took care of standard OS security for both systems in the first instance.
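On recent Puppet releases, that certificate exchange is completed roughly as follows (the commands differ between Puppet versions, so treat this as a sketch; hostnames are placeholders):
sudo puppet agent --test --server=puppetmaster.example.com    # on the agent: generate and submit a certificate request
sudo puppetserver ca list    # on the master: show pending certificate requests (Puppet 6+)
sudo puppetserver ca sign --certname production01.example.com    # on the master: sign the agent's certificate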
Also, additional security can be provided by the Puppet file server, as described in the link posted by bagheera. If you are even more paranoid than that, then you would need to consider using librarian-puppet with a Puppetfile that is assembled and used at run time.
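For illustration (the module names and versions are only examples), a Puppetfile pins exactly which modules and versions get assembled at run time:
forge 'https://forge.puppet.com'
mod 'puppetlabs/stdlib', '6.6.0'
mod 'puppetlabs/apache', '5.10.0'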
In either case, the bigger challenge for you is ensuring that the set of instructions (or manifests) being applied has undergone testing (on a test or staging server) before being applied to a production system.
So, you need to be sure what you are doing when you start trying to apply puppet manifests to production servers. I would not recommend just downloading puppet modules and using them without a decent insight into what you are doing and a clear understanding of what each module you intend to use does.
Puppetlabs have great introductory documentation for using Puppet, and that would be an excellent place to start learning more about it. A good book would also be useful.

Related

How to run Puppet Forge modules on a Linux Ubuntu machine

I'm new to Puppet. I want to install packages or software on my new Linux machine, which has Ubuntu installed. I have gone through the Puppet Forge modules on their portal.
There are plenty of modules available, but I don't understand how to run them.
It looks like all Puppet Forge modules are written in the Puppet language. I guess we need to install Puppet on the Linux machine first.
I came to know that there is a server and a client: the Puppet master and the Puppet agent. Do we need to install both on my Linux machine to run Puppet Forge scripts?
How do I install Puppet on a Linux Ubuntu machine, and where do Puppet Forge module scripts run: on the master or the agent?
Do we need two Linux machines, one each for the Puppet server and client?
Puppet is targeted at managing multi-computer installations. It can be used on an isolated machine (you would install both the master and the agent on that machine), but you are likely to make more work for yourself that way, not less, especially given that you have no prior experience with Puppet.
It looks like all Puppet Forge modules are written in the Puppet language. I guess we need to install Puppet on the Linux machine first.
Pedantically, the Puppet language is not a scripting language. But yes, Puppet modules are written primarily in Puppet's domain-specific language. You need Puppet to use them.
I came to know that there is a server and a client: the Puppet master and the Puppet agent. Do we need to install both on my Linux machine to run Puppet Forge scripts?
Unless you want to set up a second machine for the master to run on, yes, you would need to install both the master and the agent on your machine. Puppet used to support a direct-apply mode, but that is no longer an option.
How do I install Puppet on a Linux Ubuntu machine, and where do Puppet Forge module scripts run: on the master or the agent?
Puppet has extensive online documentation. The section on installing Puppet is here: https://puppet.com/docs/puppet/latest/installing_and_upgrading.html.
Note also that installing the software is not all you would need to do. Puppet modules are not programs. They are somewhat like subroutines. You would also need at least to write some Puppet code of your own to specify just how (using the modules of your choice) you want Puppet to configure your machine.
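For illustration only (the module and parameter are assumptions), a minimal manifest that drives a Forge module might look like this:
# site.pp -- assumes the puppetlabs/apache module has been installed
node 'myhost.example.com' {
  class { 'apache':
    default_vhost => false,
  }
}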
Do we need two Linux machines, one each for the Puppet server and client?
No. You can run the agent on the machine that hosts the master. Many sites do that, in fact, but it is rare for that to be the only place where the agent runs.
Generally speaking, you need to have several machines under Puppet management to achieve a net win relative to managing your machines directly. It really doesn't sound to me like Puppet would be a good fit for you.
For your use case, it seems like using Puppet Bolt is the better option.
As stated by John Bollinger, Puppet has very good online documentation on their products, and it's no different with Bolt:
Installing Bolt on Ubuntu
Once Bolt is installed, you can use its built-in package task to manage packages on your machine, e.g. Apache, by running:
bolt task run package --targets localhost action=install name=apache2
(you can find more examples here)
But if you intend to use the Puppet Forge Apache module with Bolt, you can start by installing the module. This is a more advanced use case, though, as you'd probably have to write a plan or manifest to actually use the module's full potential, and you'd still have to deal with some limitations.
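As a rough sketch of that workflow (the module version and target name are just examples): add a line like mod 'puppetlabs/apache', '5.10.0' to the Bolt project's Puppetfile, then run:
bolt puppetfile install    # install the modules listed in the Puppetfile
bolt apply site.pp --targets webserver.example.com    # apply a manifest that uses the module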
As you're new to Puppet and Bolt, I'd recommend you start simple and also take this hands-on lab provided by PuppetLabs.
I hope that gets you going!

How to use multiple different puppet masters from one puppet agent?

There is a need for one Puppet agent to contact several different Puppet masters.
Reason: there are different groups that create different and independent sets of manifests.
Possible groups and their tasks
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own Puppet master - the data (manifests and associated data) should be strictly separated. If possible, one group should not even be able to see / access the manifests of the others (we are using MAC on the Puppet agent OSes).
Thoughts and ideas that all failed:
using (only) hiera: it is not as flexible as needed - there is a need for different manifests.
r10k: supports more than one environment, but each environment can only access one set of manifests.
multiple identical Puppet servers using e.g. DNS round robin: this is the other way round. We need different Puppet masters.
Some ways that might be possible but...
running multiple instances of the Puppet agent. That 'feels' strange. Advantage: the access rights can be limited as needed (e.g. the application Puppet agent can run under the application user).
patching Puppet so that it can handle more than one Puppet master. Disadvantage: might be some work.
using other mechanisms to split responsibility. Example: use different git-repositories. Create one puppet master. The puppet master pulls all the different repositories and serves the manifests.
My questions:
Is there a straightforward way to implement this requirement with Puppet?
If not, is there some best practice for how to do this?
While I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master (and utilizing environments would be effectively the same situation, since different masters would just provide different sets of modules/data), it can be achieved by implementing a standard multi-master infrastructure: one CA master for cert signing, plus multiple compile masters whose certs are signed by that CA master and which are configured to forward cert traffic to it, with each master configured to serve whatever you need. You then end up having to specify which master you want to check in to on each run (via a cronjob or some other approach), and you have the potential for one check-in to change settings set by another (somewhat defeating the hardening/security concept).
I would urge you to think more deeply about how to coordinate your various groups (git repos for each division's hiera data and modules, with access control) so that a central master can serve your needs (and access to that master would be the only way to get data/modules from everywhere).
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet inc. may even be able to do consultation to help you get it right.
There are likely other approaches too, just fyi.
I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can instantly test manifest changes - there's no requirement to commit, push, and r10k deploy an environment like there is if you're just using directory environments and a single (remote) puppet server.
I've found the best way to do that is to just vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all use apt package {} etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
--server=puppet.dev.example.com \
--confdir=/etc/puppetlabs/puppet-dev # Runs against dev puppet server

Trigger puppet run on update of manifest / facts

I'm working on a tool which manages WordPress instances using Puppet. The flow is as follows: the user adds the data for the new WordPress installation in the web interface, and then that web interface is supposed to send a message to the Puppet master telling it to deploy the instance to the selected machine.
Currently the setup is done via a manifest file which contains the declarations of all WordPress instances, and that is applied manually via puppet apply on the Puppet agent. This brings me to my two questions:
Are manifests the correct way of doing this? If so, is it possible to apply them from the puppet master to a specific node instead of going to the agent?
Is it possible to automatically have a puppet run triggered once the list of instances is altered?
To answer your first question: yes, there's absolutely a way of doing this via a puppet master. What you have at the moment is a masterless setup, which assumes you're distributing your configuration with some kind of version control (like git) or a manual process. This is a totally legitimate way of doing things if you don't want a centralized master.
If you want to use a master, you'll need to drop your manifest into the $modulepath of your master (it varies depending on your version; you can find it by running puppet config print modulepath on your master) and then point the puppet agent at the master.
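For instance (the class and host names are purely illustrative), the master's site.pp could assign your WordPress configuration to a node like this:
# site.pp on the master -- profile::wordpress is a hypothetical class
node 'wordpress01.example.com' {
  include profile::wordpress
}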
If you want to go down the master route, I'd suggest following the puppet documentation which will help you get started.
The second question brings me to a philosophical argument of 'is this really what you want to do?'
Puppet traditionally (in my opinion) is a declarative config management tool that is designed to make your systems look a certain way. You write code to say 'this is how I want it to look' and Puppet will converge the system to make it look that way. What you're looking to do is more of an orchestration task (i.e. when X happens, do Y). There are ways of doing this with Puppet, like using mcollective (to trigger a puppet run) driven by a webhook, but I think there are better tools for the job.
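If you do go the MCollective route, a webhook handler could trigger an immediate run on a specific node with something like the following (the node name is just an example, and note that MCollective has since been deprecated):
mco puppet runonce -I wordpress01.example.com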
I'd suggest looking at Ansible, SaltStack, or Chef's knife tool to do deploys like this.

Puppet agent mass deployment

Is there a built-in way to mass deploy the Puppet agent on hundreds of nodes, in an unattended, automated way? (providing user/pass/cert.)
There is no built-in way to do so. But you can always use kickstart/preseed to deploy the Puppet agent as part of OS provisioning and then hand the host over to Puppet for management.
Or, as an alternative, you can write a custom shell script to deploy Puppet agents on hundreds of machines; I personally use this method to manage Puppet. For reference, here is the script.
Also, you may be interested in Project Razor, which automatically deploys Puppet as part of bare-metal provisioning and hands the node over to Puppet for configuration management.
Basically the only thing you need to do is to install the Puppet Agent on those machines. I assume that you don't install software packages manually for hundreds of nodes, right?
Once you have installed the agent, it will automatically find the Puppet master (if puppet.yourdomain.com points to that host) and send a certificate request to the master, which you then need to sign. You can also use Puppet's autosign feature.
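For example (the domain is a placeholder), basic autosigning can be enabled by listing the hosts you trust in autosign.conf on the master:
# /etc/puppetlabs/puppet/autosign.conf
*.yourdomain.com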
Furthermore, Puppet Enterprise and The Foreman are based on Puppet, and they come with additional provisioning features.
I suggest that you use parallel SSH. There are plenty of flavours; I prefer clush, see https://github.com/cea-hpc/clustershell/wiki/clush
You need to create your /etc/clustershell/groups file with groups, e.g.:
all: node[1-2000]
Then you can install the puppet on all the nodes easily with something like this:
clush -bw @all yum -y install puppet

Is there really no easy way to test puppet scripts on a remote machine?

I'm experimenting with Puppet scripts for deployment.
I find the hardest part about the process of writing those scripts is iteratively testing them.
I don't want to run puppet apply on my local development machine; that's liable to screw stuff up. I have a clean-slate remote box where I want to apply it. I also don't see how a puppetmaster can help me; I might be using a puppetmaster at a later point for production deployments, but for now, I just want to get my code working.
So I put together a quick shell script that would rsync the different directories from my local puppet module path to /tmp on the remote machine, and then run puppet apply. This is terribly inconvenient. It's slow, especially if we're talking about a syntax error.
I think what I really want is something like a puppetd <-> puppetmaster connection, where puppetd on the remote machine receives an already compiled manifest. Just an ad-hoc one over an SSH connection, without having to actually set up a puppetmaster, deal with certificates, etc. Something like puppet apply user@host.
There seems to be nothing of the sort, but how do other people deal with this? The experience of working on a Puppet script is incredibly frustrating to me as it is.
I'd recommend using Vagrant. If you're not testing the puppet master setup, you can use the built-in Puppet provisioner integration.
Once you have everything set up, you can run vagrant provision or just run puppet apply on the Vagrant VM.
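A minimal Vagrantfile using the Puppet provisioner might look something like this (the box name and directory layout are just examples):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"            # example base box
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "manifests"       # directory containing your manifests
    puppet.manifest_file  = "default.pp"
    puppet.module_path    = "modules"         # directory containing your modules
  end
end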
Here's a related article you may find helpful as well.
I would also take a look at puppet rspec tests, using rspec-puppet and puppetlabs-spec-helper. The rspec-puppet-init will break puppet doc and Geppetto and maybe some other things due to the symlinks, and there are some issues with hiera, but the tests are easy to set up otherwise and work well, and can also be tied into Jenkins/Hudson.
I usually have two levels of testing for my Puppet scripts.
Unit tests for quick feedback: Written using rspec-puppet, these compile a Puppet catalog for the class/define/etc. being tested and make assertions about it. Run locally each time I make a minor change, and on the build server each time I check in. The tests run quickly (<10 seconds) and pick up syntax and dependency issues. (A small example spec follows after this list.)
Functional tests to make sure it really works: Written using Cucumber with the Aruba library. When I'm finished implementing a feature and the unit tests for it pass, these tests provision a VM (using Vagrant) with the appropriate Puppet manifest(s), log in, and make assertions about the VM's state. The tests themselves look something like:
Given I am SSHed into Vagrant box "webserver"
When I type "php --version"
Then the output should include "PHP 5.4.11"
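For the first level, a minimal rspec-puppet spec looks roughly like this (the class name wordpress and the package assertion are only placeholders):
# spec/classes/wordpress_spec.rb
require 'spec_helper'

describe 'wordpress' do
  it { is_expected.to compile.with_all_deps }
  it { is_expected.to contain_package('php') }
end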
Vagrant is the most useful environment for rapid infrastructure development that I've found. It will most closely (99%) mirror your production setup, and you can account for those tiny differences in Puppet so everything works as expected. It takes about 30 minutes to get going with it, and it will pay you back many times over in time saved messing around with file-copy scripts :)
If it's helpful to visualize, on my desktop I have 3 terminals side by side:
Terminal 1) Editing puppet manifests, classes, ruby code, etc
Terminal 2) Running 'vagrant provision' which simply does a puppet apply along with any facts you want to pass, etc.
Terminal 3) 'vagrant ssh' into the box so I can poke around as puppet is doing its work
Hope this helps!
Why don't you want to run a puppetmaster? It's created for exactly this situation.
If you absolutely cannot run a puppetmaster, then you would have to wrap your puppet calls in another script that first downloads the files (with curl or wget) and applies them after a successful download. Given that the puppetmaster is a fairly simple application to run, I don't see how not using it would be any better.
I stumbled across rump while looking at another question. If you're using git, it might be useful. There's a slide deck available.
From the README.md: "Rump helps you run Puppet locally against a Git checkout."
You may be interested in citac, a toolkit for automated testing of Puppet scripts. It is available on Github: https://github.com/citac/citac
Citac systematically executes your Puppet manifest in various configurations, imitating transient system faults, different resource execution orders, and more. The generated test reports inform you about issues with non-idempotent resources, convergence-related issues, etc.
The tool uses Docker containers for execution, hence your system remains untouched while testing. State changes are tracked during execution of the Puppet script, and detailed test reports are generated.
To get an idea of which bugs the tool is able to detect, a large-scale evaluation with more than 150 public Puppet scripts has been performed. The results are available here: http://citac.github.io/eval/
Please feel free to provide feedback, pull requests, etc. Happy testing!
