How to integrate a deployment automation tool into Puppet?

We are a mixed Linux/Windows shop that successfully adopted Puppet for configuration management a while ago. We'd like to drop Ansible in as our deployment orchestration tool (research suggests that Puppet doesn't do this very well), but we have questions about how to integrate the two products.
Today, Puppet is the source of truth with respect to environment info (which nodes belong to which groups, etc.). I want to avoid duplicating this information in Ansible. Are there any best practices with regard to sharing environment information between the two products?

One way to reduce the amount of duplicated state between the systems is to use Ansible's "Dynamic Inventory" support. Instead of defining your hosts/groups in a text file, you use a script that pulls the same data from somewhere else. This could be PuppetDB, Foreman, etc., and will depend on your environment.
Writing a new script is also pretty simple: it just needs to be an executable (bash/python/ruby/etc.) that returns JSON in a specific format, as sketched below.
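As a rough illustration, here is a minimal dynamic inventory sketch that groups nodes by a fact pulled from PuppetDB's v4 query API. The PuppetDB URL and the choice of a "role" fact for grouping are assumptions about your site, not anything Ansible or Puppet mandates:

    #!/usr/bin/env python3
    # Hypothetical Ansible dynamic inventory backed by PuppetDB's v4 query API.
    # PUPPETDB_URL and the "role" grouping fact are site-specific assumptions.
    import json
    import sys
    import urllib.request

    PUPPETDB_URL = "http://puppetdb.example.com:8080"  # assumption: your PuppetDB host

    def query(endpoint):
        # PuppetDB answers plain GET requests with JSON, so no client library is needed.
        with urllib.request.urlopen(PUPPETDB_URL + endpoint) as resp:
            return json.load(resp)

    def build_inventory():
        # Ansible expects {"groupname": {"hosts": [...]}, "_meta": {"hostvars": {...}}}.
        inventory = {"_meta": {"hostvars": {}}}
        for fact in query("/pdb/query/v4/facts/role"):
            group = inventory.setdefault(fact["value"], {"hosts": []})
            group["hosts"].append(fact["certname"])
        return inventory

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "--host":
            # With _meta populated, Ansible never calls --host, but answer anyway.
            print(json.dumps({}))
        else:
            print(json.dumps(build_inventory()))

Make the script executable and point Ansible at it (e.g. ansible -i puppetdb_inventory.py all -m ping, with a hypothetical filename); the group definitions then live only where Puppet already maintains them.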
Lastly, it is possible to roll out new releases with Puppet, but it is easier with a "microservice"-like release process. Ensuring apps/services/databases remain backwards compatible across versions can make pushing out releases trivial with Puppet and your favorite package manager.

Using Puppet and MCollective should be the way to go if you are looking for a solution from Puppet Labs:
https://puppetlabs.com/mcollective

Related

Dialogflow agent git versioning

I'm looking for a solution that allows my team to collaborate on the development of a single Dialogflow agent, because the typical scenario is that two or more developers work on the same agent.
Usually, for other kinds of technologies, we adopt git for source code versioning (also for webhooks, for example), applying a proper branching strategy.
We tried to use git for agent exports as well, but in this case it seems to be impossible, because we run into issues merging intent IDs (which seem to be generated in some undocumented way).
So in this scenario we need to synchronize with each other, for consistency checks, before exporting and committing the zip to git, losing the chance to leverage the power of git.
Searching with Google, I found that we could solve the problem by using a framework like Narratory.
That said, my questions are:
Do you know of any way to accomplish this without using Narratory?
Do you know if alternatives to Narratory exist?
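Until the intent IDs become mergeable, one way to at least make the export step reproducible is to script it, so the zip that lands in git always comes from a single canonical call rather than from each developer's console. A hedged sketch against the Dialogflow v2 API (the project ID and output path are placeholders):

    #!/usr/bin/env python3
    # Hypothetical export helper, so that every committed zip comes from one
    # canonical export rather than per-developer exports. Assumes the
    # google-cloud-dialogflow client library; project ID and path are placeholders.
    from google.cloud import dialogflow_v2

    def export_agent_zip(project_id, out_path):
        client = dialogflow_v2.AgentsClient()
        # With no agent_uri in the request, the zipped agent is returned inline.
        operation = client.export_agent(request={"parent": f"projects/{project_id}"})
        response = operation.result()
        with open(out_path, "wb") as f:
            f.write(response.agent_content)

    export_agent_zip("my-gcp-project", "agent.zip")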

Find Puppet code to retrieve user information

Is there a way I can get complete user information using Puppet code?
Like group ID, user ID, password expiration details, etc.
Maybe using Facter.
You can run any command using Facter and centralize the result with Puppet, so it is technically possible, but it is not advised (as correctly stated by boxdog).
If you do that, the collected fact will be huge (as JSON), Puppet will have to store it in the database, and your performance will slow down.
Think of a Puppet fact like the temperature reading for your air conditioning: you collect the temperature so the configuration management tool can decide whether to turn the heating on or off. The reason you collect facts is to drive configuration decisions. To monitor your infrastructure, you need a system monitoring tool, not a configuration management tool.
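That said, if a small, bounded slice of user data really should drive configuration, a minimal sketch of an executable external fact might look like the following; the "deploy" account and the chosen fields are assumptions for illustration:

    #!/usr/bin/env python3
    # Hypothetical executable external fact. Facter runs anything executable
    # in its facts.d directory and turns key=value lines on stdout into facts.
    import pwd

    # Expose only a few fields for one service account ("deploy" is an
    # assumption); dumping every user would bloat every catalog run.
    # Password expiration lives in /etc/shadow and would need root to read.
    user = pwd.getpwnam("deploy")
    print("deploy_uid=%d" % user.pw_uid)
    print("deploy_gid=%d" % user.pw_gid)
    print("deploy_home=%s" % user.pw_dir)

Install it with the execute bit set in Facter's external facts directory (e.g. /etc/facter/facts.d, depending on your version), and the values show up like any other fact.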
If you want to collect this information one time, Puppet has support for:
Bolt
MCollective (deprecated in newer versions; available on <= 2018.1)

Terraform Folder Structure - Modules vs Files

Not sure there is going to be a right or wrong answer for this one, but I am just interested in how people manage Terraform in the real world, in terms of modules, different environments, and collaboration.
At the moment we are planning on having production, dev, and test environments, all similar.
Right now I have structured my Terraform files so that each one defines an individual AWS component: say, one each for VPC, IAM, EC2, monitoring (CloudWatch + CloudTrail + AWS Config), etc. There is one variables file and one .tfvars file for the above, so the files are portable (all environments will be the same), and if you need to change something it's all in one place. It also means that if we have a specific project running, I can create a .tf file defining all the resources for the project and drop it in, then remove it once the project is complete.
Each environment has its own folder structure on our Terraform server.
Is this too simplistic? I keep looking at modules.
Also, does anyone have experience of collaborating on Terraform across different teams? I have been looking at things like Atlantis to tie into GitHub, so any changes need to be approved. At the same time, with the correct IAM role I can limit what Terraform can change.
Like I said, there may not be a right or wrong answer; I'm just interested in how people are managing Terraform and their experiences.
Thanks
My answer is just one use case...
We are using Terraform for an application deployed for several customers, each having small, specific configuration features.
We have only one CVS repository. We don't use the CVS branching mechanism.
For each folder, we have remote state, at least to share state between developers.
We use one global folder, also with remote state, to share state between customer configurations.
We use one folder per customer, and workspaces (formerly "environments") for each context for each customer (prod: blue/green, stage).
For common infrastructure chunks shared by all customers, we use modules.
We mainly use variables to reduce the number of customer-specific files in each customer folder. A sketch of how this layout can be driven follows.
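As a rough sketch of how the folder-per-customer plus workspace layout can be scripted (the customer name, paths, and contexts are invented for illustration):

    #!/usr/bin/env python3
    # Hypothetical wrapper around the terraform CLI: one folder per customer,
    # one workspace per context (e.g. prod, stage). Paths are illustrative.
    import subprocess

    def deploy(customer, context):
        cwd = f"./customers/{customer}"
        # Select the workspace, creating it on first use.
        if subprocess.run(["terraform", "workspace", "select", context],
                          cwd=cwd).returncode != 0:
            subprocess.run(["terraform", "workspace", "new", context],
                           cwd=cwd, check=True)
        subprocess.run(["terraform", "apply", f"-var-file={context}.tfvars"],
                       cwd=cwd, check=True)

    deploy("acme", "stage")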
Hope this helps...

Using Terraform as an API

I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for structuring the configuration to support this. So far it seems that my options are:
Properly define inputs/outputs and rely heavily on resource separation, modules, the count parameter, and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, or Travis CI to manage the release of your Terraform-managed infrastructure. The reason is that you should treat your Terraform code exactly as you would application code (i.e., give it a proper build/release pipeline). As an added bonus, these tools come with APIs that can be used to trigger your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is a tool such as Rundeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege control system that only allows specified users to execute commands. Your other option is to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate the creation/teardown of your infrastructure with Terraform, they are the same regardless of which tools you are using. You mentioned some good ones, such as clearly defining inputs/outputs and creating a separation of concerns. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
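To make the "Terraform as an API" idea concrete, here is a rough sketch of the generate-JSON-then-shell-out approach mentioned in the question. The aws_eip resource, file names, and working directory are illustrative assumptions; the point is that each step writes a config, applies it, and reads the outputs back for the next step:

    #!/usr/bin/env python3
    # Hypothetical driver: render a Terraform JSON config for one step, then
    # shell out to the terraform CLI. Names and paths are illustrative.
    import json
    import subprocess

    def write_step_config(workdir, eip_count):
        config = {
            "resource": {
                "aws_eip": {
                    # count lets one block stamp out several EIPs
                    "reserved": {"count": eip_count}
                }
            },
            "output": {
                "eip_addresses": {"value": "${aws_eip.reserved.*.public_ip}"}
            },
        }
        with open(f"{workdir}/main.tf.json", "w") as f:
            json.dump(config, f, indent=2)

    def apply(workdir):
        subprocess.run(["terraform", "init"], cwd=workdir, check=True)
        subprocess.run(["terraform", "apply", "-auto-approve"],
                       cwd=workdir, check=True)
        # Read outputs back so the next step can consume them.
        out = subprocess.run(["terraform", "output", "-json"],
                             cwd=workdir, capture_output=True, check=True)
        return json.loads(out.stdout)

    write_step_config(".", eip_count=2)
    print(apply("."))

Each "step" in your process then becomes a config-write plus an apply, with outputs flowing forward, which keeps the separation of concerns you already identified.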

How far should I go with puppet?

I'd like to preface by saying I'm new to Puppet. I have been working with it via Vagrant and am starting to feel comfortable writing manifests, but I lack, perhaps, the experience or intuition to answer my question.
I am trying to grasp how Puppet is scoped and where the lines are drawn. I am specifically interested in how this applies to modules and their creation and use.
A more concrete example: the puppetlabs-nginx module. So hypothetically I'm going along my merry way, creating a manifest for a given server role; say it's a dead-simple static-file webserver, and I'd like to use nginx. The module will clearly help me with that; there's try_files support and such. I can even ramp up to reverse-proxying via this module. But what if things get stickier? What if there's something I want to do programmatically that I cannot do with the module?
Well, perhaps the short answer is to fix it myself, make a pull request, and go along my merry way. But where does that stop? Is the goal of a community Puppet module to support every facet of a given software package? That seems unmanageable. On the other hand, doesn't that create a bunch of half-baked modules, built solely from individual use cases?
Then, there's an analog to Android UI: there are setter methods for most of what XML UI definitions can express. In Puppet it feels similar. You can build a config file programmatically, or create it by filling in an ERB template. In other words, the line in Puppet between programmatic creation of configuration files and templated creation of configuration files feels blurred; I found no best way with Android, so I don't know which way to go with Puppet.
So, to the question: what constitutes the ideal Puppet module? Should it rely more on templates? On the manifest? Should it account for all configuration scenarios?
From a further-withdrawn perspective, it almost seems I want something more opinionated. Puppet's power seems to be its flexibility and abstraction, but the modules that are out there feel inconsistent and not fully fleshed out.
Thanks for reading...
Thanks to Mark. In just a small amount of time I've switched over to playing with Chef, and the modules seem better with regard to many of the concerns I voiced.
In short, I can explain Puppet to you.
Puppet is an IT automation tool with which we can install software on other machines by creating manifests (recipes or scripts) on the master for the software to be installed on a target machine.
Here, the master is where the Puppet manifests for the software are implemented.
The target machine is the agent where the software is to be installed.
A Puppet module has the following structure, which we set up on the master.
On the master, modules live under /etc/puppet/modules. You mentioned the puppetlabs-nginx module, so we can take that module as an example.
Inside the module directory, we create files and manifests directories. Furthermore, in the manifests directory we create .pp files, for instance install.pp and uninstall.pp. This is how the module structure looks. We usually run these scripts using a few resource types like package, service, file, exec, etc.
Templates play a minor role in Puppet manifests, just to hard-code values; they are not a major part of Puppet. Manifests are of great importance in Puppet.
For automating any software with Puppet, we can follow the above structure.
Thank you.
The PuppetLabs solution here is to use different types of modules for each function -- Components, Profiles, and Roles. See the presentation Designing Puppet: Roles/Profiles Pattern for more information.
The modules available from PuppetForge are of the "Component" type. Their purpose is to be as flexible as possible while focusing on a single logical piece of software -- e.g., either the Apache httpd server OR Apache Tomcat, but not both.
The kinds of modules that you might write to wrap around one of those Component modules would be a perfect example of a "Profile" module. This could tie Apache httpd together with Tomcat and JBoss, and maybe also some other components like MySQL and PHP. But it's one logical stack, with multiple components. Or maybe you have a LAMP Profile module and a separate Tomcat+JBoss Profile module.
The next level up is "Role" modules. They shouldn't have anything in them other than "include" statements that point at the appropriate Profile modules.
See the PuppetLabs presentation for more details, but this logic is pretty similar to what is seen in the Chef world with "wrapper cookbooks".
