Using Terraform as an API

I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for structuring the configuration to support this? So far it seems that my options are:
Properly define inputs/outputs and rely heavily on resource separation, modules, the count parameter, and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!

Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, Travis CI, etc. to manage the release of your infrastructure managed by Terraform. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to trigger your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is to use a tool such as Rundeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege control system that only allows specified users to execute commands. Your other option could be to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for automating the creation/teardown of your infrastructure with Terraform through an API, they are the same regardless of which tools you use. You mentioned some good ones already, such as clearly defining inputs/outputs and separating concerns, which are excellent! Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
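Whatever ends up invoking Terraform (a CI job, Rundeck, or a local script), the mechanics of the "separate steps" idea in the question come down to splitting the configuration into small root modules and shelling out to the terraform CLI with the right variables. Below is a minimal, illustrative Python sketch of that; the network/compute directories, the eip_count variable and the eip_ids output are hypothetical names you would replace with your own, and it assumes terraform init has already been run in each directory.

```python
# Minimal sketch (not an official Terraform API): drive the CLI from Python
# in separate steps. Assumes `terraform` is on PATH, `terraform init` has
# already been run in each directory, and the `network`/`compute` root
# modules with an `eip_count` variable and `eip_ids` output are your own
# (hypothetical names used here for illustration).
import json
import subprocess

def terraform(*args, workdir="."):
    """Run a terraform command and return its stdout, failing loudly on error."""
    result = subprocess.run(
        ["terraform", *args],
        cwd=workdir,
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout

def apply(workdir=".", **variables):
    """Apply a root module with the given input variables."""
    var_args = [f"-var={name}={value}" for name, value in variables.items()]
    terraform("apply", "-auto-approve", "-input=false", *var_args, workdir=workdir)

def outputs(workdir="."):
    """Return all outputs of a root module as a plain dict."""
    raw = json.loads(terraform("output", "-json", workdir=workdir))
    return {name: data["value"] for name, data in raw.items()}

# Step 1: reserve the EIPs.
apply(workdir="network", eip_count=2)
eips = outputs(workdir="network")["eip_ids"]

# Step 2: launch an instance and attach one of the reserved EIPs.
apply(workdir="compute", eip_id=eips[0])
```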

Related

AWS EKS from scratch - terraform or eksctl?

Are there any benefits to spawning a new AWS EKS cluster with Terraform versus eksctl?
Are there some long-term maintenance benefits of one vs another?
Well, although I haven't actually tried this out with Terraform, I can definitely say that the eksctl way is not recommended. At least not if you're interested in managing your infrastructure as code.
With eksctl, most changes to an existing cluster need to be made with specific eksctl commands. Just changing the (declarative) cluster.yaml (or whatever you name it) does not apply anything. You want to scale a nodegroup? Well, please use eksctl scale nodegroup, because changing the size in the YAML file does nothing on its own. I think you get the pattern.
It's really sad that, of all companies, Weaveworks, the "inventors" of GitOps, provides a tool that does not even support basic IaC :(
I would highly recommend using Terraform. It is declarative and provides an interface that can be used to manage all of your infrastructure, not just your EKS cluster(s).
The time and effort you put into learning Terraform and implementing it in your pipeline can easily be reused for other infrastructure needs, unlike eksctl.

Is Pulumi that magical when compared to using Azure .NET SDK?

I'm in a dilemma here about which SE site to ask this question on, so please help me out if it should go somewhere else.
I've been looking into Infrastructure as Code solutions.
Didn't like Terraform too much. The lack of IntelliSense makes discoverability harder than what programmers are used to.
I've been considering ARM templates. I like that the templates are made available as we create resources in the portal, but they seem way less readable and harder to maintain afterwards.
Then I found Pulumi and love their idea compared to Terraform. The way I see it, their approach is also declarative like the above options, but we can use decent programming languages to get the job done.
For loops are a must.
Cool, I like that! But since we like using C# (or other alternatives), why don't we just use the SDKs to manage our infrastructure as code?
Pulumi has compared themselves with cloud SDKs by positioning their solution as much safer, arguing that if we just used a cloud SDK ourselves, our solution wouldn't be as reliable.
To what extent is this really true, I wonder?
Last year, I wrote some libraries that used Azure service bus queues/topics. There were several integration tests that would run in parallel and I needed to isolate them by creating new queues/topics and used Microsoft.Azure.ServiceBus.Management.ManagementClient to do this.
It really didn't seem like I had to learn anything at all.
Getting to the point now, and not dismissing Pulumi's innovation, which I think is great:
Will Pulumi really add that much benefit compared to using the Azure SDKs?
What's been your experience with it?
A Pulumi developer here, so I'm definitely biased. I suspect the SO community may find your question violating some of the guidance, but I hope my answer survives :)
One upside of using Pulumi is that you get access to multiple providers with consistent developer experience. You may be using exclusively Azure, but you might at some point start combining it with things like building and publishing Docker images, deploying Kubernetes applications, or Datadog dashboards. All can be done from the same program or solution.
Now, the biggest difference with imperative SDKs is the notion of desired-state configuration. A Pulumi program describes the graph of resources and dependencies between them (what), not the steps to provision them (how). When you have an environment that lives for months and years, there's a big difference between evolving a single definition with baby steps and applying incremental changes (Pulumi) and writing a bunch of update scripts/programs to bring each environment to the new state (SDK).
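To make that concrete, here is a minimal sketch using Pulumi's Python SDK (the same shape applies to the C#/.NET SDK), assuming the pulumi and pulumi-azure-native packages and an already-configured stack; the resource names are just placeholders:

```python
"""A tiny Pulumi program: it declares the desired resources and their
dependencies; `pulumi up` computes and applies the diff against the
current state of the stack."""
import pulumi
from pulumi_azure_native import resources, storage

# Desired state: one resource group...
group = resources.ResourceGroup("app-rg")

# ...and one storage account inside it. The dependency is expressed simply
# by passing group.name; Pulumi orders the operations accordingly.
account = storage.StorageAccount(
    "appstorage",
    resource_group_name=group.name,
    sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
    kind=storage.Kind.STORAGE_V2,
)

# Outputs resolve once the resources actually exist.
pulumi.export("storage_account_name", account.name)
```

Re-running the same program against an existing stack only applies the delta, which is the baby-steps evolution mentioned above.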
How do you maintain multiple environments that may be similar but still different? (production vs staging vs test vs dev) How do you make sure that your short-lived infra that you created for nightly tests reflects the reality of production? What happens when an SDK program fails in the middle - can you retry running it again or will it create duplicate resources/fail with another error? How do you get a simple overview of changes over time in git? Concurrency control? Change history?
All the things above are baked into Pulumi and require manual consideration with a cloud SDK.

Terraform Folder Structure - Modules vs Files

Not sure there is going to be a right or wrong answer for this one, but I am just interested in how people manage Terraform in the real world, in terms of whether you use modules, different environments, and how you collaborate.
At the moment we are planning on having a production, dev and test environments. All similar.
At the moment I have written my Terraform files so that each defines an individual AWS component, say one each for VPC, IAM, EC2, monitoring (CloudWatch + CloudTrail + AWS Config), etc. There is one variables file and one .tfvars file for the above, so the files are portable (all environments will be the same), and if you need to change something it's all in one place. It also means that if we have a specific project running I can create a .tf file defining all the resources for the project and drop it in, then remove it once the project is completed.
Each environment has its own folder structure on our Terraform server.
Is this too simplistic? I keep looking at modules.
Also, does anyone have experience of collaborating on Terraform across different teams? I have been looking at things like Atlantis to tie into GitHub, so any changes need to be approved, while at the same time limiting what Terraform can change with the correct IAM role.
Like I said, there may not be a wrong or right answer; I'm just interested in how people are managing Terraform and their experiences.
Thanks
My answer is just a use case...
We are using Terraform for an application deployed for several customers, each having small specific configuration features.
We have only one CVS repository and we don't use its branching mechanism.
For each folder, we have remote state, at a minimum to share state between developers.
We use one global folder, also with remote state, to share state between customer configurations.
We use one folder per customer, with workspaces (formerly called environments) for each of that customer's contexts (prod: blue/green, stage), as sketched after this list.
For common infrastructure chunks shared by all customers, we use modules.
We mainly use variables to reduce the number of customer-specific files in each customer folder.
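A rough sketch of how the per-customer folders and workspaces described above can be driven from a small script follows; the customer names, workspace names and *.tfvars files are placeholders, not anything prescribed by Terraform.

```python
# Sketch only: selects the workspace for one customer/context and applies
# that customer's variable file. Folder layout and names are hypothetical.
import subprocess

def deploy(customer: str, context: str) -> None:
    folder = f"customers/{customer}"        # one folder per customer
    workspace = f"{customer}-{context}"     # e.g. "acme-prod-blue"

    # Create the workspace if it doesn't exist yet (ignore the error if it
    # already does), then select it.
    subprocess.run(["terraform", "workspace", "new", workspace],
                   cwd=folder, check=False)
    subprocess.run(["terraform", "workspace", "select", workspace],
                   cwd=folder, check=True)

    # Apply with the customer-specific variable file.
    subprocess.run(["terraform", "apply", "-input=false", "-auto-approve",
                    f"-var-file={customer}.tfvars"],
                   cwd=folder, check=True)

deploy("acme", "prod-blue")
```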
Hope this will help you...

Performing multi-cloud (AWS, Azure, GCP) provisioning using Ansible

What are the best practices for performing multi-cloud provisioning using Ansible?
As a best practice, I would implement three separate playbooks and three different inventories to keep things simple. You could put together some logic to do conditionals based on the inventory and cloud provider being used, but why would you need to?
I would then create separate roles for implementing the required resources, (from an AWS perspective) create_vpc (may include DHCP options and IGW), create_routes (and route tables), create_NACLs, create_subnets, create_security_groups, launch_asg (includes launch configuration), create_nat_gateway, create_nat_instance, create_elb, get_subnet_ids, get_vpc_id. The reason for creating the separate roles would be to allow for flexibility in implementing resources and reuse of code.
You could easily write everything as one playbook, and I would even recommend doing this initially to see how things work (and to get familiar with the Ansible modules), then turn it into roles to allow for code reuse.
Include a shared variable file (include_vars) to implement the various subnets and load balancers across the different cloud providers. This would result in three equivalent environments, one in each cloud provider.
I'm looking to implement this as a home project to learn about the differences between the different vendor offerings, based on my AWS knowledge.
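If it helps, here is a rough sketch of a driver for the one-playbook-plus-one-inventory-per-cloud layout described above, using the ansible-runner Python package; the playbook and inventory paths are hypothetical.

```python
# Sketch: run each cloud's playbook against its own inventory with
# ansible-runner (pip install ansible-runner). Paths are placeholders.
import ansible_runner

CLOUDS = {
    "aws":   ("playbooks/aws.yml",   "inventories/aws"),
    "azure": ("playbooks/azure.yml", "inventories/azure"),
    "gcp":   ("playbooks/gcp.yml",   "inventories/gcp"),
}

for cloud, (playbook, inventory) in CLOUDS.items():
    # Each cloud gets its own playbook and inventory, so no conditional
    # logic is needed inside the plays themselves.
    run = ansible_runner.run(
        private_data_dir=".",
        playbook=playbook,
        inventory=inventory,
    )
    print(f"{cloud}: {run.status} (rc={run.rc})")
```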

How to integrate a deployment automation tool into puppet?

We are a mixed linux/windows shop that successfully adopted Puppet for Config Mgmt a while ago. We'd like to drop ansible in as our deployment orchestration tool (research suggests that puppet doesn't do this very well) but have questions about how to integrate the two products.
Today, puppet is the source of truth with respect to environment info (which nodes belong to which groups etc). I want to avoid duplicating this information in ansible. Are there any best practices with regards to sharing environment information between the two products?
One way to reduce the amount of duplicated state between the systems is to use Ansible's "dynamic inventory" support. Instead of defining your hosts/groups in a text file, you use a script that pulls the same data from somewhere else. This could be PuppetDB, Foreman, etc., and is going to depend on your environment.
Writing a new script is also pretty simple; it just needs to be an executable (bash/python/ruby/etc.) that returns JSON in a specific format.
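For example, a skeleton dynamic inventory script in Python might look like the following; the groups returned by fetch_nodes() are placeholders for whatever you actually pull out of PuppetDB/Foreman:

```python
#!/usr/bin/env python3
"""Skeleton Ansible dynamic inventory script: supports --list and --host
and prints JSON in the format Ansible expects. Replace fetch_nodes() with
a real query against PuppetDB, Foreman, etc."""
import argparse
import json

def fetch_nodes():
    # Placeholder for a real PuppetDB/Foreman query.
    return {
        "webservers": ["web01.example.com", "web02.example.com"],
        "databases": ["db01.example.com"],
    }

def build_inventory():
    groups = fetch_nodes()
    inventory = {name: {"hosts": hosts} for name, hosts in groups.items()}
    # Populating _meta.hostvars lets Ansible skip calling --host per host.
    inventory["_meta"] = {"hostvars": {}}
    return inventory

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--list", action="store_true")
    parser.add_argument("--host")
    args = parser.parse_args()

    if args.host:
        print(json.dumps({}))  # host vars are already provided via _meta
    else:
        print(json.dumps(build_inventory()))
```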
Lastly, it is possible to roll out new releases with Puppet, but it is easier with a "microservice"-like release process. Ensuring apps/services/databases remain backwards compatible across versions can make pushing out releases trivial with Puppet and your favorite package manager.
Using Puppet and MCollective should be the way to go if you are looking for a solution from Puppet Labs:
https://puppetlabs.com/mcollective
