How to test Terraform templates other than by trial and error

I'm creating cloud resources using Terraform. Each resource is expected to be in a particular desired state after provisioning. For example, when I create a Google Cloud bucket, I would like certain permissions to be applied automatically. So my plan contains the necessary code for this, but I want to make sure that it works consistently before I apply. Is there any testing tool or library that can help here?

Yes, I had the same thought. Currently I use several approaches to reduce the risk when applying a new Terraform change.
They can't guarantee a 100% successful terraform apply, but they will catch most issues before you apply.
Validate the Terraform configuration files.
Terraform has a validate command as a starting point, but it is not smart enough to walk through subfolders. I created a small shell function and added it to the CI/CD pipeline so it runs automatically before terraform apply:
validate() {
  modules=$(find . -type f -name "*.tf" -exec dirname {} \; | sort -u)
  for m in ${modules}
  do
    (terraform validate "$m" && echo "√ $m") || exit 1
  done
}
Of course, running terraform fmt before you submit your change is not a bad idea either.
terraform plan
Martin Atkins has explained this already in his answer, and terraform.io has the details about this command.
Run automated tests with Test Kitchen.
kitchen-terraform is a Test Kitchen plugin for testing Terraform configurations:
https://github.com/newcontext-oss/kitchen-terraform
This is an integration test: it runs in a separate VPC with as many test cases as you add. Add the automated test to the CI/CD pipeline as well, so that it is triggered every time a merge request to the master branch is raised, and apply the change only after the tests pass.

The terraform plan command is intended to give a preview of what changes Terraform will make when the plan is applied, which is the closest we can get to testing a Terraform configuration without touching the "real" API.
For situations where that isn't enough, it's common to deploy the same config multiple times with different states, thus allowing one to be used as a "staging" environment to test changes without affecting the primary environment. The State Environments feature added in Terraform 0.9 (since renamed to Workspaces) can make this easier, since the multiple environment states can be managed directly with Terraform CLI commands.
When it comes to automated testing of the result, there is currently no full solution to this integrated into Terraform, but there are some building blocks that could be useful to assist in writing tests in a separate programming language.
Terraform produces state files in JSON format that can, in principle, be used by external programs to extract certain data about what Terraform created. While this format is not yet considered officially stable, in practice it changes infrequently enough that people have successfully integrated with it, accepting that they might need to make adjustments as they upgrade Terraform.
What strategy is appropriate here will depend a lot on what exactly you want to test. For example:
In an environment that's spinning up virtual servers, tools like Serverspec can be used to run tests from the perspective of these servers. This can either be run separately from Terraform using some out-of-band process, or as part of the Terraform apply using the remote-exec provisioner. This allows verification of questions like "can the server reach the database?", but is not suitable for questions such as "is the instance's security group restrictive enough?", since robustly checking that requires accessing data from outside of the instance itself.
It's possible to write tests using an existing test framework (such as RSpec for Ruby, unittest for Python, etc) which gather relevant resource ids or addresses from the Terraform state file and then use the relevant platform's SDK to retrieve data about the resources and assert that they are set up as expected. This is a more general form of the previous idea, running the tests from the perspective of a host outside of the infrastructure under test, and can thus collect a broader set of data to make assertions on.
For more modest needs, one can choose to trust that the Terraform state is an accurate representation of reality (a valid assumption in many cases) and simply assert directly on that. This is most appropriate for simple "lint-like" cases, such as verifying that the correct resource tagging scheme is being followed for cost-allocation purposes.
There is some more discussion about this in a relevant Terraform GitHub issue.
In the latest versions of Terraform it is strongly recommended to use a remote backend for any non-toy application, but that means that the state data is not directly available on local disk. However, a snapshot of it can be retrieved from the remote backend using the terraform state pull command, which prints the JSON-formatted state data to stdout so it can be captured and parsed by a calling program.
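To illustrate the last two ideas, here is a minimal sketch in Go that shells out to terraform state pull, parses the result, and makes a lint-like assertion on it. It assumes the Terraform 0.12+ state layout and a hypothetical "cost-center" labelling convention, so treat it as a starting point rather than a supported API:
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Only the parts of the state we care about; field names follow the
// Terraform 0.12+ state JSON and may change in future versions.
type tfState struct {
	Resources []struct {
		Type      string `json:"type"`
		Name      string `json:"name"`
		Instances []struct {
			Attributes map[string]interface{} `json:"attributes"`
		} `json:"instances"`
	} `json:"resources"`
}

func main() {
	// Snapshot the (possibly remote) state and capture the JSON on stdout.
	out, err := exec.Command("terraform", "state", "pull").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to pull state:", err)
		os.Exit(1)
	}

	var state tfState
	if err := json.Unmarshal(out, &state); err != nil {
		fmt.Fprintln(os.Stderr, "failed to parse state:", err)
		os.Exit(1)
	}

	// Lint-like assertion: every bucket must carry a cost-center label
	// (hypothetical tagging scheme used for cost allocation).
	failed := false
	for _, r := range state.Resources {
		if r.Type != "google_storage_bucket" {
			continue
		}
		for _, inst := range r.Instances {
			labels, _ := inst.Attributes["labels"].(map[string]interface{})
			if _, ok := labels["cost-center"]; !ok {
				fmt.Printf("bucket %q is missing the cost-center label\n", r.Name)
				failed = true
			}
		}
	}
	if failed {
		os.Exit(1)
	}
}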

We recently open sourced Terratest, our swiss army knife for testing infrastructure code.
Today, you're probably testing all your infrastructure code manually by deploying, validating, and undeploying. Terratest helps you automate this process:
Write tests in Go.
Use helpers in Terratest to execute your real IaC tools (e.g., Terraform, Packer, etc.) to deploy real infrastructure (e.g., servers) in a real environment (e.g., AWS). Note that this environment would be a separate "sandbox" account and not prod!
Use helpers in Terratest to validate that the infrastructure works correctly in that environment by making HTTP requests, API calls, SSH connections, etc.
Use helpers in Terratest to undeploy everything at the end of the test.
Here's an example test for some Terraform code:
terraformOptions := &terraform.Options{
	// The path to where your Terraform code is located
	TerraformDir: "../examples/terraform-basic-example",
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created.
// Registering the deferred destroy before the apply means cleanup still runs if the apply fails.
defer terraform.Destroy(t, terraformOptions)

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)

// Run `terraform output` to get the value of an output variable
instanceUrl := terraform.Output(t, terraformOptions, "instance_url")

// Verify that we get back a 200 OK with the expected text.
// It can take a minute or so for the instance to boot up, so retry a few times.
expected := "Hello, World"
maxRetries := 15
timeBetweenRetries := 5 * time.Second
http_helper.HttpGetWithRetry(t, instanceUrl, 200, expected, maxRetries, timeBetweenRetries)
These are integration tests, and depending on what you're testing, can take 5 - 50 minutes. It's not fast (though using Docker and test stages, you can speed some things up), and you'll have to work to make the tests reliable, but it is well worth the time.
Check out the Terratest repo for docs and lots of examples of various types of infrastructure code and the corresponding tests for them.

Related

Is there a terraform resource which can force an intermediate update of the remote statefile, before continuing?

I set up terraform to use a backend to remotely store the statefile. That works fine.
My project takes several minutes for a full terraform apply to complete. During development, sometimes one of the later stages hangs seemingly forever. I need the outputs in order to connect to the servers manually and inspect what is broken. However, the statefile is not written until the terraform process completes, so no outputs are available during the first terraform apply.
Is there a way to make terraform update the statefile intermediately, while it is still busy applying things?
I know I could solve this by separating the process into multiple modules, and apply each one after the other. But I am looking for a solution where I can still apply all at once.
When you run
terraform plan
you also get a preview of the outputs. What you can do is save that plan to a file before applying:
terraform plan -out tf.plan
Then you apply it.
You can look into this file to find the planned changes.
Remember, you won't find output values that are only known after apply, i.e. values that refer to things that don't exist yet.
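For what it's worth, a saved plan can also be inspected afterwards with terraform show (the -json form needs Terraform 0.12 or later):
terraform plan -out tf.plan
terraform show tf.plan
terraform show -json tf.plan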
Best wishes.

Terraform and using provisioners

I have a module that creates some VMs (along with a vApp), and I need to enable users of that module to pass scripts that get run on creation and/or deletion of each VM - for example, to enroll the VM into a company authentication system, CMDB, monitoring system, or similar.
And I just cannot find a way to implement it nicely.
There are several approaches that I've tried:
Passing a script that gets passed directly to commands inside provisioner blocks on each VM resource.
Advantages of this approach:
Clean and simple - there are no additional resources.
Works as intended - runs on creation and/or deletion.
The problems with this approach:
You can only pass one script per VM.
You could concatenate several scripts together, but that has obvious drawbacks: a failure in one then becomes a failure in everything, leaving things in an inconsistent state, and retrying gets pretty messy.
It's also somewhat ugly, since Terraform crashes if the command is null.
As a workaround, you could pass a one-liner containing only a comment about why it's there, but that doesn't exactly make it pretty.
Creating a separate null_resource for each VM/script combination and putting the script there (see the sketch at the end of this question).
Advantages of this approach:
There is a separate resource for each script for each VM that if failed could be retried.
The problems with this approach:
Problems arise when updating the scripts.
Creation script updates can be ignored with a lifecycle ignore_changes statement, since if you're changing the enrollment procedure after a machine has already been successfully enrolled, you don't need to enroll it again.
But changing a when = destroy script results in the resource being recreated, i.e. destroyed and created with the new version of the script, which in turn triggers a run of the old destroy script - which is very unlikely to be desirable.
Creating "runner" null_resources that only contain the name of a script to run. The script and its content is managed through a set of local_file resources.
Advantages of this approach:
Script content is separated from script execution.
The problems with this approach:
This falls apart in ephemeral environments (like running Terraform in a fresh container each time), where the files would have to be created on every run. Maybe this can be worked around, though it seems difficult because depends_on fails if anything other than a static resource reference is used - it fails when trying to use resource subscripts.
Additional considerations:
Using mechanisms of pushing those scripts to VMs and having them run there is undesirable because:
Additional risk of having to handle secrets passed to a lot of machines.
There is an assumption that the machines can also reach the relevant APIs, which is not always true.
The hack of putting information from other resources into triggers (because a when = destroy provisioner can only use references to self) is a bit worrisome, because relying on it means everything will break completely if the Terraform developers ever decide to remove it.
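For reference, here is a stripped-down sketch of the second approach (all variable, script, and attribute names are made up), which shows where the lifecycle problems bite:
resource "null_resource" "enroll" {
  # One resource per VM/script combination; var.vms is a hypothetical map of VM objects.
  for_each = var.vms

  triggers = {
    vm_address     = each.value.ip_address
    enroll_script  = file("${path.module}/scripts/enroll.sh")
    destroy_script = file("${path.module}/scripts/unenroll.sh")
  }

  connection {
    type = "ssh"
    host = self.triggers.vm_address
    # credentials omitted
  }

  # Runs on creation.
  provisioner "remote-exec" {
    inline = [self.triggers.enroll_script]
  }

  # Runs on deletion; may only reference self, hence stashing the script in triggers.
  provisioner "remote-exec" {
    when   = destroy
    inline = [self.triggers.destroy_script]
  }

  lifecycle {
    # Ignoring changes to the creation script avoids re-enrolling machines,
    # but a changed destroy script still forces the resource to be replaced,
    # running the *old* destroy script in the process.
    ignore_changes = [triggers["enroll_script"]]
  }
}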

Is there any way to run a script inside already existing infrastructure using Terraform?

I want to use Terraform to run a script inside an existing instance on any cloud, where the instance was created manually. Is there any way to push my script to this instance and run it using Terraform?
If yes, then how can I connect to the instance using Terraform, push my script, and run it?
I believe Ansible is a better option to achieve this easily.
Refer to the example given here:
https://docs.ansible.com/ansible/latest/modules/script_module.html
Create a .tf file and describe your already existing resource (e.g. a VM) there
Import the existing thing using terraform import
If this is a VM, then add your script to the remote machine using the file provisioner and run it using remote-exec - both steps are described in the Terraform file, so no manual changes are needed
Run terraform plan to see if the expected changes look OK, then terraform apply if the plan was fine
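A rough sketch of what that could look like, assuming an AWS instance and made-up paths and IDs (note that creation-time provisioners only run when Terraform itself creates the resource, so an imported instance may need to be replaced or wrapped in a null_resource for them to fire):
# Described in a .tf file, then imported with e.g.:
#   terraform import aws_instance.existing i-0123456789abcdef0
resource "aws_instance" "existing" {
  # ... arguments matching the manually created instance ...

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"                # assumption
    private_key = file("~/.ssh/id_rsa")   # assumption
  }

  # Push the script to the remote machine...
  provisioner "file" {
    source      = "scripts/myscript.sh"
    destination = "/tmp/myscript.sh"
  }

  # ...and run it there.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/myscript.sh",
      "/tmp/myscript.sh",
    ]
  }
}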
Terraform's core mission is to create, update, and destroy long-lived infrastructure objects. It is not generally concerned with the software running in the compute instances it deploys. Instead, it generally expects each object it is deploying to behave as a sort of specialized "appliance", either by being a managed service provided by your cloud vendor or because you've prepared your own machine image outside of Terraform that is designed to launch the relevant workload immediately when the system boots. Terraform then just provides the system with any configuration information required to find and interact with the surrounding infrastructure.
A less-ideal way to work with Terraform is to use its provisioners feature to do late customization of an image just after it's created, but that's considered to be a last resort because Terraform's lifecycle is not designed to include strong support for such a workflow, and it will tend to require a lot more coupling between your main system and its orchestration layer.
Terraform has no mechanism intended for pushing arbitrary files into existing virtual machines. If your virtual machines need ongoing configuration maintenance after they've been created (by Terraform or otherwise) then that's a use-case for traditional configuration management software such as Ansible, Chef, Puppet, etc, rather than for Terraform.

Can't figure out how to reuse terraform provisioners

I've created some (remote-exec and file) provisioners to bootstrap the (GCP) VMs that I'm creating, and I want to apply them to all my VMs, but I can't seem to figure out how to reuse them...?
Modules seem like the obvious answer, but creating a module to create the VMs means I'd need to make input vars for everything that I'd want to configure on each of the VMs specifically...
Reusing the snippets with the provisioners doesn't seem possible though?
Terraform's Provisioner feature is intended as a sort of "last resort" for situations where there is no alternative but to SSH into a machine and run commands on it remotely, but generally we should explore other options first.
The ideal case is to design your machine images so that they are already correctly configured for what they need to do and so they can immediately start doing that work on boot. If you use HashiCorp Packer then you can potentially run very similar steps at image build time to what you might've otherwise run at Terraform create time with provisioners, perhaps allowing you to easily adapt the work you already did.
If they need some configuration parameters from Terraform in order to start their work, you can use features like the GCP instance metadata argument to pass in those values so that the software in the image can access them as soon as the system boots.
A second-best sort of option is to use features like GCP startup scripts to pass the script to run via metadata so that again it's available immediately on boot, without the need to wait for the SSH server to start up and become available.
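For illustration, a hedged sketch of both ideas on a google_compute_instance, with placeholder names, variables, and paths:
resource "google_compute_instance" "example" {
  name         = "app-server"
  machine_type = "e2-small"
  zone         = "europe-west1-b"

  boot_disk {
    initialize_params {
      # A custom image pre-baked with Packer (assumption)
      image = var.app_image
    }
  }

  network_interface {
    network = "default"
  }

  # Configuration parameters from Terraform, readable inside the instance
  # via the metadata server as soon as it boots.
  metadata = {
    database_host = var.database_host
  }

  # Second-best option: a startup script delivered via metadata, so it runs
  # on boot without waiting for SSH to become available.
  metadata_startup_script = file("${path.module}/scripts/startup.sh")
}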
In both of these cases, the idea is to rely on features provided by the compute platform to treat the compute instances as a sort of "appliance", so Terraform (and you) can think of them as being similar to a resource modelling a hosted service. Terraform is concerned only with starting and stopping this black box infrastructure and wiring it in with other infrastructure, and the instance handles its implementation details itself. For use-cases where horizontal scaling is appropriate, this also plays nicely with managed autoscaling functionality like google_compute_instance_group_manager, since new instances can be started by that system rather than directly by Terraform.
Because Provisioners are designed as a last-resort for when approaches like the above are not available, their design does not include any means for general reuse. It's expected that each provisioner will be a tailored solution to a specific problem inline in the resource it relates to, not something you use systematically across many separate callers.
With that said, if you are using file and remote-exec in particular you can get partway there by factoring out the specific file to be uploaded and the remote command to execute, in which case your resource blocks will contain just the declaration boilerplate while avoiding repetition of the implementation details. For example, if you had a module that exported outputs local_file_path, remote_file_path, and remote_commands you could write something like this:
module "provisioner_info" {
source = "./modules/provisioner-info"
}
resource "any" "example" {
# ...
provisioner "file" {
source = module.provisioner_info.local_file_path
destination = module.provisioner_info.remote_file_path
}
provisioner "remote-exec" {
inline = module.provisioner_info.remote_commands
}
}
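Here the hypothetical ./modules/provisioner-info module might contain nothing more than a few outputs, for example:
# ./modules/provisioner-info/outputs.tf (illustrative paths and commands)
output "local_file_path" {
  value = "${path.module}/files/bootstrap.sh"
}

output "remote_file_path" {
  value = "/tmp/bootstrap.sh"
}

output "remote_commands" {
  value = [
    "chmod +x /tmp/bootstrap.sh",
    "/tmp/bootstrap.sh",
  ]
}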
That is the limit for factoring out provisioner details in current versions of Terraform.

Backing up of Terraform statefile

I usually run all my Terraform scripts through a bastion server, and all my code, including the tf statefile, resides on the same server. There was an incident where my machine accidentally went down (hard reboot) and somehow the root filesystem got corrupted. Now my statefile is gone, but my resources still exist and are running. I don't want to run terraform apply again and recreate the whole environment with downtime. What's the best way to recover from this mess, and what can be done so that this doesn't get repeated in the future?
I have already taken a look at terraform refresh and terraform import. But are there any better ways to do this ?
and all my code including the tf statefile resides on the same server.
As you don't have a .backup file, I'm not sure you can recover the statefile smoothly the Terraform way - do let me know if you find a way :). However, you can take a few steps that will help you avoid situations like this.
The best practice is to keep all your statefiles in remote storage such as S3 or Blob Storage and configure your backend accordingly, so that every time you destroy or create a stack it always reads and writes the statefile remotely.
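For example, a minimal S3 backend block (bucket and table names are placeholders) looks like this:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # placeholder
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"        # optional, for state locking
  }
}
With versioning enabled on the bucket, you also keep a history of previous statefiles to roll back to.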
On top of that, you can take advantage of terraform workspace to avoid a mess of statefiles in a multi-environment scenario. Also consider creating a plan for backtracking and versioning of previous deployments, for example:
terraform plan -var-file "" -out "" -target=module.<blue/green>
what can be done so that this doesn't get repeated in the future.
Terraform blue-green deployment is the answer to your question. We implemented this model quite a while ago and it's running smoothly. The whole idea is modularity and reusability: the same templates work for 5 different components with different architectures, without any downtime (the core template stays the same and only the variable files differ).
We take advantage of Terraform modules. We have two modules called blue and green; you can name them anything. At any given point in time either blue or green is taking traffic. If we have changes to deploy, we bring up the alternative stack based on the state output (a targeted module based on the Terraform state), automatically validate it, then move the traffic to the new stack and destroy the old one.
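A very rough sketch of the idea, with hypothetical module and variable names:
module "blue" {
  source        = "./modules/stack"
  release       = var.blue_release
  traffic_share = var.active_stack == "blue" ? 100 : 0
}

module "green" {
  source        = "./modules/stack"
  release       = var.green_release
  traffic_share = var.active_stack == "green" ? 100 : 0
}
The targeted plan command shown above is then used to bring up or tear down one side at a time.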
Here is an article you can keep as a reference; it doesn't exactly reflect what we do, but it's a good place to start.
Please see this blog post, which, unfortunately, shows that import is the only solution.
If you are still unable to recover the Terraform state, you can create a blueprint of the Terraform configuration, as well as the state, for specific AWS resources using terraforming. It requires some manual effort to edit the state so that the resources are managed again: you can take that state file, run terraform plan, and compare its output with your infrastructure. It is good to have remote state, especially in an object store like AWS S3 or a key-value store like Consul; they support locking the state when multiple operations happen at the same time, and the backup process is also quite simple.
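As a concrete example of the import route (resource names and IDs are made up):
# 1. Write a resource block that matches the existing infrastructure.
# 2. Import it into a fresh state:
terraform import aws_instance.web i-0123456789abcdef0
# 3. Check that configuration and state now agree:
terraform plan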
