Custom Resources Composed of Other Resources? - terraform

I'm currently working on a Terraform project to automate infrastructure in AWS. Since we are using a pretty consistent pattern, my idea was to create custom Terraform resources which are composed of multiple AWS resources to DRY things up.
Is there a way to define custom Terraform resources, without dropping down to custom Go provider code, that are simply composed of multiple AWS resources under the hood? I'd like to have a resource named something like app_stack that is composed of an auto-scaling group, an elastic load balancer, and a Route 53 name. I'd like my module to accept only a bare minimum of parameters so that it shields end users from the implementation details.
Is this possible in Terraform?

I think you want to use a Terraform module. A module is a collection of resources that is managed as a group.
You can expose whatever variables are necessary for the module to work: in your case the DNS name, how many instances you want in the ASG, etc. Then, when you include it in your Terraform config, you specify them in the module block, e.g.:
module "myapp" {
  source    = "./app_stack"
  dns_name  = "myapp.example.com"
  instances = 5
}

module "meteordemo" {
  source    = "./app_stack"
  dns_name  = "meteor.example.com"
  instances = 1
}
The docs include a much more comprehensive explanation. Here are some example modules on GitHub for reference.
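As a sketch of what the app_stack module itself might contain (the file layout, variable names, and resource bodies below are illustrative, not a complete implementation):

```hcl
# ./app_stack/variables.tf -- the minimal interface exposed to callers
variable "dns_name" {
  description = "DNS name for the Route 53 record"
}

variable "instances" {
  description = "Desired instance count for the auto-scaling group"
}

# ./app_stack/main.tf -- the composed AWS resources (heavily abbreviated)
resource "aws_elb" "app" {
  # listeners, health checks, subnets, etc. go here
}

resource "aws_autoscaling_group" "app" {
  desired_capacity = var.instances
  load_balancers   = [aws_elb.app.name]
  # min_size, max_size, launch configuration, etc. go here
}

resource "aws_route53_record" "app" {
  name = var.dns_name
  type = "CNAME"
  # zone_id and records referencing aws_elb.app.dns_name go here
}
```

Callers then only ever see dns_name and instances; everything else stays encapsulated inside the module.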

Related

How to access old state value in new Terraform run

I am using Terraform to execute a script that applies some configuration not handled via a provider.
This is done using a null_resource.
The executed script generates a configuration whose name is based on current values. Terraform does not keep track of anything created within a null_resource.
The created configuration should also be updatable, but in order to do that I need to know the values used in the prior execution.
Then the old configuration can be targeted by its name and deleted, and the new configuration can be added.
So two consecutive runs would look like this:
The first execution has the variable setting var.name = "ValA". It creates a configuration resource, called "config-ValA", via a REST API call.
The next execution has the variable setting var.name = "ValB". It first deletes the configuration "config-ValA" and then creates "config-ValB". But at this point I no longer have access to the var.name state from the "ValA" execution.
I imagine something like this:
resource "null_resource" "start_script" {
  # Not actual code! I would like to access the old value with var.name.old,
  # but I don't know how.
  provisioner "local-exec" {
    command     = "../scripts/dosth.sh ${resource.name.value} ${resource.name.value.old} ${var.name} ${var.name.old}"
    interpreter = ["/bin/bash", "-c"]
  }
}
Is there a way to access the "old" value of a resource/variable in general, or from the context of a null_resource in particular?
What I know I could already do as a workaround:
Save the old value in a separate store, e.g. SSM Parameter Store in AWS, and access it as an input for the new run.
Use terraform state show (or terraform output -raw) to extract the value from the prior state and use it as an input in the new run.
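The first workaround can be sketched in Terraform itself. The idea is that a data source reads the value stored by the previous run during plan, while a managed resource writes the current value for the next run to pick up (the parameter path is a hypothetical example):

```hcl
# Read the value stored by the *previous* run. Note this fails on the
# very first run unless the parameter is pre-created out of band.
data "aws_ssm_parameter" "previous_name" {
  name = "/myapp/config-name" # hypothetical parameter path
}

resource "null_resource" "apply_config" {
  triggers = {
    name = var.name
  }

  provisioner "local-exec" {
    # Pass both the new value and the prior value to the script.
    command     = "../scripts/dosth.sh ${var.name} ${data.aws_ssm_parameter.previous_name.value}"
    interpreter = ["/bin/bash", "-c"]
  }
}

# Store the current value so the *next* run can read it.
resource "aws_ssm_parameter" "current_name" {
  name      = "/myapp/config-name"
  type      = "String"
  value     = var.name
  overwrite = true
}
```

Because the data source and the managed resource share one parameter name, there are ordering caveats (the data source is read during plan, before the write happens during apply); treat this as a sketch of the idea rather than a hardened solution.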
Within the Terraform language itself (.tf files) we only work with the desired new state, and don't have any access to the prior state. Terraform evaluates the configuration first and then, with the help of the associated provider, compares the result of evaluating the configuration to the prior state in order to determine what needs to change.
If you want to respond to changes between the prior state and the desired new state then your logic will need to live inside a provider, rather than inside a module. A provider is the component that handles the transition from one state to another, and so only a provider can "see" both the prior state and the desired state represented by the configuration at the same time, during the planning phase.
The community provider scottwinkler/shell seems to have a resource type shell_script which can act as an adapter to forward the provider requests from Terraform Core to some externally-defined shell scripts. It seems that if you implement an update script then it can in principle access both the prior state and the new desired state, although I'm not personally familiar with the details because I've not actually used this provider. I suggest reviewing the documentation and the code to understand more about how it works, if you choose to use it.
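For reference, a shell_script resource in that provider is configured roughly like the following (paraphrased from memory of its README; the script paths are placeholders, so check the provider's documentation for the exact schema):

```hcl
resource "shell_script" "config" {
  lifecycle_commands {
    create = file("${path.module}/scripts/create.sh")
    read   = file("${path.module}/scripts/read.sh")
    update = file("${path.module}/scripts/update.sh") # sees old and new state
    delete = file("${path.module}/scripts/delete.sh")
  }

  environment = {
    NAME = var.name
  }
}
```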

How do terraform providers implement resource lifecycle (resource's STOPPED status)?

Context: I'm developing a terraform provider.
HashiCorp's tutorial describes how to manage a resource that has pretty much two states: CREATED and DELETED (or such that GET resources/{resource_id} returns 404). It's easy to support a CREATING state (meaning creation is in progress) as well by adding retries / waits in the resourceCreate() method.
What about implementing support for more advanced states like STOPPED? Is the usual approach to add a status attribute that can be either CREATED or STOPPED, set it to CREATED in the Terraform configuration file (or make the attribute computed), and make the status attribute updatable?
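For illustration, the approach proposed in the question would look like this from the practitioner's side (the resource type and attribute values here are hypothetical):

```hcl
resource "myprovider_server" "example" {
  # Updatable attribute: changing this from "CREATED" to "STOPPED" would
  # make the provider's Update function call the stop API and then poll
  # the remote status until it reports STOPPED, mirroring the retry /
  # wait loop used during create.
  status = "STOPPED"
}
```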

How to create a resource group that can be shared between modules in terraform?

What is the proper way to create a resource group in Terraform for Azure that can be shared across different modules? I've been banging my head against this for a while and it's not working. I have a resource group in a separate folder. In my main.tf file I load the appservice and cosmosdb modules. I can't seem to figure out how to make the appservice and cosmosdb .tf files reference the resource group in that location. How is this done? Any suggestions would be greatly appreciated. Thank you.
In general, it is not recommended to have a module with a single resource like you have organized your code. However, in this situation, you would need to provide the exported resource attributes as an output for that module. In your resource_group module:
output "my_env_rg" {
  value       = azurerm_resource_group.rg
  description = "The my-env-rg Azure resource group."
}
Then, the output containing the map of exported resource attributes for the resource becomes accessible in a config module where you have declared the module. For example, in your root module config (presumably containing your main.tf referenced in the question):
module "azure_resource_group" {
  source = "./resource-group"
}
would make the output accessible with the namespace module.<MODULE NAME>.<OUTPUT NAME>. In this case, that would be:
module.azure_resource_group.my_env_rg
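The other modules can then take the resource group as an input variable and read its exported attributes; for example (the variable name is illustrative):

```hcl
# In ./appservice/variables.tf
variable "resource_group" {
  description = "The shared Azure resource group object."
}

# In the root main.tf
module "appservice" {
  source         = "./appservice"
  resource_group = module.azure_resource_group.my_env_rg
}
```

Inside the appservice module you can then use var.resource_group.name and var.resource_group.location wherever a resource group is required.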
There are two different kinds of sharing that require different solutions. You need to decide which kind you're looking for, because your example isn't very illustrative.
The first is where you want to create a pattern for things you will build more than once. The goal is to create many different instances of the same shape, each with different parameters. The canonical example is an RDS instance or an EC2 instance. Think of the Terraform module as a function: you execute it with different inputs in different places and use the different results independently. This is exactly what Terraform modules are for.
The second is where you want to make a thing and reference it in multiple places. The canonical example is a VPC. You don't want to make a new VPC for every autoscaling group - you want to reuse it.
Terraform doesn't have a good way of stitching the outputs from one set of resources into another set as inputs. Atlas does, and CloudFormation does as well. If you're not using those, you have to stitch them together yourself. I have always written a wrapper around Terraform that enables me to do this (and other things, like validation and authentication): save the outputs to a known place and then reference them as inputs later.
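One built-in option for this kind of stitching in later Terraform versions is the terraform_remote_state data source, which reads another configuration's outputs from its state file (the backend details below are assumptions for the sketch):

```hcl
# Read the outputs of the separately-managed VPC configuration.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tf-state" # hypothetical state bucket
    key    = "vpc/terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "aws_autoscaling_group" "app" {
  # Reuse the existing VPC's subnets instead of recreating them;
  # assumes the VPC config declares a subnet_ids output.
  vpc_zone_identifier = data.terraform_remote_state.vpc.outputs.subnet_ids
  min_size            = 1
  max_size            = 3
}
```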

How does one get the Tags of an AWS resource by its ARN?

I'm working on a service that I want to use to monitor tags and enforce tagging policies.
One planned feature is to detect resources that are tagged with a value that is not allowed for the respective key.
I can already list the ARNs of resources that have a certain tag key, and I am now looking to filter this list according to invalid values. To do that I want to query each resource's tags using its ARN and then filter out those that have invalid values in their tags.
I have
[{
  "ResourceArn": "arn:aws:ec2:eu-central-1:123:xyz",
  "ResourceType": "AWS::Service::Something"
}, ...]
and I want to do something like
queryTags("arn:aws:ec2:eu-central-1:123:xyz")
to get the tags of the specified resource.
I'm using Node.js, but I'm happy to use a solution based on the AWS CLI or anything else that can be used in a script.
You can do that through the AWS CLI.
For example, EC2 has the command describe-tags for listing the tags of resources, and I think other services have similar commands. It also has options that meet your need.
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html
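For EC2 specifically, that looks like the first command below. The Resource Groups Tagging API can also be queried with ARNs directly, which fits the ARN-based list in the question (the instance ID and ARN below are placeholders):

```shell
# EC2: tags are looked up by resource ID, not ARN
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=i-0123456789abcdef0"

# Cross-service: the Resource Groups Tagging API accepts ARNs
aws resourcegroupstaggingapi get-resources \
  --resource-arn-list "arn:aws:ec2:eu-central-1:123:xyz"
```

Both commands return the tags as key/value pairs in JSON, which a Node.js script can consume and filter against the allowed values.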

Puppet how to check that exec resource is not getting applied

I am working on a project that contains a module with nearly 100 exec resources, each guarded by the creates attribute to ensure the exec resource is idempotent.
On applying the Puppet class, it logs only those resources that are actually applied, not the resources that are skipped.
I am refactoring the module and want to get a list of all resources in it along with their status (applied, not applied, etc.), and, for any resource that is not applied, the reason why.
Is there any way to produce such a report in Puppet?
Thanks
