Enabling or disabling a module deployment based on a specific variable (Terraform/Datadog)

I'm looking to automate a specific part of a fairly complicated Terraform script that I have.
To make it a bit clearer: I have created a TF template that deploys the entire infrastructure into Azure, with App Services, a Storage account, Security groups, Windows-based VMs, and Linux-based VMs split between MongoDB and RabbitMQ. Inside my script I was able to automate the deployment to use the name of the application and create a Synthetic Test, and to pick a specific Datadog key per environment using a local value:
keyTouse = lower(var.environment) != "production" ? var.DatadogNPD : var.DatadogPRD
Right now the point that bothers me is the following.
Since we do not need Synthetic tests in non-production environments, I would like to use some sort of logic to not deploy Synthetic tests if var.environment is not "PRODUCTION".
To make this part more interesting, I also deploy multiple Synthetic Tests using "count" and "length", as shown below.
Inside main.tf:
module "Datadog" {
  source      = "./Datadog"
  webapp_name = [azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name]
}
And for the Datadog Synthetic Test:
resource "datadog_synthetics_test" "app_service_monitoring" {
  count   = length(var.webapp_name)
  type    = "api"
  subtype = "http"

  request_definition {
    method = "GET"
    url    = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
  }
}
Could you help me and suggest how I can enable or disable a module's deployment using a variable based on the environment?

Based on my understanding of the question, the change would have to be two-fold:
Add an environment variable to the module code
Use that variable to decide whether the synthetics test resource should be created
This translates to creating another variable in the module and then providing a value for it when calling the module. The final part is making the resource creation conditional on that value.
# module level variable
variable "environment" {
  type        = string
  description = "Environment in which to deploy resources."
}
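Since the resource below also references var.webapp_name, the module needs that variable declared as well; a minimal sketch, assuming it is a plain list of names:
variable "webapp_name" {
  type        = list(string)
  description = "Names of the App Services to monitor."
}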
Then, in the resource, you would add the following:
resource "datadog_synthetics_test" "app_service_monitoring" {
count = var.environment == "production" ? length(var.webapp_name) : 0
type = "api"
subtype = "http"
request_definition {
method = "GET"
url = "https://${element(var.webapp_name, count.index)}.azurewebsites.net/health"
}
}
And finally, in the root module:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = var.environment
}
The environment = var.environment line will work if you have also defined the environment variable in the root module. If not, you can always set it directly to the value you want:
module "Datadog" {
source = "./Datadog"
webapp_name = [ azurerm_linux_web_app.service1.name, azurerm_linux_web_app.service2.name ]
environment = "dev" # <--- or "production" or any other environment you have
}

Related

Trying to use nested Terraform modules from a private registry

I have an issue with Terraform and modules that call modules. Whenever I call my nested module the only output I get is:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
For context here is what I'm trying to do:
I have a simple module that creates an EC2 instance with certain options. (This module creates an aws_instance resource.)
I have a second module that calls my first and creates three of the above EC2 instances and sets a unique name for each node.
I have a Terraform project that simply calls the second module.
The two above modules are saved in my private GitLab registry. If I create a Terraform project that calls just the first module, it will create the EC2 instance. If I execute the second module as if it were just a normal project, three EC2 instances are created. My issue is when I call the second module, which then should call the first, but that is where it falls apart.
I'm wondering if this configuration is actually supported. The use case I'm trying to test is standing up a Kubernetes (K8s) cluster. I create a module that defines a compute resource. I then create a second module that calls the compute module with the options needed for a K8s cluster. Finally, I have a project that defines how many K8s nodes are needed and in which region/availability zones.
I call this nested modules, but in all of my searching, "nested modules" seems to refer only to modules and sub-modules that live inside the Terraform project that is calling them.
Any help here would be greatly appreciated. Here is how I create the resources and call the modules. These are simple examples I'm using for testing, not anything like the K8s modules I'll be creating. I'm just trying to figure out if I can get nested private-registry modules working.
First Module (compute)
resource "aws_instance" "poc-instance" {
ami = var.ol8_ami
key_name = var.key_name
monitoring = true
vpc_security_group_ids = [
data.aws_security_group.internal-ssh-only.id,
]
root_block_device {
delete_on_termination = true
encrypted = true
volume_type = var.ebs_volume_type
volume_size = var.ebs_volume_size
}
availability_zone = var.availability_zone
subnet_id = var.subnet_id
instance_type = var.instance_type
tags = var.tags
metadata_options {
http_tokens = "required"
http_endpoint = "disabled"
}
}
Second Module (nested-cluster)
module "ec2_instance" {
source = "gitlab.example.com/terraform/compute/aws"
for_each = {
0 = "01"
1 = "02"
2 = "03"
}
tags = {
"Name" = "poc-nested_ec2${each.value}"
"service" = "terraform test"
}
}
Terraform Project
module "sample_cluster" {
source = "gitlab.example.com/terraform/nested-cluster/aws"
}

Terraform Digitalocean: move resources under project

I want to create N droplets on DigitalOcean and assign them to a DigitalOcean project (that does not yet exist).
First, I'm creating the project and assigning the droplets to it using the resources attribute. I'm also creating the two droplets.
resource "digitalocean_project" "project" {
name = "playground"
count = "2"
description = "Description"
purpose = "Description purposes"
environment = "Development"
resources = [
digitalocean_droplet.myserver[count.index].urn
]
}
resource "digitalocean_droplet" "myserver" {
count = "2"
name = "server-${count.index}"
image = "ubuntu-18-04-x64"
size = "1gb"
region = "${var.region}"
}
The droplets are created successfully. One droplet is moved to the newly created project, while the other droplet remains in my default project.
The error message below is clear: it tries to create a second project with the same name.
Error: Error creating Project: POST https://api.digitalocean.com/v2/projects: 409 name is already in use (duplicate)

  on create_server.tf line 1, in resource "digitalocean_project" "project":
   1: resource "digitalocean_project" "project" {
How can I assign the two droplets to my project (which I want to create dynamically)?
If you want one project with multiple resources then you need to only create a single project and assign the list of resources to it. To do this you'll want to remove the count parameter from the digitalocean_project resource (this would create multiple projects) and then use the splat expression of the digitalocean_droplet resources to pass a list of the resources to the project.
So you want something that looks a little like this:
resource "digitalocean_project" "project" {
name = "playground"
description = "Description"
purpose = "Description purposes"
environment = "Development"
resources = digitalocean_droplet.myserver[*].urn
}
resource "digitalocean_droplet" "myserver" {
count = "2"
name = "server-${count.index}"
image = "ubuntu-18-04-x64"
size = "1gb"
region = var.region
}
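With count = 2 on the droplets, the splat expression digitalocean_droplet.myserver[*].urn evaluates to a list holding both droplets' URNs, so the single project receives both resources and gains an implicit dependency on the droplets.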

How are azure terraform variables applied for multiple tf files processed together?

For Azure Terraform:
If a variable is declared in one tf file, will its value be applied to the same variable in other tf files processed together? Why is there a default value associated with a variable statement?
If I made a tfvars file:
cidrs = [ "10.0.0.0/16", "10.1.0.0/16" ]
Can cidrs be used as below for the subnet id? I don't really understand the usage syntax:
subnet_id = "${azurerm_subnet.subnet.id}"
subnet id = cidr
What exactly is the "Default" function when used with variables? See below:
variable "prefix" {
type = "string"
default = "my"
}
variable "tags" {
type = "map"
default = {
Environment = "Terraform GS"
Dept = "Engineering"
}
}
variable "sku" {
default = {
westus = "16.04-LTS"
eastus = "18.04-LTS"
}
}
There are several questions there. The easy one first: default.
The variable declaration can also include a default argument. If
present, the variable is considered to be optional and the default
value will be used if no value is set when calling the module or
running Terraform. The default argument requires a literal value and
cannot reference other objects in the configuration.
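In other words, a default makes the variable optional. A small illustrative sketch (the resource and names are placeholders, kept in the same pre-0.12 syntax as the question):
variable "prefix" {
  type    = "string"
  default = "my"
}

# With no value supplied, the default applies: name = "my-rg".
# A value passed on the command line overrides the default:
#   terraform apply -var 'prefix=prod'   # name = "prod-rg"
resource "azurerm_resource_group" "example" {
  name     = "${var.prefix}-rg"
  location = "westus"
}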
For the other question, refer to this example: https://www.terraform.io/docs/providers/azurerm/r/subnet.html#attributes-reference
So in short, to use an existing subnet's CIDR, you need to refer to it like this:
azurerm_subnet.%subnetname%.address_prefix
The subnet name, however, cannot be equal to a CIDR, because a / is not allowed inside the name. You can use something like 10.0.0.0-24 instead.
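A minimal sketch of reading that attribute back, here via a data source lookup with placeholder names (attribute name per the azurerm docs linked above):
data "azurerm_subnet" "subnet" {
  name                 = "10.0.0.0-24"
  virtual_network_name = "example-vnet"
  resource_group_name  = "example-rg"
}

output "subnet_cidr" {
  value = "${data.azurerm_subnet.subnet.address_prefix}"
}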

Efficient variable validation with Terraform

Is there an efficient way to apply validation logic to variables used in a terraform run?
Specifically I want to check the length and casing of some variables. The variables are a combination of ones declared in tfvars files, in variables.tf files, and collected during runtime by terraform.
Thanks.
Custom Validation Rules
Terraform document - Input Variables - Custom Validation Rules
Results
Failure case
provider "aws" {
  profile = "default"
}

terraform {
  experiments = [variable_validation]
}

## Custom Validation Rules
variable "test" {
  type        = string
  description = "Example to test the case and length of the variable"
  default     = "TEsT"

  validation {
    condition     = length(var.test) > 4 && upper(var.test) == var.test
    error_message = "Validation condition of the test variable was not met."
  }
}
Execution
$ terraform plan

Warning: Experimental feature "variable_validation" is active

  on main.tf line 5, in terraform:
   5:   experiments = [variable_validation]

Experimental features are subject to breaking changes in future minor or patch
releases, based on feedback.
If you have feedback on the design of this feature, please open a GitHub issue
to discuss it.

Error: Invalid value for variable # <---------------------------

  on main.tf line 9:
   9: variable "test" {

Validation condition of the test variable was not met.
This was checked by the validation rule at main.tf:14,3-13.
Pass case
terraform {
  experiments = [variable_validation]
}

## Custom Validation Rules
variable "test" {
  type        = string
  description = "Example to test the case and length of the variable"
  default     = "TESTED"

  validation {
    condition     = length(var.test) > 4 && upper(var.test) == var.test
    error_message = "Validation condition of the test variable was not met."
  }
}
Execution
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
Others
Alternatively, you could use a null_resource with local-exec to implement the logic in a shell script, or use the external provider to send the variable to an external program for validation.
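A rough sketch of the null_resource variant, enforcing the same "longer than 4 characters, all upper case" rule as above (the variable name is taken from the earlier example; note this only fails at apply time, when the provisioner runs, not at plan time):
resource "null_resource" "validate_test" {
  provisioner "local-exec" {
    # Fail the run unless var.test is at least 5 upper-case letters.
    command = "echo '${var.test}' | grep -qE '^[A-Z]{5,}$' || { echo 'var.test failed validation' >&2; exit 1; }"
  }
}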
This isn't something you can currently do directly with Terraform but I find it easier to just mangle the input variables to the required format if necessary.
As an example, the aws_lb_target_group resource takes a protocol parameter that currently requires it to be uppercase, instead of automatically upper-casing things and suppressing the diff like the aws_lb_listener resource does for its protocol (or even the protocol in the health_check block).
To solve this I just use the upper function when creating the resource:
variable "protocol" {
default = "http"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_lb_target_group" "test" {
name = "tf-example-lb-tg"
port = 80
protocol = "${upper(var.protocol)}"
vpc_id = "${aws_vpc.main.id}"
}
As for checking length, I just substring things to make them the right length. I currently do this for ALBs, as the name has a max length of 32 and I have GitLab CI create review environments for some services that get a name based on the slug of the Git branch name, so I have little control over the length that is used.
variable "environment" {}
variable "service_name" {}
variable "internal" {
default = true
}
resource "aws_lb" "load_balancer" {
name = "${substr(var.environment, 0, min(length(var.environment), 27 - length(var.service_name)))}-${var.service_name}-${var.internal ? "int" : "ext"}"
internal = "${var.internal}"
security_groups = ["${aws_security_group.load_balancer.id}"]
subnets = ["${data.aws_subnet_ids.selected.ids}"]
}
With the above, any combination of environment and service name lengths leads to the environment/service-name pair being trimmed to at most 27 characters, which leaves room for the extra characters that I want to specify.
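To see why 27 is the limit: the final name is environment + "-" + service_name + "-" + "int"/"ext". Capping the environment/service-name pair at 27 characters leaves exactly 5 for the two hyphens and the three-character suffix, hitting the 32-character ALB maximum. For example, with service_name = "api", the environment slug is trimmed to 27 - 3 = 24 characters, and 24 + 1 + 3 + 1 + 3 = 32.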
I got inspired by this conversation and found the following already existing provider:
https://github.com/craigmonson/terraform-provider-validate

Terraform use case to create multiple almost identical copies of infrastructure

I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example, you have multiple business units inside a big organization and you want to build out the same basic networks. Or you want an easy way for a developer to spin up the stack they're working on. The only difference between "terraform apply" invocations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable.
Is anyone else using a system like this, and if so, how do you manage the state files?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it.
Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
To create the infrastructure for multiple business units, you could use the same module multiple times:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
}
If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf):
variable "business_unit_name" {
description = "The name of the business unit"
}
Now you can set this variable to a different value each time you use the module:
module "business-unit-a" {
source = "/terraform/modules/common-infra"
business_unit_name = "a"
}
module "business-unit-b" {
source = "/terraform/modules/common-infra"
business_unit_name = "b"
}
module "business-unit-c" {
source = "/terraform/modules/common-infra"
business_unit_name = "c"
}
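Note that all three module instances live in the same root configuration, so a single terraform apply manages all of them and they share one state file. If each business unit should have its own state file, you need a separate root configuration (or separate backend configuration) per unit, as in the wrapper-script approach in the next answer.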
For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
There are two ways of doing this that jump to mind.
Firstly, you could go down the route of using a single Terraform configuration folder and simply passing in a variable when running Terraform (either via the command line or through environment variables). You'd also want a wrapper script that calls Terraform and configures your state settings so that they differ per business unit.
This might end up with something like this:
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
This creates an EC2 instance and an RDS instance. You would then call it with something like this:
#!/bin/bash
if [ "$#" -ne 1 ]; then
  echo "Illegal number of parameters - specify business unit as positional parameter"
  exit 1
fi

business_unit=$1

terraform remote config -backend="s3" \
  -backend-config="bucket=${business_unit}" \
  -backend-config="key=state"
terraform remote pull
terraform apply -var "BUSINESS_UNIT=${business_unit}"
terraform remote push
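The terraform remote commands date from very old Terraform releases; on current versions the rough equivalent, assuming the configuration declares an empty backend "s3" {} block inside a terraform block, is to pass the per-unit settings at init time:
terraform init -backend-config="bucket=${business_unit}" -backend-config="key=state"
terraform apply -var "BUSINESS_UNIT=${business_unit}"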
As an alternative route you might want to consider using modules to wrap your Terraform configuration.
So instead you might have something that now looks like:
web-instance/main.tf
variable "BUSINESS_UNIT" {}
variable "ami" { default = "ami-123456" }
resource "aws_instance" "web" {
ami = "${var.ami}"
instance_type = "t2.micro"
tags {
Name = "web"
Business_Unit = "${var.BUSINESS_UNIT}"
}
}
db-instance/main.tf
variable "BUSINESS_UNIT" {}
resource "aws_db_instance" "default" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t2.micro"
name = "${var.BUSINESS_UNIT}"
username = "foo"
password = "bar"
db_subnet_group_name = "db_subnet_group"
parameter_group_name = "default.mysql5.6"
}
And then you might have different folders that call these modules per business unit:
business-unit-1/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-1" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
and
business-unit-2/main.tf
variable "BUSINESS_UNIT" { default = "business-unit-2" }
module "web_instance" {
source = "../web-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
module "db_instance" {
source = "../db-instance"
BUSINESS_UNIT = "${var.BUSINESS_UNIT}"
}
You still need a wrapper script to manage the state configuration as before, but going this route enables you to provide a rough template in your modules and then hard-code certain extra configuration per business unit, such as the instance size or the number of instances built for them.
This is a rather popular use case. To achieve this, you can let developers pass a variable from the command line or from a tfvars file into the resource to make the resources unique:
main.tf:
resource "aws_db_instance" "db" {
identifier = "${var.BUSINESS_UNIT}"
# ... read more in docs
}
$ terraform apply -var 'BUSINESS_UNIT=unit_name'
PS: We do this often to provision infrastructure for a specific git branch name, and since all resources are identifiable and live in separate tfstate files, we can safely destroy them when we no longer need them.
