Ideal Terraform workspace project structure

I'd like to set up Terraform to manage dev/stage/prod environments. The infrastructure is the same in all environments, but the variables differ in each environment.
What does an ideal Terraform project structure look like now that workspaces have been introduced in Terraform 0.10? How do I reference the workspace when naming/tagging infrastructure?

I wouldn't recommend using workspaces (previously 'environments') for static environments because they add a fair bit of complexity and are harder to keep track of.
You could get away with using a single folder structure for all environments, using workspaces to separate the environments and conditional values based on the workspace to set the differences, as sketched below. In practice (especially with more than two environments, which leads to nested ternary statements) you'll probably find this difficult to manage.
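To illustrate, a minimal sketch of that conditional style, with made-up instance sizes:
locals {
  instance_type = "${terraform.workspace == "prod" ? "m4.large" : (terraform.workspace == "stage" ? "t2.medium" : "t2.micro")}"
}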
Instead, I'd still advocate separate folders for every static environment, using symlinks to keep all your .tf files identical across environments and a terraform.tfvars file to provide the differences at each environment.
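For instance, each environment folder would hold symlinks to the same .tf files plus its own terraform.tfvars; the variable names and values here are purely illustrative:
# dev/terraform.tfvars
instance_count = 1
instance_type  = "t2.micro"

# prod/terraform.tfvars
instance_count = 3
instance_type  = "m4.large"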
I would recommend workspaces for dynamic environments such as short-lived review/lab environments, as this allows for a lot of flexibility. I'm currently using them to create review environments in GitLab CI so that every branch can have an optionally deployed review environment for manual integration or exploratory testing.

In the old world you might have passed in the variable 'environment' when running Terraform, and interpolated it in your .tf files as "${var.environment}".
When using workspaces there is no need to pass in an environment variable; you just make sure you are in the correct workspace and then interpolate inside your .tf files with "${terraform.workspace}".
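That also covers the naming/tagging part of the question; a minimal sketch, where the aws_instance and its ami variable are just placeholders:
resource "aws_instance" "web" {
  ami           = "${var.ami_id}" # hypothetical variable
  instance_type = "t2.micro"

  tags = {
    Name        = "web-${terraform.workspace}"
    Environment = "${terraform.workspace}"
  }
}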
As for how you'd manage all of the variables, I'd recommend using a map variable, like so:
variable "vpc_cidr" {
type = "map"
default = {
dev = "172.0.0.0/24"
preprod = "172.0.0.0/24"
prod = "172.0.0.0/24"
}
}
This would then be referenced in an aws_vpc resource using a lookup:
"${lookup(var.vpc_cidr, terraform.workspace)}"
The process of creating and selecting workspaces is pretty easy:
terraform workspace
Usage: terraform workspace

Create, change and delete Terraform workspaces.

Subcommands:
    show      Show the current workspace name.
    list      List workspaces.
    select    Select a workspace.
    new       Create a new workspace.
    delete    Delete an existing workspace.
So to create a new workspace for pre-production you'd do the following:
terraform workspace new preprod
If you then ran a plan, you'd see that there are no resources. In the backend, this creates a new folder to manage the state for 'preprod'.

Related

Including Terraform Configuration from Another Gitlab Project

I have a couple of apps that use the same GCP project. There are dev, stage, and prod projects, but they're basically the same apart from project IDs and project numbers. I would like to have a repo in GitLab, e.g. config, where I keep these IDs in dev.tfvars, stage.tfvars, and prod.tfvars files. Currently each app's repo has a config/{env}.tfvars directory, which is really repetitive.
Googling for importing or including Terraform resources just gets me results about Terraform state, so it hasn't been fruitful.
I've considered:
Using a group-level GitLab variable as a key=val env file, having my gitlab-ci YAML source the correct environment's file, and then passing in what I need using -var="key=value" in my plan and apply commands.
Creating a Terraform module that uses either TF_WORKSPACE or an input to return the correct variables (see the sketch below). I think this may be possible, but I'm new to TF, so I'm not sure how to return data back from a module, or whether this kind of "side effects only" solution is an abusive workaround for something that has a better approach.
Is there any way to include Terraform variables from another GitLab project?
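On the module idea specifically: a module returns data through output blocks, which the caller reads as module.<name>.<output>. A minimal sketch, with entirely hypothetical names and IDs:
# modules/env-config/main.tf
variable "environment" {}

variable "project_ids" {
  type = map(string)
  default = {
    dev   = "my-project-dev" # hypothetical IDs
    stage = "my-project-stage"
    prod  = "my-project-prod"
  }
}

output "project_id" {
  value = var.project_ids[var.environment]
}

# app repo
module "env_config" {
  source      = "./modules/env-config" # could also be a git::https://gitlab.com/... source
  environment = terraform.workspace
}

# reference it elsewhere as module.env_config.project_id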

Custom environments in Terraform on AWS

I need to create a setup with Terraform and AWS where I could have the traditional Dev, Stage and Prod environments. I also need to allow creation of custom copies of individual services in those environments for developers to work on. I will use workspaces for Dev, Stage and Prod, but I can't figure out how to achieve these custom copies of services in environments.
An example would be a lambda in the dev environment that would be called "test-lambda-dev". It would be deployed with the dev environment. A developer with the initials "de" might need to work on some changes in this lambda's code and would need a custom version of it deployed alongside the regular one, so he would need to create a "test-lambda-dev-de".
I have tried to solve it by introducing a suffix after the resource's name that would normally be empty, where someone could provide a value signalling that a custom version is needed. Terraform destroyed the regular resource and replaced it with the custom one, so it did not work as intended.
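For reference, a rough sketch of the suffix approach described (all names hypothetical, required lambda arguments omitted):
variable "suffix" {
  default = ""
}

resource "aws_lambda_function" "test" {
  function_name = "test-lambda-${terraform.workspace}${var.suffix != "" ? "-${var.suffix}" : ""}"
  # role, runtime, handler, filename, etc. omitted
}
Because this renames the single resource instance rather than adding a second one, Terraform plans a replacement; a true side-by-side copy needs an additional resource instance (for example via count or for_each keyed on the suffix).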
Is there a way to get it to work? If not, how can developers work on resources without temporarily breaking the environment?

Is there a way with Terraform to depend on another Terraform git repo?

I have 2 projects (git repos). They each have their own Terraform files and state.
Now I want these 2 projects to depend on 1 database.
I would like to create a common repo with Terraform files to create that database, and make my 2 initial projects depend on it.
I know that with a monorepo and terragrunt I can do something like:
dependency "vpc" {
config_path = "../vpc"
}
but is there a way to do this with multiple git repos (no monorepo)?
I'm guessing it can't be done, and I suspect that there would be a problem with the multiple states.
Yes: you can use the state of the common Terraform project. Make sure the common Terraform outputs the database ID (or whatever else you need) so that you can reference it.
Then, inside your child repos, use the terraform_remote_state data source.
More info here: https://www.terraform.io/docs/language/state/remote-state-data.html
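A minimal sketch of both sides, assuming an S3 backend and hypothetical bucket/key names:
# common repo: expose what the child projects need
output "database_id" {
  value = aws_db_instance.main.id # hypothetical resource
}

# child repos: read the common project's remote state
data "terraform_remote_state" "common" {
  backend = "s3" # whichever backend the common repo uses
  config = {
    bucket = "my-terraform-state" # hypothetical
    key    = "common/terraform.tfstate"
    region = "eu-west-1"
  }
}

# reference it as data.terraform_remote_state.common.outputs.database_id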

Separated environment configurations into different folders and now Terraform wants to create all resources as if it doesn't know they exist

Terraform v0.13.5
AzureRM v2.44.0
I use the AzureRM backend to store the tfstate file. My initial Terraform project had a master main.tf with some modules, and I used workspaces to separate the different environments (dev/qa). This created the tfstate files in a single container, appending the environment to the name of each tfstate file. I recently changed the file structure of my Terraform project so that each environment has its own folder instead, where I change directory to that environment's folder and run terraform apply.
But now Terraform wants to create all of the resources as if they don't exist, even though my main.tf file is the same. It seems like I'm missing something here: I don't really need workspaces any more, but I need Terraform to use my existing tfstate files in Azure so it knows the resources already exist.
What am I missing here?
Okay, I feel silly. I didn't realize that I never ran terraform workspace select when I went to the new folder. I guess what I should do is figure out how to do away with workspaces.
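In other words, after changing into an environment's folder, the matching workspace still has to be selected before the existing state is found; roughly (folder and workspace names hypothetical):
cd qa
terraform init
terraform workspace select qa
terraform plan # should now show no changes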

How to create a file with a timestamp in Terraform?

I use the below configuration to generate a filename with a timestamp, which will be used in many different places.
variable "s3-key" {
default = "deploy-${timestamp()}.zip"
}
but got an Error: Function calls not allowed error. How can I use a timestamp for a variable?
Variable defaults in particular are constant values, but local values allow for arbitrary expressions derived from variables:
variable "override_s3_key" {
default = ""
}
locals {
s3_key = var.override_s3_key != "" ? var.override_s3_key : "deploy-${timestamp()}.zip"
}
You can then use local.s3_key elsewhere in the configuration to access this derived value.
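For instance, a sketch of consuming it when uploading the artifact (bucket name and source path hypothetical):
resource "aws_s3_bucket_object" "deploy" {
  bucket = "my-deploy-bucket" # hypothetical
  key    = local.s3_key
  source = "build/deploy.zip" # hypothetical
}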
With that said, Terraform is intended for creating long-running infrastructure objects, and so including timestamps is often (but not always!) indicative of a design problem. In this particular case, it looks like you're using Terraform to create application artifacts for deployment, which is something Terraform can do but is often not the best tool for.
Instead, consider splitting your build and deploy into two separate steps, where the build step is implemented using any separate tool of your choice -- possibly even just a shell script -- and produces a versioned (or timestamped) artifact in S3. Then you can parameterize your Terraform configuration with that version or timestamp to implement the "deploy" step:
variable "artifact_version" {}
locals {
artifact_s3_key = "deploy-${var.artifact_version}.zip"
}
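The deploy step then just passes in whatever version the build step produced, for example:
terraform apply -var="artifact_version=20210503120000"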
An advantage of this separation is that by separating the versioned artifacts from the long-lived Terraform objects you will by default retain the historical artifacts, and so if you deploy and find a problem you can choose to switch back to a known-good existing artifact by just re-running the deploy step (Terraform) with an older artifact version. If you instead manage the artifacts directly with Terraform, Terraform will delete your old artifact before creating a new one, because that's Terraform's intended usage model.
There's more detail on this model in the HashiCorp guide Serverless Applications with AWS Lambda and API Gateway. You didn't say that the .zip file here is destined for Lambda, but a similar principle applies for any versioned artifact. This is analogous to the workflow for other deployment models, such as building a separate Docker image or AMI for each release; in each case, Terraform is better employed for the process of selecting an existing artifact built by some other tool rather than for creating those artifacts itself.
