Terraform workspaces with different variables - Azure

Can we create a different variables file for each workspace?
How do we create variables for each environment when using Terraform workspaces?

Using a different variables file per workspace is not hard to achieve with open-source Terraform. For example, say you have three environments, namely dev, qa and prod. You would create three separate .tfvars files, i.e., dev.tfvars, qa.tfvars and prod.tfvars. Then, when running the plan and apply commands, you would do the following:
terraform plan -var-file=`terraform workspace show`.tfvars
or alternatively:
terraform plan -var-file=$(terraform workspace show).tfvars
where terraform workspace show prints the name of the workspace you are currently in. That means you would first have to switch to the desired workspace with terraform workspace select <workspace>. However, I urge you to reconsider using workspaces as a means of separating environments that share the same code. From the Terraform documentation about workspaces [1]:
In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.
Instead, use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend. In that case, the root module of each configuration will consist only of a backend configuration and a small number of module blocks whose arguments describe any small differences between the deployments.
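For illustration, here is a minimal sketch of the layout the documentation recommends; all directory, file and module names below are assumptions:

    environments/
      dev/
        main.tf    (backend config + one module block)
      prod/
        main.tf    (backend config + one module block)
    modules/
      app/
        main.tf    (the shared infrastructure)

    # environments/dev/main.tf (hypothetical)
    terraform {
      backend "azurerm" {
        resource_group_name  = "tfstate-dev-rg"
        storage_account_name = "tfstatedev"
        container_name       = "tfstate"
        key                  = "dev.terraform.tfstate"
      }
    }

    module "app" {
      source      = "../../modules/app"
      environment = "dev"
    }

Each environment then has its own backend and state, and terraform apply runs against one environment at a time from its own directory, with no workspace switching involved.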
[1] https://www.terraform.io/language/state/workspaces#when-to-use-multiple-workspaces

Related

Terraform: Multiple providers, each with its own statefile

I know it's possible to combine multiple providers in a single Terraform project.
Would it be possible, though, to declare a different statefile per provider? In our use case we will be deploying infrastructure with one part in the client's cloud provider account and the other part within our own cloud provider account.
We'd like to keep the statefiles separated (the client's TF state vs. our TF state), in order to allow smoother future migrations of either our part of the infra or the client's part of the infra.
We also know that this can be achieved using Terragrunt on top of Terraform, but for the moment we'd prefer to avoid introducing a new tool into our stack. Hence looking for a TF-only solution (if such exists).
The simplest solution would be to use separate folders for your and your client's infrastructure.
Is there a specific reason why you would want to keep them in one folder? Even if you need to share some values between them, you can easily read them using the terraform_remote_state data source.
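For example, here is a minimal sketch of reading an output from the client's state; the backend type, storage names and output name are all assumptions:

    data "terraform_remote_state" "client" {
      backend = "azurerm"
      config = {
        resource_group_name  = "client-tfstate-rg"
        storage_account_name = "clienttfstate"
        container_name       = "tfstate"
        key                  = "client.terraform.tfstate"
      }
    }

    # Any output the client configuration exports is then available as
    # data.terraform_remote_state.client.outputs.<name>, e.g.
    # data.terraform_remote_state.client.outputs.vnet_id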

Terraform conditional creation

I have a Terraform file that creates a resource group, storage account, storage shares, database and VMs on Azure. My proposed use case is that in production, once the resource group, storage account, storage shares and database are created, they should stay in place. However, there are cases where the VMs may need to be destroyed and re-created with different specs. I know that I can run the file once to create everything and then taint the VMs and re-create them with an apply, but that doesn't seem like the ideal method.
In regular use, changes to infrastructure should be made by changes to configuration, rather than by running imperative commands like terraform taint. If you change something about the configuration of a VM that requires it to be re-created, then the underlying provider should produce a plan to replace that object automatically, leaving others unchanged.
When you have different objects that need to change on different rhythms -- particularly when some of them are stateful objects like databases -- a good way to model this in Terraform is to decompose the problem into multiple separate Terraform configurations. You can use data sources in one configuration to retrieve information about objects created in another.
Splitting into at least two separate configurations reduces the risk of running terraform apply on either of them, because the scope of actions it can take is limited to the objects managed in that particular configuration. Although in principle you can carefully review the Terraform plan to see when it's planning to make a change that would be harmful, splitting into multiple configurations is extra insurance that many teams use to reduce the possible impact of human error.
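As a minimal sketch of that decomposition (resource names are assumptions): a "stable" configuration owns the resource group, storage and database, while a second configuration looks the resource group up with a data source and manages only the replaceable pieces:

    # In the VM configuration; a small resource is shown for brevity.
    data "azurerm_resource_group" "stable" {
      name = "app-prod-rg"  # created and managed by the other configuration
    }

    resource "azurerm_public_ip" "vm" {
      name                = "app-vm-ip"
      location            = data.azurerm_resource_group.stable.location
      resource_group_name = data.azurerm_resource_group.stable.name
      allocation_method   = "Static"
    }

Destroying and re-creating anything in this configuration can never touch the database, because the database is not in its state.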

What is the right way to create development and production environments for a network of servers in Azure?

I want to deploy multiple machines across various geographical regions to serve my application in Development and Production environments. I'm coming from Google Cloud Platform, where my solution would be to create two projects:
project-dev
project-prod
With that I have complete freedom of creating resources in any region/zone in either project/environment.
The closest thing to this I have found on Azure is Resource Groups, but those are tied to a specified region, which is not ideal for me. Is there a better way than creating a resource group in EACH region where I deploy resources, for both environments, as follows:
project-dev-east-us
project-dev-west-us
project-dev-west-eu
project-dev-east-as
project-prod-east-us
project-prod-west-us
project-prod-west-eu
project-prod-east-as
Resource groups are tied to a region, but the resources inside them are not, so you can have resources from multiple regions in a single resource group; there is no need for one group per region. A resource group is like a folder on a hard drive: it is just a way to logically organize things, nothing more.
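A minimal sketch of one group per environment holding resources from several regions (names, regions and address ranges are assumptions):

    resource "azurerm_resource_group" "dev" {
      name     = "project-dev"
      location = "eastus"  # only the group's metadata lives in this region
    }

    resource "azurerm_virtual_network" "dev_east_us" {
      name                = "vnet-dev-east-us"
      location            = "eastus"
      resource_group_name = azurerm_resource_group.dev.name
      address_space       = ["10.0.0.0/16"]
    }

    resource "azurerm_virtual_network" "dev_west_eu" {
      name                = "vnet-dev-west-eu"
      location            = "westeurope"  # different region, same group
      resource_group_name = azurerm_resource_group.dev.name
      address_space       = ["10.1.0.0/16"]
    }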

How can I merge Terraform state drift back into configuration files?

I have a Terraform script which provisions AWS infrastructure with (among other things) a set of security groups.
Over time, these security groups have had a bunch of extra IP ranges added through the AWS console, so there's a drift between the .tf files and the real-world state.
Running terraform plan shows these differences, and wants to roll back to Terraform's configured state.
What I'd like to achieve is to (programmatically) update the .tf files' security group definition to reflect these additional IP ranges, bringing Terraform up-to-date and (hopefully) increasing the chances it'll be used to manage state changes in future.
That is a pending feature in Terraform: https://github.com/hashicorp/terraform/issues/15608
That issue links two projects that can help:
https://github.com/dtan4/terraforming
https://gitlab.com/Nowaker/terraform-import-as-hcl
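In the meantime, one manual but scriptable approach is to let Terraform show you the drifted values and copy them back into the .tf files by hand; a minimal sketch:

    terraform refresh                       # sync the state with the real infrastructure
    terraform show -no-color > current.txt  # human-readable dump of the refreshed state
    # edit the security group blocks in the .tf files to match current.txt
    terraform plan                          # should now report no changes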

Using Terraform, how can I orchestrate deploy of many related/dependent stacks?

CloudFormation doesn't provide tools for orchestrating deployment of several/many stacks. For example, consider a microservice/layered architecture where many stacks need to be deployed together to replicate an environment. With CloudFormation you need to use a tool like stacker or something home-grown to solve the problem.
Does Terraform offer a multi-stack deployment orchestration solution?
Terraform operates at the directory level, so you can simply define both stacks in the same place, either as one big group of resources or as modules.
In Terraform, if you need to deploy multiple resources together at the same time then you would typically use a module and then present a smaller surface area for configuring that module. This also extends to creating modules of modules.
So if you had one module that deployed a service containing a load balancer, a service of some form (such as an ECS task definition; a Kubernetes pod, service or deployment definition; an AMI) and a database, and another module that contained a queue and another service, you could then create an over-arching module that contains both of those modules, so they are deployed at the same time with a smaller amount of configuration that may be shared between them.
Modules also allow you to define a remote source location, such as a Git repository or a Terraform registry (either the public one or a private one), which means the Terraform code for the modules doesn't have to be stored in the same place or checked out/cloned into the same directory.
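A minimal sketch of such an over-arching root module; the module names, Git URLs, tags and variables are all assumptions:

    # Root module that deploys two sub-stacks together.
    variable "environment" {
      type = string
    }

    module "web_service" {
      source      = "git::https://example.com/modules/web-service.git?ref=v1.2.0"
      environment = var.environment
    }

    module "worker" {
      source      = "git::https://example.com/modules/worker.git?ref=v0.9.1"
      environment = var.environment
      queue_name  = "jobs-${var.environment}"
    }

A single terraform apply against this root module then plans and deploys both sub-stacks, resolving any dependencies between them.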
