I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all of the Kubernetes cluster resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to recreate is similar to what you would get if you used triggers with null_resources. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They will be set automatically because you have cluster_id = "${google_container_cluster.primary.id}" in your second resource.
In case you need to set a manual dependency, you can use the depends_on meta-argument.
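For illustration, a minimal sketch of that explicit form using the resources from the question, assuming Terraform 0.12+ syntax (usually redundant here, since referencing the cluster's id already creates the implicit dependency):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit dependency on the cluster; in the question's config this is
  # already implied by the reference to google_container_cluster.primary.id.
  depends_on = [google_container_cluster.primary]
}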
I have defined a resource to provision Databricks workspaces on Azure using Terraform as follows; it consumes a list of inputs from the tfvars file (one entry per workspace) and provisions them.
resource "azurerm_databricks_workspace" "workspace" {
for_each = { for r in var.databricks_workspace_list : r.workspace_nm => r}
name = each.key
resource_group_name = each.value.resource_group_name
location = each.value.location
sku = "standard"
tags = {
Environment = "Dev"
}
}
I am trying to create an additional resource as below:
resource "databricks_instance_pool" "smallest_nodes" {
instance_pool_name = "Smallest Nodes"
min_idle_instances = 0
max_capacity = 300
node_type_id = data.databricks_node_type.smallest.id // data block is defined
idle_instance_autotermination_minutes = 10
}
To create the instance pool, I need to pass the workspace ID in the databricks provider block as below:
provider "databricks" {
azure_client_id= *******
azure_client_secret= *******
azure_tenant_id= *******
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
}
But when I do terraform plan, it fails with the below error:
Missing resource instance key
azure_workspace_resource_id = azurerm_databricks_workspace.workspace.id
Because azurerm_databricks_workspace.workspace has "for_each" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
azurerm_databricks_workspace.workspace[each.key]
I couldn't use for_each in the provider block, and I am also not able to find a way to index the workspace ID in the provider block.
Appreciate your inputs.
TF version: 0.13
AzureRM provider: 3.10.0
Databricks provider: 0.5.7
The problem is that you can create multiple workspaces when you're using for_each in the azurerm_databricks_workspace resource, but your provider block is trying to refer to a "generic" resource instance, so Terraform complains.
The solution here would be either:
Remove for_each if you're creating just one workspace, or
Instead of azurerm_databricks_workspace.workspace.id, refer to azurerm_databricks_workspace.workspace[<name>].id, where <name> is the key of the specific Databricks workspace instance from your list, as sketched below.
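For example, a minimal sketch assuming one of the entries in var.databricks_workspace_list has workspace_nm = "dev-workspace" (the workspace key and the credential variables are illustrative, not from the question):

provider "databricks" {
  azure_client_id     = var.client_id
  azure_client_secret = var.client_secret
  azure_tenant_id     = var.tenant_id

  # "dev-workspace" is a hypothetical workspace_nm key from the for_each map;
  # use the key of the specific instance this provider should talk to.
  azure_workspace_resource_id = azurerm_databricks_workspace.workspace["dev-workspace"].id
}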
P.S. Your databricks_instance_pool resource doesn't have an explicit depends_on, so the operation will fail with an authentication error as described here.
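A minimal sketch of what adding that could look like, reusing the instance pool from the question:

resource "databricks_instance_pool" "smallest_nodes" {
  instance_pool_name                    = "Smallest Nodes"
  min_idle_instances                    = 0
  max_capacity                          = 300
  node_type_id                          = data.databricks_node_type.smallest.id
  idle_instance_autotermination_minutes = 10

  # Wait for the workspace to exist before the databricks provider
  # tries to authenticate against it.
  depends_on = [azurerm_databricks_workspace.workspace]
}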
My use case: I need to create an AKS cluster with the Terraform azurerm provider, and then set up a Network Watcher flow log for its NSG.
Note that, like many other AKS resources, the corresponding NSG is not controlled by Terraform. Instead, it's created by Azure indirectly (and asynchronously), so I treat it as data, not a resource.
Also note that Azure will create and use its own NSG even if the AKS cluster is created with a custom-created VNet.
Depending on the particular region and the Azure API gateway, my team has seen up to a 40-minute delay between the AKS cluster being created and the NSG resource becoming visible in the node pool resource group.
If I don't want my Terraform config to fail, I see three options:
Run a CLI script that waits for the NSG, wrap it in a null_resource, and depend on it
Implement the same with a custom provider
Use a really ugly workaround that implements a retry pattern - below is 10 attempts at 30 seconds each:
data "azurerm_resources" "my_nsg_1" {
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
resource "time_sleep" "my_nsg_sleep1" {
count = length(data.azurerm_resources.my_nsg_1.resources) == 0 ? 1 : 0
create_duration = "30s"
triggers = {
ts = timestamp()
}
}
data "azurerm_resources" "my_nsg_2" {
depends_on = [time_sleep.my_nsg_sleep1]
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
resource "time_sleep" "my_nsg_sleep2" {
count = length(data.azurerm_resources.my_nsg_1.resources) == 0 ? 1 : 0
create_duration = length(data.azurerm_resources.my_nsg_2.resources) == 0 ? "30s" : "0s"
triggers = {
ts = timestamp()
}
}
...
data "azurerm_resources" "my_nsg_11" {
depends_on = [time_sleep.my_nsg_sleep10]
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
// Now azurerm_resources.my_nsg_11 is OK as long as the NSG was created and became visible to the current API Gateway within 5 minutes.
Note that Terraform doesn't allow repeating resources via "for_each" or "count" at anything bigger than the individual resource level. In addition, because it resolves dependencies during the static phase, two resource lists created with "count" or "for_each" cannot depend on each other at the individual element level - you can only have one whole list depend on the other, obviously with no circular dependencies allowed.
E.g. my_nsg[count.index] cannot depend on my_nsg_delay[count.index-1] while my_nsg_delay[count.index] depends on my_nsg[count.index].
Hence this horrible non-DRY antipattern.
Is there a better declarative solution so that I don't have to involve a custom provider or a script?
With the eksctl CLI one can create an EKS cluster of type Fargate, which creates nodes of instance type "Fargate".
How can the same be achieved with Terraform? The cluster can be created with node groups, but an instance type of "Fargate" does not seem to exist (although eksctl creates it like that):
node_groups = {
  eks_nodes = {
    desired_capacity = 3
    max_capacity     = 3
    min_capacity     = 3
    instance_type    = "Fargate"
  }
}
Thanks!
Have you tried defining a Fargate profile first?
You must define at least one Fargate profile that specifies which pods should use Fargate when they are launched. You also need to create a pod execution role, because the components running on the Fargate infrastructure need to make calls to AWS APIs on your behalf to do things like pull container images from Amazon ECR or route logs to other AWS services.
Terraform code for an AWS EKS Fargate profile looks like the following:
resource "aws_eks_fargate_profile" "default" {
cluster_name = var.cluster_name
fargate_profile_name = var.fargate_profile_name
pod_execution_role_arn = join("", aws_iam_role.default.arn)
subnet_ids = var.subnet_ids
tags = var.tags
selector {
namespace = var.kubernetes_namespace
labels = var.kubernetes_labels
}
}
Make sure you're using the aws_eks_fargate_profile resource to create the EKS Fargate profile.
Terraform code for the Fargate pod execution role looks like the following:
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["eks-fargate-pods.amazonaws.com"]
}
}
}
resource "aws_iam_role" "default" {
name = var.role_name
assume_role_policy = join("", data.aws_iam_policy_document.assume_role.json)
tags = var.tags
}
resource "aws_iam_role_policy_attachment" "amazon_eks_fargate_pod_execution_role_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
role = join("", aws_iam_role.default.name)
}
I suggest you check some examples from communities like Cloudposse.
I'll give you the complete example of a Fargate profile and an EKS node group; it seems to be the solution you need to deploy at this moment.
P.S. Try to read how they built the modules; I think you'll reach your goal quickly.
I hope this is useful for you and other users.
I am having an issue with Terraform EKS tagging and can't seem to find a workable solution to tag all the VPC subnets when a new cluster is created.
To provide some context: we have one AWS VPC where we deploy several EKS clusters into the subnets. We do not create the VPC or subnets as part of the EKS cluster creation, so the Terraform code that creates a cluster doesn't get to tag the existing subnets and VPC. Although EKS will add the required tags, they are automatically removed the next time we run terraform apply on the VPC.
My attempted workaround is to provide a terraform.tfvars file within the VPC configuration as follows:
eks_tags = [
  "kubernetes.io/cluster/${var.cluster-1}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
  "kubernetes.io/cluster/${var.cluster-2}", "shared",
]
Then within the VPC and subnet resources, we do something like:
resource "aws_vpc" "demo" {
cidr_block = "10.0.0.0/16"
tags = "${
map(
${var.eks_tags}
)
}"
}
However, the above does not seem to work. I have tried various Terraform 0.11 functions from https://www.terraform.io/docs/configuration-0-11/interpolation.html but none of them help.
Has anyone been able to resolve this issue?
The idea that we always create a new VPC and subnets for every EKS cluster is wrong. Obviously, there has to be a way to tag existing VPC and subnet resources using Terraform?
You can now use the aws provider's ignore_tags attribute so that the tags made with the aws_ec2_tag resource do not get removed the next time the VPC module is applied.
For example, the provider becomes:
provider "aws" {
profile = "terraform"
region = "us-west-1"
// This is necessary so that tags required for eks can be applied to the vpc without changes to the vpc wiping them out.
// https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging
ignore_tags {
key_prefixes = ["kubernetes.io/"]
}
}
You can then leverage the aws_ec2_tag resource like so in your EKS module, without worrying about the tags getting removed the next time the VPC module is applied:
/**
 * Start of resource tagging logic to update the provided VPC and its subnets with the necessary tags for EKS to work.
 * The toset() function is actually multiplexing the resource block, one for every item in the set. It is what allows
 * for setting a tag on each of the subnets in the VPC.
 */
resource "aws_ec2_tag" "vpc_tag" {
  resource_id = data.terraform_remote_state.vpc.outputs.vpc_id
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "private_subnet_cluster_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

resource "aws_ec2_tag" "public_subnet_cluster_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
  resource_id = each.value
  key         = "kubernetes.io/cluster/${var.cluster_name}"
  value       = "shared"
}

/**
 * These tags have been decoupled from the EKS module and moved to the more appropriate VPC module.
 */
resource "aws_ec2_tag" "private_subnet_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
  resource_id = each.value
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}

resource "aws_ec2_tag" "public_subnet_tag" {
  for_each    = toset(data.terraform_remote_state.vpc.outputs.public_subnets)
  resource_id = each.value
  key         = "kubernetes.io/role/elb"
  value       = "1"
}
In our case we have separate scripts to provision the VPC and networking resources, and there we are not adding EKS-specific tags.
For EKS cluster provisioning we have separate scripts which automatically update/add tags on the cluster.
So in the VPC scripts' provider.tf file we add the condition below, so that those scripts will not remove these tags and everything works properly.
provider "aws" {
region = "us-east-1"
ignore_tags {
key_prefixes = ["kubernetes.io/cluster/"]
}
}
This problem will always exist when there are two pieces of code with different state files trying to act on the same resource.
One way to solve this is to re-import the VPC resource into your VPC state file every time you apply your EKS Terraform code. This will import your tags as well. The same goes for the subnets, but it is a manual and tedious process in the long run.
terraform import aws_vpc.test_vpc vpc-a01106c2
Ref: https://www.terraform.io/docs/providers/aws/r/vpc.html
Cheers!
We are trying to create Terraform modules for the below activities in AWS, so that we can reuse them wherever required:
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the modules listed above. So we decided to create one more module for the provider, so that we can call that provider module in the other modules (VPC, Subnet, etc.).
The issue with the above approach is that it does not pick up the provider value and instead asks for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at the $HOME/ location, it does not pick up the provider value and instead asks for user input for the region.
I need help from Terraform experts on what approach we should follow to address the concerns below:
Wrap the provider in a Terraform module
Handle the multiple-region use case using a provider module or some other way
I knew a long time back that this wasn't possible, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it didn't used to be possible to force a dependency on a module.
However, since Terraform 0.8 it is now possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And run terraform graph | dot -Tpng > graph.png against it, the graph doesn't seem to change at all from when the explicit dependency isn't there.
This seems like it might be a bug in Terraform's graph-building stage that should probably be raised as an issue, but I don't know the core code base well enough to spot where the change would need to be made.
For our usage we use symlinks heavily in our Terraform code base. Some of that is historic, from before Terraform supported other ways of doing things, but it could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed for every place you would ever run Terraform (i.e. not at a module level), and then symlink this file into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
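For illustration, a minimal sketch of what such a shared, symlinked file might contain (the file name and default value are just examples, not from the original post):

# environment.tf - symlinked into every directory where terraform is run,
# so the provider and its region are defined in exactly one place.
variable "region" {
  default = "ap-south-1"
}

provider "aws" {
  region = "${var.region}"
}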
Step 1
Add region aliases in the main.tf file where you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: this is with Terraform v1.0.5 as of now.