After Terraform creates a cluster, how do you load the new Kubernetes credentials with the Helm provider? - terraform

I'm using Terraform to create a Kubernetes cluster. As part of that, I am updating the local .kube/config with the new credentials. Unfortunately, it looks like the Helm provider in Terraform loads its credentials at the start of the apply. Is there a way to force the provider to load its credentials after cluster creation?

In Terraform 0.13 you can set depends_on on modules.
The best way to achieve what you are looking for is to separate the cluster creation and the Helm releases into separate modules. After that, put a depends_on on the helm module that references the cluster module.
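A minimal sketch of that layout, assuming hypothetical module paths ./modules/cluster and ./modules/helm (both illustrative):
module "cluster" {
  source = "./modules/cluster"
}

module "helm" {
  source = "./modules/helm"

  # Module-level depends_on requires Terraform 0.13+.
  depends_on = [module.cluster]
}
The intent is that everything in the helm module, including whatever data sources feed the Helm provider, waits until the cluster module has finished.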

Related

How to apply ConfigMaps to AKS Clusters via Terraform?

I currently manage 10-15 environments, 100% IaC with Terraform in Azure. One of the recent projects was to change some log collection settings for all AKS clusters. Here is a link showing how to do it via kubectl - https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-agent-config#data-collection-settings.
What I've found so far:
Terraform has a kubernetes_config_map resource, which I was able to create successfully. (https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map)
My next question is how do I apply or attach the kubernetes_config_map resource to the AKS cluster, assuming I want this applied to all the namespaces? I wasn't able to find a config_map parameter on any of the resources.
We also use helm_release; is it possible to attach/pass that kubernetes_config_map to the helm_release?
Any guidance would be greatly appreciated. Thanks.
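For reference, a minimal sketch of such a ConfigMap with the kubernetes provider; the name and namespace follow the linked Container Insights doc, and the data values are illustrative placeholders:
resource "kubernetes_config_map" "container_insights" {
  metadata {
    name      = "container-azm-ms-agentconfig" # name used by the linked doc
    namespace = "kube-system"
  }

  data = {
    "schema-version" = "v1"
    # The actual data-collection settings keys come from the linked doc
    # and are omitted here.
  }
}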

Apply a `configMap` to an EKS cluster with Terraform

I am trying to apply a configMap to an EKS cluster through Terraform, but I don't see how. There is a lot of documentation about this, but I don't see anyone succeeding with it, so I am not sure whether it is possible or not.
Currently we control our infrastructure through Terraform. When I create the .kube/config file through the AWS CLI and try to connect to the cluster, I get the Unauthorized error, which is documented how to solve here, in AWS. According to the docs, we need to edit the aws-auth configMap and add some lines to it, which configure the API server to accept requests from a VM with a certain role. The problem is that only the cluster creator has access to connect to the cluster and make these changes. The cluster creator in this case is Terraform, so what we do is run aws configure, add Terraform's credentials to the VM from which we are trying to connect to the cluster, successfully authenticate against it, add the necessary lines to the configMap, and then revoke the credentials from the VM.
From there on, any user can connect to the cluster from that VM, which is our goal. Now we would like to be able to edit the configMap through a Terraform object instead of going through this whole process. There is a resource kubernetes_config_map in Terraform, but that's a different provider (kubernetes), not AWS, so it is not able to find the cluster and fails when trying to connect to an API server running on localhost.
There is a resource kubernetes_config_map in Terraform, but that's a different provider (kubernetes), not AWS
It is a different provider because Terraform now has to interact with a different API (the Kubernetes API instead of the AWS API).
There are aws_eks_cluster and aws_eks_cluster_auth data sources that can be used to authenticate the kubernetes provider.
The aws_eks_cluster_auth documentation has an example of authenticating the kubernetes provider:
data "aws_eks_cluster" "example" {
name = "example"
}
data "aws_eks_cluster_auth" "example" {
name = "example"
}
provider "kubernetes" {
host = data.aws_eks_cluster.example.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.example.token
load_config_file = false
}
Another example is how the Cloud Posse AWS EKS module authenticates the kubernetes provider and also uses a ConfigMap.
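For illustration, once the kubernetes provider is authenticated as above, the aws-auth ConfigMap itself can be managed with the same kubernetes_config_map resource. A rough sketch (the role ARN, username, and groups are placeholders; note that EKS typically creates aws-auth when worker nodes join, so you may need to import the existing ConfigMap first):
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Placeholder mapping for the IAM role the VM assumes.
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/example-access-role"
        username = "example-user"
        groups   = ["system:masters"]
      },
    ])
  }
}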

Azure Terraform: Inserting "certificate-authority" data into Kube context during cluster creation

We're using Terraform to deploy AKS clusters to an environment behind a proxy over VPN. Deployment of the cluster works correctly when off-network without the proxy, but errors out on Helm deployment creation on-network.
We are able to connect to the cluster after it's up while on the network using the following command after retrieving the cluster context.
kubectl config set-cluster <cluster name> --certificate-authority=<path to organization's root certificate in PEM format>
The Helm deployments are also created with Terraform after the creation of the cluster. It seems that these require the certificate-authority data to deploy and we haven't been able to find a way to automate this at the right step in the process. Consequently, the apply fails with the error:
x509: certificate signed by unknown authority
Any idea how we can get the certificate-authority data in the right place so the Helm deployments stop failing? Or is there a way to get the cluster to implicitly trust that root certificate? We've tried a few different things:
Researched whether you could automatically have that data included when retrieving the cluster context (i.e. az aks get-credentials --name <cluster name> --resource-group <cluster RG>). We couldn't find an easy way to accomplish this.
We started to consider adding the root cert info as part of the kubeconfig that's generated during deployment (rather than the one you create when retrieving the context). The idea is that it can be passed in to the kubernetes/helm providers and also leveraged when running kubectl commands via local-exec blocks. We know that approach works, but we couldn't find a way to automate it via Terraform.
We've tried providing the root certificate to the different fields of the provider config, shown below. We've specifically tried a few different things with cluster_ca_certificate, namely providing the PEM-style cert of the root CA.
provider "kubernetes" {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
version = ">= 1.2.4"
kubernetes {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
}
Thanks in advance for the help! Let me know if you need any additional info. I'm still new to the project so I may not have explained everything correctly.
In case anyone finds this later, we ultimately ended up just breaking the project up into two parts: cluster creation and bootstrap. This let us add a local-exec block in the middle to run the kubectl config set-cluster... command. So the order of operations is now:
Deploy AKS cluster (which copies Kube config locally as one of the Terraform outputs)
Run the command
Deploy microservices
Because we're using Terragrunt, we can just use its apply-all function to execute both operations, setting the dependencies described here.
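A rough sketch of that middle step as a local-exec block, assuming a hypothetical cluster_name output from the AKS module and a placeholder path to the organization's root CA:
resource "null_resource" "trust_root_ca" {
  # Runs after the cluster exists and the kubeconfig has been written locally.
  depends_on = [module.aks]

  provisioner "local-exec" {
    command = "kubectl config set-cluster ${module.aks.cluster_name} --certificate-authority=/path/to/org-root-ca.pem"
  }
}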

Regarding terraform script behaviour

I am using Terraform scripts to create Azure services, and I have some doubts regarding Terraform:
1) If I have one environment, let's say dev, in Azure with some resources, how can I copy all the resources to a new environment, let's say prod, using a Terraform script?
2) What is the impact of re-running the Terraform files with additional Azure resources? What will it do?
3) What if I want to create an App Service from a Terraform script with the same name as one that already exists in Azure? Will it update the resource or do nothing after the Terraform execution completes?
Please feel free to answer; it would be a great help.
To answer your questions:
1) You could create a new workspace with terraform workspace new and copy all configuration files (.tf) to the new environment, then run terraform init, plan, and apply.
2) Terraform will compare the contents of your current state file with your configuration files, then update changed attributes or create new resources rather than re-creating existing ones.
3) You could run terraform import to bring existing infrastructure under Terraform management. For referencing existing resources in the portal, you can use data sources.
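For the data-source approach in 3), a minimal sketch of referencing an existing Azure resource group (the name is a placeholder):
data "azurerm_resource_group" "existing" {
  name = "rg-dev-example" # placeholder for an existing resource group
}

output "existing_rg_location" {
  value = data.azurerm_resource_group.existing.location
}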

alicloud_cs_managed_kubernetes vs alicloud_cs_kubernetes on terraform

I was looking at the Alibaba Cloud provider in Terraform and found two nearly identical resources:
alicloud_cs_managed_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
alicloud_cs_kubernetes
https://www.terraform.io/docs/providers/alicloud/r/cs_kubernetes.html
What is the difference between them? I cannot differentiate these two resources.
The biggest difference is:
With a managed Kubernetes cluster, you can't control the Kubernetes masters.
With a (non-managed) Kubernetes cluster, you need to create the masters as well, e.g.:
master_instance_types = ["ecs.n4.small"]
Specifically speaking, the differences between alicloud_cs_managed_kubernetes and alicloud_cs_kubernetes on Terraform can be examined in detail with the help of the parameter reference provided in the official documentation.
But the major difference between Kubernetes and Managed Kubernetes is that you don't need to manage the master nodes in Managed Kubernetes.
