Azure AKS with AGIC - how to create it with Terraform?

I'm currently setting up AGIC (the Application Gateway Ingress Controller) for an AKS environment (https://azure.github.io/application-gateway-kubernetes-ingress/setup/install-existing/#using-a-service-principal).
As the whole environment is set up with Terraform, I'd like to add the necessary Helm repository and install the chart with Terraform as well.
I thought the following simple code should do the trick:
data "helm_repository" "agic_repo" {
name = "agic_repository"
url = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
}
resource "helm_release" "agic" {
name = "agic"
namespace = "agic"
repository = data.helm_repository.agic_repo.metadata[0].url
chart = "application-gateway-kubernetes-ingress"
depends_on = [
data.helm_repository.agic_repo,
]
}
But I ran into this issue:
module.agic.helm_release.agic: Creating...
Error: chart "application-gateway-kubernetes-ingress" not found in https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ repository
on ../../modules/agic/main.tf line 91, in resource "helm_release" "agic":
91: resource "helm_release" "agic" {
So it looks as if the chart cannot be found. Has anyone else solved this before?
I'm not familiar with Helm, so I don't know how to 'browse' within the Helm repos to check whether I'm addressing the right URI...
So I added the repo manually with
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
When I search for the repo I receive:
V5T:~$ helm search | grep ingress
application-gateway-kubernetes-ingress/ingress-azure 1.0.0 1.0.0 Use Azure Application Gateway as the ingress for an Azure...
Any help appreciated!
P.S.: Sure, I could do it with a bash one-liner, but would be great to have the whole environment created by Terraform...

According to the data you provided, it has to be this:
resource "helm_release" "agic" {
name = "agic"
namespace = "agic"
repository = data.helm_repository.agic_repo.metadata[0].url
chart = "ingress-azure"
depends_on = [
data.helm_repository.agic_repo,
]
}
So the chart name is different: the helm search output lists application-gateway-kubernetes-ingress/ingress-azure, where the part before the slash is the repository alias and the part after it is the chart name, so chart must be "ingress-azure".
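As a side note, in version 2.x and later of the Terraform Helm provider the helm_repository data source no longer exists and the repository URL is given directly on helm_release. A minimal sketch of the same release in that newer style, assuming an already-configured helm provider pointing at the AKS cluster:

resource "helm_release" "agic" {
  name      = "agic"
  namespace = "agic"

  # With Helm provider 2.x the repository URL is set inline,
  # so no helm_repository data source is needed.
  repository = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
  chart      = "ingress-azure"

  # Create the "agic" namespace if it does not exist yet.
  create_namespace = true
}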

Related

Terraform Provider config inside child module

I’m trying to create modules that will handle Helm deployments. The structure goes like this:
root module - calls the helm-traefik module
  there’s an input (cluster name) that will be used to fetch data sources for the provider config inside the helm child module.
child module - helm-traefik
  main.tf - calls the helm module
  variables.tf
  values.yaml
child module - helm
  providers.tf - both the Kubernetes and Helm provider configs use kubelogin for authentication
  datasources.tf
  main.tf - helm_release
  variables.tf
The issue is that terraform plan fails with an error saying the Kubernetes cluster is unreachable. I’ve been reading the docs on providers, and I think the reason I’m getting errors is that I don’t have the provider config for Kubernetes and Helm at the root module level. Is there a feasible solution for this use case? I want to keep the helm module separate so that it can be consumed regardless of which Helm chart is deployed.
Also, if I move the provider config from the child module to the root module, that would mean I need to create a provider config for each cluster I want to manage.
In the helm child module, this is how I generate the provider config:
datasources.tf
locals {
  # The purpose of the cluster.
  purpose = split("-", "${var.cluster_name}")[0]
  # The network environment of the cluster.
  customer_env = split("-", "${var.cluster_name}")[2]
}

data "azurerm_kubernetes_cluster" "cluster" {
  resource_group_name = "rg-${local.purpose}-${local.customer_env}-001"
  name                = "aks-${local.purpose}-${local.customer_env}"
}
provider.tf
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.cluster.kube_config.0.host
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "kubelogin"
args = [
"get-token",
"--login", "spn",
"--environment", "AzurePublicCloud",
"--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
"--tenant-id", data.azurerm_client_config.current.tenant_id,
"--client-id", data.azurerm_client_config.current.client_id,
"--client-secret", data.azurerm_key_vault_secret.service_principal_key.value,
]
}
}
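A common pattern for this situation (a sketch under assumptions, not a verified fix for this exact setup) is to configure the kubernetes and helm providers once in the root module and hand them to the reusable child module through the providers meta-argument, so the child module itself contains no provider blocks. The module path and input below are hypothetical:

# Root module: the Helm provider is configured here, once per target cluster,
# and passed down explicitly to the reusable child module.
provider "helm" {
  alias = "traefik_cluster"

  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.cluster.kube_config.0.host
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "kubelogin"
      args        = ["get-token", "--login", "spn", "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630"]
    }
  }
}

module "helm_traefik" {
  source       = "./modules/helm-traefik" # hypothetical path
  cluster_name = var.cluster_name

  # The child module receives an already-configured provider
  # instead of building its own from data sources.
  providers = {
    helm = helm.traefik_cluster
  }
}

The trade-off the question mentions still applies: Terraform provider configurations cannot be generated dynamically, so each cluster managed from the root module needs its own aliased provider block.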

ArgoCD bootstrapping with terraform in Azure Pipeline

I am trying to deploy ArgoCD and applications located in subfolders through Terraform in an AKS cluster.
This is my Folder structure tree:
I'm using the app-of-apps approach, so first I deploy ArgoCD (which will also manage itself), and later ArgoCD will let me sync the cluster-addons and applications manually once installed.
apps
  cluster-addons
    AKV2K8S
    Cert-Manager
    Ingress-nginx
  application
    application-A
argocd
  override-values.yaml
  Chart
When I run the "helm install ..." command manually against the AKS cluster, everything is installed fine.
ArgoCD is installed, and when I later access ArgoCD I see that the rest of the applications are shown as missing and I can sync them manually.
However, if I install it through Terraform, only ArgoCD is installed, and it looks like it does not "detect" the override_values.yaml file.
I mean, ArgoCD and the ArgoCD ApplicationSet controller are installed in the cluster, but ArgoCD does not "detect" the values.yaml files that are customized for my AKS cluster. If I run "helm install" manually against the cluster everything works, but not through Terraform.
resource "helm_release" "argocd_applicationset" {
name = "argocd-applicationset"
repository = https://argoproj.github.io/argo-helm
chart = "argocd-applicationset"
namespace = "argocd"
version = "1.11.0"
}
resource "helm_release" "argocd" {
name = "argocd"
repository = https://argoproj.github.io/argo-helm
chart = "argo-cd"
namespace = "argocd"
version = "3.33.6"
values = [
"${file("values.yaml")}"
]
The values.yaml file is located in the folder with the TF code that installs argocd and argocd-applicationset.
I tried renaming the file "values.yaml" to "override_values.yaml", but I get the same issue.
I have changed many things in the override_values.yaml file, so I cannot use "set" inside the TF code...
Also, I tried adding:
values = [
  "${yamlencode(file("values.yaml"))}"
]
but I get this error in the "apply" step of the pipeline:
error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {} "argo-cd:\r\n ## ArgoCD configuration\r\n ## Ref: https://github.com/argoproj/argo-cd\r\n
Probably because it is not a JSON file? Does it make sense to convert this file into a JSON one?
Any idea how I can pass this override values YAML file through Terraform?
If not, could you please post a clear, full example with mock variables on how to do that using an Azure pipeline?
Thanks in advance!
The issue was with the indentation of the values block in the TF code.
It was resolved once I fixed that. (The yamlencode(file(...)) attempt fails for a different reason: file() already returns the YAML text as a string, so yamlencode turns it into a single quoted string rather than a map, which is what the "cannot unmarshal string" error is complaining about.)
resource "helm_release" "argocd_applicationset" {
name = "argocd-applicationset"
repository = https://argoproj.github.io/argo-helm
chart = "argocd-applicationset"
namespace = "argocd"
version = "1.11.0"
}
resource "helm_release" "argocd" {
name = "argocd"
repository = https://argoproj.github.io/argo-helm
chart = "argo-cd"
namespace = "argocd"
version = "3.33.6"
values = [file("values.yaml")]
It also works fine with the quoted interpolation form, "${file("values.yaml")}".
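One related pitfall worth noting (an assumption about this setup, not something confirmed in the thread): when the Terraform code runs from a pipeline, a relative path like file("values.yaml") resolves against the working directory, so anchoring the path to the module directory is safer. A minimal sketch:

resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"

  # path.module points at the directory containing this .tf file,
  # so the lookup no longer depends on the pipeline's working directory.
  values = [file("${path.module}/values.yaml")]
}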

How to deploy a Bitnami chart on K8S with terraform passing a custom values.yaml?

Friends, I want to deploy a Bitnami Helm chart with a custom values.yaml using Terraform. Is this possible? While I was using only K8s and Helm, what I did was copy the values.yaml from the Bitnami repo, change what I needed, and then run helm install mysql -f values.yaml bitnami/mysql. Now I have to deploy everything with Terraform and I am wondering how I can do that. Do I have to clone the whole Bitnami repo and deploy it like the following?
resource "helm_release" "example" {
name = "my-local-chart"
chart = "./charts/example"
}
Or is it possible to deploy the chart while passing my custom values.yaml? Any ideas? I am super new to all this.
To answer the question,
This is entirely possible with Terraform.
[With a Chart Repository]
Here you can find how to pass a custom values.yaml file, and also how to use a chart from a remote repository.
[With Local Charts]
And here you can find out how to set a specific value; they mostly show how to work with local charts (meaning you should have the chart in your local file system and point to it from the Terraform code, just as you did in the question).
For local charts, you can also look at these docs.
Example:
Helm: helm install mysql -f values.yaml bitnami/mysql
Terraform:
resource "helm_release" "mysql" {
name = "mysql"
repository = "https://charts.bitnami.com/bitnami"
chart = "mysql"
version = "8.2.3"
values = [
"${file("values.yaml")}"
]
set {
name = "metrics.enabled"
value = "true"
}
set {
name = "service.annotations.prometheus.io/port"
value = "9127"
type = "string"
}
}
In the above, I'm setting the values from the custom values.yaml file, while overriding metrics.enabled and service.annotations.prometheus.io/port with set blocks (set takes precedence over entries in the values files).
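For the local-chart case the question also asks about, the same values mechanism applies; a minimal sketch, assuming a chart checked in at ./charts/mysql next to the Terraform code (a hypothetical path):

resource "helm_release" "mysql_local" {
  name  = "mysql"

  # A chart stored in the local file system instead of a repository;
  # ./charts/mysql is a hypothetical path.
  chart = "${path.module}/charts/mysql"

  # The customized copy of the Bitnami values.yaml.
  values = [file("${path.module}/values.yaml")]
}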

Terraform clone git repo at plan or init stage

Context:
I am building an API Gateway with an OpenAPI Specification 3.0 using Terraform. The api-spec.yaml file lives in a different repo from the Terraform code. So, here's what I have done so far.
Using a null_resource to clone the repo to the desired location:
resource "null_resource" "clone-spec-file" {
provisioner "local-exec" {
command = "git clone https://gitlab.com/dShringi/openapi-spec.git"
}
}
Using the cloned api-spec file while creating the API Gateway resource:
data "template_file" "swagger" {
  template   = file("./openapi-spec/api-spec.yaml")
  depends_on = ["null_resource.clone-spec-file"]
}
Problem:
The script fails at terraform plan because, although I have used depends_on on the template_file data source, the repo is not actually cloned at the plan stage; Terraform evaluates file() then, checks whether the file is present, and fails with a "file not found" error at template = file("./openapi-spec/api-spec.yaml").
I will appreciate any thoughts on how this can best be handled, thanks.
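One way to side-step this (a sketch of an alternative approach under assumptions, not something from the thread) is to skip the clone entirely and fetch the spec at plan time with the hashicorp/http data source, since data sources are read during plan; the raw-file URL below is a hypothetical example:

# Fetch the OpenAPI spec directly from the other repository at plan time.
# The raw-file URL is a hypothetical example; a private repo would also
# need request headers for authentication.
data "http" "api_spec" {
  url = "https://gitlab.com/dShringi/openapi-spec/-/raw/main/api-spec.yaml"
}

locals {
  # On older versions of the http provider the attribute is `body`
  # instead of `response_body`.
  api_spec_yaml = data.http.api_spec.response_body
}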

Cannot taint null_resource

I'm on Terraform 0.11.11.
The graph shows that the resource in question is in the root module:
$ terraform graph
digraph {
compound = "true"
newrank = "true"
subgraph "root" {
"[root] data.template_file.default" [label = "data.template_file.default", shape = "box"]
"[root] data.template_file.etcd" [label =
...
"[root] null_resource.service_resolv_conf" [label = "null_resource.service_resolv_conf", shape = "box"]
...
But trying to taint it says it is not:
$ terraform taint null_resource.service_resolv_conf
The resource null_resource.service_resolv_conf couldn't be found in the module root.
Updates:
$ terraform state list|grep resolv_conf
null_resource.service_resolv_conf[0]
null_resource.service_resolv_conf[1]
Then I tried:
$ terraform taint null_resource.service_resolv_conf[0]
The resource null_resource.service_resolv_conf[0] couldn't be found in the module root.
and
$ terraform taint null_resource.service_resolv_conf
The resource null_resource.service_resolv_conf couldn't be found in the module root.
terraform graph gives you the whole picture of the resources and their relationships.
But it is not a good command for troubleshooting or for understanding how the resources are named in the Terraform *.tfstate file.
I would recommend running terraform state list; then you can easily see how to taint one of the resources in the list.
terraform state list
terraform taint <copy resource directly from above list>
For whoever comes into this thread looking to terraform taint/untaint a null_resource where Terraform errors out with "The resource […] couldn't be found in the module root", here's the correct and working answer posted by @victor-m:
terraform taint -module=name null_resource.name
The same applies to the untaint command.
In the end I found the solution.
It appears that when the resource has multiple instances created from a list using count:
resource "null_resource" "provision_docker_registry" {
count = "${length(local.service_vms_names)}"
depends_on = ["null_resource.service_resolv_conf"]
connection {
user = "cloud-user"
private_key = "${file("${path.module}/../ssh/${var.os_keypair}.pem")}"
host = "${element(local.service_fip, count.index)}"
}
you taint a specific instance by specifying the index after a dot, i.e.:
$ terraform taint null_resource.provision_docker_registry.0
The resource null_resource.provision_docker_registry.0 in the module root has been marked as tainted!
Voila!
I could not find that in the documentation. Hope this helps someone.
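Worth noting as a general point (not from the original thread): the dot-index form above is specific to the 0.11-era CLI. On Terraform 0.12 and later, the resource address uses bracket syntax instead, so the equivalent command is terraform taint 'null_resource.provision_docker_registry[0]', quoted so the shell does not interpret the brackets.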
