ArgoCD bootstrapping with Terraform in Azure Pipelines

I am trying to deploy ArgoCD and the applications located in subfolders through Terraform in an AKS cluster.
This is my folder structure tree:
I'm using the app-of-apps approach, so first I will deploy ArgoCD (which will also manage itself), and later ArgoCD will let me sync the cluster-addons and applications manually once it is installed.
apps
  cluster-addons
    AKV2K8S
    Cert-Manager
    Ingress-nginx
  application
    application-A
  argocd
    override-values.yaml
    Chart
When I run the "helm install ..." command manually against the AKS cluster, everything is installed fine: ArgoCD is installed, and when I later access ArgoCD I see that the rest of the applications are missing and I can sync them manually.
However, if I install it through Terraform, only ArgoCD is installed, and it looks like it does not "detect" the override_values.yaml file. That is, ArgoCD and the ArgoCD ApplicationSet controller are installed in the cluster, but ArgoCD does not pick up the values.yaml files that are customized for my AKS cluster. If I run "helm install" manually on the cluster everything works, but not through Terraform:
resource "helm_release" "argocd_applicationset" {
  name       = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-applicationset"
  namespace  = "argocd"
  version    = "1.11.0"
}
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"
  values = [
    "${file("values.yaml")}"
  ]
}
The values.yaml file is located in the same folder as the TF code that installs ArgoCD and the ArgoCD ApplicationSet controller.
I tried renaming the file from "values.yaml" to "override_values.yaml", but I hit the same issue.
I have changed many things in the override_values.yaml file, so I cannot use "set" blocks inside the TF code...
Also, I tried adding:
values = [
  "${yamlencode(file("values.yaml"))}"
]
but I get this error in the "apply" step of the pipeline:
error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {} "argo-cd:\r\n ## ArgoCD configuration\r\n ## Ref: https://github.com/argoproj/argo-cd\r\n
Probably because it is not a JSON file? Does it make sense to convert this file into a JSON one?
Any idea if I can pass this override values YAML file through Terraform?
If not, could you please post a clear, full example with mock variables showing how to do that using an Azure pipeline?
Thanks in advance!

The issue was with the indentation of the values block in the TF code.
It was resolved when I fixed that:
resource "helm_release" "argocd_applicationset" {
  name       = "argocd-applicationset"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-applicationset"
  namespace  = "argocd"
  version    = "1.11.0"
}
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"
  values     = [file("values.yaml")]
}
It also works fine with quoting.
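As a side note on the failed yamlencode attempt above: yamlencode() expects a Terraform object, while file() already returns the raw YAML text the provider wants, so nesting them YAML-encodes a string, which Helm then fails to unmarshal into a map (hence the error). A minimal sketch of the working pattern follows; the use of path.module is an assumption here, added so the file lookup does not depend on the pipeline's working directory:

```hcl
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = "3.33.6"

  # file() returns the file contents as a string, which is exactly
  # what the "values" list expects -- no yamlencode() needed.
  # path.module keeps the path stable regardless of where the
  # pipeline runs "terraform apply" from.
  values = [file("${path.module}/values.yaml")]
}
```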

Related

Terraform Provider config inside child module

I’m trying to create modules that will handle Helm deployments. The structure goes like this:

root module
  - calls the helm-traefik module
  - there is an input (cluster name) that is used to fetch data sources for the provider config inside the helm child module

child module - helm-traefik
  main.tf - calls the helm module
  variables.tf
  values.yaml

child module - helm
  providers.tf - both the kubernetes and helm provider configs use kubelogin for authentication
  datasources.tf
  main.tf - helm_release
  variables.tf
The issue is that tf plan fails with an error saying the Kubernetes cluster is unreachable. I’ve been reading the docs on providers, and I think the reason I’m getting errors is that I don’t have the provider config for kubernetes and helm at the root module level. Is there a feasible solution for this use case? I want to keep the helm module separate so it can be consumed regardless of the Helm chart being deployed.
Also, if I move the provider config from the child module to the root module, that would mean I need to create a provider config for each cluster I want to manage.
In the helm child module, this is how I generate the provider config:
datasources.tf
locals {
  # The purpose of the cluster.
  purpose = split("-", var.cluster_name)[0]
  # The network environment of the cluster.
  customer_env = split("-", var.cluster_name)[2]
}

data "azurerm_kubernetes_cluster" "cluster" {
  resource_group_name = "rg-${local.purpose}-${local.customer_env}-001"
  name                = "aks-${local.purpose}-${local.customer_env}"
}
provider.tf
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.cluster.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--login", "spn",
      "--environment", "AzurePublicCloud",
      "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
      "--tenant-id", data.azurerm_client_config.current.tenant_id,
      "--client-id", data.azurerm_client_config.current.client_id,
      "--client-secret", data.azurerm_key_vault_secret.service_principal_key.value,
    ]
  }
}
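A common fix for this layout, since Terraform recommends that provider configurations live only in the root module, is to configure the kubernetes/helm providers at the root and hand them to the child via the providers meta-argument. The sketch below assumes the module path and alias names; supporting multiple clusters then means multiple aliased provider blocks at the root rather than provider blocks buried in child modules:

```hcl
# Root module: one aliased kubernetes provider per target cluster.
provider "kubernetes" {
  alias                  = "cluster_a"
  host                   = data.azurerm_kubernetes_cluster.a.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.a.kube_config.0.cluster_ca_certificate)
}

module "helm_traefik_a" {
  source = "./modules/helm-traefik" # assumed path

  # Hand the root's aliased provider to the child under the
  # default name the child's resources expect.
  providers = {
    kubernetes = kubernetes.cluster_a
  }
}
```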

Terraform helm release timing out problem

I am trying to deploy a helm_release using a Terraform script. My .tf file looks like this:
resource "helm_release" "ingress-aws" {
  name       = "ingress-aws"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  version    = "2.1.3"
  namespace  = kubernetes_namespace.aws-ingress-controller.metadata[0].name
  #namespace = "ingress-aws"
  atomic     = true
  timeout    = 300
  values = [<<EOF
clusterName: dcp-daas-test14-dcp-eks
serviceAccount:
  name: aws-load-balancer
EOF
  ]
}
When I run my deployment pipeline, it waits at this stage for 5 minutes (as timeout = 300 sec) and then fails with an error like this:
Error: release ingress-aws failed, and has been uninstalled due to atomic being set: timed out waiting for the condition
on ingress.tf line 64, in resource "helm_release" "ingress-aws":
64: resource "helm_release" "ingress-aws" {
I tried changing the version of the Helm release and applying this helm_release on a newly created EKS cluster, yet I hit the same issue.
Any advice would be appreciated; ask in the comments if you want more info.
Also, chart version 1.1.6 installs fine, but with 1.2.2 I get the error above. I'm not sure whether it is caused by a bug in the latest version of the Helm chart!
Thanks and cheers!
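For what it's worth, atomic = true combined with a timeout means Helm uninstalls the release as soon as its pods fail to become ready in time, so the timeout is usually a symptom rather than the cause. With the aws-load-balancer-controller chart, a frequent culprit is a service account without the IRSA role annotation, which leaves the controller pod crash-looping until the deadline. A hedged sketch of that shape (the role ARN and account ID are placeholders, not taken from the question):

```hcl
resource "helm_release" "ingress-aws" {
  name       = "ingress-aws"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  version    = "2.1.3"
  namespace  = "ingress-aws"
  atomic     = true
  timeout    = 600 # give the controller longer before atomic rolls back

  values = [<<EOF
clusterName: dcp-daas-test14-dcp-eks
serviceAccount:
  name: aws-load-balancer
  annotations:
    # Placeholder IRSA role -- must grant the controller its ELB/EC2 permissions.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/aws-load-balancer-controller
EOF
  ]
}
```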

How to deploy a Bitnami chart on K8S with terraform passing a custom values.yaml?

Friends, I want to deploy a Bitnami Helm chart with a custom values.yaml using Terraform. Is this possible? While I was using only K8S and Helm, what I did was copy the values.yaml from the Bitnami repo, change what I needed, and then run helm install mysql -f values.yaml bitnami/mysql. Now I have to deploy everything with Terraform, and I am wondering how I can do that. Do I have to clone the whole Bitnami repo and deploy it like the following?
resource "helm_release" "example" {
  name  = "my-local-chart"
  chart = "./charts/example"
}
Or is it possible to deploy the chart while passing my custom values.yaml? Any idea? I am super new to all this.
To answer the question: this is entirely possible with Terraform.
[With a chart repository]
Here you can find how to set a custom values.yaml file, and also how to use a chart from a remote repository.
[With local charts]
And here you can find out how to specify an individual value; the docs mostly show how to work with local charts (meaning you have the charts on your local file system and point to them from the Terraform code, just like in your question).
Also, for local charts you can look at these docs as well.
Example:
Helm : helm install mysql -f values.yaml bitnami/mysql
Terraform:
resource "helm_release" "mysql" {
  name       = "mysql"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "mysql"
  version    = "8.2.3"
  values = [
    "${file("values.yaml")}"
  ]
  set {
    name  = "metrics.enabled"
    value = "true"
  }
  set {
    name  = "service.annotations.prometheus.io/port"
    value = "9127"
    type  = "string"
  }
}
In the above, I'm setting the values from the custom values.yaml file, while overriding metrics.enabled and service.annotations.prometheus.io/port via set blocks.
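Going one step further: if the custom values need inputs that only Terraform knows (generated passwords, hostnames), the file can be rendered with templatefile() instead of file(). The template name and variable below are assumptions for illustration:

```hcl
resource "helm_release" "mysql" {
  name       = "mysql"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "mysql"
  version    = "8.2.3"

  # values.yaml.tpl is an assumed template containing, e.g.:
  #   auth:
  #     rootPassword: ${root_password}
  values = [templatefile("${path.module}/values.yaml.tpl", {
    root_password = var.mysql_root_password
  })]
}
```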

Azure AKS with agic - how to create it with Terraform?

I'm currently setting up an AGIC (Kubernetes Application Gateway Ingress Controller) for an AKS environment (https://azure.github.io/application-gateway-kubernetes-ingress/setup/install-existing/#using-a-service-principal).
As the whole environment is setup with Terraform, I'd like to install the necessary Helm repository also with Terraform.
I thought the following simple code should do the trick:
data "helm_repository" "agic_repo" {
  name = "agic_repository"
  url  = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
}

resource "helm_release" "agic" {
  name       = "agic"
  namespace  = "agic"
  repository = data.helm_repository.agic_repo.metadata[0].url
  chart      = "application-gateway-kubernetes-ingress"
  depends_on = [
    data.helm_repository.agic_repo,
  ]
}
But I ran into this issue:
module.agic.helm_release.agic: Creating...
Error: chart "application-gateway-kubernetes-ingress" not found in https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ repository
on ../../modules/agic/main.tf line 91, in resource "helm_release" "agic":
91: resource "helm_release" "agic" {
So it looks as if the package cannot be found. Has anyone solved this before?
I'm not familiar with Helm, so I don't know how to browse Helm repos to check whether I'm addressing the right URI...
So I added the repo manually with
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
When I search for the repo I receive:
V5T:~$ helm search | grep ingress
application-gateway-kubernetes-ingress/ingress-azure 1.0.0 1.0.0 Use Azure Application Gateway as the ingress for an Azure...
Any help appreciated!
P.S.: Sure, I could do it with a bash one-liner, but it would be great to have the whole environment created by Terraform...
According to the data you provided, it has to be this:
resource "helm_release" "agic" {
  name       = "agic"
  namespace  = "agic"
  repository = data.helm_repository.agic_repo.metadata[0].url
  chart      = "ingress-azure"
  depends_on = [
    data.helm_repository.agic_repo,
  ]
}
So the chart name is different: your helm search output shows the repository serves the chart as ingress-azure, not application-gateway-kubernetes-ingress.
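Note that the helm_repository data source was deprecated and later removed from the Helm provider; on current provider versions the repository URL goes directly on the release, which also removes the need for the depends_on:

```hcl
resource "helm_release" "agic" {
  name       = "agic"
  namespace  = "agic"
  repository = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
  chart      = "ingress-azure"
}
```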

Terraform - How to Create a GKE Cluster and Install Helm Charts?

Goal
I have a specific workflow to set up a fresh Kubernetes cluster on Google Cloud. And I want to automate the process with Terraform. Those are the steps:
Create cluster
gcloud beta container --project "my-google-project" clusters create "cluster-name" --zone "europe-west3-b"
Setup Helm repos
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo add jetstack https://charts.jetstack.io/
helm repo update
Install NGINX Ingress
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
helm install nginx-ingress stable/nginx-ingress
Install Cert-Manager
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.13.0/deploy/manifests/00-crds.yaml
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager
Ideas
The first step will probably look like this:
resource "google_container_cluster" "primary" {
  name               = "cluster-name"
  location           = "europe-west3-b"
  initial_node_count = 3

  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}
But I have no idea how to approach steps 2 - 4.
While Terraform makes sense for building and provisioning cloud infrastructure for things like Kubernetes to run on, it doesn't necessarily make sense for configuring that infrastructure after deployment. Most infrastructure designs would consider applications deployed onto a provisioned cluster to be configuration of that cluster. The semantics here are a bit nuanced, but I maintain that a tool like Ansible is better suited to deploying applications to your cluster after provisioning.
So my advice would be to define a handful of Ansible Roles. Perhaps:
create_cluster
deploy_helm
install_nginx_ingress
install_cert_manager
Within each respective role, define the required tasks and variables as per the Galaxy schema. Lastly, define a playbook that includes or imports these roles. This lets you provision your infrastructure and deploy all of the required applications to it in a single command:
ansible-playbook playbook.yml
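That said, if you would rather keep everything in Terraform, steps 2-4 can be expressed with the helm provider in the same style as the other answers on this page. A rough sketch (the provider wiring to the new GKE cluster is omitted; chart names follow the repos added in step 2):

```hcl
resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  chart      = "nginx-ingress"
  depends_on = [google_container_cluster.primary]
}

resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  # Lets the chart install its own CRDs, replacing the manual
  # kubectl apply of 00-crds.yaml in step 4.
  set {
    name  = "installCRDs"
    value = "true"
  }
}
```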
