I am trying to deploy Helm charts from ACR using terraform-provider-helm, but it fails with the error below. Can someone please let me know if I am doing anything wrong? I cannot understand why it is searching for mcpshareddcr-index.yaml.
Terraform Version
0.12.18
Affected Resource(s)
helm_release
helm_repository
Terraform Configuration Files
# Cluster RBAC helm Chart repository
data "helm_repository" "cluster_rbac_helm_chart_repo" {
name = "mcpshareddcr"
url = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
username = var.ARM_CLIENT_ID
password = var.ARM_CLIENT_SECRET
}
# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
name = "mcp-rbac-cluster"
repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
chart = "mcp-rbac-cluster"
version = "0.1.0"
}
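For comparison, newer releases of the Helm provider also accept the repository URL and credentials directly on helm_release instead of going through a helm_repository data source. A minimal sketch of that variant, assuming the provider in use is recent enough to expose repository_username and repository_password:
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name    = "mcp-rbac-cluster"
  chart   = "mcp-rbac-cluster"
  version = "0.1.0"

  # Assumption: these arguments are available in the installed provider
  # version; they replace the helm_repository data source above.
  repository          = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
  repository_username = var.ARM_CLIENT_ID
  repository_password = var.ARM_CLIENT_SECRET
}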
module usage:
provider "azurerm" {
version = "=1.36.0"
tenant_id = var.ARM_TENANT_ID
subscription_id = var.ARM_SUBSCRIPTION_ID
client_id = var.ARM_CLIENT_ID
client_secret = var.ARM_CLIENT_SECRET
skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "./.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
// Module to deploy Stratus-offered Helm charts onto the AKS cluster
module "mcp_resources" {
source = "modules/helm/mcp-resources"
ARM_CLIENT_ID = var.ARM_CLIENT_ID
ARM_CLIENT_SECRET = var.ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID = var.ARM_SUBSCRIPTION_ID
ARM_TENANT_ID = var.ARM_TENANT_ID
}
Expected Behavior
Helm charts are fetched from ACR and deployed onto AKS.
Actual Behavior
Error: Looks like "***/helm/v1/repo" is not a valid chart repository or cannot be reached: open .helm/repository/cache/.helm/repository/cache/mcpshareddcr-index.yaml: no such file or directory
Steps to Reproduce
terraform plan
Related
I have stored my Docker images in Artifact Registry in Google Cloud.
I have a Helm chart that works fine when I deploy it with helm.
When I deploy with Terraform, everything gets deployed; however, all images that need to be fetched from Artifact Registry fail with ImagePull errors. The image paths are configured in the Helm values file, so I am a bit confused about why the pulls fail when I use Terraform. I am providing my helm.tf and cluster.tf, although I am not sure whether these files are the issue.
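For illustration only (referring to the helm_release defined in helm.tf below), individual image values can also be overridden from Terraform with set blocks; the value keys and image path here are hypothetical and must match whatever the chart's values.yaml actually defines:
resource "helm_release" "example" {
  name             = "test-chart"
  chart            = "./helm"
  namespace        = "test-namespace"
  create_namespace = true

  values = [
    file("./helm/values/values-test.yaml")
  ]

  # Hypothetical keys and registry path: adjust to the names the chart's
  # values.yaml uses for the Artifact Registry image reference.
  set {
    name  = "image.repository"
    value = "europe-west1-docker.pkg.dev/my-project/my-repo/my-image"
  }
  set {
    name  = "image.tag"
    value = "1.0.0"
  }
}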
cluster.tf
# google_client_config and kubernetes provider must be explicitly specified like the following.
# Retrieve an access token as the Terraform runner
data "google_client_config" "default" {}
# GKE cluster
resource "google_container_cluster" "primary" {
name = "my-cluster"
project = var.project
location = var.region
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
networking_mode = "VPC_NATIVE"
ip_allocation_policy {}
}
# Separately Managed Node Pool
resource "google_container_node_pool" "primary_nodes" {
project = var.project
name = "${google_container_cluster.primary.name}-node-pool"
location = var.region
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
labels = {
env = var.project
}
preemptible = true
machine_type = "e2-small"
tags = ["gke-node"]
metadata = {
disable-legacy-endpoints = "true"
}
}
}
helm.tf
provider "helm" {
kubernetes {
host = "https://${google_container_cluster.primary.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
}
}
resource "helm_release" "example" {
name = "test-chart"
chart = "./helm"
namespace = "test-namespace"
create_namespace = true
values = [
file("./helm/values/values-test.yaml")
]
depends_on = [
google_container_cluster.primary
]
}
I checked the OAuth scopes of the cluster when I create it for Helm, added them to the Terraform configuration, and it worked:
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append"
]
}
I am new to Terraform and Terragrunt. I am using Vault to store an Azure Service Principal and am provisioning infrastructure using Terragrunt. I am unable to initialize and apply the Vault dependency from the root folder; I have to initialize Vault separately from its subfolder.
Modules:
Resource Group, VM, VNET and Vault.
RG depends on Vault, VNet depends on RG, and VM depends on RG and VNet.
My repo looks like this:
.
When I run terragrunt init at the root level with the terragrunt.hcl file, it gets stuck in the initializing state because it does not receive the Vault module outputs. But when I go to the Vault-tf folder and run terragrunt init and terragrunt apply, it fetches the Vault secrets properly; after that, running terragrunt init and terragrunt apply at the root level works fine and creates the Azure resources successfully.
My root terragrunt.hcl file looks like this:
dependency "credentials" {
config_path = "/root/terragrunt-new/BaseConfig/Vault-tf"
mock_outputs = {
tenant_id = "temp-tenant-id"
client_id = "temp-client-id"
client_secret = "temp-secret-id"
subscription_id = "temp-subscription-id"
}
}
terraform {
source = "git::https://git link to modules//"
extra_arguments "force_subscription" {
commands = [
"init",
"apply",
"destroy",
"refresh",
"import",
"plan",
"taint",
"untaint"
]
env_vars = {
ARM_TENANT_ID = dependency.credentials.outputs.tenant_id
ARM_CLIENT_ID = dependency.credentials.outputs.client_id
ARM_CLIENT_SECRET = dependency.credentials.outputs.client_secret
ARM_SUBSCRIPTION_ID = dependency.credentials.outputs.subscription_id
}
}
}
inputs = {
prefix = "terragrunt-nbux"
location = "centralus"
}
locals {
subscription_id = "xxxxxxxxxx-cc3e-4014-a891-xxxxxxxxxx"
}
generate "versions" {
path = "versions_override.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "3.9.0"
}
vault = {
source = "hashicorp/vault"
version = "3.7.0"
}
}
}
provider "vault" {
address = "http://xx.xx.xx.xx:8200"
skip_tls_verify = true
token = "hvs.xxxxxxxxxxxxxxxxx"
}
provider "azurerm" {
features {}
}
EOF
}
remote_state {
backend = "azurerm"
config = {
subscription_id = "${local.subscription_id}"
key = "${path_relative_to_include()}/terraform.tfstate"
resource_group_name = "rg-terragrunt-vault"
storage_account_name = "terragruntnbuxstorage"
container_name = "base-config-tfstate"
}
generate = {
path = "backend.tf"
if_exists = "overwrite_terragrunt"
}
}
And my Vault-tf folder's terragrunt.hcl file looks like this
terraform {
source = "git::https:path/terragrunt-new//Modules/Vault"
}
generate "versions" {
path = "versions_override.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
required_providers {
vault = {
source = "hashicorp/vault"
version = "3.7.0"
}
}
}
provider "vault" {
address = "http://xx.xx.xx.xx:8200"
skip_tls_verify = true
token = "hvs.xxxxxxxxxxxxxxxxxxxxI"
}
EOF
}
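For reference, Terragrunt's dependency block can also restrict when the mock outputs are substituted. The sketch below only illustrates that mechanism (it is not a confirmed fix for the hang described above) and assumes the intent is to allow mocks during init, validate and plan while requiring real Vault-tf outputs for everything else:
dependency "credentials" {
  config_path = "/root/terragrunt-new/BaseConfig/Vault-tf"

  mock_outputs = {
    tenant_id       = "temp-tenant-id"
    client_id       = "temp-client-id"
    client_secret   = "temp-secret-id"
    subscription_id = "temp-subscription-id"
  }

  # Mocks are substituted only for these commands; for any other command
  # Terragrunt requires real outputs from the Vault-tf module.
  mock_outputs_allowed_terraform_commands = ["init", "validate", "plan"]
}
In newer Terragrunt versions, running terragrunt run-all apply from the root is another way to have the Vault-tf module applied before the modules that depend on it.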
I have set up my Azure Kubernetes Service cluster using Terraform and it is working well.
I am trying to deploy packages using Helm but cannot; I get the error below.
Error: chart "stable/nginx-ingress" not found in https://kubernetes-charts.storage.googleapis.com repository
Note: I tried other packages as well and cannot deploy any of them with the Terraform resource; the Terraform code is below. Installing a local Helm package with the helm command works, so I think the issue is with the Terraform Helm resources. "nginx" is just a sample package; I am not able to deploy any package using Terraform.
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_cluster_name
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.aks_dns_prefix
kubernetes_version = "1.19.0"
# private_cluster_enabled = true
linux_profile {
admin_username = var.aks_admin_username
ssh_key {
key_data = var.aks_ssh_public_key
}
}
default_node_pool {
name = var.aks_node_pool_name
enable_auto_scaling = true
node_count = var.aks_agent_count
min_count = var.aks_min_agent_count
max_count = var.aks_max_agent_count
vm_size = var.aks_node_pool_vm_size
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
# tags = data.azurerm_resource_group.rg.tags
}
provider "helm" {
version = "1.3.2"
kubernetes {
host = azurerm_kubernetes_cluster.k8s.kube_config[0].host
client_key = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_key)
client_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate)
load_config_file = false
}
}
resource "helm_release" "nginx-ingress" {
name = "nginx-ingress-internal"
repository = "https://kubernetes-charts.storage.googleapis.com"
chart = "stable/nginx-ingress"
set {
name = "rbac.create"
value = "true"
}
}
You should skip stable in the chart name: it is a repository name but you have no helm repositories defined. Your resource should look like:
resource "helm_release" "nginx-ingress" {
name = "nginx-ingress-internal"
repository = "https://kubernetes-charts.storage.googleapis.com"
chart = "nginx-ingress"
...
}
which is equivalent to the helm command:
helm install nginx-ingress-internal nginx-ingress --repo https://kubernetes-charts.storage.googleapis.com
Alternatively, you can define repositories with the repository_config_path provider argument.
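For the second option, a minimal sketch, assuming a provider version that exposes repository_config_path and a repositories.yaml file that you maintain yourself (the file path here is hypothetical):
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config[0].host
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_key)
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate)
  }

  # Hypothetical path; the file uses Helm's repositories.yaml format and lists
  # each repository's name and URL (for example a "stable" entry pointing at
  # the chart repository used above).
  repository_config_path = "${path.module}/repositories.yaml"
}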
I am deploying my infra with terraform, but for AKS I use ARM templating because it has some features that are not in TF yet.
So in my tf template I have the following resource defined to deploy an ARM template:
resource "azurerm_template_deployment" "k8s" {
name = "${var.environment}-aks-deployment"
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
parameters = {
workspaceResourceId = "${azurerm_log_analytics_workspace.k8s-law.id}"
aksClusterName = "fntm-k8s-${var.environment}"
subnetKubernetes = "${azurerm_subnet.kubernetes.id}"
servicePrincipal = "${azuread_service_principal.k8s_sp.application_id}"
clientSecret = "${random_string.sp_password.result}"
clientAppID = "${var.clientAppID}"
serverAppID = "${var.serverAppID}"
tenantID = "${var.tenant_id}"
serverAppSecret = "${var.serverAppSecret}"
}
template_body = "${file("kubernetes/azuredeploy.json")}"
deployment_mode = "Incremental"
}
The deployment of the cluster goes fine, but after that I need to get data from the AKS cluster that will be used by a different module.
If I use the data source for AKS, it tries to read the cluster data before the cluster is deployed, so the part below doesn't work.
data "azurerm_kubernetes_cluster" "kubernetes" {
name = "fntm-k8s-${var.environment}"
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
}
I thought about using depends_on, but that is not supported in data resources.
Does anybody have an idea how I can expose the node_resource_group attribute of the AKS cluster as an output? Or any other thoughts/solutions?
output "k8s_resource_group" {
value = "${lookup(azurerm_template_deployment.k8s.outputs, "?????")}"
}
In your azuredeploy.json use this for the output:
"outputs": {
"aksClusterName": {
"type": "string",
"value": "[parameters('aksClusterName')]"
}
}
And in your tf file use:
output "aksClusterName" {
value = "${azurerm_template_deployment.k8s.outputs["aksClusterName"]}"
}
data "azurerm_kubernetes_cluster" "kubernetes" {
name = ""
resource_group_name = "${azurerm_resource_group.kubernetes.name}"
}
output "k8s_resource_group" {
value = "${data.azurerm_kubernetes_cluster.kubernetes.node_resource_group}"
}
I am trying to install a Helm chart with the Terraform Helm provider using the following Terraform script.
I have already succeeded in using the Kubernetes provider to deploy some k8s resources, but it doesn't work with Helm.
terraform v0.11.13
provider.helm v0.10
provider.kubernetes v1.9
provider "helm" {
alias = "prdops"
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
host = "${google_container_cluster.prdops.endpoint}"
alias = "prdops"
load_config_file = false
username = "${google_container_cluster.prdops.master_auth.0.username}"
password = "${google_container_cluster.prdops.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.prdops.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.cluster_ca_certificate)}"
}
}
resource "kubernetes_service_account" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "tiller"
}
subject {
kind = "ServiceAccount"
name = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
api_group = ""
}
}
resource "helm_release" "jenkins" {
provider = "helm.prdops"
name = "jenkins"
chart = "stable/jenkins"
}
but I'm getting the following error:
1 error(s) occurred:
* helm_release.jenkins: 1 error(s) occurred:
* helm_release.jenkins: rpc error: code = Unknown desc = configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Helm v2 uses a server-side component called Tiller (it is removed in Helm v3). For Helm to function, Tiller is assigned a service account with which it interacts with the Kubernetes API. In this case it seems Tiller's service account has insufficient permissions to perform the operation.
Kindly check whether the Tiller pod is running in the kube-system namespace. If it is not, reinstall Helm and run helm init so that the Tiller pod comes up, and this issue should be resolved.
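If the Tiller pod is running but was installed under the default service account (which is what the error message shows), one common pattern is to bind the dedicated tiller service account to the built-in cluster-admin ClusterRole and let the provider install Tiller with that account. A sketch against the configuration above; binding to cluster-admin is an assumption about the desired permission level, not a requirement:
resource "kubernetes_cluster_role_binding" "tiller" {
  provider = "kubernetes.prdops"

  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    # Built-in role; the original binding referenced a ClusterRole named
    # "tiller", which does not exist unless it is created separately.
    name = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.tiller.metadata.0.name}"
    namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
    api_group = ""
  }
}
Since the provider block already sets service_account to the tiller account, Tiller should then no longer run as system:serviceaccount:kube-system:default, which is the identity the forbidden error complains about.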