Error while installing Helm chart using Terraform helm provider

I am trying to install a Helm chart with the Terraform Helm provider using the following Terraform script.
I've already succeeded in using the Kubernetes provider to deploy some k8s resources, but it doesn't work with Helm.
terraform v0.11.13
provider.helm v0.10
provider.kubernetes v1.9
provider "helm" {
alias = "prdops"
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
host = "${google_container_cluster.prdops.endpoint}"
alias = "prdops"
load_config_file = false
username = "${google_container_cluster.prdops.master_auth.0.username}"
password = "${google_container_cluster.prdops.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.prdops.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.cluster_ca_certificate)}"
}
}
resource "kubernetes_service_account" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "tiller"
}
subject {
kind = "ServiceAccount"
name = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
api_group = ""
}
}
resource "helm_release" "jenkins" {
provider = "helm.prdops"
name = "jenkins"
chart = "stable/jenkins"
}
but I'm getting the following error:
1 error(s) occurred:
* helm_release.jenkins: 1 error(s) occurred:
* helm_release.jenkins: rpc error: code = Unknown desc = configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"

Helm v2 uses a server-side component called Tiller (it was removed in Helm v3). In order for Helm to function, Tiller is assigned a service account for interacting with the Kubernetes API. In this case it seems Tiller's service account has insufficient permissions to perform the operation.

Check whether the tiller pod is running in the kube-system namespace. If it is not, reinstall Helm and run helm init so that the tiller pod comes up; that should resolve this issue.
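The error message also shows the request coming from system:serviceaccount:kube-system:default rather than from the tiller service account, and the role_ref in the question points at a ClusterRole named "tiller", which does not exist by default. A minimal sketch of a common fix, assuming binding to the built-in cluster-admin ClusterRole is acceptable for this cluster and that the 0.x helm provider's install_tiller and service_account arguments are available in the version in use:

# Bind the tiller service account to the built-in cluster-admin ClusterRole
# instead of a ClusterRole named "tiller".
resource "kubernetes_cluster_role_binding" "tiller" {
  provider = "kubernetes.prdops"

  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.tiller.metadata.0.name}"
    namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
  }
}

# Have the helm provider install and use Tiller under that service account.
provider "helm" {
  alias           = "prdops"
  install_tiller  = true   # assumption: available in the 0.x helm provider, verify for your version
  service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
  namespace       = "kube-system"

  kubernetes {
    # ... same cluster credentials as in the question ...
  }
}

If Tiller was already installed under the default service account, it may need to be re-initialized (for example with helm init --service-account tiller --upgrade) so that the deployment actually picks up the new account.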

Related

Assign object to Terraform provider configuration

A private Terraform module outputs an object with three properties "host", "token" and "cluster_ca_certificate". The kubernetes provider and the kubernetes section of the helm provider accept the same property names. Unfortunately, as far as I can tell, I cannot e.g. assign the output object to them, so that I don't need to repeat myself:
provider "kubernetes" = module.kubernetes.configuration
provider "helm" {
kubernetes = module.kubernetes.configuration
}
I would prefer something like that over the much more repetitive and error-prone:
provider "kubernetes" {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
provider "helm" {
kubernetes {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
}
Am I missing something? Can this be simplified?

Creating a GCP Cloud Composer V2 instance via Terraform

I am trying to provision a Cloud Composer V2 instance via Terraform.
Terraform version: 1.1.3
Provider versions:
hashicorp/google: ~> 3.87.0
My tf code is as below:
resource "google_composer_environment" "cc_foo_uat_airflow" {
name = "cc-foo-uat-airflow"
region = var.region
project = var.project_id
provider = google-beta
config {
node_config {
zone = var.primary_zone
network = google_compute_network.foo_uat_composer.id
subnetwork = google_compute_subnetwork.foo_uat_composer.id
service_account = module.sa_foo_uat_airflow_runner.id
}
software_config {
image_version = var.image_version
python_version = var.python_version
airflow_config_overrides = {
secrets-backend = "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend"
webserver-expose_config = "True"
}
}
}
}
Relevant variables are below:
variable "image_version" {
default = "composer-2.0.1-airflow-2.1.4"
}
variable "python_version" {
default = "3"
}
Running Terraform via the CLI produces a valid plan, but my build on Terraform Cloud fails with the following error:
Error: googleapi: Error 400: Found 1 problem: 1) Configuring node location is not supported for Cloud Composer environments in versions 2.0.0 and newer., badRequest
with google_composer_environment.cc_foo_uat_airflow
on main.tf line 100, in resource "google_composer_environment" "cc_foo_uat_airflow":
resource "google_composer_environment" "cc_foo_uat_airflow" {
I cannot discern from this error message which portion of my TF code is invalid. I cannot remove the zone argument from the node_config section, as it is required, and I cannot figure out what is causing this error.
Edit: anonymized a missing reference to a proper noun
We're using "terraform-google-composer2.0" module and our .yaml file looks like this
module: "terraform-google-composer2.0"
version: "1.0.0"
name: XXXXX
image_version: composer-2.0.0-airflow-2.1.4
network: XXXXX
subnetwork: composer-XXXXX
region: us-east1
service_account: XXXXXXX
environment_size: ENVIRONMENT_SIZE_LARGE
scheduler_cpu: 2
scheduler_memory_gb: 4
scheduler_storage_gb: 4
scheduler_count: 4
web_server_cpu: 2
web_server_memory_gb: 4
web_server_storage_gb: 4
worker_cpu: 2
worker_max_count: 100
worker_min_count: 3
worker_memory_gb: 4
airflow_config_overrides:
  scheduler-catchup_by_default: false
  scheduler-dag_dir_list_interval: 180
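If you want to stay with a plain google_composer_environment resource rather than that module, the error message suggests dropping the node location; a sketch of the node_config without the zone argument (assuming the google provider version in use allows omitting it for Composer 2):

resource "google_composer_environment" "cc_foo_uat_airflow" {
  name     = "cc-foo-uat-airflow"
  region   = var.region
  project  = var.project_id
  provider = google-beta

  config {
    node_config {
      # No zone here: Composer 2 environments reject a configured node location.
      network         = google_compute_network.foo_uat_composer.id
      subnetwork      = google_compute_subnetwork.foo_uat_composer.id
      service_account = module.sa_foo_uat_airflow_runner.id
    }

    software_config {
      # remaining software_config settings as in the question
      image_version = var.image_version
    }
  }
}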

Terraform apply fails because kubernetes provider runs as user "client" that has no permissions [duplicate]

I can use terraform to deploy a Kubernetes cluster in GKE.
Then I have set up the provider for Kubernetes as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user "client", which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now. How should I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP communication with the cluster, which is insecure if it is done through the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "${google_container_cluster.default.endpoint}"
  token                  = "${data.google_client_config.default.access_token}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
  load_config_file       = false
}
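If the same configuration also drives Helm, the token-based credentials can be reused in the helm provider's kubernetes block (a sketch under the same assumptions as the snippet above):

provider "helm" {
  kubernetes {
    host                   = "${google_container_cluster.default.endpoint}"
    token                  = "${data.google_client_config.default.access_token}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
    load_config_file       = false
  }
}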
OR give permissions to the default "client" user. But you need valid authentication against the GKE cluster for the provider to apply this, so there is a bit of a circular dependency here:
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea.
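For illustration, a ClusterRole carrying the verbs typically needed for deployments might look roughly like this (the name "deployer" and the verb list are just placeholders; scope them to what you actually need):

# Hypothetical role granting the usual verbs on deployments; adjust as needed.
resource "kubernetes_cluster_role" "deployer" {
  metadata {
    name = "deployer"
  }

  rule {
    api_groups = ["apps", "extensions"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

It can then be bound to the "client" user with a kubernetes_cluster_role_binding like the one shown above, instead of granting the full cluster-admin role.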
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}

helm list not showing up after terraform apply

I am using the Terraform helm provider to install a Helm package.
My current main.tf file:
provider "helm" {
kubernetes {
config_path = pathexpand(var.kube_config)
}
}
provider "kubernetes" {
config_path = pathexpand(var.kube_config)
}
data "template_file" "test_values" {
template = file("./scripts/test-values.yml")
vars = {
NAMESPACE = "test"
}
}
resource "helm_release" "test" {
chart = "test"
name = "test"
repository = "."
namespace = "test"
values = [
data.template_file.test_values.rendered
]
}
Kubectl command output
kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
test 0/1 Running 0 59m
Issue is "helm list" does not show any result.
My current version
helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
and terraform version:
provider registry.terraform.io/hashicorp/google v3.69.0
+ provider registry.terraform.io/hashicorp/helm v2.1.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.2.0
Any reason why "helm list" does not show any output ?

Error: data helm repository doesn't work as expected

I am trying to deploy Helm charts from ACR using the terraform-provider-helm, but it fails with the error below. Can someone please let me know if I am doing anything wrong? I am not able to understand why it is searching for mcpshareddcr-index.yaml.
Terraform Version
0.12.18
Affected Resource(s)
helm_release
helm_repository
Terraform Configuration Files
# Cluster RBAC helm Chart repository
data "helm_repository" "cluster_rbac_helm_chart_repo" {
  name     = "mcpshareddcr"
  url      = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
  username = var.ARM_CLIENT_ID
  password = var.ARM_CLIENT_SECRET
}

# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name       = "mcp-rbac-cluster"
  repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
  chart      = "mcp-rbac-cluster"
  version    = "0.1.0"
}
module usage:
provider "azurerm" {
version = "=1.36.0"
tenant_id = var.ARM_TENANT_ID
subscription_id = var.ARM_SUBSCRIPTION_ID
client_id = var.ARM_CLIENT_ID
client_secret = var.ARM_CLIENT_SECRET
skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "./.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
// Module to deploy Stratus offered helmcharts in AKS cluster
module "mcp_resources" {
source = "modules\/helm\/mcp-resources"
ARM_CLIENT_ID = var.ARM_CLIENT_ID
ARM_CLIENT_SECRET = var.ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID = var.ARM_SUBSCRIPTION_ID
ARM_TENANT_ID = var.ARM_TENANT_ID
}
Expected Behavior
Helm charts are fetched from ACR and deployed on AKS.
Actual Behavior
Error: Looks like "***/helm/v1/repo" is not a valid chart repository or cannot be reached: open .helm/repository/cache/.helm/repository/cache/mcpshareddcr-index.yaml: no such file or directory
Steps to Reproduce
terraform plan
