I have the following .tf file:
provider "kubernetes" {
  config_context_cluster = "minikube"
}

resource "kubernetes_namespace" "user-namespace" {
  metadata {
    name = "user-namespace"
  }
}

provider "helm" {
  kubernetes {
    config_context_cluster = "minikube"
  }
}

resource "helm_release" "local" {
  name  = "user-server-chart"
  chart = "./user-server"
}
When I run terraform apply I get the following errors:
kubernetes_namespace.brw-user-namespace: Creating...
helm_release.local: Creating...
Error code explanation: 501 = Server does not support this operation.\n") has prevented the request from succeeding (post namespaces)
│
│ with kubernetes_namespace.user-namespace,
│ on main.tf line 5, in resource "kubernetes_namespace" "user-namespace":
│ 5: resource "kubernetes_namespace" "user-namespace" {
│
╵
Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with helm_release.local,
│ on main.tf line 17, in resource "helm_release" "local":
│ 17: resource "helm_release" "local" {
Step 1: Your kubeconfig does not have the correct context. Either switch kubectl to the right context:
kubectl config use-context minikube
or point KUBECONFIG at the right file, and add the export to your ~/.bashrc so it persists:
export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
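If the context is set but the providers still cannot reach the cluster, you can also point them at the kubeconfig explicitly instead of relying on the environment. A sketch, assuming the default kubeconfig path and the minikube context from the question:

```hcl
# Point both providers at an explicit kubeconfig file and context.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}
```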
Step 2: A Helm release resource can be imported using its namespace and name, e.g.:
terraform import helm_release.example default/example-name
Since the repository attribute is not persisted as metadata by Helm, it will not be set to any value by default. All other provider-specific attributes will be set to their default values, and they can be overridden after running apply using the resource definition configuration.
You may refer to the documents [1] and [2] for additional information.
[1] https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
Related
I am trying to do a Helm chart deployment through Terraform code on AKS.
The TF code that I have will create a resource in Datadog from which I will grab an output value that will be passed to my Helm release to be deployed on my cluster. It only has to create two resources, one of which is the Helm chart.
The problem that I am having is with authentication against my Kubernetes cluster: I am using a data source to bring in the credentials from the cluster and then pass them to my kubernetes and helm providers.
My Terraform state for the AKS cluster is stored inside a Blob in an Azure Storage account.
I have tried updating the Helm chart versions and using different ways to reference the data, such as wrapping my variables in ${}.
I also tried changing from username = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username to the admin configuration, username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username.
Terraform version: 1.1.7
A data source is set up in main.tf to bring in the credentials for the AKS cluster:
data "azurerm_kubernetes_cluster" "credentials" {
  name                = var.aks_cluster_name
  resource_group_name = var.aks_cluster_resource_group_name
}
This is versions.tf, which is used to set up the connection to AKS:
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }

  backend "azurerm" {
  }
}

provider "azurerm" {
  features {}
}

provider "helm" {
  debug = true

  kubernetes {
    username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
    password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
    host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
  }
}

provider "kubernetes" {
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
}
Error that I am seeing when running terraform apply, which will report that it can't find the elements in the collection for any of the attributes specified in my provider:
╷
│ Error: Invalid index
│
│ on versions.tf line 26, in provider "helm":
│ 26: host = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.host
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
╵
[ ... ]
╷
│ Error: Invalid index
│
│ on versions.tf line 27, in provider "helm":
│ 27: username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
I am unsure how to change my Terraform code so that this authentication works, given that the methods mentioned above have yielded no results. If needed I can provide the TF code for the deployment of the resources.
I'm using kubelogin to authenticate:
data "azurerm_client_config" "current" {
}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "kubelogin"
      args = [
        "get-token",
        "--environment", "AzurePublicCloud",
        # The AAD server app ID of AKS Managed AAD is always 6dae42f8-4368-4678-94ff-3960e28e3630, in any environment.
        "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
        "--client-id", "${yamldecode(azurerm_kubernetes_cluster.aks.kube_config_raw).users[0].user.auth-provider.config.client-id}",
        "--tenant-id", data.azurerm_client_config.current.tenant_id,
        "--login", "devicecode"
      ]
    }
  }
}
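If the kubernetes provider from the question needs the same authentication, the exec block can be reused there as well. A sketch under the same assumptions as the helm block above (the --client-id argument is elided here; derive it the same way if your cluster requires it):

```hcl
# Same kubelogin-based auth, applied to the kubernetes provider.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--environment", "AzurePublicCloud",
      "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
      "--tenant-id", data.azurerm_client_config.current.tenant_id,
      "--login", "devicecode"
    ]
  }
}
```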
I am using terraform to create an ECS cluster in AWS.
resource "aws_ecs_cluster" "cluster" {
  name               = "api-cluster"
  capacity_providers = ["FARGATE"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = "100"
  }
}
Whenever I run this code it starts fine, but then errors while waiting for the cluster to be created.
Terraform v1.0.0
on linux_amd64
Initializing plugins and modules...
aws_ecs_cluster.cluster: Creating...
aws_ecs_cluster.cluster: Still creating... [10s elapsed]
╷
│ Error: error waiting for ECS Cluster (arn:aws:ecs:xxxxxxxxxxx:cluster/api-cluster) to become Available: couldn't find resource
│
│ with aws_ecs_cluster.cluster,
│ on main.tf line 73, in resource "aws_ecs_cluster" "cluster":
│ 73: resource "aws_ecs_cluster" "cluster" {
│
╵
The ECS cluster is ultimately deployed, but the script errors. What am I doing wrong?
It turns out this is caused by the ecs:DescribeClusters action not being allowed for the user executing terraform apply.
Thanks to #ydaetskocR for pointing me in the right direction.
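For reference, a minimal sketch of the missing permission, assuming the caller running terraform apply is an IAM user or role you manage (the policy name is hypothetical):

```hcl
# Allow the Terraform caller to poll cluster status during creation.
data "aws_iam_policy_document" "ecs_describe" {
  statement {
    actions   = ["ecs:DescribeClusters"]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "ecs_describe" {
  name   = "allow-ecs-describe-clusters" # hypothetical name
  policy = data.aws_iam_policy_document.ecs_describe.json
}
```

Attach the policy to whichever user or role runs terraform apply.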
In my case, I had:
resource "aws_ecs_cluster" "cluster" {
  name = "${var.service}-${var.env}"
}

module "ecs_service" {
  source  = "app.terraform.io/ifit/ecs-service/aws"
  version = "2.3.0"

  cluster_name = aws_ecs_cluster.cluster.name
  # ...other stuff...
}
It was trying to create both the ECS cluster and the ECS service at the same time, but the service depends on the cluster for its cluster_name. I needed to add a depends_on attribute to the ecs_service module:
module "ecs_service" {
  source  = "app.terraform.io/ifit/ecs-service/aws"
  version = "2.3.0"

  cluster_name = aws_ecs_cluster.cluster.name
  # ...other stuff...

  depends_on = [
    aws_ecs_cluster.cluster
  ]
}
I have a simple data source block to retrieve a WAF policy resource, but it doesn't find the resource. If I comment out the data block there is no problem, and the WAF policy created by Terraform can be applied and destroyed.
data "azurerm_web_application_firewall_policy" "example" {
  name                = "a205555-az-waf-policy"
  resource_group_name = "eastus2-204161-platform-resources"
}

output "azurerm_web_application_firewall_policy" {
  value = data.azurerm_web_application_firewall_policy.example.id
}
When I try to read the resource data I get this error:
│ Error: Error: Web Application Firewall Policy "a205555-az-waf-policy" was not found
│ with data.azurerm_web_application_firewall_policy.example,
│ on output.tf line 40, in data "azurerm_web_application_firewall_policy" "example":
│ 40: data "azurerm_web_application_firewall_policy" "example" {
│
I've tried removing the resource from tfstate, importing the resource, deleting and recreating it, and upgrading the provider in case there was a bug. Any ideas?
Hi, I am trying to follow this official guide to manage AKS resources. In it, terraform_remote_state is used to get the resource_group_name and kubernetes_cluster_name.
data "terraform_remote_state" "aks" {
  backend = "local"

  config = {
    path = "/path/to/base/project/terraform.tfstate"
  }
}

# Retrieve AKS cluster information
provider "azurerm" {
  features {}
}

data "azurerm_kubernetes_cluster" "cluster" {
  name                = data.terraform_remote_state.aks.outputs.kubernetes_cluster_name
  resource_group_name = data.terraform_remote_state.aks.outputs.resource_group_name
}
I have created the initial AKS cluster with the aks module. Looking at its outputs in the documentation, it doesn't export the resource group name or cluster name.
Now I wonder how I can get that information. I have tried the below in the base project:
module "aks" {
  # ...
}

output "resource_group_name" {
  value = module.aks.resource_group_name
}

output "kubernetes_cluster_name" {
  value = module.aks.cluster_name
}
But I get errors when running terraform plan:
Error: Unsupported attribute
│
│ on main.tf line 59, in output "resource_group_name":
│ 59: value = module.aks.resource_group_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "resource_group_name".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 63, in output "kubernetes_cluster_name":
│ 63: value = module.aks.cluster_name
│ ├────────────────
│ │ module.aks is a object, known only after apply
│
│ This object does not have an attribute named "cluster_name".
Those are listed under inputs for that module, though. Now I don't know how to get those values from the terraform_remote_state.
As the module itself doesn't expose the name and resource group as outputs, we have to declare outputs there first, and then reference them when deploying and in the remote state as well.
So we have to add two outputs in output.tf for the aks module after running terraform init:
output "kubernetes_cluster_name" {
  value = azurerm_kubernetes_cluster.main.name
}

output "resource_group_name" {
  value = azurerm_kubernetes_cluster.main.resource_group_name
}
Then surface those outputs in main.tf after defining the modules (i.e. network and aks); you will see your Kubernetes cluster name in the plan as well as after applying it:
output "kuberneteclustername" {
  value = module.aks.kubernetes_cluster_name
}

output "resourcegroupname" {
  value = module.aks.resource_group_name
}
Now let's test it from the remote state:
data "terraform_remote_state" "aks" {
  backend = "local"

  config = {
    path = "path/to/terraform/aksmodule/terraform.tfstate"
  }
}

# Retrieve AKS cluster information
provider "azurerm" {
  features {}
}

data "azurerm_kubernetes_cluster" "cluster" {
  name                = data.terraform_remote_state.aks.outputs.kuberneteclustername
  resource_group_name = data.terraform_remote_state.aks.outputs.resourcegroupname
}

output "aks" {
  value = data.azurerm_kubernetes_cluster.cluster.name
}

output "rg" {
  value = data.azurerm_kubernetes_cluster.cluster.resource_group_name
}
I'm on TF version v1.0.0 (latest) and am trying to make use of the PagerDuty TF provider, and the error log says it could not retrieve the list of available versions. Below is the code snippet and the complete error log.
Code:
terraform {
  required_providers {
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}

provider "pagerduty" {
  token = var.token
}

resource "pagerduty_service" "example" {
  name                    = "My Web App"
  auto_resolve_timeout    = 14400
  acknowledgement_timeout = 600
  escalation_policy       = var.policy
}

resource "pagerduty_service_integration" "apiv2" {
  name    = "API V2"
  type    = "events_api_v2_inbound_integration"
  service = pagerduty_service.example.id
}
Error:
- Finding latest version of hashicorp/pagerduty...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/pagerduty: provider registry
│ registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/pagerduty
│
│ Did you intend to use pagerduty/pagerduty? If so, you must specify that source address in each module which
│ requires that provider. To see which modules are currently depending on hashicorp/pagerduty, run the following
│ command:
│ terraform providers
Answering my own question: separating the terraform required_providers block into its own versions.tf file solved the issue.
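For reference, the relocated block would look like this (a sketch of the versions.tf layout described above). Note that the source must be PagerDuty/pagerduty; without an explicit source, Terraform assumes hashicorp/pagerduty, which is exactly the address in the error message:

```hcl
# versions.tf: provider requirements kept in their own file.
terraform {
  required_providers {
    pagerduty = {
      source  = "PagerDuty/pagerduty"
      version = "~> 1.9.8"
    }
  }
}
```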