Terraform erroring waiting for ECS cluster

I am using terraform to create an ECS cluster in AWS.
resource "aws_ecs_cluster" "cluster" {
name = "api-cluster"
capacity_providers = ["FARGATE"]
default_capacity_provider_strategy {
capacity_provider = "FARGATE"
weight = "100"
}
}
Whenever I run this code it starts fine, but then errors while waiting for the cluster to be created.
Terraform v1.0.0
on linux_amd64
Initializing plugins and modules...
aws_ecs_cluster.cluster: Creating...
aws_ecs_cluster.cluster: Still creating... [10s elapsed]
╷
│ Error: error waiting for ECS Cluster (arn:aws:ecs:xxxxxxxxxxx:cluster/api-cluster) to become Available: couldn't find resource
│
│ with aws_ecs_cluster.cluster,
│ on main.tf line 73, in resource "aws_ecs_cluster" "cluster":
│ 73: resource "aws_ecs_cluster" "cluster" {
│
╵
The ECS cluster is ultimately deployed, but the script errors. What am I doing wrong?

It turns out this is caused by the ecs:DescribeClusters action not being allowed for the user executing terraform apply; the provider needs that permission to poll the cluster's status while waiting for it to become available.
Thanks to #ydaetskocR for pointing me in the right direction.
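For example (a minimal sketch, not from the original answer; the policy name is a placeholder and the policy still has to be attached to the user or role running Terraform), the missing permission could be granted like this:
data "aws_iam_policy_document" "ecs_describe" {
  statement {
    actions   = ["ecs:DescribeClusters"]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "ecs_describe" {
  name   = "allow-ecs-describe-clusters" # placeholder name
  policy = data.aws_iam_policy_document.ecs_describe.json
}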

In my case, I had:
resource "aws_ecs_cluster" "cluster" {
name = "${var.service}-${var.env}"
}
module "ecs_service" {
source = "app.terraform.io/ifit/ecs-service/aws"
version = "2.3.0"
cluster_name = aws_ecs_cluster.cluster.name
...other stuff...
}
Terraform was trying to create both the ECS cluster and the ECS service at the same time, but the service depends on the cluster for its cluster_name. I needed to add a depends_on meta-argument to the ecs_service module:
module "ecs_service" {
source = "app.terraform.io/ifit/ecs-service/aws"
version = "2.3.0"
cluster_name = aws_ecs_cluster.cluster.name
...other stuff...
depends_on = [
aws_ecs_cluster.cluster
]
}

Related

Authenticating on AKS for deploying a Helm release with Terraform

I am trying to do a Helm chart deployment through Terraform code on AKS.
The TF code that I have will create a resource in Datadog from which I will grab an output value that will be passed to my Helm release to be deployed on my cluster. It only has to create two resources, one of which is the Helm chart.
The problem I am having is with authentication against my Kubernetes cluster: I am using a data source to fetch the credentials from the cluster and then pass them to my kubernetes and helm providers.
My Terraform state for the AKS cluster is stored inside a blob in an Azure Storage account.
I have tried updating the Helm chart versions and using different methods to access the data, such as wrapping my variables in ${}.
I also tried changing from username = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username to the admin configuration, username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username.
Terraform version: 1.1.7
A data source is set up in main.tf to fetch the credentials for the AKS cluster:
data "azurerm_kubernetes_cluster" "credentials" {
name = var.aks_cluster_name
resource_group_name = var.aks_cluster_resource_group_name
}
This is versions.tf, which sets up the connections to AKS:
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }

  backend "azurerm" {
  }
}

provider "azurerm" {
  features {}
}

provider "helm" {
  debug = true

  kubernetes {
    username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
    password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
    host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
  }
}

provider "kubernetes" {
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config.0.password
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
}
This is the error I see when running terraform apply; it reports that it can't find the elements in the collection for any of the attributes specified in my providers:
╷
│ Error: Invalid index
│
│ on versions.tf line 26, in provider "helm":
│ 26: host = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.host
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
╵
[ ... ]
╷
│ Error: Invalid index
│
│ on versions.tf line 27, in provider "helm":
│ 27: username = data.azurerm_kubernetes_cluster.credentials.kube_admin_config.0.username
│ ├────────────────
│ │ data.azurerm_kubernetes_cluster.credentials.kube_admin_config has a sensitive value
│
│ The given key does not identify an element in this collection value.
I am unsure how to change my Terraform code so that this authentication works, given that the methods mentioned above have yielded no results. If needed, I can provide the TF code for the deployment of the resources.
I'm using kubelogin to identify myself:
data "azurerm_client_config" "current" {
}
provider "helm" {
kubernetes {
host = azurerm_kubernetes_cluster.aks.kube_config.0.host
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = [
"get-token",
"--environment", "AzurePublicCloud",
"--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # The AAD server app ID of AKS Managed AAD is always 6dae42f8-4368-4678-94ff-3960e28e3630 in any environments.
"--client-id", "${yamldecode(azurerm_kubernetes_cluster.aks.kube_config_raw).users[0].user.auth-provider.config.client-id}",
"--tenant-id", data.azurerm_client_config.current.tenant_id,
"--login", "devicecode"
]
command = "kubelogin"
}
}
}
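In the question's setup, a rough equivalent that keeps the existing data source would look something like the sketch below. This is an assumption on my part, not tested against that cluster: it presumes an AAD-enabled cluster, the kubelogin binary installed wherever Terraform runs, and the azurecli login mode (reusing az login credentials) instead of the device code flow.
provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "kubelogin"
      # --login azurecli reuses the credentials from `az login`; the server ID is the
      # fixed AAD application ID of AKS Managed AAD mentioned above.
      args = [
        "get-token",
        "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630",
        "--login", "azurecli",
      ]
    }
  }
}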

Error: Inconsistent dependency lock file when extracting a resource into a module

I am new to Terraform, and as I extracted one of the resources into a module I got this:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current
│ configuration:
│ - provider registry.terraform.io/hashicorp/heroku: required by this configuration but no version is selected
│
│ To update the locked dependency selections to match a changed configuration, run:
│ terraform init -upgrade
How did I get here?
First I had this:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
resource "heroku_addon" "redis" {
app = heroku_app.example.id
plan = "rediscloud:30"
}
After that, terraform init ran without errors and terraform plan was successful as well.
Then I extracted the redis resource declaration into a module:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
module "key-value-store" {
source = "./modules/key-value-store"
app = heroku_app.example.id
plan = "30"
}
And the content of modules/key-value-store/main.tf is this:
terraform {
  required_providers {
    mycloud = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
terraform get went well, but terraform plan showed me the above error!
For this code to work, you have to have the required_providers blocks in both the root and the child module. So, the following needs to happen:
1. Add the required_providers block to the root module (this is what you have already).
2. Add the required_providers block to the child module and name it properly (currently you have set it to mycloud, and the provider "heroku" {} block is missing).
The code that needs to be added in the root module is:
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {}

resource "heroku_app" "example" {
  name   = "learn-terraform-heroku-ob"
  region = "us"
}

module "key-value-store" {
  source = "./modules/key-value-store"
  app    = heroku_app.example.id
  plan   = "30"
}
In the child module (i.e., ./modules/key-value-store) the following needs to be present:
terraform {
  required_providers {
    heroku = {              ### not mycloud
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {}        ### this was missing as well

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
This stopped working when the second resource was moved into the module because Heroku is not an official Terraform provider, so the provider settings are not propagated to the modules. For unofficial providers (e.g., those marked as verified), the corresponding required_providers and provider "<name>" {} blocks have to be defined. Also, make sure to remove the .terraform directory and re-run terraform init.
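As a side note (a sketch of the same mechanism, not part of the original answer): the wiring between the root configuration and the child module can also be made explicit with the providers meta-argument on the module block, assuming the child module declares the provider under the local name heroku as shown above:
module "key-value-store" {
  source = "./modules/key-value-store"
  app    = heroku_app.example.id
  plan   = "30"

  providers = {
    heroku = heroku
  }
}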

Azure VM snapshot using Terraform throwing error

I have written a small Terraform script to take snapshots of two VMs sitting on Azure. I have created two lists with the resource group details and OS disk names. Below are the necessary files.
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }

  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}

data "azurerm_managed_disk" "existing" {
  for_each            = zipmap(var.cloud_resource_group_list, var.cloud_vm_os_disk_name)
  name                = each.value
  resource_group_name = each.key
}

resource "azurerm_snapshot" "example" {
  name                = "snapshot"
  for_each            = ([for i in data.azurerm_managed_disk.existing : zipmap(i.resource_group_name, i.name)])
  location            = data.azurerm_managed_disk.existing[each.key].location
  resource_group_name = data.azurerm_managed_disk.existing[each.key]
  create_option       = "Copy"
  source_uri          = data.azurerm_managed_disk.existing[each.value].id
}
variables.tf
variable "cloud_resource_group_list" {
description = "VM resource group name"
type = list(string)
}
variable "cloud_vm_os_disk_name" {
description = "VM OS disk names"
type = list(string)
}
terraform.tfvars
cloud_resource_group_list = ["rg1", "rg2"]
cloud_vm_os_disk_name = ["disk1", "disk2"]
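For reference, zipmap over these two lists produces the following map, which is what the data source's for_each iterates over (keyed by resource group name, with the disk name as the value):
# zipmap(var.cloud_resource_group_list, var.cloud_vm_os_disk_name) evaluates to:
# {
#   "rg1" = "disk1"
#   "rg2" = "disk2"
# }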
terraform validate runs successfully. When I run terraform apply, the first resource group is read successfully, but it fails for the second resource group. Below is the error.
terraform apply
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["disk1"]: Read complete after 1s
╷
│ Error: Managed Disk: (Disk Name "disk2" / Resource Group "rg2") was not found
│
│ with data.azurerm_managed_disk.existing["rg2"],
│ on main.tf line 22, in data "azurerm_managed_disk" "existing":
│ 22: data "azurerm_managed_disk" "existing" {
Both rg2 and disk2 exist in the Azure portal. Please help me see where I am wrong and why it's not working.

Terraform plan throws Provider Plugin Error

I am doing a small POC with Terraform and I am unable to run terraform plan.
My code:
terraform {
  backend "azurerm" {
    storage_account_name = "appngqastorage"
    container_name       = "terraform"
    key                  = "qa.terraform.tfstate"
    access_key           = "my access key here"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.77"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "qa_resource_group" {
  location = "East US"
  name     = "namehere"
}
My execution:
terraform init = success
terraform validate = configuration is valid
terraform plan = throws exception
Error:
│ Error: Plugin error
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 15, in provider "azurerm":
│ 15: provider"azurerm"{
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).ConfigureProvider: rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8
After digging a little deeper, I was able to figure out what the issue was.
The project I am working on is used across multiple regions, so when testing I swap my region in order to properly check the data that displays in a specific region. This time, when running terraform apply, my Windows configuration was pointing to another region, not the US.
This thread helped me understand.
1. Upgrade the Azure CLI to the latest version using az upgrade
2. Log into your Azure account using az login
3. Re-run your Terraform commands!

Using Terraform with Helm on minikube gives error

I have the following .tf file
provider "kubernetes" {
config_context_cluster = "minikube"
}
resource "kubernetes_namespace" "user-namespace" {
metadata {
name = "user-namespace"
}
}
provider "helm" {
kubernetes {
config_context_cluster = "minikube"
}
}
resource "helm_release" "local" {
name = "user-server-chart"
chart = "./user-server"
}
When I run terraform apply I get the following error
kubernetes_namespace.brw-user-namespace: Creating...
helm_release.local: Creating...
Error code explanation: 501 = Server does not support this operation.\n") has prevented the request from succeeding (post namespaces)
│
│ with kubernetes_namespace.user-namespace,
│ on main.tf line 5, in resource "kubernetes_namespace" "user-namespace":
│ 5: resource "kubernetes_namespace" "user-namespace" {
│
╵
Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with helm_release.local,
│ on main.tf line 17, in resource "helm_release" "local":
│ 17: resource "helm_release" "local" {
Step 1: Your kubeconfig does not have the correct context. Set and use the context, or add a KUBECONFIG line to your ~/.bashrc file:
kubectl config set-context ~/.kube/kubeconfig1.yml
kubectl config use-context ~/.kube/kubeconfig1.yml
or export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
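Alternatively, a minimal sketch that points both providers at the minikube context explicitly (assuming the default kubeconfig at ~/.kube/config; adjust the path if yours differs):
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}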
Step 2: A Helm Release resource can be imported using its namespace and name
e.g. terraform import helm_release.example default/example-name
Since the repository attribute is not persisted as metadata by Helm, it will not be set to any value by default. All other provider-specific attributes will be set to their default values, and they can be overridden after running apply using the resource definition configuration.
You may refer to the documents [1] and [2] for additional information.
[1] https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
