index.yaml : 404 Not Found - terraform

I want to run a Helm chart from a Terraform script. I tried this:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.6.0"
    }
  }
}
provider "kubectl" {
# run kubectl cluster-info to get expoint and port
host = "https://192.168.1.139:6443/"
token = "eyJhbG......."
insecure = "true"
}
provider "kubernetes" {
# run kubectl cluster-info to get expoint and port
host = "https://192.168.1.139:6443/"
token = "eyJhb...."
insecure = "true"
}
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
}
}
resource "helm_release" "spring-helm-stg" {
name = "spring-helm-stg"
repository = "https://github.com/rcbandit111/terraform_helm_chart_poc/tree/main/helm/spring-helm-stg"
chart = "spring-helm-stg"
}
Full code: https://github.com/rcbandit111/terraform_helm_chart_poc
helm_release.spring-helm-stg: Creating...
╷
│ Error: could not download chart: looks like "https://github.com/rcbandit111/terraform_helm_chart_poc/tree/main/helm/spring-helm-stg" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/rcbandit111/terraform_helm_chart_poc/tree/main/helm/spring-helm-stg/index.yaml : 404 Not Found
│
│ with helm_release.spring-helm-stg,
│ on main.tf line 48, in resource "helm_release" "spring-helm-stg":
│ 48: resource "helm_release" "spring-helm-stg" {
I created the Helm chart using this command: helm create spring-helm-stg
But there is no index.yaml file.
Full helm chart code: https://github.com/rcbandit111/terraform_helm_chart_poc/tree/main/helm/spring-helm-stg
Do you know how I can fix this?

First: your repository URL is https://github.com/rcbandit111/terraform_helm_chart_poc (and NOT https://github.com/rcbandit111/terraform_helm_chart_poc/tree/main/helm/spring-helm-stg).
After fixing that, you should place the index.yaml file at the repository root (instead of inside the helm directory) and also make it a valid one; that part is important too. A sketch of the corrected release is shown below.
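To put both fixes together, here is a minimal sketch of the helm_release block following this answer's suggestion; it assumes a valid index.yaml (normally generated with helm repo index) is published at the repository root and reachable over HTTP:
resource "helm_release" "spring-helm-stg" {
  name       = "spring-helm-stg"
  # Repository root suggested above, not the /tree/main/... sub-path
  repository = "https://github.com/rcbandit111/terraform_helm_chart_poc"
  chart      = "spring-helm-stg"
}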
Because your repository is filled with sub-directories and stray index files and is generally hard to follow (it's OK to experiment... it's also OK to delete the irrelevant parts), consider rearranging everything in a new branch and merging it into master, OR creating a new, better-organized repository.
Respect to @marko for the documentation link in the comment. Please use it when you are writing your repository's index file.
Cheers

Related

Why am I getting 'Unsupported argument' errors in my main.tf file?

I have a main.tf file with the following code block:
module "login_service" {
source = "/path/to/module"
name = var.name
image = "python:${var.image_version}"
port = var.port
command = var.command
}
# Other stuff below
I've defined a variables.tf file as follows:
variable "name" {
type = string
default = "login-service"
description = "Name of the login service module"
}
variable "command" {
type = list(string)
default = ["python", "-m", "LoginService"]
description = "Command to run the LoginService module"
}
variable "port" {
type = number
default = 8000
description = "Port number used by the LoginService module"
}
variable "image" {
type = string
default = "python:3.10-alpine"
description = "Image used to run the LoginService module"
}
Unfortunately, I keep getting this error when running terraform plan.
Error: Unsupported argument
│
│ on main.tf line 4, in module "login_service":
│ 4: name = var.name
│
│ An argument named "name" is not expected here.
This error repeats for the other variables. I've done a bit of research, read the Terraform documentation on variables, and read other Stack Overflow answers, but I haven't really found a good answer to the problem.
Any help appreciated.
A Terraform module block is only for referring to a Terraform module. It doesn't support any other kind of module. Terraform modules are a means for reusing Terraform declarations across many configurations, but they are not a general-purpose way to package other kinds of software.
Therefore, in order for this to be valid, you must have at least one .tf file in /path/to/module that declares the variables that you are trying to pass into the module.
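For illustration, a minimal sketch of such a file inside the module (the variable names come from the module block above, and the types mirror the root variables.tf shown in the question):
# /path/to/module/variables.tf (sketch)
variable "name" {
  type = string
}
variable "image" {
  type = string
}
variable "port" {
  type = number
}
variable "command" {
  type = list(string)
}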
From what you've said it seems like there's a missing step in your design: you are trying to declare something in Kubernetes using Terraform, but the configuration you've shown here doesn't include anything which would tell Terraform to interact with Kubernetes.
A typical way to manage Kubernetes objects with Terraform is using the hashicorp/kubernetes provider. A Terraform configuration using that provider would include a declaration of the dependency on that provider, the configuration for that provider, and at least one resource block declaring something that should exist in your Kubernetes cluster:
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  host = "https://example.com/" # URL of your Kubernetes API
  # ...
}

# For example only, a kubernetes_deployment resource
# that declares one Kubernetes deployment.
# In practice you can use any resource type from this
# provider, depending on what you want to declare.
resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      test = "MyExampleApp"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        test = "MyExampleApp"
      }
    }

    template {
      metadata {
        labels = {
          test = "MyExampleApp"
        }
      }

      spec {
        container {
          image = "nginx:1.21.6"
          name  = "example"

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}
Although you can arrange resources into separate modules in Terraform if you wish, I would suggest focusing on learning to directly describe resources in Terraform first and then once you are confident with that you can learn about techniques for code reuse using Terraform modules.

Error: Inconsistent dependency lock file when extracting a resource into a module

I am new to Terraform, and when I extracted one of the resources into a module I got this:
Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current
│ configuration:
│ - provider registry.terraform.io/hashicorp/heroku: required by this configuration but no version is selected
│
│ To update the locked dependency selections to match a changed configuration, run:
│ terraform init -upgrade
How did I get here?
First I had this:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
resource "heroku_addon" "redis" {
app = heroku_app.example.id
plan = "rediscloud:30"
}
After that, terraform init ran without error, and terraform plan was successful as well.
Then I extracted the redis resource declaration into a module:
provider "heroku" {}
resource "heroku_app" "example" {
name = "learn-terraform-heroku-ob"
region = "us"
}
module "key-value-store" {
source = "./modules/key-value-store"
app = heroku_app.example.id
plan = "30"
}
And the content of modules/key-value-store/main.tf is this:
terraform {
  required_providers {
    mycloud = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
terraform get went well, but terraform plan showed me the above error!
For this code to work, you have to have the required_providers blocks in both the root and child modules. So, the following needs to happen:
Add the required_providers block to the root module (this is what you have already)
Add the required_providers block to the child module and name it properly (currently you have named it mycloud, and the provider "heroku" {} block is missing)
The code that needs to be added in the root module is:
terraform {
  required_providers {
    heroku = {
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {}

resource "heroku_app" "example" {
  name   = "learn-terraform-heroku-ob"
  region = "us"
}

module "key-value-store" {
  source = "./modules/key-value-store"
  app    = heroku_app.example.id
  plan   = "30"
}
In the child module (i.e., ./modules/key-value-store) the following needs to be present:
terraform {
  required_providers {
    heroku = { ### not mycloud
      source  = "heroku/heroku"
      version = "~> 4.6"
    }
  }
}

provider "heroku" {} ### this was missing as well

resource "heroku_addon" "redis" {
  app  = var.app
  plan = "rediscloud:${var.plan}"
}
This stopped working when the second resource was moved to the module because Heroku is not an official Terraform provider, so the provider settings are not propagated to modules automatically. For unofficial providers (e.g., those marked as verified), the corresponding required_providers and provider "<name>" {} blocks have to be defined explicitly. Also, make sure to remove the .terraform directory and re-run terraform init.
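As a side note (an assumption, not part of the original answer): since the child module references var.app and var.plan, it also needs to declare them, for example in ./modules/key-value-store/variables.tf:
# Assumed declarations; names taken from the module block above
variable "app" {
  type = string
}
variable "plan" {
  type = string
}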

Terraform Cloud Run Service URL

I create a Cloud Run service like so:
terraform {
  required_version = ">= 1.1.2"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.1.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.2.0"
    }
  }
}

provider "google" {
  project     = "main_project"
  region      = "us-central-1"
  credentials = "<my-key-path>"
}

resource "google_cloud_run_service" "default" {
  name     = "cloudrun-srv"
  location = "us-central1"

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}
I want to save the value of the service URL that is created (https://default-hml2qtrgfq-uw.a.run.app) in an output variable, something like:
output "cloud_run_instance_url" {
  value = google_cloud_run_service.default.url
}
This gives me an error:
terraform plan
╷
│ Error: Unsupported attribute
│
│ on main.tf line 40, in output "cloud_run_instance_url":
│ 40: value = google_cloud_run_service.default.url
│
│ This object has no argument, nested block, or exported attribute named "url".
╵
How do I get this output value and assign it to a variable so that other services like Cloud Scheduler can point to it?
If you declare an output for the url resource attribute like:
output "cloud_run_instance_url" {
value = google_cloud_run_service.default.status.0.url
}
then it will be available for resolution (for example, as an input to other modules) in the scope where the module is declared, under the namespace module.<declared module name>.cloud_run_instance_url. For example, if this module is declared in the root module config, it can be resolved at that address elsewhere in the root module config.
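As a sketch of that resolution (the module name cloud_run and its source path are hypothetical), the root module could consume the output like this:
# Root module (sketch); "cloud_run" is a hypothetical module name
module "cloud_run" {
  source = "./modules/cloud-run" # wherever the configuration above lives
}

# Re-export the URL (or feed it into another resource, e.g. a scheduler job)
output "cloud_run_instance_url" {
  value = module.cloud_run.cloud_run_instance_url
}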

Terraform AKS error: services "azure-vote-back" already exists, how to deal with it?

In Terraform I wrote a resource that deploys to AKS. I want to apply the Terraform changes multiple times without getting the error below; the system should automatically detect whether the resource already exists / is identical. Currently it shows me 'already exists', but I don't want it to fail. Any suggestions on how I can fix this issue?
│ Error: services "azure-vote-back" already exists
│
│ with kubernetes_service.example2,
│ on main.tf line 91, in resource "kubernetes_service" "example2":
│ 91: resource "kubernetes_service" "example2" {
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "kubernetescluster"
resource_group_name = "myResourceGroup"
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_namespace" "azurevote" {
metadata {
annotations = {
name = "azurevote-annotation"
}
labels = {
mylabel = "azurevote-value"
}
name = "azurevote"
}
}
resource "kubernetes_service" "example" {
metadata {
name = "azure-vote-front"
}
spec {
selector = {
app = kubernetes_pod.example.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "example" {
metadata {
name = "azure-vote-front"
labels = {
app = "azure-vote-front"
}
}
spec {
container {
image = "mcr.microsoft.com/azuredocs/azure-vote-front:v1"
name = "front"
env {
name = "REDIS"
value = "azure-vote-back"
}
}
}
}
resource "kubernetes_pod" "example2" {
metadata {
name = "azure-vote-back"
namespace = "azure-vote"
labels = {
app = "azure-vote-back"
}
}
spec {
container {
image = "mcr.microsoft.com/oss/bitnami/redis:6.0.8"
name = "back"
env {
name = "ALLOW_EMPTY_PASSWORD"
value = "yes"
}
}
}
}
resource "kubernetes_service" "example2" {
metadata {
name = "azure-vote-back"
namespace = "azure-vote"
}
spec {
selector = {
app = kubernetes_pod.example2.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 6379
target_port = 6379
}
type = "ClusterIP"
}
}
That's the ugly thing about deploying things inside Kubernetes with Terraform... you will run into these errors from time to time, which is why it is not generally recommended :/
You could try to just remove the record from the state file:
terraform state rm 'kubernetes_service.example2'
Terraform will then no longer track this record, and the good thing is that it will not be deleted on the remote system.
On the next run, Terraform will recognise that this resource exists on the remote system and add the record back to the state.
I would like to add a bit to @Philip Welz's answer.
The terraform state rm command is used to remove items from the Terraform state. This command can remove single resources, single instances of a resource, entire modules, and more. [1]
(Just in case) To list all state:
terraform state list
According to the documentation, exactly as @Philip Welz mentioned, this command will cause Terraform to "forget" all of the instances of the kubernetes_service resource named "example2":
terraform state rm 'kubernetes_service.example2'
Afterwards you should see:
Successfully removed 1 resource instance(s).
See also links:
[1] Doc about Command: state rm
[2] This question
[3] This guide

Error: data helm repository doesn't work as expected

I am trying to deploy Helm charts from ACR using the terraform-provider-helm, but it fails with the error below. Can someone please let me know if I am doing anything wrong? I am not able to understand why it is searching for mcpshareddcr-index.yaml.
Terraform Version
0.12.18
Affected Resource(s)
helm_release
helm_repository
Terraform Configuration Files
# Cluster RBAC helm Chart repository
data "helm_repository" "cluster_rbac_helm_chart_repo" {
  name     = "mcpshareddcr"
  url      = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
  username = var.ARM_CLIENT_ID
  password = var.ARM_CLIENT_SECRET
}

# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name       = "mcp-rbac-cluster"
  repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
  chart      = "mcp-rbac-cluster"
  version    = "0.1.0"
}
module usage:
provider "azurerm" {
version = "=1.36.0"
tenant_id = var.ARM_TENANT_ID
subscription_id = var.ARM_SUBSCRIPTION_ID
client_id = var.ARM_CLIENT_ID
client_secret = var.ARM_CLIENT_SECRET
skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "./.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
// Module to deploy Stratus offered helmcharts in AKS cluster
module "mcp_resources" {
source = "modules\/helm\/mcp-resources"
ARM_CLIENT_ID = var.ARM_CLIENT_ID
ARM_CLIENT_SECRET = var.ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID = var.ARM_SUBSCRIPTION_ID
ARM_TENANT_ID = var.ARM_TENANT_ID
}
Expected Behavior
Deployment of Helm charts on AKS, fetched from ACR.
Actual Behavior
Error: Looks like "***/helm/v1/repo" is not a valid chart repository or cannot be reached: open .helm/repository/cache/.helm/repository/cache/mcpshareddcr-index.yaml: no such file or directory
Steps to Reproduce
terraform plan
