I have set up my Azure Kubernetes Service (AKS) cluster using Terraform and it is working well.
I am trying to deploy packages using Helm, but the deployment fails with the error below.
Error: chart "stable/nginx-ingress" not found in https://kubernetes-charts.storage.googleapis.com repository
Note: I tried other packages as well and could not deploy any of them using the Terraform helm_release resource; the Terraform code is below. Installing the chart locally with the helm command works, so I think the issue is with the Terraform Helm provider resources. "nginx" is just a sample; no package deploys through Terraform.
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_cluster_name
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.aks_dns_prefix
kubernetes_version = "1.19.0"
# private_cluster_enabled = true
linux_profile {
admin_username = var.aks_admin_username
ssh_key {
key_data = var.aks_ssh_public_key
}
}
default_node_pool {
name = var.aks_node_pool_name
enable_auto_scaling = true
node_count = var.aks_agent_count
min_count = var.aks_min_agent_count
max_count = var.aks_max_agent_count
vm_size = var.aks_node_pool_vm_size
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
# tags = data.azurerm_resource_group.rg.tags
}
provider "helm" {
version = "1.3.2"
kubernetes {
host = azurerm_kubernetes_cluster.k8s.kube_config[0].host
client_key = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_key)
client_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate)
load_config_file = false
}
}
resource "helm_release" "nginx-ingress" {
name = "nginx-ingress-internal"
repository = "https://kubernetes-charts.storage.googleapis.com"
chart = "stable/nginx-ingress"
set {
name = "rbac.create"
value = "true"
}
}
You should drop the stable/ prefix from the chart name: stable is a repository name, but you have no Helm repositories defined in the provider. Your resource should look like:
resource "helm_release" "nginx-ingress" {
name = "nginx-ingress-internal"
repository = "https://kubernetes-charts.storage.googleapis.com"
chart = "nginx-ingress"
...
}
which is equivalent to the helm command:
helm install nginx-ingress-internal nginx-ingress --repo https://kubernetes-charts.storage.googleapis.com
Alternatively, you can define repositories via the repository_config_path provider argument.
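As a minimal sketch of that approach, assuming a Helm provider version that supports the repository_config_path argument and a repositories.yaml generated beforehand with helm repo add:

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config[0].host
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_key)
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].client_certificate)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate)
  }

  # Hypothetical path; the file should already contain a repository named "stable"
  # pointing at https://kubernetes-charts.storage.googleapis.com
  repository_config_path = "${path.module}/repositories.yaml"
}

With the repository defined there, the chart should again be addressable as stable/nginx-ingress in the helm_release.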
I have created an AKS cluster using the following Terraform code
resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
oms_agent {
log_analytics_workspace_id = module.log_analytics_workspace[0].id
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
and then followed the steps below to install Istio, as per the Istio documentation:
#Prerequisites
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
#create namespace
kubectl create namespace istio-system
# helm install istio-base and istiod
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait
# Check the installation status
helm status istiod -n istio-system
#create namespace and enable istio-injection for envoy proxy containers
kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled
## helm install istio-ingress for traffic management
helm install istio-ingress istio/gateway -n istio-ingress --wait
## Mark the default namespace as istio-injection=enabled
kubectl label namespace default istio-injection=enabled
## Install the App and Gateway
kubectl apply -f bookinfo.yaml
kubectl apply -f bookinfo-gateway.yaml
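To find the address to browse to, a common check (a sketch; the service name is assumed to match the istio-ingress release installed above) is:

# External IP of the ingress gateway service
kubectl get svc istio-ingress -n istio-ingress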
and I could access the application as shown below.
The application prints its logs to the console.
I have created the Log Analytics workspace as mentioned below
# Create Log Analytics Workspace
module "log_analytics_workspace" {
  source = "./modules/log_analytics_workspace"
  count  = var.enable_log_analytics_workspace == true ? 1 : 0

  app_or_service_name = "log"
  subscription_type   = var.subscription_type
  environment         = var.environment
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
  instance_number     = var.instance_number
  sku                 = var.log_analytics_workspace_sku
  retention_in_days   = var.log_analytics_workspace_retention_in_days
  tags                = var.tags
}
and set the log_analytics_workspace_id in the oms_agent section of azurerm_kubernetes_cluster to collect logs and ship them to Log Analytics.
Is that all that is needed to ship the application logs to the Log Analytics workspace? Somehow I don't see the logs flowing. Am I missing something?
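A quick sanity check (the commands and pod names below are assumptions based on a default Container Insights setup, not taken from the post) is to confirm the monitoring addon and its agent pods are actually present on the cluster:

# Confirm the monitoring addon is listed for the cluster
# (replace the placeholders with the real resource group and cluster name)
az aks show --resource-group <resource_group> --name <aks_name> --query addonProfiles

# The log collection agent pods (ama-logs-* on newer clusters, omsagent-* on
# older ones) should be Running in kube-system
kubectl get pods -n kube-system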
I am creating a Storage account using Terraform and want to set cross_tenant_replication_enabled to false.
data "azurerm_resource_group" "data_resource_group" {
name = var.resource_group_name
}
resource "azurerm_storage_account" "example_storage_account" {
name = var.storage_account_name
resource_group_name = data.azurerm_resource_group.data_resource_group.name #(Existing resource group)
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
}
I am getting the error below:
Error: Unsupported argument
on ceft_azure/main.tf line 55, in resource "azurerm_storage_account" "example_storage_account":
55: cross_tenant_replication_enabled = false
An argument named "cross_tenant_replication_enabled" is not expected here.
How can I set this attribute to false?
I tried setting cross_tenant_replication_enabled = false in the storage container block as well, but that didn't work.
The storage account itself is created fine with Terraform; I just need cross_tenant_replication_enabled to be false.
Root cause: the AzureRM provider version currently in use does not support cross-tenant replication. Use azurerm >= 3.0.1.
Update the provider version constraint in the terraform block as follows:
required_providers {
  azurerm = {
    source  = "hashicorp/azurerm"
    version = ">=3.0.1"
  }
}
Here is the full code snippet.
Step1:
Run the below command:
terraform init -upgrade
Step2:
Copy the below code into the main.tf file:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "rg_swarna-example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "swarnastorageaccountname"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
tags = {
environment = "staging"
}
}
Step3:
Run the below commands:
terraform plan
terraform apply -auto-approve
Verification:
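As a sketch of how the result could be verified (these commands are a suggestion, not part of the original answer):

# Inspect the attribute in the Terraform state
terraform state show azurerm_storage_account.example

# Or query the storage account directly; allowCrossTenantReplication should be false
az storage account show --name swarnastorageaccountname --resource-group rg_swarna-example-resources --query allowCrossTenantReplication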
I'm currently building my terraform plan and it seems that I'm running into issues as soon as I run the following command:
terraform init
The current main.tf contains this:
terraform {
  backend "azurerm" {
    resource_group_name  = "test"
    storage_account_name = "testaccount"
    container_name       = "testc"
    key                  = "testc.state"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.46.0"
    }
  }
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

data "azurerm_key_vault" "keyVaultClientID" {
  name = "AKSClientID"
  key  = var.keyvaultID
}

data "azure_key_vault_secret" "keyVaultClientSecret" {
  name         = "AKSClientSecret"
  key_vault_id = var.keyvaultID
}
resource "azurerm_kubernetes_cluster" "test_cluster" {
name = var.name
location = var.location
resource_group_name = var.resourceGroup
dns_prefix = ""
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
service_principal {
client_id = data.azurerm_key_vault_secret.keyVaultClientID.value
client_secret = data.azurerm_key_vault_secret.keyVaultClientSecret.value
}
tags = {
"Environment" = "Development"
}
}
The error message that I get is the following:
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/azure: provider
│ registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/azure
I'm looking at the documentation and changing the version, but I'm not having any luck. Does anyone know what else I can do or what I should change in my main.tf?
To solve this issue, you will have to add the following inside the main Terraform block:
required_providers {
  azurerm = {
    source  = "hashicorp/azurerm"
    version = "=2.75.0"
  }
}
If you add it, the issue should not appear again. You might also have to run the upgrade command so Terraform can pick up the new provider version; see the sketch below.
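Putting it together, a sketch of the question's terraform block with the pinned provider version applied (backend values copied from the question; adjust as needed):

terraform {
  backend "azurerm" {
    resource_group_name  = "test"
    storage_account_name = "testaccount"
    container_name       = "testc"
    key                  = "testc.state"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.75.0"
    }
  }
}

After updating it, run terraform init -upgrade and then terraform plan.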
I am trying to create a private cluster on AKS with Terraform.
The public route seemed to work fine, and I am now adding the security pieces step by step.
After adding the networking resources (azurerm_virtual_network, azurerm_subnet), the Helm deployment appears to hang.
There are no logs; it is just an infinite wait:
helm_release.ingress: Still creating... [11m0s elapsed] (this is a simple NGINX Ingress Controller)
resource "azurerm_virtual_network" "foo_network" {
name = "${var.prefix}-network"
location = azurerm_resource_group.foo_group.location
resource_group_name = azurerm_resource_group.foo_group.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "internal" {
name = "internal"
virtual_network_name = azurerm_virtual_network.foo_network.name
resource_group_name = azurerm_resource_group.foo_group.name
address_prefixes = ["10.1.0.0/22"]
}
Any pointers on how I should debug this? The lack of logs makes it hard to understand what is going on.
Complete Script
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "foo" {
name = "${var.prefix}-k8s-resources"
location = var.location
}
resource "azurerm_kubernetes_cluster" "foo" {
name = "${var.prefix}-k8s"
location = azurerm_resource_group.foo.location
resource_group_name = azurerm_resource_group.foo.name
dns_prefix = "${var.prefix}-k8s"
default_node_pool {
name = "system"
node_count = 1
vm_size = "Standard_D4s_v3"
}
identity {
type = "SystemAssigned"
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
kube_dashboard {
enabled = true
}
oms_agent {
enabled = false
}
}
}
provider "kubernetes" {
version = "~> 1.11.3"
load_config_file = false
host = azurerm_kubernetes_cluster.foo.kube_config.0.host
username = azurerm_kubernetes_cluster.foo.kube_config.0.username
password = azurerm_kubernetes_cluster.foo.kube_config.0.password
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.foo.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
# Use provider with Helm 3.x support
version = "~> 1.2.2"
}
resource "null_resource" "configure_kubectl" {
provisioner "local-exec" {
command = "az aks get-credentials --resource-group ${azurerm_resource_group.foo.name} --name ${azurerm_kubernetes_cluster.foo.name} --overwrite-existing"
environment = {
KUBECONFIG = ""
}
}
depends_on = [azurerm_kubernetes_cluster.foo]
}
resource "helm_release" "ingress" {
name = "ingress-foo"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
timeout = 3000
depends_on = [null_resource.configure_kubectl]
}
The best way to debug this is to be able to kubectl into the AKS cluster. (AKS should have documentation on how to set up kubectl.)
Then, play around with kubectl get pods -A and see if anything jumps out as being wrong. Specifically, look for nginx-ingress pods that are not in a Running status.
If there are such pods, debug further with kubectl describe pod <pod_name> or kubectl logs -f <pod_name>, depending on whether the issue happens after the container has successfully started up or not.
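As a rough sketch of those steps (the resource group and cluster names below are assumed from the ${var.prefix} values in the question):

# Fetch credentials for the cluster (prefix assumed to be "foo")
az aks get-credentials --resource-group foo-k8s-resources --name foo-k8s --overwrite-existing

# Look for ingress pods that are not Running
kubectl get pods -A

# Dig into a failing pod; replace <pod_name> and <namespace> with real values
kubectl describe pod <pod_name> -n <namespace>
kubectl logs -f <pod_name> -n <namespace>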
I am trying to deploy Helm charts from ACR using terraform-provider-helm, but it fails with the error below. Can someone please let me know if I am doing anything wrong? I do not understand why it is looking for mcpshareddcr-index.yaml.
Terraform Version
0.12.18
Affected Resource(s)
helm_release
helm_repository
Terraform Configuration Files
# Cluster RBAC helm chart repository
data "helm_repository" "cluster_rbac_helm_chart_repo" {
  name     = "mcpshareddcr"
  url      = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
  username = var.ARM_CLIENT_ID
  password = var.ARM_CLIENT_SECRET
}

# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name       = "mcp-rbac-cluster"
  repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
  chart      = "mcp-rbac-cluster"
  version    = "0.1.0"
}
module usage:
provider "azurerm" {
version = "=1.36.0"
tenant_id = var.ARM_TENANT_ID
subscription_id = var.ARM_SUBSCRIPTION_ID
client_id = var.ARM_CLIENT_ID
client_secret = var.ARM_CLIENT_SECRET
skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "./.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
// Module to deploy Stratus offered helmcharts in AKS cluster
module "mcp_resources" {
source = "modules\/helm\/mcp-resources"
ARM_CLIENT_ID = var.ARM_CLIENT_ID
ARM_CLIENT_SECRET = var.ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID = var.ARM_SUBSCRIPTION_ID
ARM_TENANT_ID = var.ARM_TENANT_ID
}
Expected Behavior
Helm charts are deployed to AKS, fetched from ACR.
Actual Behavior
Error: Looks like "***/helm/v1/repo" is not a valid chart repository or cannot be reached: open .helm/repository/cache/.helm/repository/cache/mcpshareddcr-index.yaml: no such file or directory
Steps to Reproduce
terraform plan