We're trying to run terraform apply with the following kubernetes provider setting in our terraform file:
data "google_client_config" "current" {
}
data "google_container_cluster" "onboarding_cluster" {
name = var.cluster_name
location = var.cluster_location
}
provider "kubernetes" {
load_config_file = false
host = data.google_container_cluster.onboarding_cluster.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.onboarding_cluster.master_auth[0].cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
resource "kubernetes_service_account" "service_account" {
metadata {
name = var.kubernetes_service_account_name
namespace = var.kubernetes_service_account_namespace
}
}
But we're getting the following error:
Error: Unauthorized
on main.tf line 85, in resource "kubernetes_service_account" "service_account":
85: resource "kubernetes_service_account" "service_account" {
After setting the TF_LOG to DEBUG we see the following request being made to create the kubernetes service account:
---[ REQUEST ]---------------------------------------
POST /api/v1/namespaces/default/serviceaccounts HTTP/1.1
...
Authorization: Bearer <SOME_KUBERNETES_JWT>
The auth bearer token is being overwritten even when we hardcode the token in our provider! For example:
provider "kubernetes" {
load_config_file = false
host = data.google_container_cluster.onboarding_cluster.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.onboarding_cluster.master_auth[0].cluster_ca_certificate)
token = "some.hardcoded.token"
}
Even with the above, the token will remain the same in the HTTP request.
We've found that the token being sent in the auth header is the one found on the Terraform container at /run/secrets/kubernetes.io/serviceaccount/token.
Is there any reason terraform would overwrite this token with a token generated by kubernetes? Are there any other settings we could attempt?
This is an issue with the kubernetes provider. GitHub issue here: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/679
To fix it, pin your provider version to 1.9, like so:
provider "kubernetes" {
version = "1.9"
cluster_ca_certificate = base64decode(
data.google_container_cluster.this.master_auth[0].cluster_ca_certificate,
)
host = data.google_container_cluster.this.endpoint
token = data.external.get_token.result["token"]
load_config_file = false
}
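Note that data.external.get_token in that snippet is not defined in the answer. A minimal, hypothetical sketch of one way to define it (assuming the gcloud CLI is available wherever Terraform runs) might be:
# Hypothetical helper: shells out to gcloud and prints the access token as JSON,
# which is the shape the external data source expects.
data "external" "get_token" {
  program = ["sh", "-c", "printf '{\"token\":\"%s\"}' \"$(gcloud auth print-access-token)\""]
}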
Related
I'm trying to deploy a web app with a database on Azure, but I can't get it to work despite double- and triple-checking the credentials for the tenant in Azure. I tried creating new client secrets, but it doesn't work regardless.
Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: resources.ProvidersClient#List: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '########-########-########-########-########' with object id '########-########-########-########-########' does not have authorization to perform action 'Microsoft.Resources/subscriptions/providers/read' over scope '/subscriptions/########-########-########-########-########' or the scope is invalid. If access was recently granted, please refresh your credentials."
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.0.0"
}
}
}
provider "azurerm" {
features {}
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
}
resource "azurerm_resource_group" "example" {
name = "azure-tf-bgapp"
location = "West Europe"
}
resource "azurerm_container_group" "example" {
name = "bgapp-tf"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_address_type = "Public"
dns_name_label = "aci-label"
os_type = "Linux"
container {
name = "bgapp-web"
image = "shekeriev/bgapp-web"
cpu = "0.5"
memory = "1.5"
ports {
port = 80
protocol = "TCP"
}
}
container {
name = "bgapp-web"
image = "shekeriev/bgapp-db"
cpu = "0.5"
memory = "1.5"
environment_variables = {
"MYSQL_ROOT_PASSWORD" = "Password1"
}
}
tags = {
environment = "bgapp"
}
}
I tried this in my environment and got the results below. Initially I tried the same code and got the same error in my environment.
The error occurs because your service principal doesn't have the required permission (authorization) to perform that operation.
After assigning a role such as Owner to the service principal, the code worked successfully.
Go to the portal -> Subscription -> Access control (IAM) -> Add role assignment -> Owner -> add your service principal -> Review + assign.
After that, the Terraform code executed perfectly.
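If you prefer to manage the assignment in Terraform instead of the portal, a hedged sketch could look like the following (var.service_principal_object_id is a placeholder for your service principal's object ID, and the apply must be run with credentials that are themselves allowed to create role assignments):
resource "azurerm_role_assignment" "sp_owner" {
  scope                = "/subscriptions/${var.subscription_id}"
  role_definition_name = "Owner" # a narrower role such as Contributor may be enough
  principal_id         = var.service_principal_object_id
}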
A private Terraform module outputs an object with three properties: "host", "token", and "cluster_ca_certificate". The kubernetes provider and the kubernetes block of the helm provider accept the same property names. Unfortunately, as far as I can tell, I cannot simply assign the output object to them so that I don't have to repeat myself:
provider "kubernetes" = module.kubernetes.configuration
provider "helm" {
kubernetes = module.kubernetes.configuration
}
I would prefer something like that over the much more repetitive and error-prone:
provider "kubernetes" {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
provider "helm" {
kubernetes {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
}
Am I missing something? Can this be simplified?
I am trying to deploy an application gateway with an SSL certificate from Key Vault. When I run terraform apply it fails with a SecretIdSpecifiedIsInvalid error, even though the error shows the correct certificate ID and name, which I can validate manually in the portal.
I am also able to deploy the app gateway manually using the same certificate from Key Vault.
│ Error: creating Application Gateway: (Name "poc-appgw-iaps" /
Resource Group "poc-rg-appgw"):
network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending
request: StatusCode=400 -- Original Error:
Code="SecretIdSpecifiedIsInvalid" Message="SecretId
'https://pockv-iaps.vault.azure.net/certificates/poc-cert-admin/xxxxxxxxxx'
specified in
'/subscriptions/xxxxxxxxxxxxxxx/resourceGroups/poc-rg-appgw/providers/Microsoft.Network/applicationGateways/poc-appgw-iaps/sslCertificates/poc-cert-admin'
is invalid." Details=[]
First, please try to solve this problem by upgrading to the latest
azurerm Terraform provider. The latest version contains fixes for
this situation if the rest of the provisioning is correct.
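For example, a version constraint along these lines (the constraint itself is only illustrative) lets terraform init pick up a newer azurerm release:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0" # illustrative; use the latest release
    }
  }
}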
The ssl_certificate block must reference your PFX certificate: use the data argument if a Key Vault secret_id is not already set, or set key_vault_secret_id to the secret ID of a base64-encoded, unencrypted PFX certificate/secret stored in Azure Key Vault.
Please note that to enable this feature, Azure Key Vault soft delete must be enabled.
Please also make sure you have the required access policies to get secrets.
provider "azurerm" {
features{}
}
data "azurerm_client_config" "current" {}
resource "azurerm_user_assigned_identity" "base" {
resource_group_name = "resourcegroup"
location = "resgrouplocation"
name = "appgwkeyvault"
}
data "azurerm_key_vault" "example"{
name = "keyvault-name"
resource_group_name = "resourcegroup"
}
resource "azurerm_key_vault_access_policy" "example" {
key_vault_id = data.azurerm_key_vault.example.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azurerm_user_assigned_identity.base.principal_id
key_permissions = [
"Get",
]
certificate_permissions = [
"Get",
]
secret_permissions = [
"Get",
]
}
output "secret_identifier" {
value = azurerm_key_vault_certificate.example.secret_id
}
//TODO: soft delete is required on the key vault
// Note: the ssl_certificate block below belongs inside the azurerm_application_gateway resource.
ssl_certificate {
name = "app_listener"
key_vault_secret_id = azurerm_key_vault_certificate.example.secret_id
}
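For orientation, here is a hedged sketch of how these pieces attach to the gateway itself; the gateway name is a placeholder and the other required gateway settings are elided:
resource "azurerm_application_gateway" "example" {
  name                = "example-appgw" # placeholder
  resource_group_name = "resourcegroup"
  location            = "resgrouplocation"

  # ... sku, gateway_ip_configuration, frontend ports/IPs, listeners, backend settings ...

  # The gateway must use the user-assigned identity that was granted Get access above.
  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.base.id]
  }

  ssl_certificate {
    name                = "app_listener"
    key_vault_secret_id = azurerm_key_vault_certificate.example.secret_id
  }
}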
Please make sure the certificate properties are properly set; the secret must be in .pfx format:
resource "azurerm_key_vault_certificate" "example" {
name = "imported-cert"
key_vault_id = azurerm_key_vault.kv.id
//make sure certificate is base64 encoded pfx certificate
certificate {
contents = filebase64("C:/appgwlistener.pfx")
password = "password"
}
certificate_policy {
...
// key_properties and secret_properties are nested inside certificate_policy
key_properties {
exportable = true
key_size = 2048
key_type = "RSA"
reuse_key = false
}
secret_properties {
content_type = "application/x-pkcs12"
}
}
}
These references can guide you:
Terraform - How to attach SSL certificate stored in Azure KeyVault to an Application Gateway - Stack Overflow
key_vault_secret_id - azurerm_application_gateway | Terraform Registry
I can use Terraform to deploy a Kubernetes cluster in GKE.
Then I set up the Kubernetes provider as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the "client" user, which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how should I proceed now, how should I give this permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments. However, after reading the documentation, it seems these credentials are used for HTTP communication with the cluster, which is insecure if done over the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR
Give permissions to the default "client" user.
But you need valid authentication on the GKE cluster provider to run this :/ oops, circular dependency here:
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea.
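As a sketch (names are placeholders), granting the "client" user the deployment verbs with the Terraform Kubernetes provider could look like this:
# Namespaced Role that allows managing Deployments in "default".
resource "kubernetes_role" "deployment_manager" {
  metadata {
    name      = "deployment-manager"
    namespace = "default"
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

# Bind the role to the "client" user.
resource "kubernetes_role_binding" "deployment_manager" {
  metadata {
    name      = "deployment-manager-binding"
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_manager.metadata[0].name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}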
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
I have an EKS cluster deployed in AWS, and I use Terraform to deploy components to that cluster.
To authenticate, I'm using the following EKS data sources, which provide the cluster API authentication:
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_id
}
data "aws_vpc" "eks_vpc" {
id = var.vpc_id
}
And I use the token inside several local-exec provisioners (apart from other resources) to deploy components:
resource "null_resource" "deployment" {
provisioner "local-exec" {
working_dir = path.module
command = <<EOH
kubectl \
--server="${data.aws_eks_cluster.cluster.endpoint}" \
--certificate-authority=./ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
apply -f test.yaml
EOH
}
}
The problem I have is that some resources take a little while to deploy, and at some point when Terraform executes the next resource I get this error because the token has expired:
exit status 1. Output: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Is there a way to force re-creation of the data before running the local-execs?
UPDATE: example moved to https://github.com/aidanmelen/terraform-kubernetes-rbac/blob/main/examples/authn_authz/main.tf
The data.aws_eks_cluster_auth.cluster_auth.token data source creates a token with a non-configurable 15-minute timeout.
One way to get around this is to use the STS token to create a long-lived service-account token and use that to configure the Terraform Kubernetes provider for long-running Kubernetes resources.
I created a module called terraform-kubernetes-service-account to capture this common pattern of creating a service account, giving it some permissions, and outputting the auth information, i.e. token, ca.crt, and namespace.
For example:
module "terraform_admin" {
source = "aidanmelen/service-account/kubernetes"
name = "terraform-admin"
namespace = "kube-system"
cluster_role_name = "terraform-admin"
cluster_role_rules = [
{
api_groups = ["*"]
resources = ["*"]
resource_names = ["*"]
verbs = ["*"]
},
]
}
provider "kubernetes" {
alias = "terraform_admin_service_account"
host = "https://kubernetes.docker.internal:6443"
cluster_ca_certificate = module.terraform_admin.auth["ca.crt"]
token = module.terraform_admin.auth["token"]
}
data "kubernetes_namespace_v1" "example" {
metadata {
name = kubernetes_namespace.ex_complete.metadata[0].name
}
}
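A long-running resource can then reference the aliased provider explicitly, for example (the namespace is purely illustrative):
resource "kubernetes_namespace" "long_running_example" {
  provider = kubernetes.terraform_admin_service_account

  metadata {
    name = "long-running-example"
  }
}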