Assign object to Terraform provider configuration

A private Terraform module outputs an object with three properties: "host", "token" and "cluster_ca_certificate". The kubernetes provider and the kubernetes section of the helm provider accept the same property names. Unfortunately, as far as I can tell, I cannot simply assign the output object to them so that I don't need to repeat myself:
provider "kubernetes" = module.kubernetes.configuration
provider "helm" {
kubernetes = module.kubernetes.configuration
}
I would prefer something like that over the much more repetitive and error-prone:
provider "kubernetes" {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
provider "helm" {
kubernetes {
host = module.kubernetes.configuration.host
token = module.kubernetes.configuration.token
cluster_ca_certificate = module.kubernetes.configuration.cluster_ca_certificate
}
}
Am I missing something? Can this be simplified?
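As a partial workaround only (a minimal sketch; module.kubernetes.configuration is the output object described above), a local value can at least shorten the repeated references, although each attribute still has to be spelled out once per provider:
locals {
  k8s = module.kubernetes.configuration
}

provider "kubernetes" {
  host                   = local.k8s.host
  token                  = local.k8s.token
  cluster_ca_certificate = local.k8s.cluster_ca_certificate
}

provider "helm" {
  kubernetes {
    host                   = local.k8s.host
    token                  = local.k8s.token
    cluster_ca_certificate = local.k8s.cluster_ca_certificate
  }
}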

Related

Is there a way to retrieve data from another Account in Terraform?

I want to use a "data" resource in Terraform, for example for an SNS topic, but I don't want it to look for the resource in the AWS account I'm deploying my other resources to. It should look at my other AWS account (in the same organization) and find resources there. Is there a way to make this happen?
data "aws_sns_topic" "topic_alarms_data" {
name = "topic_alarms"
}
Define an aws provider with credentials to the remote account:
# Default provider that you use:
provider "aws" {
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.account_id)
  }
}

provider "aws" {
  alias  = "remote"
  region = var.context.aws_region

  assume_role {
    role_arn = format("arn:aws:iam::%s:role/TerraformRole", var.remote_account_id)
  }
}
data "aws_sns_topic" "topic_alarms_data" {
provider = aws.remote
name = "topic_alarms"
}
Now the topic is looked up through the second (remote) provider.
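For illustration (a hedged sketch; the alarm name, metric and threshold are made up), the looked-up topic can then be referenced like any other data source attribute, for example as the ARN of an alarm action:
resource "aws_cloudwatch_metric_alarm" "example" {
  alarm_name          = "example-alarm" # hypothetical name
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80

  # Publish to the topic that lives in the remote account
  alarm_actions = [data.aws_sns_topic.topic_alarms_data.arn]
}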

Terraform apply fails because kubernetes provider runs as user "client" that has no permissions [duplicate]

I use Terraform to deploy a Kubernetes cluster in GKE.
I have then set up the Kubernetes provider as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user "client", which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now. How should I give these permissions to the "client" user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP basic authentication with the cluster, which is insecure if done over the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
Or give permissions to the default "client" user.
Note that you need valid authentication on the GKE cluster provider to run this, so there is a circular dependency here:
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that the user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea, for instance the sketch below.
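A minimal sketch (names are made up; it assumes the Kubernetes provider is already authenticated as an account that is allowed to manage RBAC) of a Role granting the verbs needed to manage deployments, bound to the "client" user:
resource "kubernetes_role" "deployment_manager" {
  metadata {
    name      = "deployment-manager" # hypothetical name
    namespace = "default"
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

resource "kubernetes_role_binding" "client_deployment_manager" {
  metadata {
    name      = "client-deployment-manager" # hypothetical name
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_manager.metadata[0].name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}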
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}

EKS Terraform - datasource aws_eks_cluster_auth - token expired

I have an EKS cluster deployed in AWS, and I use Terraform to deploy components to that cluster.
In order to authenticate, I'm using the following EKS data sources, which provide the cluster API authentication:
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_id
}
data "aws_vpc" "eks_vpc" {
id = var.vpc_id
}
I am then using the token inside several local-exec provisioners (apart from other resources) to deploy components:
resource "null_resource" "deployment" {
provisioner "local-exec" {
working_dir = path.module
command = <<EOH
kubectl \
--server="${data.aws_eks_cluster.cluster.endpoint}" \
--certificate-authority=./ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
apply -f test.yaml
EOH
}
}
The problem I have is that some resources are taking a little while to deploy and at some point when terraform executes the next resource I get this error because the token has expired:
exit status 1. Output: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Is there a way to force re-creation of the data before running the local-execs?
UPDATE: example moved to https://github.com/aidanmelen/terraform-kubernetes-rbac/blob/main/examples/authn_authz/main.tf
The data.aws_eks_cluster_auth.cluster_auth.token returns a token with a non-configurable 15-minute expiry.
One way to get around this is to use the STS token to create a long-lived service account token and use that to configure the Terraform Kubernetes provider for long-running Kubernetes resources.
I created a module called terraform-kubernetes-service-account to capture this common behavior of creating a service account, giving it some permissions, and outputting the auth information, i.e. token, ca.crt, namespace.
For example:
module "terraform_admin" {
source = "aidanmelen/service-account/kubernetes"
name = "terraform-admin"
namespace = "kube-system"
cluster_role_name = "terraform-admin"
cluster_role_rules = [
{
api_groups = ["*"]
resources = ["*"]
resource_names = ["*"]
verbs = ["*"]
},
]
}
provider "kubernetes" {
alias = "terraform_admin_service_account"
host = "https://kubernetes.docker.internal:6443"
cluster_ca_certificate = module.terraform_admin.auth["ca.crt"]
token = module.terraform_admin.auth["token"]
}
data "kubernetes_namespace_v1" "example" {
metadata {
name = kubernetes_namespace.ex_complete.metadata[0].name
}
}
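One detail worth noting (a hedged aside, not part of the original answer): because the provider above is declared with an alias, any resource or data source that should use the long-lived service account token has to select it explicitly via the provider meta-argument, roughly like this:
data "kubernetes_namespace_v1" "example" {
  provider = kubernetes.terraform_admin_service_account

  metadata {
    name = "kube-system" # hypothetical namespace, for illustration only
  }
}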

terraform keeps overwriting token for kubernetes provider

We're trying to run terraform apply with the following kubernetes provider setting in our terraform file:
data "google_client_config" "current" {
}
data "google_container_cluster" "onboarding_cluster" {
name = var.cluster_name
location = var.cluster_location
}
provider "kubernetes" {
load_config_file = false
host = data.google_container_cluster.onboarding_cluster.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.onboarding_cluster.master_auth[0].cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
resource "kubernetes_service_account" "service_account" {
metadata {
name = var.kubernetes_service_account_name
namespace = var.kubernetes_service_account_namespace
}
}
But we're getting the following error:
Error: Unauthorized
on main.tf line 85, in resource "kubernetes_service_account" "service_account":
85: resource "kubernetes_service_account" "service_account" {
After setting the TF_LOG to DEBUG we see the following request being made to create the kubernetes service account:
---[ REQUEST ]---------------------------------------
POST /api/v1/namespaces/default/serviceaccounts HTTP/1.1
...
Authorization: Bearer <SOME_KUBERNETES_JWT>
The auth bearer token is being overwritten even when we hardcode the token in our provider! For example:
provider "kubernetes" {
load_config_file = false
host = data.google_container_cluster.onboarding_cluster.endpoint
cluster_ca_certificate = base64decode(data.google_container_cluster.onboarding_cluster.master_auth[0].cluster_ca_certificate)
token = "some.hardcoded.token"
}
Even with the above, the token will remain the same in the HTTP request.
We've found that the token being sent in the auth header is the one on the Terraform container at /run/secrets/kubernetes.io/serviceaccount/token.
Is there any reason terraform would overwrite this token with a token generated by kubernetes? Are there any other settings we could attempt?
This is an issue with the kubernetes provider. GitHub issue here: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/679
To fix, pin your provider version to 1.9, like so:
provider "kubernetes" {
version = "1.9"
cluster_ca_certificate = base64decode(
data.google_container_cluster.this.master_auth[0].cluster_ca_certificate,
)
host = data.google_container_cluster.this.endpoint
token = data.external.get_token.result["token"]
load_config_file = false
}
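As an aside (not part of the original answer): in Terraform 0.13 and later the version argument inside provider blocks is deprecated, and the equivalent pin goes in a required_providers block, roughly like this:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.9"
    }
  }
}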

Error while installing Helm chart using Terraform helm provider

I am trying to install a Helm chart with the Terraform Helm provider using the following Terraform script.
I have already succeeded in using the Kubernetes provider to deploy some k8s resources, but it doesn't work with Helm.
terraform v0.11.13
provider.helm v0.10
provider.kubernetes v1.9
provider "helm" {
alias = "prdops"
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
host = "${google_container_cluster.prdops.endpoint}"
alias = "prdops"
load_config_file = false
username = "${google_container_cluster.prdops.master_auth.0.username}"
password = "${google_container_cluster.prdops.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.prdops.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.prdops.master_auth.0.cluster_ca_certificate)}"
}
}
resource "kubernetes_service_account" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
namespace = "kube-system"
}
}
resource "kubernetes_cluster_role_binding" "tiller" {
provider = "kubernetes.prdops"
metadata {
name = "tiller"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "tiller"
}
subject {
kind = "ServiceAccount"
name = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
api_group = ""
}
}
resource "helm_release" "jenkins" {
provider = "helm.prdops"
name = "jenkins"
chart = "stable/jenkins"
}
but I'm getting the following error:
1 error(s) occurred:
* helm_release.jenkins: 1 error(s) occurred:
* helm_release.jenkins: rpc error: code = Unknown desc = configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Helm uses a server component called Tiller (in Helm v2; they are getting rid of it in the new Helm v3). In order for Helm to function, Tiller is assigned a service account to interact with the Kubernetes API. In this case it seems the service account of Tiller has insufficient permissions to perform the operation.
Kindly check whether the tiller pod is running in the kube-system namespace. If not, reinstall Helm and run helm init so that the tiller pod comes up; that should resolve the issue.
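A common pattern for Helm v2 setups like this (a sketch only, not from the original answers; it assumes binding Tiller to the built-in cluster-admin role is acceptable in your environment) is to bind the tiller service account to the cluster-admin ClusterRole, rather than to a ClusterRole named "tiller", which does not exist by default:
resource "kubernetes_cluster_role_binding" "tiller" {
  provider = "kubernetes.prdops"

  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin" # built-in role; very broad permissions
  }

  subject {
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.tiller.metadata.0.name}"
    namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
    api_group = ""
  }
}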
