Vault KV secrets and Nomad jobs

I am creating a Nomad job that accesses Vault KV secrets. So far I have managed to create the policies and a role, but I can't get the job to consume the secret.
This would be my nomad job:
job "http-echo" {
datacenters = ["ikerdc2"]
group "echo" {
count = 1
task "server" {
driver = "docker"
vault {
policies = ["access-tables"]
}
template {
data = <<EOT
{{ with secret "kv/me" }}
NAME ="{{ .Data.data.name }}"
{{ end }}
EOT
destination = "echo.env"
env = true
}
config {
image = "hashicorp/http-echo:latest"
args = [
"-listen", ":8080",
"-text", "Hello World!",
]
}
resources {
network {
mbits = 10
port "http" {
static = 8080
}
}
}
service {
name = "http-echo"
port = "http"
tags = [
"urlprefix-/http-echo",
]
}
}
}
}
I have created a Vault server with the command vault server -dev.
I have a KV secret named "me" whose contents are just:
{
  "name" = "Hello Iker"
}
And the policies are like this:
# Allow creating tokens under "nomad-cluster" role. The role name should be
# updated if "nomad-cluster" is not used.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up "nomad-cluster" role. The role name should be updated if
# "nomad-cluster" is not used.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}

path "kv/*" {
  capabilities = ["create", "update", "read"]
}
And the role is like this:
{
  "allowed_policies": "access-tables",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
These are the errors I get when I run the job:
Missing: vault.read(kv/me)
Template failed: vault.read(kv/me): vault.read(kv/me): Error making API request.
URL: GET http://127.0.0.1:8200/v1/kv/me
Code: 403. Errors:
* 1 error occurred:
  * permission denied
If someone could help me with that, it would be great. Thanks!

In your post, I don't see the contents of the access-tables policy itself. That ACL policy must have the following rules:
path "kv/data/me" {
capabilities = ["read"]
}
The permissions in the nomad-cluster policy from the documentation (the one you pasted) exist so that Nomad can create tokens in Vault for the policies you list in the vault stanzas of your Nomad jobs. Adding the ability for that policy to read KV will not help. Instead, the policy named in your job's vault stanza, access-tables, needs that read permission. Note the data/ segment in the path: your template reads .Data.data.name, which suggests the kv mount is KV version 2, and for KV v2 the ACL path is kv/data/<secret>, not kv/<secret>.
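For reference, a minimal access-tables policy could look like the sketch below. The kv/data/ prefix assumes the mount is KV version 2; for a KV version 1 mount the prefix is dropped. After writing this policy to Vault under the name access-tables, re-run the job so the task gets a token that carries it.

# access-tables.hcl (sketch, assumes a KV v2 mount at kv/)
path "kv/data/me" {
  capabilities = ["read"]
}

# For a KV v1 mount the rule would instead be:
# path "kv/me" {
#   capabilities = ["read"]
# }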

Related

Can't HCP Vault Admin Access Token access child namespace?

Here's my HCP Vault Namespace structure:
admin/
- terraform-modules-global/
Here, I want to create a child namespace of the parent namespace using the Vault Admin Token.
admin/
- terraform-modules-global/
  - global/
But I get the following error:
Error: error writing to Vault: Error making API request.
Namespace: terraform-modules-global/terraform-modules
URL: PUT https://HCP_VAULT_URL:8200/v1/sys/namespaces/global
Code: 403. Errors: * 1 error occurred: * permission denied
I am trying this through Terraform Cloud.
data "tfe_outputs" "hcp-vault" {
organization = "nftbank"
workspace = "hcp-vault-global"
}
provider "vault" {
address = data.tfe_outputs.hcp-vault.values.vault_public_endpoint
token = data.tfe_outputs.hcp-vault.values.vault_admin_token # This was created via the hcp_vault_cluster_admin_token resource.
}
locals {
terraform-modules = {
environments = [
"global",
]
}
}
resource "vault_namespace" "terraform-modules" {
path = "terraform-modules"
}
# Try option 1
resource "vault_namespace" "terraform-modules" {
for_each = toset(local.terraform-modules.environments)
path = "terraform-modules-${each.value}"
namespace = "terraform-modules"
}
# Try option 2
provider "vault" {
address = data.tfe_outputs.hcp-vault.values.vault_public_endpoint
token = data.tfe_outputs.hcp-vault.values.vault_admin_token # This was created via the hcp_vault_cluster_admin_token resource.
namespace = "terraform-modules"
alias = "terraform-modules"
}
resource "vault_namespace" "terraform-modules" {
for_each = toset(local.terraform-modules.environments)
path = "terraform-modules-${each.value}"
provider = vault.terraform-modeuls
}
Both options fail: the terraform-modules namespace is created normally, but the child namespaces are not.
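For what it's worth, here is a sketch of how this kind of nesting is usually expressed. It assumes the Vault provider is at least 3.7.0 (which added the namespace argument on vault_namespace) and that this is an HCP Vault cluster, where all tenant namespaces live under the admin/ root namespace, so the provider typically needs namespace = "admin". Resource names here are illustrative, not taken from the question.

# Sketch only: provider scoped to HCP's admin/ root namespace.
provider "vault" {
  address   = data.tfe_outputs.hcp-vault.values.vault_public_endpoint
  token     = data.tfe_outputs.hcp-vault.values.vault_admin_token
  namespace = "admin" # HCP Vault's root namespace
}

resource "vault_namespace" "parent" {
  path = "terraform-modules" # created as admin/terraform-modules
}

resource "vault_namespace" "children" {
  for_each  = toset(local.terraform-modules.environments)
  namespace = vault_namespace.parent.path          # relative to the provider namespace (admin/)
  path      = "terraform-modules-${each.value}"    # e.g. admin/terraform-modules/terraform-modules-global
}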

How to update KMS Key policy using Terraform

I have the following Terraform code to create a KMS key. My .tf file uses an organization-level common CMK core module that creates a key with the aws_kms_key resource. This core module also attaches a default key policy to the newly created key.
my.tf file
// create key using core module
module "cmk" {
  source              = "git::https://company-repository-url/cmk?ref=v1.0.0"
  name                = "test"
  enable_key_rotation = true
}
I don't have access to the core module. In my.tf, after the key is created, I want to append the following policy document to the key policy:
data "aws_caller_identity" "current" {}
data "aws_iam_policy_document" "default" {
statement {
sid = "Some Sid"
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::123456789:root", //hardcoded. this is a cross account user
"arn:aws:iam::${data.aws_caller_identity.current.id}:role/service-role/SomeAWSRole"]
}
actions = [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
]
resources = ["arn:aws:kms:us-west-2:${data.aws_caller_identity.current.id}:key/*"]
condition {
test = "Bool"
variable = "kms:GrantIsForAWSResource"
values = ["true"]
}
}
}
Is it possible to attach this policy to the key using aws_iam_policy_attachment or some other way?
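aws_iam_policy_attachment only attaches IAM managed policies to IAM principals; it cannot modify a KMS key policy. One alternative, sketched below, is the aws_kms_key_policy resource (AWS provider 4.64.0 or newer). This is a sketch under assumptions: module.cmk.key_id and module.cmk.key_policy are hypothetical output names, and the core module may export different ones. Also note that aws_kms_key_policy manages the whole key policy, so to keep the module's default statements you merge them in with source_policy_documents rather than truly "appending".

# Sketch: manage the key policy alongside the module-created key.
data "aws_iam_policy_document" "combined" {
  source_policy_documents = [
    module.cmk.key_policy,                     # hypothetical output holding the default policy JSON
    data.aws_iam_policy_document.default.json, # the extra statement from above
  ]
}

resource "aws_kms_key_policy" "this" {
  key_id = module.cmk.key_id                   # hypothetical output name; check what the core module exports
  policy = data.aws_iam_policy_document.combined.json
}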

Terraform apply fails because kubernetes provider runs as user "client" that has no permissions [duplicate]

I can use terraform to deploy a Kubernetes cluster in GKE.
Then I have set up the provider for Kubernetes as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user "client", which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now; how should I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP communication with the cluster, which is insecure if it is done through the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR
Or give permissions to the default "client" user. But you need valid authentication on the GKE cluster provider to run this :/ oops, circular dependency here.
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea.
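As a concrete illustration (a sketch with illustrative names, using the Terraform kubernetes provider rather than raw YAML), a narrower ClusterRole that only grants the Deployment verbs could look like this; you would then bind it to the "client" user with a kubernetes_cluster_role_binding like the one shown above instead of handing out cluster-admin.

# Sketch: minimal role allowing Deployment management.
resource "kubernetes_cluster_role" "deployment_manager" {
  metadata {
    name = "deployment-manager" # illustrative name
  }
  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}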
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}

Only create policy document rule on condition true - Terraform Vault

I have a Vault instance, and I manage the policies and secrets in it with Terraform. There are a couple of repeated steps when creating approle authentication, policies, and policy documents for newly onboarded teams, because each team has several applications they work on. I'd like to modularize the repeated parts (policy document, policy creation, and approle for the team-app), though each application has a slightly different rule set.
Is there a way to create policy documents so that some rules are only included if a bool is set to true?
For example:
I have a module that creates policies and policy documents as below.
I would pass a bool variable named enable_metadata_rule and, based on its value, the 2nd rule would be created or not:
resource "vault_policy" "example_policy" {
for_each = var.environments
provider = vault
name = "${var.team}-${var.application}-${each.key}"
policy = data.vault_policy_document.policy_document["${each.key}"].hcl
}
data "vault_policy_document" "policy_document" {
for_each = var.environments
rule {
path = "engines/${var.team}-kv/data/${each.key}/services/${var.application}/*"
capabilities = ["read", "list"]
description = "Read secrets for ${var.application}"
}
rule {
# IF enable_metadata_rule == true
path = "engines/${var.team}-kv/metadata/*"
capabilities = ["list"]
description = "List metadata for kv store"
}
}
If there isn't such a thing, is there an option for merging separately created policy documents?
You should be able to do it using dynamic blocks:
data "vault_policy_document" "policy_document" {
for_each = var.environments
rule {
path = "engines/${var.team}-kv/data/${each.key}/services/${var.application}/*"
capabilities = ["read", "list"]
description = "Read secrets for ${var.application}"
}
dynamic "rule" {
for_each = var.enable_metadata_rule == true ? [1]: []
content {
path = "engines/${var.team}-kv/metadata/*"
capabilities = ["list"]
description = "List metadata for kv store"
}
}
}
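As for the follow-up question about merging separately created policy documents: the rendered document is just an HCL string, so one option (a sketch, with illustrative data source names "base" and "metadata") is to concatenate the .hcl outputs of several vault_policy_document data sources into a single vault_policy:

# Sketch: combine two separately defined policy documents into one policy.
resource "vault_policy" "combined" {
  name = "${var.team}-${var.application}-combined"
  policy = join("\n", [
    data.vault_policy_document.base.hcl,
    data.vault_policy_document.metadata.hcl,
  ])
}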

EKS Terraform - datasource aws_eks_cluster_auth - token expired

I have an EKS cluster deployed in AWS, and I use Terraform to deploy components to that cluster.
To authenticate, I'm using the following EKS data sources, which provide the cluster API authentication:
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_id
}
data "aws_vpc" "eks_vpc" {
id = var.vpc_id
}
I'm using the token inside several local-exec provisioners (among other resources) to deploy components:
resource "null_resource" "deployment" {
provisioner "local-exec" {
working_dir = path.module
command = <<EOH
kubectl \
--server="${data.aws_eks_cluster.cluster.endpoint}" \
--certificate-authority=./ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
apply -f test.yaml
EOH
}
}
The problem I have is that some resources take a little while to deploy, and at some point, when Terraform executes the next resource, I get this error because the token has expired:
exit status 1. Output: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Is there a way to force re-creation of the data before running the local-execs?
UPDATE: example moved to https://github.com/aidanmelen/terraform-kubernetes-rbac/blob/main/examples/authn_authz/main.tf
The data.aws_eks_cluster_auth.cluster_auth.token creates a token with a non-configurable 15-minute timeout.
One way to get around this is to use the STS token to create a long-lived service-account token and use that to configure the Terraform Kubernetes provider for long-running Kubernetes resources.
I created a module called terraform-kubernetes-service-account to capture this common behavior of creating a service account, giving it some permissions, and outputting the auth information, i.e. token, ca.crt, namespace.
For example:
module "terraform_admin" {
source = "aidanmelen/service-account/kubernetes"
name = "terraform-admin"
namespace = "kube-system"
cluster_role_name = "terraform-admin"
cluster_role_rules = [
{
api_groups = ["*"]
resources = ["*"]
resource_names = ["*"]
verbs = ["*"]
},
]
}
provider "kubernetes" {
alias = "terraform_admin_service_account"
host = "https://kubernetes.docker.internal:6443"
cluster_ca_certificate = module.terraform_admin.auth["ca.crt"]
token = module.terraform_admin.auth["token"]
}
data "kubernetes_namespace_v1" "example" {
metadata {
name = kubernetes_namespace.ex_complete.metadata[0].name
}
}
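A different approach, if the long-running resources are managed through the Kubernetes provider rather than local-exec kubectl calls: the provider's exec block shells out for credentials on demand, so a fresh token is minted whenever the provider needs one instead of pinning the 15-minute token from the data source. This is only a sketch (not from the linked example) and assumes the AWS CLI is available wherever Terraform runs; it reuses the data.aws_eks_cluster.cluster data source and var.cluster_id already present in the question.

# Sketch: fetch a fresh EKS token at apply time instead of pinning one.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_id]
  }
}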
