How to update a KMS key policy using Terraform

I have the following Terraform code to create a KMS key. My my.tf file uses an organization-level common CMK core module that creates a key with the aws_kms_key resource. This core module also attaches a default key policy to the newly created key.
my.tf file
//create key using core module
module "cmk" {
  source              = "git::https://company-repository-url/cmk?ref=v1.0.0"
  name                = "test"
  enable_key_rotation = true
}
I don't have access to the core module. In my.tf, after the key is created, I want to append the following policy document to the key policy:
data "aws_caller_identity" "current" {}
data "aws_iam_policy_document" "default" {
statement {
sid = "Some Sid"
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::123456789:root", //hardcoded. this is a cross account user
"arn:aws:iam::${data.aws_caller_identity.current.id}:role/service-role/SomeAWSRole"]
}
actions = [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
]
resources = ["arn:aws:kms:us-west-2:${data.aws_caller_identity.current.id}:key/*"]
condition {
test = "Bool"
variable = "kms:GrantIsForAWSResource"
values = ["true"]
}
}
}
Is it possible to attach this policy to the key using aws_iam_policy_attachment, or some other way?

Related

How to share terraform resources between modules?

I realised that Terraform modules recreate their resources for each module declaration. A resource created inside a module can only be referenced from outside the module if it is exposed as an output. I'm looking for a way to reuse a module without it recreating its resources.
Imagine a scenario where I have three terraform modules.
One creates an IAM policy (AWS), the second creates an IAM role, the third creates a different IAM role, and both roles share the same IAM policy.
In code:
# policy
resource "aws_iam_policy" "secrets_manager_read_policy" {
  name        = "SecretsManagerRead"
  description = "Read only access to secrets manager"
  policy      = jsonencode({}) # just to shorten demonstration
}

output "policy" {
  value = aws_iam_policy.secrets_manager_read_policy
}
# test-role-1
resource "aws_iam_role" "test_role_1" {
  name = "test-role-1"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}

module "policy" {
  source = "../test-policy"
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_1.name
  policy_arn = module.policy.policy.arn
}
# test-role-2
resource "aws_iam_role" "test_role_2" {
  name = "test-role-2"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      },
    ]
  })
}

module "policy" {
  source = "../test-policy"
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_2.name
  policy_arn = module.policy.policy.arn
}
# create-roles
module "role-1" {
  source = "../../../modules/resources/test-role-1"
}

module "role-2" {
  source = "../../../modules/resources/test-role-2"
}
In this scenario Terraform tries to create two policies, one for each role, but I want both roles to use the same resource.
Is there a way to keep the code clean, so that not all resources live in the same file, while a resource defined once can still be used from multiple modules? Or is it a tree-like structure where sibling modules cannot share the same child? Yes, I could define the policy first and pass the needed properties down to the child modules where I create the roles, but what if I want a many-to-many relationship between them, so that multiple roles share the same multiple policies?
I can think of a few ways to do this:
Option 1: Move the use of the policy module up to the parent level, and have your parent (root) Terraform code look like this:
# create-policy
module "my-policy" {
  source = "../../../modules/resources/policy"
}

# create-roles
module "role-1" {
  source = "../../../modules/resources/test-role-1"
  policy = module.my-policy.policy
}

module "role-2" {
  source = "../../../modules/resources/test-role-2"
  policy = module.my-policy.policy
}
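For this to work, the role modules themselves need to accept the policy as an input instead of instantiating the policy module. A minimal sketch of what test-role-1 might then contain (the variable name is illustrative, not from the original code):
variable "policy" {
  description = "IAM policy object passed down from the root module"
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_1.name
  policy_arn = var.policy.arn
}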
Option 2: Output the policy from the role modules, and also make it an optional input variable of the modules:
variable "policy" {
default = null # Make the variable optional
}
module "policy" {
# Create the policy, only if one wasn't passed in
count = var.policy == null ? 1 : 0
source = "../test-policy"
}
locals {
# Create a variable with the value of either the passed-in policy,
# or the one we are creating
my-policy = var.policy == null ? module.policy[0].policy : var.policy
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = local.my-policy
}
output "policy" {
value = locals.my-policy
}
Then your root code could look like this:
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.role-1.policy
}
The first module wouldn't get an input, so it would create a new policy. The second module would get an input, so it would use it instead of re-creating the policy.
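If you want the many-to-many relationship mentioned in the question, the same idea extends naturally: have each role module accept a list of policy ARNs and attach them with for_each. A rough sketch, with illustrative names that aren't part of the original code:
# Inside a role module
variable "policy_arns" {
  type    = list(string)
  default = []
}

resource "aws_iam_role_policy_attachment" "attach_policies" {
  for_each   = toset(var.policy_arns)
  role       = aws_iam_role.test_role_1.name
  policy_arn = each.value
}

# In the root module, the same ARNs can be passed to any number of roles
module "role-1" {
  source      = "../../../modules/resources/test-role-1"
  policy_arns = [module.policy-a.policy.arn, module.policy-b.policy.arn]
}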
I also highly recommend looking at the source code for some of the official AWS Terraform modules, like this one. Reading the source code for those really helped me understand how to create reusable Terraform modules.

Using a Terraform variable in an .hcl file (Vault)

I'm trying to automate path and policy creation in Vault.
Do you know how I can proceed, please? Variables declared in Terraform are not recognized in the .hcl file.
I tried renaming my file client-ro-policy.hcl to client-ro-policy.tf, but I have the same issue.
Variables are recognized in files with the .tf extension.
Thanks
main.tf
# Use Vault provider
provider "vault" {
  # It is strongly recommended to configure this provider through the
  # environment variables:
  # - VAULT_ADDR
  # - VAULT_TOKEN
  # - VAULT_CACERT
  # - VAULT_CAPATH
  # - etc.
}
acl-ro-policy.hcl
path "${var.client[0]}/k8s/preprod/*" {
capabilities = ["read"]
}
policies.tf
#---------------------
# Create policies
#---------------------
# Create 'client' policy
resource "vault_policy" "ro-client" {
  name   = "${var.client[0]}_k8s_preprod_ro"
  policy = file("./hcl-ro-policy.tf")
}
variables.tf
variable "client" {
type = list(string)
}
variables.tfvars
client = ["titi", "toto","itutu"]
Even though Terraform and Vault both use HCL as the underlying syntax of their respective configuration languages, their language interpreters are totally separate and so the Vault policy language implementation cannot make direct use of any values defined in the Terraform language.
Instead, you'll need to use the Terraform language to construct a suitable configuration for Vault. Vault supports a JSON variant of its policy language in order to make it easier to programmatically generate it, and so you can use Terraform's jsonencode function to build a JSON-based policy from the result of a Terraform expression, which may itself include references to values elsewhere in Terraform.
For example:
locals {
  vault_ro_policy = {
    path = {
      "${var.client[0]}/k8s/preprod/*" = {
        capabilities = ["read"]
      }
    }
  }
}

resource "vault_policy" "ro-client" {
  name   = "${var.client[0]}_k8s_preprod_ro"
  policy = jsonencode(local.vault_ro_policy)
}
The value of local.vault_ro_policy should encode to JSON as follows, assuming that var.client[0] has the value "example":
{
  "path": {
    "example/k8s/preprod/*": {
      "capabilities": ["read"]
    }
  }
}
Assuming that this is valid Vault JSON policy syntax (which I've not verified), this should be accepted by Vault as a valid policy. If I didn't get the JSON policy syntax exactly right then hopefully you can see how to adjust it to be valid; my expertise is with Terraform, so I focused on the Terraform language part here.

Only create policy document rule on condition true - Terraform Vault

I have a Vault instance and I manage policies and secrets in it with Terraform. There are a couple of repeated steps when creating the AppRole authentication, policy, and policy documents for newly onboarded teams, because each team has several applications they work on. I'd like to modularize the repeated parts (policy document, policy creation, and AppRole for the team-app), though each application has a slightly different rule set.
Is there a way to create policy documents so that some rules are only included if a bool is set to true?
for example:
I have a module that creates policies and policy documents as below:
I would pass a bool variable named enable_metadata_rule and, based on its value, create the second rule or not:
resource "vault_policy" "example_policy" {
for_each = var.environments
provider = vault
name = "${var.team}-${var.application}-${each.key}"
policy = data.vault_policy_document.policy_document["${each.key}"].hcl
}
data "vault_policy_document" "policy_document" {
for_each = var.environments
rule {
path = "engines/${var.team}-kv/data/${each.key}/services/${var.application}/*"
capabilities = ["read", "list"]
description = "Read secrets for ${var.application}"
}
rule {
# IF enable_metadata_rule == true
path = "engines/${var.team}-kv/metadata/*"
capabilities = ["list"]
description = "List metadata for kv store"
}
}
If there isn't such a thing, is there an option for merging separately created policy documents?
You should be able to do it using dynamic blocks:
data "vault_policy_document" "policy_document" {
for_each = var.environments
rule {
path = "engines/${var.team}-kv/data/${each.key}/services/${var.application}/*"
capabilities = ["read", "list"]
description = "Read secrets for ${var.application}"
}
dynamic "rule" {
for_each = var.enable_metadata_rule == true ? [1]: []
content {
path = "engines/${var.team}-kv/metadata/*"
capabilities = ["list"]
description = "List metadata for kv store"
}
}
}
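For completeness, a hypothetical call to a module wrapping this data source and the vault_policy resource might look like the following (the source path, variable values, and module name are illustrative, not from the original question):
module "team_app_policies" {
  source               = "./modules/team-app-policy" # hypothetical path
  team                 = "payments"                  # illustrative values
  application          = "billing-api"
  environments         = toset(["dev", "prod"])
  enable_metadata_rule = true # the metadata rule is rendered only when this is true
}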

Is it possible to set up a multiuser secret rotation in AWS secrets manager with terraform?

... Given the existing capabilities of terraform (v.3.23.0)
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_rotation
Or is it simply not available in terraform yet as of this writing? Obviously, this can be done in the AWS UI, but I'm interested in scripting it out in TF.
I have a simple example for rotating a single secret in AWS Secrets Manager, but if I edit the created rotation associated with that secret in the AWS console, there is no way to make it a multi-user rotation; the UI simply does not show it as an option.
resource "aws_secretsmanager_secret_rotation" "rds_postgres_key_rotation" {
secret_id = aws_secretsmanager_secret.rotation_example.id
rotation_lambda_arn = aws_serverlessapplicationrepository_cloudformation_stack.postgres_rotator.outputs["RotationLambdaARN"]
rotation_rules {
automatically_after_days = 1
}
}
resource "aws_secretsmanager_secret" "rotation_example" {
name = "normalusersecret"
kms_key_id = aws_kms_key.my_key.id
}
resource "aws_serverlessapplicationrepository_cloudformation_stack" "postgres_rotator" {
name = "postgres-rotator"
application_id = "arn:aws:serverlessrepo:us-east-1:297356227824:applications/SecretsManagerRDSPostgreSQLRotationMultiUser"
capabilities = [
"CAPABILITY_IAM",
"CAPABILITY_RESOURCE_POLICY",
]
parameters = {
functionName = "func-postgres-rotator"
#endpoint = "secretsmanager.${data.aws_region.current.name}.${data.aws_partition.current.dns_suffix}"
endpoint = "secretsmanager.us-east-1.lambda.amazonaws.com"
}
}
It appears that Secrets Manager just inspects the secret value JSON for a masterarn key. If that key exists, it flips the multi-user radio button.
e.g.
Single user
resource "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
secret_string = tostring(jsonencode({
password = "password"
username = "user"
}))
}
Multi user
resource "aws_secretsmanager_secret_version" "example" {
secret_id = aws_secretsmanager_secret.example.id
secret_string = tostring(jsonencode({
masterarn = aws_secretsmanager_secret.master.arn
password = "password"
username = "user"
}))
}
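The multi-user snippet above references aws_secretsmanager_secret.master, which isn't shown in the answer. A minimal sketch of what that master (admin-credentials) secret could look like, with illustrative names and values:
resource "aws_secretsmanager_secret" "master" {
  name       = "mastersecret"
  kms_key_id = aws_kms_key.my_key.id
}

resource "aws_secretsmanager_secret_version" "master" {
  secret_id = aws_secretsmanager_secret.master.id
  secret_string = jsonencode({
    # admin credentials the rotation Lambda can use to rotate the normal user
    username = "master_user"
    password = "master_password"
    engine   = "postgres"
    host     = "db.example.com"
    port     = 5432
    dbname   = "postgres"
  })
}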

Terraform optional provider for optional resource

I have a module where I want to conditionally create an s3 bucket in another region. I tried something like this:
resource "aws_s3_bucket" "backup" {
count = local.has_backup ? 1 : 0
provider = "aws.backup"
bucket = "${var.bucket_name}-backup"
versioning {
enabled = true
}
}
but it appears that I need to provide the aws.backup provider even if count is 0. Is there any way around this?
NOTE: this wouldn't be a problem if I could use a single provider to create buckets in multiple regions, see https://github.com/terraform-providers/terraform-provider-aws/issues/8853
Based on your description, I understand that you want to create resources using the same "profile", but in a different region.
For that case I would take the following approach:
Create a module file for your s3_bucket_backup; in that file you will build your "backup provider" from variables.
# Module file for s3_bucket_backup
provider "aws" {
  region  = var.region
  profile = var.profile
  alias   = "backup"
}

variable "profile" {
  type        = string
  description = "AWS profile"
}

variable "region" {
  type        = string
  description = "AWS region"
}

variable "has_backup" {
  type        = bool
  description = "Whether to create the backup bucket"
}

variable "bucket_name" {
  type        = string
  description = "Bucket name"
}

resource "aws_s3_bucket" "backup" {
  count    = var.has_backup ? 1 : 0
  provider = aws.backup
  bucket   = "${var.bucket_name}-backup"
}
In your main tf file, declare your provider profile using local variables, and call the module, passing the profile and a different region:
# Main tf file
provider "aws" {
  region  = "us-east-1"
  profile = local.profile
}

locals {
  profile    = "default"
  has_backup = true
}

module "s3_backup" {
  source      = "./module"
  profile     = local.profile
  region      = "us-east-2"
  has_backup  = local.has_backup
  bucket_name = "my-bucket-name"
}
And there you have it: you can now build your s3_bucket_backup using the same "profile" in different regions.
In the example above, the region used by the main file is us-east-1 and the bucket is created in us-east-2.
If you set has_backup to false, it won't create anything.
Since the "backup provider" is built inside the module, your code won't look "dirty" from having multiple providers in the main tf file.
