I created a custom module to encrypt our access key and secret access key before they are exposed as outputs. When a build runs, it prints the access key after encrypting it with KMS.
But currently, when we use this module to create multiple users, it prints the 1st user's access key and secret key for the other users as well.
Can someone please suggest how I should fix this? Using Terraform 0.12.18.
variable "iam_username" {
  description = "IAM username"
}

variable "path" {
  description = "path for IAM user"
  default     = "/"
}

resource "aws_iam_user" "iam_user" {
  name = var.iam_username
  path = var.path
}

resource "aws_iam_access_key" "iam_keys" {
  user = aws_iam_user.iam_user.name
}

data "external" "stdout" {
  program = ["bash", "${path.module}/encrypt_credentials.sh"]

  query = {
    access_key = aws_iam_access_key.iam_keys.id
    secret_key = aws_iam_access_key.iam_keys.secret
  }
}
encrypt_credentials.sh
function encrypt() {
  aws kms encrypt --key-id alias/xxxx --plaintext $ACCESS_KEY --output text --query CiphertextBlob --region us-east-1 > encrypted_access_key
  aws kms encrypt --key-id alias/xxxx --plaintext $SECRET_KEY --output text --query CiphertextBlob --region us-east-1 > encrypted_secret_key
}

function output() {
  access_key=$(cat encrypted_access_key)
  secret_key=$(cat encrypted_secret_key)
  jq -n \
    --arg access_value "$access_key" \
    --arg secret_value "$secret_key" \
    '{"access_value":$access_value,"secret_value":$secret_value}'
}

encrypt
output
outputs.tf
output "aws_iam_access_key" {
  value = chomp(data.external.stdout.result["access_value"])
}

output "aws_iam_secret_access_key" {
  value = chomp(data.external.stdout.result["secret_value"])
}
I tested this module by creating two users, test1 and test2. Here is the output; it has the same access key and secret key for both users:
Terraform
module "test1user" {
  source       = "../../"
  iam_username = "test1"
  path         = "/"
}

module "test2user" {
  source       = "../../"
  iam_username = "test2"
  path         = "/"
}
outputs.tf
output "user1_access_key" {
  value = module.test1user.aws_iam_access_key
}

output "user1_secret_key" {
  value = module.test1user.aws_iam_secret_access_key
}

output "user2_access_key" {
  value = module.test2user.aws_iam_access_key
}

output "user2_secret_key" {
  value = module.test2user.aws_iam_secret_access_key
}
14:47:47 TestTerraformAwsNetworkExample 2020-07-22T18:47:47Z logger.go:66: user1_access_key = AQECAHj0ior/LD5LXMzmwFwEYlbqXWdHuCRWGQNeqhU6VNir+gAAAHIwcAYJKoZIhvcNAQcGoGMwYQIBADBcBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDDULiS2JecmxLYdv9QIBEIAvjB60Maw5IuryzukItn8awWXnqfUzUcnPJNq7mFHQ2MYRBtOqBJJo0zbPo1i+pgw=
14:47:47 TestTerraformAwsNetworkExample 2020-07-22T18:47:47Z logger.go:66: user1_secret_key = AQECAHj0ior/LD5LXMzmwFwEYlbqXWdHuCRWGQNeqhU6VNir+gAAAIcwgYQGCSqGSIb3DQEHBqB3MHUCAQAwcAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAxyo66cMnxkOCrHjhoCARCAQzbpGYCzH6Ed+XvDFinBSbrK0LDk0YMXh39JCcztYwoJDFMbAtnWlS4cUyrmncf5paxE2oB7w2ujtpds/dBxUtsw6Lg=
14:47:47 TestTerraformAwsNetworkExample 2020-07-22T18:47:47Z logger.go:66: user2_access_key = AQECAHj0ior/LD5LXMzmwFwEYlbqXWdHuCRWGQNeqhU6VNir+gAAAHIwcAYJKoZIhvcNAQcGoGMwYQIBADBcBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDDULiS2JecmxLYdv9QIBEIAvjB60Maw5IuryzukItn8awWXnqfUzUcnPJNq7mFHQ2MYRBtOqBJJo0zbPo1i+pgw=
14:47:47 TestTerraformAwsNetworkExample 2020-07-22T18:47:47Z logger.go:66: user2_secret_key = AQECAHj0ior/LD5LXMzmwFwEYlbqXWdHuCRWGQNeqhU6VNir+gAAAIcwgYQGCSqGSIb3DQEHBqB3MHUCAQAwcAYJKoZIhvcNAQcBMB4GCWCGSAFlAwQBLjARBAxyo66cMnxkOCrHjhoCARCAQzbpGYCzH6Ed+XvDFinBSbrK0LDk0YMXh39JCcztYwoJDFMbAtnWlS4cUyrmncf5paxE2oB7w2ujtpds/dBxUtsw6Lg=
I refactored a lot of the code as I tried to reproduce it, and finally got it working.
What I found suspicious was writing to intermediary files (`> encrypted_access_key`) only to read them back later: both module instances run the script in the same working directory, so they can clobber each other's files. We can just load the ciphertext into a variable and consume it without the intermediary file, and that is what I did.
module
variable "name" {
  type = string
}

resource "aws_iam_user" "iam_user" {
  name = var.name
}

resource "aws_iam_access_key" "iam_keys" {
  user = aws_iam_user.iam_user.name
}

data "external" "stdout" {
  program = ["bash", "${path.module}/encrypt.sh"]

  query = {
    id = aws_iam_access_key.iam_keys.id
    se = aws_iam_access_key.iam_keys.secret
  }
}

output "out" {
  value = data.external.stdout.result
}
#!/bin/bash
eval "$(jq -r '@sh "ID=\(.id) SE=\(.se)"')"
access=$(aws kms encrypt --key-id alias/xxxx --plaintext $ID --output text --query CiphertextBlob --region us-east-1)
secret=$(aws kms encrypt --key-id alias/xxxx --plaintext $SE --output text --query CiphertextBlob --region us-east-1)
jq -n --arg a "$access" --arg s "$secret" '{"access_value":$a,"secret_value":$s}'
main
provider "aws" {
  region = "us-east-1"
}

module "test1user" {
  source = "./aws_user"
  name   = "test1"
}

output "user1_out" {
  value = module.test1user.out
}

module "test2user" {
  source = "./aws_user"
  name   = "test2"
}

output "user2_out" {
  value = module.test2user.out
}
output
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
user1_out = {
"access_value" = "AQICAHgxynd50R/zNmpbsZ8biySxfHUL9kNuyyylE5GSqkiK7wHYbkBH3jxR3zvkFLogYVAsAAAAcjBwBgkqhkiG9w0BBwagYzBhAgEAMFwGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMWhETOYT+qhL/IibfAgEQgC+kdJy7fJLZBW/AUk7YdjqDeAyymt6xBxeS1kBJIOWdVnwOujAkLG0wI+JAUqin8w=="
"secret_value" = "AQICAHgxynd50R/zNmpbsZ8biySxfHUL9kNuyyylE5GSqkiK7wFvozPjgGKbxj61aKEbxYUwAAAAhzCBhAYJKoZIhvcNAQcGoHcwdQIBADBwBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDJOwAiWgVWPtIzwURAIBEIBD7Q78YneG+/FMlkDTUnCkczf8TQBezQyMCI5cUx4qVX7iECvzx/5qAfKdy3tI4ViUGR5XV12WBvWIXj8iRN55D0jK4A=="
}
user2_out = {
"access_value" = "AQICAHgxynd50R/zNmpbsZ8biySxfHUL9kNuyyylE5GSqkiK7wFIms+isXNTAl6xWDiXcz1gAAAAcjBwBgkqhkiG9w0BBwagYzBhAgEAMFwGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMxiChWwPDGCdImUtXAgEQgC9vJfi6GaHXbqal/2nSc9FSkXEOPOsn7J+a5u8JiI2x6flBoeia9QMjVv9tOxpzYA=="
"secret_value" = "AQICAHgxynd50R/zNmpbsZ8biySxfHUL9kNuyyylE5GSqkiK7wFBLdzTFeCSk2Zv16sSHZ8bAAAAhzCBhAYJKoZIhvcNAQcGoHcwdQIBADBwBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDNGAphqIxZPthA+IkgIBEIBDAufp2xtAsfNctmnEa4grTb15MatDKJuqIB8qWCBaht563qp+RbL1aoZ8oxPYYtiU2LuHUnvbhHtWklvn2SkdSDN90w=="
}
I tested locally on Ubuntu 18.04.4 with:
Terraform v0.12.24
+ provider.aws v2.54.0
+ provider.external v1.2.0
Here is the entire code:
https://github.com/heldersepu/hs-scripts/tree/master/TerraForm/encrypt_output
Related
Hello, I am very new to AWS and currently exploring KMS. I have code that pushes a ciphertext to KMS as follows:
provider "aws" {
  region     = "us-east-1"
  access_key = "..............."
  secret_key = "..............."
}

variable "region" {
  type    = string
  default = "us-east-1"
}

data "aws_availability_zones" "azs" {
  state = "available"
}

resource "aws_kms_key" "cipher" {
  description              = "Ciphertext"
  key_usage                = "ENCRYPT_DECRYPT"
  customer_master_key_spec = "SYMMETRIC_DEFAULT"
  enable_key_rotation      = true
}

resource "aws_kms_alias" "cipher" {
  name          = "alias/cipherkey"
  target_key_id = aws_kms_key.cipher.key_id
}

resource "aws_kms_ciphertext" "services" {
  key_id    = aws_kms_key.cipher.key_id
  plaintext = <<EOF
{
  "Jason" : "Password",
  "Ralph" : "Password"
}
EOF
}

data "aws_kms_secrets" "services" {
  secret {
    name    = "services_password"
    payload = aws_kms_ciphertext.services.ciphertext_blob
  }
}
I want to get back the values saved in KMS, i.e. the plaintext values shown below. How can I do that via Terraform?
"Jason" : "Password",
"Ralph" : "Password"
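A minimal sketch of one way to read the decrypted value back, given the `aws_kms_secrets` data source already defined: its `plaintext` attribute is a map keyed by the `name` of each `secret` block.

```hcl
# Sketch: read the decrypted plaintext out of the data source.
# "services_password" matches the name in the secret block above.
output "services_plaintext" {
  value     = data.aws_kms_secrets.services.plaintext["services_password"]
  sensitive = true
}
```

Since the stored plaintext is itself JSON, `jsondecode(...)` on that value would give you the individual passwords as a map if you need them separately.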
Currently the Terraform documentation for Cloud Run here shows an example of how to mount a single secret volume to the Cloud Run service.
template {
  spec {
    containers {
      image = "gcr.io/cloudrun/hello"

      volume_mounts {
        name       = "a-volume"
        mount_path = "/secrets"
      }
    }

    volumes {
      name = "a-volume"

      secret {
        secret_name  = google_secret_manager_secret.secret.secret_id
        default_mode = 292 # 0444

        items {
          key  = "1"
          path = "my-secret"
          mode = 256 # 0400
        }
      }
    }
  }
}
I've tried to add multiple volumes and secret blocks, but this errors out saying only one is allowed.
I've also looked through the documentation for a general example of multiple volumes, but no such example exists.
For those wondering in 2022, since the documentation is still somewhat unclear: multiple secrets can be mounted under multiple mount points for Cloud Run by repeating the entries (assuming a secondary secret entry exists as well):
spec {
  containers {
    image = "gcr.io/cloudrun/hello"

    volume_mounts {
      name       = "a-volume"
      mount_path = "/secrets"
    }

    volume_mounts {
      name       = "secondary-volume"
      mount_path = "/somewhere-else"
    }
  }

  volumes {
    name = "a-volume"

    secret {
      secret_name  = google_secret_manager_secret.secret.secret_id
      default_mode = 292 # 0444

      items {
        key  = "1"
        path = "my-secret"
        mode = 256 # 0400
      }
    }
  }

  volumes {
    name = "secondary-volume"

    secret {
      secret_name  = google_secret_manager_secret.secondary_secret.secret_id
      default_mode = 292 # 0444

      items {
        key  = "1"
        path = "my-secondary-secret"
        mode = 256 # 0400
      }
    }
  }
}
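If the set of secrets grows, repeating the blocks by hand gets tedious. As a hedged sketch, Terraform's `dynamic` blocks can generate the repeated entries from a hypothetical `var.secret_volumes` map (the variable name and shape here are assumptions, not part of the provider's API):

```hcl
# Hypothetical input: map of volume name => secret id and mount path
variable "secret_volumes" {
  type = map(object({
    secret_id  = string
    mount_path = string
  }))
}

# Inside the containers block:
dynamic "volume_mounts" {
  for_each = var.secret_volumes
  content {
    name       = volume_mounts.key
    mount_path = volume_mounts.value.mount_path
  }
}

# Inside the spec block, alongside containers:
dynamic "volumes" {
  for_each = var.secret_volumes
  content {
    name = volumes.key
    secret {
      secret_name  = volumes.value.secret_id
      default_mode = 292 # 0444
    }
  }
}
```

This is equivalent to writing out one `volume_mounts`/`volumes` pair per map entry, as in the explicit example above.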
In the Terraform documentation you can see: "The spec block supports: ... volumes - (Optional) Volume represents a named volume in a container."
You need to use the volumes block in the spec context, like this:
spec {
  containers {
    volume_mounts {
      mount_path = "/secrets"
      name       = "secret"
    }
  }

  volumes {
    name = "secret"

    secret {
      secret_name = "secret name"
    }
  }
}
My goal is to have something like a common.tfvars file eg:
users = {
  "daniel.meier" = {
    path          = "/"
    force_destroy = true
    tag_email     = "foo@example.com"
    github        = "dme86"
  }
  "linus.torvalds" = {
    path          = "/"
    force_destroy = true
    tag_email     = "bar@example.com"
    github        = "torvalds"
  }
}
Via a data source you'll be able to retrieve information about the GitHub accounts:
data "github_user" "this" {
  for_each = var.users
  username = each.value["github"]
}
Output of ssh keys is also possible:
output "current_github_ssh_key" {
  value = values(data.github_user.this).*.ssh_keys
}
But how can I get the SSH keys from the output into a resource like:
resource "aws_key_pair" "deployer" {
  for_each   = var.users
  key_name   = each.value["github"]
  public_key = values(data.github_user.this).*.ssh_keys
}
If I try it like in this example, Terraform errors with
Inappropriate value for attribute "public_key": string required.
which makes sense, since the keys are a list AFAIK, but how do I convert this correctly?
Output looks like this:
Changes to Outputs:
+ current_github_ssh_key = [
+ [
+ "ssh-rsa AAAAB3NzaC1yc2EAAAAD(...)ElQ==",
],
+ [
+ "ssh-rsa AAAAB3NzaC1yc2EAAGVD(...)TXxrF",
],
]
If you want to test this code, you have to include a GitHub token for your provider, like:
provider "github" {
token = "123456"
}
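A hedged sketch of one way to resolve the type mismatch: since `data.github_user.this` uses the same `for_each` keys as `var.users`, you can index it with `each.key` and take the first SSH key (this assumes every user has at least one key registered):

```hcl
resource "aws_key_pair" "deployer" {
  for_each = var.users

  key_name = each.value["github"]
  # ssh_keys is a list of strings per user; take the first entry
  public_key = data.github_user.this[each.key].ssh_keys[0]
}
```

If a user may have several keys and you want a key pair per key, a flattened map built from `setproduct` or nested `for` expressions would be needed instead.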
I've created a JSON string via template/interpolation.
I need to pass that to local-exec, which in turn uses a Powershell template to make a CLI call.
Originally I tried just referencing the JSON template in the PowerShell command itself:
--cli-input-json file://lfsetup.tpl
however, the template does not get interpolated.
Next, I tried setting the JSON to a local, but it is multi-line and the CLI does not like that. Maybe I could convert it to a single line?
Any suggestions or guidance welcome!
Thanks
JSON (.tpl or variable)
{
  "CatalogId": "${account_id}",
  "DataLakeSettings": {
    "DataLakeAdmins": [
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
      },
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
      }
    ],
    "CreateDatabaseDefaultPermissions": [],
    "CreateTableDefaultPermissions": []
  }
}
.tf
locals {
  assume_role_arn  = "arn:aws:iam::${local.account_id}:role/role_to_assume"
  lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
  cli_region       = "region"
}

resource "null_resource" "settings" {
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
.ps
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
Output:
For these I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and with replacing '\n'. Strange results in some cases:
Use the file provisioner to create a .json file from the rendered .tpl file:
locals {
  ...
  settings_json_file = "/tmp/lfsetup.json"
}

resource "null_resource" "settings" {
  provisioner "file" {
    content     = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
    destination = local.settings_json_file
  }

  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
Update your .ps file: replace file://lfsetup.tpl with file://${json_settings}
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function.
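For example, as a sketch: round-tripping the rendered template through `jsondecode` and `jsonencode` collapses the JSON onto a single line, which sidesteps the multi-line problem when passing it to the CLI inline:

```hcl
locals {
  # jsondecode parses the rendered JSON; jsonencode re-serializes it
  # as minified, single-line JSON
  lf_json_oneline = jsonencode(jsondecode(templatefile("${path.module}/lfsetup.tpl", {
    account_id = local.account_id
  })))
}
```

The round trip also validates that the template actually rendered valid JSON, since `jsondecode` fails on malformed input.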
I need to have the "client_secret" output value as an input for "tenant_app_password".
variables.tf
variable "tenant_app_password" {
  description = ""
}
Create-service-principal.tf
resource "random_string" "password" {
  length  = 32
  special = true
}

# Create Service Principal Password
resource "azuread_service_principal_password" "test_sp_pwd" {
  service_principal_id = azuread_service_principal.test_sp.id
  value                = random_string.password.result
  end_date             = "2020-01-12T07:10:53+00:00"
}
OUTPUT
output "client_secret" {
  value     = azuread_service_principal_password.test_sp_pwd.value
  sensitive = true
}
Is there any possible way to do this?
I'm assuming you want to use the output of one Terraform run in another one. You can do this with the terraform_remote_state data source.
You cannot put the original output in a variable, but you can reference the remote output directly in another template. For example, in your second template:
// set up the remote state data source
data "terraform_remote_state" "foo" {
  backend = "s3"
  config = {
    bucket = "<your bucket name>"
    key    = "<your statefile name>.tfstate"
    region = "<your region>"
  }
}

// use it
resource "kubernetes_secret" "bar" {
  metadata {
    name = "bar"
  }
  data = {
    client_secret = data.terraform_remote_state.foo.outputs.client_secret
  }
}
Also check out this question.