I have a requirement to store multiple secret key/value pairs (around 200 secrets) in AWS Secrets Manager, and I do not want to enter each secret key/value manually through the AWS Secrets Manager console. I have researched a bit and found in the AWS docs that I can create a JSON file containing all the secret key/value pairs and then pass that file to the AWS Secrets Manager command:
aws secretsmanager create-secret --name MyTestDatabaseSecret \
--description "My test database secret created with the CLI" \
--secret-string file://mycreds.json
Is this a good way of doing it, or is there a better way to store all these secrets at once?
Testing in my AWS account, this works like a charm:
aws secretsmanager create-secret --name TestMultiplesValues4 --description "Mytest with multiples values" --secret-string file://secretmanagervalues.json
The JSON file then looks like this:
{"Juan":"1","Pedro":"2","Pipe":"3"}
where Juan is the first key and 1 is the secret value for that key.
It is very simple and easy.
Note: this creates one secret containing multiple key/value pairs, not multiple secrets with one key/value pair each. Be careful with this distinction.
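If you did want the opposite (multiple secrets with one key/value pair each), a rough sketch over the same file with jq could look like this; the myapp/ name prefix is just an example:
# Create one secret per key/value pair from the JSON file shown above (requires jq).
for key in $(jq -r 'keys[]' secretmanagervalues.json); do
  value=$(jq -r --arg k "$key" '.[$k]' secretmanagervalues.json)
  aws secretsmanager create-secret --name "myapp/${key}" --secret-string "$value"
done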
Hope this helps.
Related
I'm trying to avoid having secrets in the Terraform state.
Is there a better way of setting an RDS password from a secret in Secrets Manager than this?
resource "null_resource" "master_password" {
triggers = {
db_host = module.myrdsdatabase.cluster_id
}
provisioner "local-exec" {
command = <<TOF
password=$(aws secretsmanager get-secret-value --secret-id myrdscreds | jq '.SecretString | fromjson | .password' | tr -d '"')
aws rds modify-db-cluster --db-cluster-identifier ${module.myrdsdatabase.cluster_id} --master-user-password $password --apply-immediately
TOF
interpreter = ["bash", "-c"]
}
}
There is no concrete solution to this issue. There is a nearly seven-year-old, still active, discussion on a Terraform GitHub issue about handling secrets in TF.
In your question you are already avoiding aws_secretsmanager_secret_version, which is good practice: aws_secretsmanager_secret_version will not protect your secrets from ending up in plain text in the TF state file!
Generally, there are two things people do to keep the secrets secret:
Store your TF state in a remote backend, such as S3 with strict IAM and bucket policy controls, along with encryption at rest (see the sketch after this list).
Use an external procedure to set the passwords for your database. One way is local-exec; another could be invoking a remote Lambda through aws_lambda_invocation.
Other ways are possible, such as creating the RDS databases through CloudFormation (CFN) from your TF. CFN has a proper way of accessing Secrets Manager securely through dynamic references.
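As a minimal sketch of hardening such a state bucket (the bucket name is an example), default encryption and public-access blocking can be enforced from the CLI:
# Enforce default encryption on the bucket holding the remote TF state.
aws s3api put-bucket-encryption \
  --bucket my-tf-state-bucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'
# Block all public access to the state bucket.
aws s3api put-public-access-block \
  --bucket my-tf-state-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true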
We have to accept that the secrets are in the state and manage access and encryption to the remote state accordingly.
You can get a value right out of Secrets Manager with the aws_secretsmanager_secret_version data source.
Your local-exec can be simplified a bit by using jq --raw-output (-r):
password=$(aws secretsmanager get-secret-value --secret-id myrdscreds | jq -r .SecretString | jq -r .password)
I prefer to fetch the secret and then pass it in as a variable from a build script, rather than using local-exec. It seems cleaner, and in some cases the trigger on the null resource may not fire when the secret value has changed.
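A minimal sketch of that build-script approach, assuming a Terraform variable named master_password exists to receive the value:
# Fetch the password once in the wrapper script, then hand it to Terraform as a variable.
password=$(aws secretsmanager get-secret-value --secret-id myrdscreds \
  --query SecretString --output text | jq -r .password)
terraform apply -auto-approve -var "master_password=${password}"
Note that the value still ends up in the state, so the remote-state access controls discussed above remain necessary.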
The current possible workaround is using SecretHub
https://secrethub.io/docs/guides/terraform/
But again, as @marcin said, there is no native support from Terraform, so the idea is to encrypt your state file and restrict access to the backend, e.g. S3.
Another way I would do this is by using the AWS CLI outside of Terraform to reset the password and upload it to Secrets Manager.
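A rough sketch of that out-of-band reset (the cluster identifier and secret id are placeholders):
# Generate a new password, apply it to the cluster, then store it in Secrets Manager.
password=$(aws secretsmanager get-random-password --password-length 32 \
  --exclude-punctuation --query RandomPassword --output text)
aws rds modify-db-cluster --db-cluster-identifier myrdsdatabase \
  --master-user-password "$password" --apply-immediately
aws secretsmanager put-secret-value --secret-id myrdscreds \
  --secret-string "{\"password\":\"$password\"}"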
I am starting to learn Terraform/GitHub Actions. Is it possible to get TF to read GitHub secrets as part of the GitHub action? For example:
My main.tf file creates an AWS EC2 instance and needs to install nginx using a provisioner. In order to do that, I need to provide my private/public key information to the provisioner so it can authenticate to the EC2 instance and install the app. I have created a GitHub secret that contains my private key.
At the moment the workflow keeps failing because I cannot get it to read the GitHub secret that contains the private key info.
How can I achieve this?
Any advice would be most welcome! Thanks.
The simplest way is to use an environment variable.
Terraform reads values for its variables from environment variables prefixed with TF_VAR_.
The next piece is to translate the GitHub secret into such an environment variable.
In practice, if your Terraform script has a variable declaration like
variable "my_public_key" {}
and you have a GitHub secret NGINX_PUBKEY, then you can use this syntax in your workflow:
steps:
  - run: terraform apply -auto-approve
    env:
      TF_VAR_my_public_key: ${{ secrets.NGINX_PUBKEY }}
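You can check the same TF_VAR_ mechanism locally before wiring it into the workflow; the key value below is just a placeholder:
# Terraform picks up any TF_VAR_* environment variable as the matching input variable.
export TF_VAR_my_public_key="ssh-ed25519 AAAA...placeholder"
terraform plan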
This said, I would not recommend using GitHub secrets for this kind of data: they are better managed in a dedicated secret store like AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, etc.
I am currently attempting to create a Redis cache in Azure via the CLI using the typical example:
az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup --sku Basic --vm-size c0
However, what I'd love to do is use the --redis-configuration argument to tell Redis I do NOT want to deal with security via the "requirepass" property.
No matter how I try to add this property, I'm given an error.
Has anyone successfully used --redis-configuration to pass in additional settings for the deployment?
Considering Azure Redis is a fully managed service where Microsoft creates and manages the Redis instance(s) (updates, automatic failover etc.) on behalf of the customer, not all configuration settings (like requirepass) are exposed to users.
Looking at the REST API documentation for creating an Azure Redis instance, the few configuration settings that can be changed are:
rdb-backup-enabled, rdb-storage-connection-string, rdb-backup-frequency, maxmemory-delta, maxmemory-policy, notify-keyspace-events, maxmemory-samples, slowlog-log-slower-than, slowlog-max-len, list-max-ziplist-entries, list-max-ziplist-value, hash-max-ziplist-entries, hash-max-ziplist-value, set-max-intset-entries, zset-max-ziplist-entries, zset-max-ziplist-value, etc.
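As a sketch of passing one of the permitted settings (exact flag syntax varies by CLI version; recent versions expect a JSON file reference), something along these lines should go through:
# redisconfig.json could contain, for example: {"maxmemory-policy": "allkeys-lru"}
az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup --sku Basic --vm-size c0 --redis-configuration @redisconfig.json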
I am using the IBM Cloud CLI and tried to generate credentials for my cloud object storage service. However, the following command does not create HMAC credentials needed for using some S3 tools and APIs:
ibmcloud resource service-key-create cos-hmac-cli Writer --instance-name myobjectstorage
How can I create HMAC credentials using the command line interface?
The trick is to provide an additional parameter that tells the service to generate the HMAC part, too:
ibmcloud resource service-key-create cos-hmac-cli Writer \
--instance-name myobjectstorage --parameters '{"HMAC":true}'
The --parameters '{"HMAC":true}' adds the feature request in JSON format.
We are dealing with the problem of providing build-time and run-time secrets to our applications, which are built using AWS CodePipeline and deployed to ECS.
Ultimately, our vision is to create a generic pipeline for each of our applications that achieves the following goals:
Complete separation of access
The services in the app-a-pipeline CANNOT access any of the credentials or use any of the keys used in the app-b-pipeline, and vice versa
Secret management by assigned developers
Only developers responsible for app-a may read and write secrets for app-a
Here are the issues at hand:
Some of our applications require access to private repositories for dependency resolution at build time
For example, our java applications require access to a private maven repository to successfully build
Some of our applications require database access credentials at runtime
For example, the servlet container running our app requires an .xml configuration file containing credentials to find and access databases
Along with some caveats:
Our codebase resides in a public repository. We do not want to expose secrets by putting either the plaintext or the ciphertext of the secret in our repository
We do not want to bake runtime secrets into our Docker images created in CodeBuild even if ECR access is restricted
The CloudFormation template for the ECS resources and its associated parameter file reside in the public repository in plaintext. This eliminates the possibility of passing runtime secrets to the ECS CloudFormation template through parameters (as far as I understand)
We have considered using tools like credstash to help with managing credentials. This solution requires that both CodeBuild and ECS task instances be able to use the AWS CLI. To avoid shuffling around more credentials, we decided that it might be best to assign privileged roles to the instances that require the AWS CLI. That way, the CLI can infer credentials from the role in the instance's metadata.
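For reference, the credstash flow we are considering looks roughly like this (table and secret names are illustrative):
# A developer (DeveloperRole) writes a build-time secret, backed by the DynamoDB table and KMS key:
credstash -t app-a-credentials put app-a.maven.password example-password
# CodeBuild or the ECS task reads it back, with credentials inferred from its role:
credstash -t app-a-credentials get app-a.maven.password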
We have tried to devise a way to manage our secrets given these restrictions. For each app, we create a pipeline. Using a CloudFormation template, we create:
4 resources:
DynamoDB credential table
KMS credential key
ECR repo
CodePipeline (Build, deploy, etc)
3 roles:
CodeBuildRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Write to ECR repo
ECSTaskRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Read from ECR repo
DeveloperRole
Read and write access to DynamoDB credential table
Encrypt and decrypt permission with KMS key
The CodeBuild step of the CodePipeline assumes the CodeBuildRole to allow it to read build-time secrets from the credential table. CodeBuild then builds the project and generates a Docker image, which it pushes to ECR. Eventually, the deploy step creates an ECS service using the CloudFormation template and the accompanying parameter file present in the project's public repository. The ECS task definition includes assuming the ECSTaskRole to allow the tasks to read runtime secrets from the credential table and to pull the required image from ECR.
Here is a simple diagram of the AWS resources and their relationships as stated above
Our current proposed solution has the following issues:
Role heavy
Creating roles is a privileged action in our organization. Not all developers who try to create the above pipeline will have permission to create the necessary roles
Manual assumption of DeveloperRole:
As it stands, developers would need to manually assume the DeveloperRole (e.g. via sts assume-role, as sketched below). We toyed with the idea of passing in a list of developer user ARNs as a parameter to the pipeline CloudFormation template. Does CloudFormation have a mechanism to assign a role or policy to a specified user?
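For illustration, the manual assumption we are trying to avoid looks like this (account id and role name are placeholders):
# Developer manually assumes the per-app DeveloperRole to read/write secrets.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/app-a-DeveloperRole \
  --role-session-name app-a-dev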
Is there a more well-established way to pass secrets around in CodePipeline that we might be overlooking, or is this the best we can get?
Three thoughts:
AWS Secret Manager
AWS Parameter Store
IAM roles for Amazon ECS tasks
AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. With it you can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
AWS Parameter Store can protect access keys with granular access. This access can be based on ServiceRoles.
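A minimal sketch of that flow (parameter name and value are examples):
# Store a secret as a KMS-encrypted SecureString parameter:
aws ssm put-parameter --name /app-a/db-password --type SecureString --value example-password
# Read and decrypt it from a build or task whose role allows ssm:GetParameter and kms:Decrypt:
aws ssm get-parameter --name /app-a/db-password --with-decryption --query Parameter.Value --output text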
ECS provides access to the ServiceRole via this pattern:
build:
  commands:
    - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn")) | not) ] | map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/aws_cred_export.txt
    - chmod +x /tmp/aws_cred_export.txt
    # Source the exported credentials so they are visible to the next command:
    - . /tmp/aws_cred_export.txt && YOUR COMMAND HERE
If the ServiceRole provided to the CodeBuild task has access to use the Parameter Store key, you should be good to go.
Happy hunting and hope this helps.
At a high level, you can either isolate applications in a single AWS account with granular permissions (this sounds like what you're trying to do) or by using multiple AWS accounts. Neither is right or wrong per se, but I tend to favor separate AWS accounts over managing granular permissions because your starting place is complete isolation.
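For example, if you already use AWS Organizations, carving out a dedicated account per application is a single call (the email and account name are placeholders):
# Create an isolated member account for app-a under the existing organization.
aws organizations create-account --email app-a-aws@example.com --account-name app-a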