Terraform: Add resource-specific secrets

I know that you can pass general secrets to a resource through terraform variables. Is there a way to configure secrets which change at the resource level?
Specifically, I'm using terraform as a back-end to an app which allows users to set up a server with a password. That password is different for each server. Is there some way to set something like self.password for a single instance so that it:
Is not visible in the github repo where I track the terraform files
and
Can be changed for each individual instance
Right now I'm just going to be creating terraform files like password=var.{unique_id}_password, but it feels like there should be a better way.
More detail on the use-case:
I have a web application to provision servers for users running another web app. The password for that server is set up by my application. The password is currently configured by a set-up script that I would like to port to terraform.
The passwords differ for each server because a user can set the password for their server only, and that variable should not affect other resources.
Here's a super-simplified version of the expected output when a user tries to provision a server
# new-server.tf
resource "digitalocean_droplet" "new_server" {
  name     = "new_server"
  password = "${var.get_the_password_somehow}"

  provisioner "remote-exec" {
    inline = [
      "set-password ${self.password}"
    ]
  }
}

You can use the random_password resource (from the random provider) to generate a random string.
Reference: https://www.terraform.io/docs/providers/random/r/password.html
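A minimal sketch of that per-server pattern, assuming the droplet and the set-password command from the question:
# one random_password per server; its result attribute is marked sensitive
resource "random_password" "server" {
  length  = 24
  special = true
}

resource "digitalocean_droplet" "new_server" {
  name = "new_server"
  # ...
  provisioner "remote-exec" {
    inline = [
      "set-password ${random_password.server.result}"
    ]
  }
}
Note that the generated value is still recorded in the state file, so the state itself must be protected.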
Not sure if your use case requires management or storage of the password, but that is also possible, depending on your needs. I see that you are using DO for provisioning resources.
Maybe you can put HashiCorp Vault in place to manage the randomly generated passwords. I'm an AWS guy, so I would stick the password in Secrets Manager.

How to hide Terraform "artifactory" backend credentials?

Terraform 1.0.x
It's my first time using an artifactory backend to store my state files. In my case it's a Nexus repository, and I followed this article to set up the repository.
I have the following:
terraform {
  backend "artifactory" {
    # URL of Nexus-OSS repository
    url = "http://x.x.x:8081/repository/"
    # Repository name (must be terraform)
    repo = "terraform"
    # Unique path for this particular plan
    subpath = "exa-v30-01"
    # Nexus-OSS creds (must have r/w privs)
    username = "user"
    password = "password"
  }
}
Since the backend configuration does not accept variables for the username and password key/value pairs, how can I hide the credentials so they're not in plain sight when I store my files in our Git repo?
Check out the "Partial Configuration" section of the Backend Configuration documentation. You have three options:
Specify the credentials in a backend config file (that isn't kept in version control) and specify the -backend-config=PATH option when you run terraform init.
Specify the credentials in the command line using the -backend-config="KEY=VALUE" option when you run terraform init (in this case, you would run terraform init -backend-config="username=user" -backend-config="password=password").
Specify them interactively. If you just don't include them in the backend config block, and don't provide a file or CLI option for them, then Terraform should ask you to type them in on the command line.
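For the first option, a minimal sketch of such a file (the name backend-creds.hcl is hypothetical; any path passed to -backend-config works):
# backend-creds.hcl -- keep this file out of version control,
# then run: terraform init -backend-config=backend-creds.hcl
username = "user"
password = "password"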
For settings related to authentication or identifying the current user running Terraform, it's typically best to leave those unconfigured in the Terraform configuration and use the relevant system's normal out-of-band mechanisms for passing credentials.
For example, the s3 backend supports all of the same credentials sources as the AWS CLI, so typically we just configure the AWS CLI with suitable credentials and let Terraform's backend pick up the same settings.
For systems that don't have a standard way to configure credentials out of band, the backends usually support environment variables as a Terraform-specific replacement. In the case of the artifactory backend it seems that it supports ARTIFACTORY_USERNAME and ARTIFACTORY_PASSWORD environment variables as the out-of-band credentials source, and so I would suggest setting those environment variables and then omitting username and password altogether in your backend configuration.
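With those variables set, the backend block from the question shrinks to just the non-sensitive settings:
terraform {
  backend "artifactory" {
    url     = "http://x.x.x:8081/repository/"
    repo    = "terraform"
    subpath = "exa-v30-01"
    # username and password are read from the ARTIFACTORY_USERNAME
    # and ARTIFACTORY_PASSWORD environment variables
  }
}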
Note that this out-of-band credentials strategy is subtly different than using partial backend configuration. Anything you set as part of the backend configuration -- whether in a backend block in configuration or on the command line -- will be saved by Terraform into a local cache of your backend settings and into every plan file Terraform saves.
Partial backend configuration is therefore better suited to situations where the location of the state is configured systematically by some automation wrapper, and thus it's easier to set it on the command line than to generate a configuration file. In that case, it's beneficial to write out the location to the cached backend configuration so that you can be sure all future Terraform commands in that directory will use the same settings. It's not good for credentials and other sensitive information, because those can sometimes vary over time during your session and should ideally only be known temporarily in memory rather than saved as part of artifacts like the plan file.
Out-of-band mechanisms like environment variables and credentials files are handled directly by the backend itself and are not recorded directly by Terraform, and so they are a good fit for anything which is describing who is currently running Terraform, as opposed to where state snapshots will be saved.

Can docker on Azure Linux App Service authenticate with the ACR without us specifying the password in the app settings?

We deploy a Linux App Service to Azure using terraform. The relevant configuration code is:
resource "azurerm_app_service" "webapp" {
app_settings = {
DOCKER_REGISTRY_SERVER_URL = "https://${local.ctx.AcrName}.azurecr.io"
DOCKER_REGISTRY_SERVER_USERNAME = data.azurerm_key_vault_secret.acr_admin_user.value
DOCKER_REGISTRY_SERVER_PASSWORD = data.azurerm_key_vault_secret.acr_admin_password.value
...
}
...
}
The problem is that terraform does not consider app_settings a secret, so the DOCKER_REGISTRY_SERVER_PASSWORD value appears in the clear in the Azure DevOps output.
So, I am wondering: can docker running on an Azure Linux App Service host authenticate with the respective ACR without us having to pass the password in a way that makes it so obvious to everyone who can inspect the pipeline output?
The following article seems relevant in general - https://docs.docker.com/engine/reference/commandline/login - but it is unclear how we can apply it in my context, if at all.
Also, according to https://feedback.azure.com/forums/169385-web-apps/suggestions/36145444-web-app-for-containers-acr-access-requires-admin Microsoft has started working on something relevant, but it looks like this is still a work in progress (almost 5 months).
I'm afraid you must set the DOCKER_REGISTRY_* app settings to pull images from the ACR; it's the only way Azure has designed for this. But for the sensitive password value, Azure does provide a way to hide it: store the password in a Key Vault secret, then have the app setting reference that secret. Take a look at the document Use Key Vault references for App Service. You can change the app setting for the password like this:
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)"
Or
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret;SecretVersion=ec96f02080254f109c51a1f14cdb1931)"
Then the output only shows the Key Vault reference, not the actual password.
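In Terraform terms, a sketch of the same idea, reusing the Key Vault data source from the question (the app's managed identity also needs read access to the vault; note the use of .id, the versioned secret URI, rather than .value):
app_settings = {
  DOCKER_REGISTRY_SERVER_PASSWORD = "@Microsoft.KeyVault(SecretUri=${data.azurerm_key_vault_secret.acr_admin_password.id})"
}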
Unfortunately, Azure Web Apps do not support interacting with ACR using a managed identity; you must pass those environment variables to the App Service.
Terraform does not currently support applying a "sensitive" flag to arbitrary values. You can mark outputs as sensitive, but that will not help with values you want to hide during the plan phase.
I would suggest checking out https://github.com/cloudposse/tfmask, using the TFMASK_RESOURCES_REGEX configuration to block the output you want to hide during your pipeline. If you're averse to adding dependencies, a similar effect could be achieved by piping terraform apply through grep --invert-match "DOCKER_REGISTRY" instead.
@charles-xu has a good answer as well if you want to set up mappings between Key Vault and your web app and push your tokens into Key Vault secrets.
Now it's possible to use a managed identity to pull images from ACR.
You can do the following:
Go to your Container Registry page in the Azure portal
Open the Access Control (IAM) tab
Then open the Role assignments tab
Add the AcrPull role assignment for your App Service or Function App
In the Deployment Center of your App Service, choose Managed Identity for the Authentication setting
Or you can use the CLI by following the steps in the official documentation (link below):
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#use-managed-identity-to-pull-image-from-azure-container-registry
After you add the role assignment, the DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_USERNAME, and DOCKER_REGISTRY_SERVER_PASSWORD settings can be removed from the App Service's App Settings.
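The same role assignment can also be expressed in Terraform. A minimal sketch, assuming a registry named azurerm_container_registry.acr and the webapp from the question with a system-assigned identity:
resource "azurerm_role_assignment" "acr_pull" {
  # requires identity { type = "SystemAssigned" } on the app service
  scope                = azurerm_container_registry.acr.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_app_service.webapp.identity[0].principal_id
}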

Where to store the azure service principal data when using terraform from CI or docker

I am reading all the terraform docs about using a service principal with a client secret when in CI or a docker file or whatever, and I quote:
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
It then goes into great detail about creating a service principal and then gives an awful example at the end where the client id and client secret are hardcoded, either in environment variables:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
or in the terraform provider block:
provider "azurerm" {
# Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
version = "=1.43.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "${var.client_secret}"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It does put a nice yellow box about it saying do not do this, but there is no suggestion of what to do instead.
I don't think client_secret in an environment variable is a particularly good idea.
Should I be using the client certificate, and if so, the same question arises about where to keep the configuration.
I want to avoid azure-cli if possible.
Azure-cli will not return the client secret anyway.
How do I go about getting these secrets into environment variables? Should I be putting them into a vault or is there another way?
For your requirements, I think you're a little confused about how to choose a suitable one of the four authentication methods.
You can see that Managed Service Identity is only available for services with the Managed Service Identity feature, so docker cannot use it, and you would still need to assign it appropriate permissions, just like a service principal. You don't want to use the Azure CLI if possible; I don't know why, but let's skip that for now.
The service principal is a good way, I think. The docs recommend that you do not put the secret into a variable inside the Terraform file, so you can only use environment variables. And if you also do not want to set environment variables, then I don't think there is a way to use the service principal. Authenticating with a certificate differs only in that you additionally set the certificate path.
And there is a caution for the service principal: you can see its secret only once, when you finish creating it, and it is not displayed again. If you forget it, you can only reset the secret.
So I think the service principal is the most suitable way for you. You can set the environment variables with the --env parameter of docker run, or just set them in the Dockerfile with ENV. As for storing the secret in the key vault, I think you can find the answer in my previous answer.
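As a sketch, that leaves nothing sensitive in the Terraform file itself; with ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID injected at runtime (for example docker run --env ARM_CLIENT_SECRET=...), the provider block reduces to:
provider "azurerm" {
  version = "=1.43.0"
  # subscription_id, client_id, client_secret and tenant_id are all
  # picked up from the ARM_* environment variables when omitted here
}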

Is it possible to keep secrets out of state?

For example, could I reference the password as an environment variable? Even if I did that, it would still be stored in state, right?
# Configure the MySQL provider
provider "mysql" {
  endpoint = "my-database.example.com:3306"
  username = "app-user"
  password = "app-password"
}
State snapshots include only the results of resource, data, and output blocks, so that Terraform can compare these with the configuration when creating a plan.
The arguments inside a provider block are not saved in state snapshots, because Terraform only needs the current arguments for the provider configuration, and never needs to compare with the previous.
Even though the provider arguments are not included in the state, it's best to keep specific credentials out of your configuration. Providers tend to offer arguments for credentials as a last resort for unusual situations, but should also offer other ways to provide credentials. For some providers there is some existing standard way to pass credentials, such as the AWS provider using the same credentials mechanisms as the AWS CLI. Other providers define their own mechanisms, such as environment variables.
For the MySQL provider in particular, we should set endpoint in the configuration because that describes what Terraform is managing, but we should use environment variables to specify who is running Terraform. We can use the MYSQL_USERNAME and MYSQL_PASSWORD environment variables to specify the credentials for the individual or system that is running Terraform.
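That configuration would look like this, with the credentials supplied from the environment instead of the file:
provider "mysql" {
  endpoint = "my-database.example.com:3306"
  # username and password are taken from the MYSQL_USERNAME and
  # MYSQL_PASSWORD environment variables when not set here
}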
A special exception to this is when Terraform itself is the one responsible for managing the credentials. In that case, the resource that provisioned the credentials will have its data (including the password) stored in the state. There is no way to avoid that because otherwise it would not be possible to use the password elsewhere in the configuration.
For Terraform configurations that manage credentials (rather than just using credentials), they should ideally be separated from other Terraform configurations and have their state snapshots stored in a location where they can be encrypted at rest and accessible only to the individuals or systems that will run Terraform against those configurations. In that case, treat the state snapshot itself as a secret.
No, it's not possible. Your best option is using a safe and encrypted remote backend, such as S3 + DynamoDB, to keep your state files. I've also read about people using git-crypt, but have never tried it myself.
That said, you can keep secrets out of your source code by using environment variables for inputs, as sketched below.
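A variable declared without a default can be supplied entirely from the environment, since Terraform automatically reads TF_VAR_-prefixed variables (the variable name here is hypothetical):
# set at runtime with: export TF_VAR_db_password="..."
variable "db_password" {}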

Terraform and cleartext password in (remote) state file

There are many GitHub issues open on the Terraform repo about this, with lots of interesting comments, but as of now I still see no solution.
Terraform stores plain text values, including passwords, in tfstate files.
Most users are required to store them remotely so the team can work concurrently on the same infrastructure, with most storing the state files in S3.
So how do you hide your passwords?
Is there anyone here using Terraform for production? Do you keep your passwords in plain text?
Do you have a special workflow to remove or hide them? What happens when you run a terraform apply then?
I've considered the following options:
store them in Consul - I don't use Consul
remove them from the state file - this requires another process to be executed each time and I don't know how Terraform will handle the resource with an empty/unreadable/not working password
store a default password that is then changed (so Terraform will have a not working password in the tfstate file) - same as above
use the Vault resource - it sounds like it's not a complete workflow yet
store them in Git with git-crypt - Git is not an option either
globally encrypt the S3 bucket - this will not prevent people from seeing plain text passwords if they have "manager"-level access to AWS, but it seems to be the best option so far
From my point of view, this is what I would like to see:
state file does not include passwords
state file is encrypted
passwords in the state file are "pointers" to other resources, like "vault:backend-type:/path/to/password"
each Terraform run would gather the needed passwords from the specified provider
This is just a wish.
But to get back to the question - how do you use Terraform in production?
I'd like to know the best practice too, but let me share my case, although it is limited to AWS. Basically, I do not manage credentials with Terraform.
I set an initial password for RDS, ignore the difference with a lifecycle block, and change it later. The way to ignore the difference is as follows:
resource "aws_db_instance" "db_instance" {
...
password = "hoge"
lifecycle {
ignore_changes = ["password"]
}
}
IAM users are managed by Terraform, but IAM login profiles, including passwords, are not. I believe that IAM passwords should be managed by individuals and not by the administrator.
API keys used by applications are also not managed by Terraform. They are encrypted with AWS KMS (Key Management Service), and the encrypted data is saved in the application's git repository or an S3 bucket. The advantage of KMS encryption is that decryption permissions can be controlled by IAM roles; there is no need to manage keys for decryption.
Although I have not tried it yet, I recently noticed that aws ssm put-parameter --key-id can be used as a simple key-value store supporting KMS encryption, so this might be a good alternative as well.
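If you later want to read such a parameter back from Terraform, a sketch (the parameter name is hypothetical; be aware the decrypted value then ends up in state, so this pattern suits runtime lookup by applications better):
data "aws_ssm_parameter" "db_password" {
  # decrypts the SecureString value using the KMS key it was stored with
  name = "/myapp/db-password"
}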
I hope this helps you.
The whole remote state stuff is being reworked for 0.9, which should open things up for locking of remote state and potentially encrypting the whole state file or just the secrets.
Until then, we simply use multiple AWS accounts and write state for the stuff that goes into each account into an S3 bucket in that account. In our case we don't really care too much about the secrets that end up in there, because if you can read the bucket then you normally have a fair amount of access in that account anyway. Plus, our only real secrets kept in state files are RDS database passwords, and we restrict access at the security group level to just the application instances and the Jenkins instances that build everything, so there is no direct access from the command line on people's workstations anyway.
I'd also suggest adding encryption at rest on the S3 bucket (just because it's basically free) and versioning so you can retrieve older state files if necessary.
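A sketch of such a bucket, using the older aws_s3_bucket syntax that matches this thread's era (the bucket name is the thread's example, lowercased because S3 names must be lowercase):
resource "aws_s3_bucket" "tf_state" {
  bucket = "mytfstatefilebucket"

  # keep old state versions retrievable
  versioning {
    enabled = true
  }

  # encryption at rest with KMS
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}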
To take it further, if you are worried about people with read access to your S3 buckets containing state you could add a bucket policy that explicitly denies access from anyone other than some whitelisted roles/users which would then be taken into account above and beyond any IAM access. Extending the example from a related AWS blog post we might have a bucket policy that looks something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyTFStateFileBucket",
        "arn:aws:s3:::MyTFStateFileBucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userId": [
            "AROAEXAMPLEID:*",
            "AIDAEXAMPLEID"
          ]
        }
      }
    }
  ]
}
Where AROAEXAMPLEID represents an example role ID and AIDAEXAMPLEID represents an example user ID. These can be found by running:
aws iam get-role --role-name ROLE-NAME
and
aws iam get-user --user-name USER-NAME
respectively.
If you really want to go down the route of fully encrypting the state file, you'd need to write a wrapper script that makes Terraform interact with the state file locally (rather than remotely) and then have your wrapper script manage the remote state, encrypting it before it is uploaded to S3 and decrypting it as it's pulled down.
