How do I pass resources that were created by Terraform to Kustomize?

I'm using a combination of these tools:
Terraform - to deploy the application-specific AWS resources I need (for instance, a secret)
Skaffold - to help with the inner development loop surrounding the deployment of K8s resources to local and remote clusters
Kustomize - to help with templating the different configurations for different environments
My GitHub Actions steps are as follows:
1. Terraform creates the AWS resources. At this point it produces the ARN of an AWS secret.
2. Skaffold deploys the K8s manifests. Skaffold in turn delegates K8s manifest generation to Kustomize. Within the Kustomize overlay files I need to be able to access the secret's ARN that was created earlier; this ARN needs to be injected into the container that is being deployed. How do I achieve this?
Rephrasing the question: how do I pass resources that were created by Terraform so that they can be consumed by something like Kustomize (which is used by Skaffold)?
(P.S. I really like my choice of tools thus far, as each one excels at one thing. I realize that Terraform could possibly do all of it, but that is a choice I don't want to make unless there are no easier options.)

Here is what I have learnt:
I don't think there is any industry standard for sharing this data between tools across different steps within GitHub Actions. That being said, here are some of the options:
Have Terraform store the secret's ARN in a parameter store, then retrieve the ARN from the parameter store in later steps. This means that the steps have to share a static key (see the sketch after this list).
Have Terraform update the Kustomize files directly (or consume the Kustomize overlays from Terraform as a data source).
There could be other similar approaches, but none of these tools has a native way of passing/sharing data.
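As a concrete sketch of the first option, a later GitHub Actions step could look roughly like this. The parameter name, overlay path, and ConfigMap name are all placeholders I made up for the example:

# Read the ARN that the Terraform step wrote to SSM Parameter Store.
SECRET_ARN=$(aws ssm get-parameter \
  --name "/myapp/dev/secret-arn" \
  --query 'Parameter.Value' \
  --output text)

# From inside the overlay, bake the ARN into a ConfigMap entry that the
# Deployment can expose to the container as an environment variable.
cd k8s/overlays/dev
kustomize edit add configmap app-config \
  --from-literal=SECRET_ARN="${SECRET_ARN}"

# Skaffold then renders and deploys via the same Kustomize overlay.
skaffold run

If Terraform runs in the same job, "terraform output -raw secret_arn" can hand the value across steps just as well as a parameter store.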

Related

SecureString in ARM template deployment through Terraform does an update in place every time?

I am using Terraform to provision my Azure resources, which works great. However, for some resources such as Logic Apps, doing this natively doesn't really work, so I am using the Logic Apps ARM template and doing a Terraform "azurerm_resource_group_template_deployment" in order to provision. I know doing an ARM template deployment within Terraform is a bit of a last resort, but it works OK and deploys fine. However, I have a Service Bus connection defined that is of type "securestring". By default, these are not saved as part of an ARM deployment, so every time Terraform runs in my pipeline, even if the Logic App ARM template has not changed, it still does the deployment: the top-level deployment state Terraform knew about previously did not have the value saved, so it will always see it as new. Is there any way around this other than changing the "securestring" to "string", which I obviously do not want to do given the endpoint contains the SAS key, etc.?
I hit the same issue today - it really limits what is viable. I managed to work around my two scenarios.
For things like keys and connection strings, you can use the listKeys function inside the ARM template. I had this exact issue trying to get a Log Analytics workspace key into the template; there are some examples here: https://github.com/Azure/azure-quickstart-templates/blob/master/demos/arm-template-retrieve-azure-storage-access-keys/azuredeploy.json
The other scenario I had was wanting to pass a service principal secret from Terraform to the template as a securestring. To get around this, I ended up getting the secret from Key Vault inside the ARM template instead.
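For the Key Vault route, here is a minimal sketch of the mechanism using the az CLI (the vault ID, secret name, and parameter name are placeholders). ARM deployment parameters support a "reference" block that resolves a securestring server-side at deployment time, so the plaintext never passes through Terraform or its state:

# parameters.json - resolve the securestring from Key Vault at deploy time.
cat > parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "servicePrincipalSecret": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "sp-secret"
      }
    }
  }
}
EOF

az deployment group create \
  --resource-group my-rg \
  --template-file azuredeploy.json \
  --parameters @parameters.json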

Passing arguments to Docker containers via Kubernetes Deployment YML

I'm using a CI/CD pipeline to deploy a Node.js app on my Kubernetes cluster. We use various sensitive environment variables locally, and we would like to deploy them as environment variables within the cluster to be used by the different containers...
Which strategy should I go with?
TIA
There are many tools designed to let you inject secrets into Kubernetes safely.
Natively, you can use the "Secrets" object: https://kubernetes.io/docs/concepts/configuration/secret/
and expose the secret to the container as an env var.
Alternatively, you can use open-source tools that make this process more secure by encrypting the secrets. Here are some I recommend:
https://learnk8s.io/kubernetes-secrets-in-git
https://www.vaultproject.io/docs/platform/k8s
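As a minimal sketch of the native approach (all names and values here are examples): create the Secret out of band, then have the container read it through a secretKeyRef:

# Create the Secret directly in the cluster (never commit the literal value).
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_URL='postgres://user:pass@host:5432/db'

# Reference it from the Deployment's container spec as an env var.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: node:20-alpine   # stand-in for your application image
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_URL
EOF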

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file which has the AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by Terraform commands like plan or apply where they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in the project configuration. It's a manual step because, as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting the environment variables, all you need to do is run Terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run Terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or environment variables. If you configure the action to follow what is described in one of those guides, then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend then it will also automatically use the standard AWS environment variables.
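For illustration, a hedged sketch of the environment-variable route; in a GitHub Actions step these would be mapped in from repository secrets rather than typed out:

# Standard AWS SDK/CLI variables; values come from the CI secret store,
# never from files in the repository.
export AWS_ACCESS_KEY_ID="<from repository secrets>"
export AWS_SECRET_ACCESS_KEY="<from repository secrets>"
export AWS_DEFAULT_REGION="us-east-1"

# The AWS provider (and the S3 backend, if used) pick these up
# automatically, so no credentials need to appear in *.tfvars.
terraform init
terraform apply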
If your system includes multiple AWS accounts, you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. In summary, that guide recommends having a special account set aside only for your AWS users and their associated credentials, configuring your other accounts to allow cross-account access via roles, and then using a single set of credentials to run Terraform while configuring each instance of the AWS provider to assume the appropriate role for whichever account that provider instance should interact with.

How can I apply one of my resources by name in Terraform?

Terraform will try to deploy all resources defined in the Terraform configuration files. There are a lot of resources in my application, like Lambda, API Gateway, ECS, etc. I wonder whether I can deploy only one resource. For example, I want to deploy one Lambda function only and don't want to apply the other resources. How can I do this in Terraform?
$ terraform apply -target=aws_lambda_function.test_function
More information on the usage of -target can be found in the terraform apply documentation.
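Note that -target can be repeated and can also address whole modules (the addresses below are examples); Terraform's documentation describes -target as an escape hatch for exceptional situations rather than for routine use:

$ terraform plan -target=aws_lambda_function.test_function
$ terraform apply -target=aws_lambda_function.test_function -target=module.api_gateway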

Deploying a VMSS and injecting secrets

I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or when you change the image.
This would be application-level passwords, certificates, and so on, that we would not want stored on the images.
I am using the Linux custom script extension for the entry-point script and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates, however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entry-point script) whenever we want to add or remove a secret.
A Windows-based VMSS can get certificates from Key Vault directly during deployment, but Linux ones cannot do that. There is also a customData property which allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from the Key Vault (a sketch follows below). This way you get several advantages:
You don't store secrets in the templates/VM configuration.
You can update the secret, and all the VMSS instances will get the new version on the next deployment.
You don't have to edit the init script unless secret names change or new secrets are introduced.
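A rough sketch of such an init script (vault and secret names are placeholders; assumes curl and jq are on the image): the VM's managed identity requests a token from the Azure Instance Metadata Service, which is then used against the Key Vault REST API:

#!/bin/sh
# Get an access token for Key Vault via the instance metadata endpoint.
TOKEN=$(curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
  | jq -r '.access_token')

# Read the secret value from Key Vault with that token.
SECRET=$(curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.4" \
  | jq -r '.value')

# Use $SECRET from here: write a config file, export it for the app, etc.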
