AWS Parameter Store should create/update for a specific environment - Terraform

I want to create or update AWS Parameter Store parameters for a specific environment. I currently have Terraform code that creates or updates parameters for all environments, but I want it to target only one specific environment, which I will pass in at run time. How can I achieve this with Terraform?
I am expecting that when I run the Terraform code, it will create or update parameters for the specified environment only.
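One minimal way to sketch this, assuming the environment is passed in as a string variable at run time (all variable and resource names here are hypothetical):

# Hypothetical sketch: the parameter path is keyed on var.environment,
# which is supplied at run time, so a run only touches that environment.
variable "environment" {
  type = string
}

variable "db_url_value" {
  type      = string
  sensitive = true
}

resource "aws_ssm_parameter" "db_url" {
  name  = "/${var.environment}/db_url"   # e.g. /dev/db_url
  type  = "String"
  value = var.db_url_value
}

# terraform apply -var="environment=dev" -var="db_url_value=..."

Note that if all environments live in one state file, a run will still plan changes for every environment it knows about; keeping a separate state per environment (a workspace or a distinct backend key) is what restricts a run to a single environment.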

Related

Create variable in Azure pipeline to use in a different pipeline

We have separate pipelines for our infrastructure and our application deployments. For our infrastructure we are using Terraform, and I know you can use Terraform outputs as variables in later tasks within the same pipeline, but is it possible to save an output as a variable in Azure so that it can be used in a different pipeline?
We are looking to use this for S3 bucket names in the application code, and for VPC subnet and security group IDs in Serverless.
Is it possible to save variables in the pipeline?
Azure DevOps provides variable groups to share static values across pipelines.
In your case, if you want to save the Terraform output as a variable in a variable group, you need to set it dynamically, e.g. by calling the Azure DevOps REST API to update the variable group; then you can use it in another pipeline.
You could also refer to this similar issue.
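On the Terraform side, the values to push into the variable group would typically be declared as outputs, e.g. (hypothetical names):

# Sketch: expose the values so a later pipeline step can read them, e.g.
# `terraform output -raw s3_bucket_name` for the string, and push them
# to the variable group via the REST API call described above.
output "s3_bucket_name" {
  value = aws_s3_bucket.app.bucket
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}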

Can we enable CloudWatch logs for AWS Step Functions via Terraform

Can we enable logging in CloudWatch for an AWS Step Function created via Terraform, to track individual states? The resource "aws_sfn_state_machine" available in Terraform does not seem to provide any argument to configure CloudWatch logging. As per the documentation, only the arguments below are supported:
name, definition, role_arn, tags
Or can it be configured inside the state machine definition?
Yes, there is a property called logging_configuration: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sfn_state_machine . It was not possible before, but it was added to the AWS Terraform provider in the 3.x series; I don't remember the specific minor version it was added in, so just make sure you have a recent version. Remember to also add the necessary permissions to your Step Function's IAM role.
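A minimal sketch of what that looks like (the role and log group here are placeholders):

# Sketch: Step Functions state machine with CloudWatch logging enabled.
# The log destination is the log group ARN with a ":*" suffix.
resource "aws_cloudwatch_log_group" "sfn" {
  name = "/aws/states/example"
}

resource "aws_sfn_state_machine" "example" {
  name       = "example"
  role_arn   = aws_iam_role.sfn.arn   # role also needs CloudWatch Logs delivery permissions
  definition = file("${path.module}/state_machine.json")

  logging_configuration {
    log_destination        = "${aws_cloudwatch_log_group.sfn.arn}:*"
    include_execution_data = true
    level                  = "ALL"
  }
}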

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file that has the AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by Terraform commands like plan or apply, where they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in the project configuration. It's a manual step because, as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting the environment variables, all you need to do is run terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or environment variables. If you configure the action to follow what is described in one of those guides, then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend then it will also automatically use the standard AWS environment variables.
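Concretely, the provider and backend blocks can then stay free of credentials entirely; a sketch (bucket name hypothetical):

# Sketch: no credentials in the configuration. The AWS provider and the
# S3 backend both read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment, which GitHub Actions can populate from repository secrets.
provider "aws" {
  region = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "ec2/terraform.tfstate"
    region = "us-east-1"
  }
}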
If your system includes multiple AWS accounts, you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. In summary, that guide recommends setting aside a special account only for your AWS users and their associated credentials, configuring your other accounts to allow cross-account access via roles, and then using a single set of credentials to run Terraform while configuring each instance of the AWS provider to assume the appropriate role for whichever account that provider instance should interact with.
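A sketch of that pattern, with hypothetical account IDs and role names:

# Sketch: one set of credentials in the environment; each provider
# instance assumes a role in the account it should manage.
provider "aws" {
  alias  = "staging"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/Terraform"
  }
}

provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/Terraform"
  }
}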

Terraform refresh not refreshing aws_api_gateway_deployment deployment ID

When using Terraform to deploy our AWS infra, if we use the AWS console to redeploy an API Gateway deployment, it creates a new deployment ID. Afterwards, when running terraform apply again, Terraform lists aws_api_gateway_stage.deployment_id as an in-place update item. Running terraform refresh does not update the state; it continues to show this as a delta. Am I missing something?
Terraform computes the delta based on what it already has saved in the backend state and what exists on AWS at the moment of application. You can use Terraform to apply changes to AWS, but not the other way around, i.e. you cannot change your AWS configuration in the console and expect Terraform to adopt the change: the console-created deployment is not in your configuration, so Terraform plans to point the stage back at the deployment it manages.
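If the intent is for redeployments to happen through Terraform rather than the console, one common pattern (a sketch; resource names hypothetical) is to give the deployment a triggers hash so Terraform replaces it whenever the API definition changes:

# Sketch: force a new deployment whenever the API body changes, so
# redeploys go through Terraform instead of the console.
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  triggers = {
    redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "example" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  stage_name    = "prod"
  deployment_id = aws_api_gateway_deployment.example.id
}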

Terraform: how to dynamically create microservices with ECS?

I am stuck with Terraform. I want to dynamically create ECS services with Terraform.
I have a configuration like this:
module/cluster/cluster.tf
module/service/service.tf
What I want to do is inject the service name from Jenkins into the Terraform configuration, so that if the service doesn't exist, it is created (or updated if it already exists).
I tried to set up different S3 remote state backends, but I can't manage to build the whole infrastructure in one terraform apply.
Is there any way to specify the service configuration dynamically so that services are created on demand?
Terraform supports using TF_VAR_<variable> environment variables to change values on the fly.
From environment variables
Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_access_key variable can be set to set the access_key variable.
Note: Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other mechanisms.
For example,
TF_VAR_environment=development terraform plan
https://www.terraform.io/intro/getting-started/variables.html#from-environment-variables
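If several services should live in one state, another option (a sketch, requiring Terraform 0.12.6+ for for_each; names hypothetical) is to drive the services from a collection variable. Per the note above, list and set values are passed with -var rather than TF_VAR_:

# Sketch: one aws_ecs_service per entry in var.services; adding a name
# to the set creates a service, removing it destroys that service.
variable "services" {
  type = set(string)
}

resource "aws_ecs_service" "svc" {
  for_each        = var.services
  name            = each.key
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1
}

# terraform apply -var='services=["orders","payments"]'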
