How to make aws_cloudwatch_event_rule with Terraform and LocalStack?

I am using Terraform and LocalStack and am trying to create an aws_cloudwatch_event_rule. I get an error:
Error: Updating CloudWatch Event Rule failed: UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: 2d0671b9-cb55-4872-8e8c-82e26f4336cb
I'm not sure why I'm getting this error, because the same configuration creates the resource in AWS but not on LocalStack 🤷‍♂️. Does anybody have any suggestions on how to fix this? Thanks.
It's a large Terraform project, so I can't share all the code. This is the relevant section:
resource "aws_cloudwatch_event_rule" "trigger" {
name = "trigger-event"
description = "STUFF"
schedule_expression = "cron(0 */1 * * ? *)"
}
resource "aws_cloudwatch_event_target" "trigger_target" {
rule = "${aws_cloudwatch_event_rule.trigger.name}"
arn = "${trigger.arn}"
}

I realize this is an old question, but I just ran into this problem and wanted to share what resolved it for me, in case it helps others who end up here. This works for me with Terraform 0.12 (it should work for 0.13 as well) and AWS provider 3.x.
When you get the "The security token included in the request is invalid" error, it usually means Terraform attempted to perform the operation against real AWS rather than LocalStack.
The following should resolve the issue with creating CloudWatch Event rules.
1. Make sure you're running the events service in LocalStack. It's this service, not cloudwatch, that provides the CloudWatch Events interface. For example, if you're running LocalStack from the command line:
SERVICES=cloudwatch,events localstack start
2. Make sure the AWS provider in the Terraform config is pointed at LocalStack. As in step (1), we need a setting specifically for CloudWatch Events; in the AWS provider config, that's cloudwatchevents.
provider "aws" {
version = "~> 3.0"
profile = "<profile used for localstack>"
region = "<region configured for localstack>"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
# Update the urls below if you've e.g. customized localstack's port
cloudwatch = "http://localhost:4566"
cloudwatchevents = "http://localhost:4566"
iam = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
Now, the terraform apply should successfully run against localstack.
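To double-check that the rule was created in LocalStack rather than in real AWS, you can point the AWS CLI at the LocalStack endpoint (adjust the port if you've customized it):

aws --endpoint-url=http://localhost:4566 events list-rules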
One more gotcha to be aware of is that localstack currently doesn't persist CloudWatch or CloudWatch Events data, even if you enable persistence. So when you kill or restart localstack, any CloudWatch Events rules will be lost.

Related

Terraform AWS OpenSearch using Cognito module circular problem

Hi, I'm trying to create a Terraform module to deploy AWS OpenSearch using Cognito, but it seems it is not possible to complete!
To create an OpenSearch cluster with Cognito, you need to:
- create a Cognito user pool
- create a Cognito user pool client app
- create a Cognito identity pool (giving it the user pool and client app)
- pass the user pool and identity pool to OpenSearch
After the OpenSearch cluster is installed, it creates a new client app that you then have to add to the identity pool!
Does anyone know how to get around a Terraform deploy followed by a manual update?
[EDIT]
Added a code snippet that resolves the issue. I didn't attach the starter code for deploying OpenSearch with Cognito, as it's a good few hundred lines of code and seemed redundant.
## Called after OpenSearch and Cognito have been built to
## add the OpenSearch-created client app to the Cognito identity pool
data "external" "cognito" {
  depends_on = [
    aws_opensearch_domain.this
  ]
  program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id ${aws_cognito_user_pool.cognito-user-pool.id} | jq '.UserPoolClients | .[] | select(.ClientName | contains(\"AmazonOpenSearchService\"))'"]
}

output "cognito" {
  value = data.external.cognito.result["ClientId"]
}

resource "aws_cognito_identity_pool" "cognito-identity-pool-opensearch" {
  depends_on = [
    data.external.cognito
  ]
  identity_pool_name               = "opensearch-${var.domain_name}-identity-pool"
  allow_unauthenticated_identities = false

  cognito_identity_providers {
    client_id               = data.external.cognito.result["ClientId"]
    provider_name           = aws_cognito_user_pool.cognito-user-pool.endpoint
    server_side_token_check = false
  }
}
Although your question should provide some sample code, I happen to know exactly what you're referring to because I've had to deal with it in several projects.
Unless things have changed since I last dealt with this, there is no easy solution; it's a gaping hole in the AWS API and the Terraform AWS provider. The workaround I've used is:
1. Create the OpenSearch domain and allow it to create the Cognito user pool client app.
2. Use an external data source to make an AWS CLI call to read the OpenSearch domain, which will get you the details of the client app it created.
3. Use the AWS CLI to update the client app and change the necessary settings (see the sketch below).
It sucks, yes.
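For step (3), here's a minimal sketch of what the update could look like, using a null_resource with a local-exec provisioner (rather than an external data source) and the client ID returned by the external data source from the question's edit. The --explicit-auth-flows values are only placeholders; substitute whatever settings you actually need to change:

resource "null_resource" "update_opensearch_client" {
  # Re-run whenever the discovered client id changes (hypothetical trigger)
  triggers = {
    client_id = data.external.cognito.result["ClientId"]
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws cognito-idp update-user-pool-client \
        --user-pool-id ${aws_cognito_user_pool.cognito-user-pool.id} \
        --client-id ${data.external.cognito.result["ClientId"]} \
        --explicit-auth-flows ALLOW_USER_SRP_AUTH ALLOW_REFRESH_TOKEN_AUTH
    EOT
  }
}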

Running 'terragrunt apply' on an EC2 Instance housed in a No Internet Environment

I have been trying to set up my Terragrunt EC2 environment in a no/very limited internet setting.
Current Setup:
- AWS Network Firewall that whitelists domains to allow traffic; most internet traffic is blocked except for a few domains.
- EC2 instance where I run the Terragrunt code; it has an instance profile that can assume the role in providers.
- VPC endpoints set up for sts, s3, dynamodb, codeartifact, etc.
- All credentials (assumed role etc.) work and have been verified.
Remote State and Providers File
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "***"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-1"
    encrypt        = true
    dynamodb_table = "***"
  }
}
# Dynamically changes the role depending on which account is being modified
generate "providers" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "${local.env_vars.locals.aws_region}"

  assume_role {
    role_arn = "arn:aws:iam::$***"
  }

  endpoints {
    sts      = "https://sts.ap-southeast-1.amazonaws.com"
    s3       = "https://s3.ap-southeast-1.amazonaws.com"
    dynamodb = "https://dynamodb.ap-southeast-1.amazonaws.com"
  }
}
EOF
}
With Internet (turning off the firewall):
- I am able to run all the Terragrunt commands.
Without Internet
I only allow "registry.terraform.io" to pass the firewall
I am able to assume the role listed in providers via aws sts assume-role, and I can list the tables in dynamodb and files in the s3 bucket
I am able to run terragrunt init on my EC2 instance with the instance profile, I assume terragrunt does use the correct sts_endpoint
However when I run terragrunt apply, it hangs at the stage `DEBU[0022] Running command: terraform plan prefix=[***]
In my CloudTrail I do see that Terragrunt has assumed the username aws-go-sdk-1660077597688447480 for the event GetCallerIdentity, so I think the provider is able to assume the role that was declared in the providers block
I tried adding custom endpoints for sts, s3, and dynamodb, but it still hangs.
I suspect that terraform is still trying to use the internet when making the AWS SDK calls, which leads to terragrunt apply being stuck.
Is there a comprehensive list of endpoints I need to custom add, or a list of domains I should whitelist to be able to run terragrunt apply?
I set the environment variable TF_LOG to debug and, besides the registry.terraform.io domain, I was able to gather these:
github.com
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/go-tfe v1.0.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/hcl/v2 v2.12.0
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
2022-08-18T15:33:03.106-0600 [DEBUG] using github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734
sts.<region>.amazonaws.com
<resource>.<region>.amazonaws.com
You'll want to add those domains to your whitelist in the firewall settings. Something like *.<region>.amazonaws.com should do the trick; of course, you can be more restrictive and, rather than using a wildcard, specify the exact resources.
For reference: https://docs.aws.amazon.com/general/latest/gr/rande.html
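If you happen to manage the AWS Network Firewall with Terraform as well, a minimal sketch of a stateful domain allowlist rule group could look like the following. The resource name, capacity, and target domains are assumptions; use the domains your own TF_LOG/CloudTrail output shows:

resource "aws_networkfirewall_rule_group" "allow_aws_endpoints" {
  capacity = 100
  name     = "allow-aws-endpoints"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["TLS_SNI", "HTTP_HOST"]
        targets = [
          "registry.terraform.io",
          "github.com",
          # Leading dot allows all regional AWS endpoints in ap-southeast-1
          ".ap-southeast-1.amazonaws.com",
        ]
      }
    }
  }
}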

Terraform: How to obtain VPCE service name when it was dynamically created

I am trying to obtain (via Terraform) the DNS name of a dynamically created VPC endpoint using a data resource, but the problem I am facing is that the service name is not known until the resources have been created. See notes below.
Is there any way of retrieving this information? A hard-coded service name just doesn't work for automation.
For example, this will not work, as the service_name is dynamic:
resource "aws_transfer_server" "sftp_lambda" {
count = local.vpc_lambda_enabled
domain = "S3"
identity_provider_type = "AWS_LAMBDA"
endpoint_type = "VPC"
protocols = ["SFTP"]
logging_role = var.loggingrole
function = var.lambda_idp_arn[count.index]
endpoint_details = {
security_group_ids = var.securitygroupids
subnet_ids = var.subnet_ids
vpc_id = var.vpc_id
}
tags = {
NAME = "tf-test-transfer-server"
ENV = "test"
}
}
data "aws_vpc_endpoint" "vpce" {
count = local.vpc_lambda_enabled
vpc_id = var.vpc_id
service_name = "com.amazonaws.transfer.server.c-001"
depends_on = [aws_transfer_server.sftp_lambda]
}
output "transfer_server_dnsentry" {
value = data.aws_vpc_endpoint.vpce.0.dns_entry[0].dns_name
}
Note: The VPCE was created automatically from an AWS SFTP transfer server resource that was configured with endpoint type of VPC (not VPC_ENDPOINT which is now deprecated). I had no control over the naming of the endpoint service name. It was all created in the background.
Minimum AWS provider version: 3.69.0 required.
Here is an example CloudFormation script to set up an SFTP transfer server using Lambda as the IdP.
This will create the VPCE automatically.
So my aim here is to output the DNS name from the auto-created VPC endpoint using Terraform, if at all possible.
Links: example setup in CloudFormation; data source aws_vpc_endpoint; resource aws_transfer_server
I had a response from Hashicorp Terraform Support on this and this is what they suggested:
You can get the service name of the SFTP-server-created VPC endpoint from the exported service_name attribute of the vpc_endpoint_service resource [a].
NOTE: There are certain setups that causes AWS to create additional resources outside of what you configured. The AWS SFTP transfer service is one of them. This behavior is outside Terraform's control and more due to how AWS designed the service.
You can bring that VPC Endpoint back under Terraform's control however, by importing the VPC endpoint it creates on your behalf AFTER the transfer service has been created - via the VPCe ID [b].
If you want more ideas of pulling the service name from your current AWS setup, feel free to check out this example [c].
Hope that helps! Thank you.
[a] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service#service_name
[b] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#import
[c] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#gateway-load-balancer-endpoint-type
There is a way forward, like I shared earlier with the imports, but it is not going to be fully automated, unfortunately.
Optionally, you can use a provisioner [1] and the aws ec2 describe-vpc-endpoint-services --service-names command [2] to get the service names you need.
I'm afraid that's the last workaround I can provide. As explained in our doc here [3], as much as we'd like it to, Terraform isn't able to solve all use cases.
[1] https://www.terraform.io/language/resources/provisioners/remote-exec
[2] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-vpc-endpoint-services.html
[3] https://www.terraform.io/language/resources/provisioners/syntax
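For illustration, bringing the auto-created VPC endpoint under Terraform's control by importing it, as suggested above, would be a one-off command along these lines (the resource address and endpoint ID here are hypothetical):

terraform import aws_vpc_endpoint.sftp_vpce vpce-0123456789abcdef0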
I've finally found the solution:
data "aws_vpc_endpoint" "transfer_server_vpce" {
count = local.is_enabled
vpc_id = var.vpc_id
filter {
name = "vpc-endpoint-id"
values = ["${aws_transfer_server.transfer_server[0].endpoint_details[0].vpc_endpoint_id}"]
}
}
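With that in place, the DNS name can be output much like in the original attempt; a sketch assuming the data source above:

output "transfer_server_dnsentry" {
  value = data.aws_vpc_endpoint.transfer_server_vpce[0].dns_entry[0].dns_name
}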

Terraform - Unimplemented AWS API services - SES CreateCustomVerificationEmailTemplate

I've recently started using Terraform and I love it. However, in migrating an application to use Terraform, I have encountered an AWS API that doesn't appear to be implemented in Terraform's AWS provider.
What does one do in such a situation? Is there a way I can hack this into my Terraform code to call this API?
https://docs.aws.amazon.com/ses/latest/APIReference/API_CreateCustomVerificationEmailTemplate.html
I'm using the latest aws provider.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.5.0"
    }
  }
}
The only possibility I can think of is to use local-exec and call the missing API manually.
For example, you can use a null_resource (https://www.terraform.io/language/resources/provisioners/null_resource) and execute a bash script or the AWS CLI directly.
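A minimal sketch of what that could look like for the CreateCustomVerificationEmailTemplate call; the template name, addresses, content, and URLs below are placeholders:

resource "null_resource" "custom_verification_email_template" {
  provisioner "local-exec" {
    command = <<-EOT
      aws ses create-custom-verification-email-template \
        --template-name "my-verification-template" \
        --from-email-address "no-reply@example.com" \
        --template-subject "Please verify your email address" \
        --template-content "<html><body><p>Click the link to verify your address.</p></body></html>" \
        --success-redirection-url "https://example.com/verified" \
        --failure-redirection-url "https://example.com/verification-failed"
    EOT
  }
}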
As mentioned before, search https://github.com/hashicorp/terraform-provider-aws/issues for your issue, vote for it, or create a new feature request.

Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that, with this configuration, Terraform would choose the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I try terraform init it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I made config2 the default profile in my credentials file and removed the profile field from the provider block; that works, but I really need to use something like the first approach. What am I missing here?
Unfortunately you also need to provide the IAM credential configuration to the backend configuration as well as your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can independently use different IAM credentials, accounts, and regions for the S3 bucket and for the resources you will be managing with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that the backends are actually managed by the core Terraform binary rather than by the provider binaries, and the backend initialisation happens before pretty much anything else.
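To illustrate the first point, here is a sketch where the state bucket and the managed resources use different profiles (reusing the profile names from the question's credentials file):

terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config1" # credentials used only for reading/writing state
  }
}

provider "aws" {
  profile = "config2" # credentials used for the resources being managed
  region  = "eu-central-1"
}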
