How do you specify an integration request with API Gateway and Kinesis Data Streams in Terraform?

I'm having trouble declaring an integration request resource in Terraform that allows an API Gateway method to push to a Kinesis data stream. I found this (See line 510), but I'm having a hard time understanding it, since it seems to specify which shard to post to, and I thought that was chosen dynamically based on some ID (e.g. user_id). Any help?
resource "aws_api_gateway_integration" "post_method_integration" {
rest_api_id = aws_api_gateway_rest_api.my_api.id
resource_id = aws_api_gateway_resource.the_resource.id
http_method = aws_api_gateway_method.the_method.http_method
integration_http_method = "POST"
type = "AWS"
uri = format(
"arn:%s:apigateway:%s:kinesis:action/PutRecord",
data.aws_partition.current.partition,
var.default_region
)
}
Edit: I solved it with
uri         = "arn:aws:apigateway:${var.default_region}:kinesis:action/PutRecord"
credentials = aws_iam_role.api_gw_kinesis_role.arn
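For anyone puzzled by the same thing: the shard is never named in the URI. Kinesis PutRecord chooses the shard by hashing the PartitionKey field of the request body, so the dynamic routing happens in the integration's request template, not the ARN. Here's a minimal sketch of that mapping; the stream name and the user_id query parameter are illustrative assumptions, not part of the original setup:

resource "aws_api_gateway_integration" "post_method_integration" {
  rest_api_id             = aws_api_gateway_rest_api.my_api.id
  resource_id             = aws_api_gateway_resource.the_resource.id
  http_method             = aws_api_gateway_method.the_method.http_method
  integration_http_method = "POST"
  type                    = "AWS"
  uri                     = "arn:aws:apigateway:${var.default_region}:kinesis:action/PutRecord"
  credentials             = aws_iam_role.api_gw_kinesis_role.arn

  # Map the incoming request to the shape of the Kinesis PutRecord API.
  # PartitionKey is taken from a hypothetical user_id query parameter, so
  # all records for the same user hash to the same shard.
  request_templates = {
    "application/json" = <<EOF
{
  "StreamName": "my-stream",
  "Data": "$util.base64Encode($input.body)",
  "PartitionKey": "$input.params('user_id')"
}
EOF
  }
}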

Related

Dynamic workspace selection when importing state from s3

I am using the Terraform data source below to import shared state from S3. Terraform gives me the error "No stored state was found for the given workspace in the given backend". I expected Terraform to pick up the workspace "dev-use1", since I have selected it with terraform workspace select dev-use1.
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "analyticsjobs.tfstate"
workspace_key_prefix = "pipeline/v2/db"
region = "us-east-1"
}
}
Version: Terraform v1.1.9 on darwin_arm64
After enabling debug logging with TF_LOG="DEBUG", I can see that the S3 API call returns a 404 error, and from the request XML I can see that the prefix is wrong.
As a workaround I have made the changes below to the data source. I'm not sure this is the recommended way of doing it, but it works. The docs are not very clear on this point: https://www.terraform.io/language/state/remote-state-data
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "pipeline/v2/db/${terraform.workspace}/analyticsjobs.tfstate"
region = "us-east-1"
}
}
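A less hacky alternative, if it works in your setup: the terraform_remote_state data source accepts a top-level workspace argument (it defaults to "default", which would explain the wrong prefix in the request). A sketch, which I haven't verified against this exact backend layout:

data "terraform_remote_state" "shared_jobs_state" {
  backend = "s3"

  # Read the state of the workspace the current run has selected,
  # instead of the "default" workspace.
  workspace = terraform.workspace

  config = {
    bucket               = "cicd-backend"
    key                  = "analyticsjobs.tfstate"
    workspace_key_prefix = "pipeline/v2/db"
    region               = "us-east-1"
  }
}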

AWS WAF not blocking requests using aws_wafregional_regex_pattern_set

I'm kind of surprised that I'm running into this problem. I created an aws_wafregional_regex_pattern_set to block incoming requests whose URI contains php. I expected all requests with php in them to be blocked, yet the requests are still making it through. Perhaps I'm misunderstanding what this resource actually does? I have attached some sample code below.
resource "aws_wafregional_rule" "block_uris_containining_php" {
name = "BlockUrisContainingPhp"
metric_name = "BlockUrisContainingPhp"
predicate {
data_id = "${aws_wafregional_regex_match_set.block_uris_containing_php.id}"
negated = false
type = "RegexMatch"
}
}
resource "aws_wafregional_regex_match_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_match_tuple {
field_to_match {
type = "URI"
}
regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.block_uris_containing_php.id}"
text_transformation = "NONE"
}
}
resource "aws_wafregional_regex_pattern_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_pattern_strings = [ "php$" ]
}
This code creates a string and regex matching condition in AWS WAF, so I know it's at least getting created. I used CloudWatch to check for blocked requests as I sent requests containing php to the load balancer, but each request went through successfully. Any help with this would be greatly appreciated.
I can't tell from the snippet, but did you add the rule to a web ACL and set the rule's action to block? A rule on its own matches traffic but doesn't act on it; see the sketch below.
Also, you should try using wafv2 instead of wafregional, as WAFv2 comes with new features and makes rules easier to express.
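For reference, here's a minimal sketch of the missing attachment, assuming the target is an ALB whose ARN lives in a hypothetical var.alb_arn. Note also that the pattern php$ only matches URIs that end in php; to match php anywhere in the URI, the pattern would just be php.

resource "aws_wafregional_web_acl" "acl" {
  name        = "BlockPhpAcl"
  metric_name = "BlockPhpAcl"

  # Allow any request that no rule matches.
  default_action {
    type = "ALLOW"
  }

  # Block whatever the regex rule matches.
  rule {
    priority = 1
    rule_id  = aws_wafregional_rule.block_uris_containining_php.id

    action {
      type = "BLOCK"
    }
  }
}

# WAF Regional rules take effect only once the web ACL is associated
# with a resource such as an ALB.
resource "aws_wafregional_web_acl_association" "alb" {
  resource_arn = var.alb_arn # hypothetical variable holding the ALB ARN
  web_acl_id   = aws_wafregional_web_acl.acl.id
}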

Terraform backend SignatureDoesNotMatch

I'm pretty new to Terraform, but I'm stuck trying to set up a Terraform backend to use S3.
INIT:
terraform init -backend-config="access_key=XXXXXXX" -backend-config="secret_key=XXXXX"
TERRAFORM BACKEND:
resource "aws_dynamodb_table" "terraform_state_lock" {
name = "terraform-lock"
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
resource "aws_s3_bucket" "bucket" {
bucket = "tfbackend"
}
terraform {
backend "s3" {
bucket = "tfbackend"
key = "terraform"
region = "eu-west-1"
dynamodb_table = "terraform-lock"
}
}
ERROR:
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
status code: 403, request id: xxxx-xxxx
I really am at a loss, because these same credentials are used for my Terraform infrastructure and work perfectly fine there. The IAM user on AWS also has permissions for both DynamoDB and S3.
Am I supposed to tell Terraform to use a different authentication method?
Remove .terraform/ and try again, and really double-check your credentials.
I regenerated the access keys and now it works.
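If regenerating isn't an option: a common cause of SignatureDoesNotMatch is a secret key with special characters getting mangled by shell quoting in the -backend-config flags. One way to sidestep that is to let the s3 backend read a named profile from ~/.aws/credentials instead; the profile name below is hypothetical:

terraform {
  backend "s3" {
    bucket         = "tfbackend"
    key            = "terraform"
    region         = "eu-west-1"
    dynamodb_table = "terraform-lock"

    # Read credentials from the "deploy" profile in ~/.aws/credentials
    # rather than passing keys on the command line.
    profile = "deploy"
  }
}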

Is there any specific resource tag in terraform to create a rule in eventbridge

e.g. we have a resource type for an event rule in CloudWatch:
aws_cloudwatch_event_rule
You will need to use a combination of aws_cloudwatch_event_rule, to match the events you want to process with EventBridge, and aws_cloudwatch_event_target, to forward them to another EventBridge event bus, to something like a Lambda function that can process the events directly, or to an SQS queue where a consumer can process them.
Here's a completely generic module for this purpose:
main.tf:
# ------------------------------------------------------------------------------
# CREATE CLOUDWATCH RULES FOR EACH LOGICAL ROUTE TO MATCH EVENTS OF INTEREST
# ------------------------------------------------------------------------------
resource "aws_cloudwatch_event_rule" "captures" {
  for_each = var.event_routes

  # Sanitize the route shorthand into a valid rule name. Note the /.../
  # delimiters around the pattern: without them, Terraform's replace()
  # performs a literal substring match rather than a regex match.
  name        = replace(replace(each.key, "/[^\\.\\-_A-Za-z0-9]+/", "-"), "_", "-")
  description = each.value.description

  event_pattern = jsonencode({
    "detail-type" = each.value.event_names
  })
}

# ------------------------------------------------------------------------------
# CONFIGURE EACH RULE TO FORWARD MATCHING EVENTS TO THE CORRESPONDING TARGET ARN
# ------------------------------------------------------------------------------
resource "aws_cloudwatch_event_target" "route" {
  for_each = var.event_routes

  target_id = each.key
  rule      = aws_cloudwatch_event_rule.captures[each.key].name
  arn       = each.value.target_arn
}
variables.tf:
variable "event_routes" {
description = "A map from a meaningful operator shorthand to the target ARN and list of the event names that CloudWatch should forward to them."
type = map(object({
description = string
event_names = list(string)
target_arn = string
}))
/*
event_routes = {
forward_to_kpi_tracker = {
description = "Forward events to KPI tracker"
event_names = [
"UserSignedUp",
"UserWatchedLessonVideo",
]
target_arn = "arn:aws:events:ca-central-1:000000000000:event-bus/default"
}
}
*/
}
outputs.tf:
output "event_rule_name" {
value = { for route_shorthand, route_details in var.event_routes :
route_shorthand => aws_cloudwatch_event_rule.captures[route_shorthand].name
}
}
output "event_rule_arn" {
value = { for route_shorthand, route_details in var.event_routes :
route_shorthand => aws_cloudwatch_event_rule.captures[route_shorthand].arn
}
}
The target can be any of the following:
EC2 instances
SSM Run Command
SSM Automation
AWS Lambda functions
Data streams in Amazon Kinesis Data Streams
Data delivery streams in Amazon Kinesis Data Firehose
Amazon ECS tasks
AWS Step Functions state machines
AWS Batch jobs
AWS CodeBuild projects
Pipelines in AWS CodePipeline
Amazon Inspector assessment templates
Amazon SNS topics
Amazon SQS queues, including FIFO queues
The default event bus of another AWS account
From the PutTargets API documentation.
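To tie it together, here's a minimal sketch of how the module might be consumed; the source path is a hypothetical location for the files above:

module "event_routes" {
  source = "./modules/event-routes" # hypothetical module path

  event_routes = {
    forward_to_kpi_tracker = {
      description = "Forward events to KPI tracker"
      event_names = ["UserSignedUp", "UserWatchedLessonVideo"]
      target_arn  = "arn:aws:events:ca-central-1:000000000000:event-bus/default"
    }
  }
}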

GitHub webhook created twice when using Terraform aws_codebuild_webhook

I'm creating an AWS CodeBuild project using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
project_name = "${aws_codebuild_project.abc-web-pull-build.name}"
branch_filter = "master"
}
resource "github_repository_webhook" "webhook" {
name = "web"
repository = "${var.github_repo}"
active = true
events = ["pull_request"]
configuration {
url = "${aws_codebuild_webhook.webhook.payload_url}"
content_type = "json"
insecure_ssl = false
secret = "${aws_codebuild_webhook.webhook.secret}"
}
}
For some reason two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with pull_request (the only one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
But then I'm in a pickle, because there is no other way to acquire the payload_url the GitHub webhook requires; currently I take it from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here; I'd appreciate any suggestions.
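One possible explanation, offered as an assumption rather than a confirmed answer: when the CodeBuild project's source type is GITHUB and CodeBuild is connected to the repository via OAuth, aws_codebuild_webhook registers the hook on GitHub by itself, so the explicit github_repository_webhook creates the duplicate; the two-resource example in the docs is aimed at GitHub Enterprise, where CodeBuild cannot reach the repository. Under that assumption, a sketch of the plain-GitHub setup keeps only:

# With an OAuth-connected GitHub source, this resource alone creates the
# hook on GitHub; no github_repository_webhook resource is needed.
resource "aws_codebuild_webhook" "webhook" {
  project_name  = "${aws_codebuild_project.abc-web-pull-build.name}"
  branch_filter = "master"
}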
