AWS API Gateway Appending Query String Twice - Terraform

I am trying to set up a simple demo endpoint via AWS API Gateway. Below is the Terraform manifest that describes it.
It is essentially a GET /demo/hello/world endpoint which accepts a query string parameter return_to.
Terraform correctly creates all the resources in AWS.
However, when I then make a request to the gateway at /demo/hello/world?return_to=bbb, the backend service receives this request:
/demo/hello/world%3Freturn_to=bbb?return_to=bbb
As you can see, the ?return_to=bbb is URL-encoded by API Gateway as if it were part of the path, and then another copy of the query string is appended at the end.
Could anybody help me out with fixing this? I have been going through all the settings for a few hours and can't figure out what the issue is or how to fix it.
resource "aws_api_gateway_rest_api" "api" {
name = "origin-${var.name}.${data.terraform_remote_state.setup.outputs.domain-name}"
description = "Proxy to handle requests to our API test"
}
resource "aws_api_gateway_resource" "demo" {
depends_on = ["aws_api_gateway_rest_api.api"]
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
parent_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
path_part = "demo"
}
resource "aws_api_gateway_resource" "hello" {
depends_on = ["aws_api_gateway_rest_api.api", "aws_api_gateway_resource.demo"]
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
parent_id = "${aws_api_gateway_resource.demo.id}"
path_part = "hello"
}
resource "aws_api_gateway_resource" "world" {
depends_on = ["aws_api_gateway_rest_api.api", "aws_api_gateway_resource.hello"]
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
parent_id = "${aws_api_gateway_resource.hello.id}"
path_part = "world"
}
resource "aws_api_gateway_method" "hello-world" {
depends_on = ["aws_api_gateway_resource.world"]
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_resource.world.id}"
http_method = "GET"
authorization = "NONE"
request_parameters = {
"method.request.querystring.return_to" = true
}
}
resource "aws_api_gateway_integration" "hello-world" {
depends_on = ["aws_api_gateway_method.hello-world"]
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_resource.world.id}"
http_method = "${aws_api_gateway_method.hello-world.http_method}"
integration_http_method = "GET"
type = "HTTP"
uri = "http://${lookup(var.demo-map, var.environment)}/demo/hello/world"
connection_type = "VPC_LINK"
connection_id = "${data.aws_api_gateway_vpc_link.vpclink.id}"
request_parameters = {
"integration.request.querystring.return_to" = "method.request.querystring.return_to"
}
}

I had the same error. Keeping the query parameters in the aws_api_gateway_method and removing them from the aws_api_gateway_integration solved my problem.
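For reference, a minimal sketch of that fix using the resource names from the question; the query string mapping stays on the method, and the integration declares none:

resource "aws_api_gateway_method" "hello-world" {
  rest_api_id   = "${aws_api_gateway_rest_api.api.id}"
  resource_id   = "${aws_api_gateway_resource.world.id}"
  http_method   = "GET"
  authorization = "NONE"

  request_parameters = {
    "method.request.querystring.return_to" = true
  }
}

resource "aws_api_gateway_integration" "hello-world" {
  rest_api_id             = "${aws_api_gateway_rest_api.api.id}"
  resource_id             = "${aws_api_gateway_resource.world.id}"
  http_method             = "${aws_api_gateway_method.hello-world.http_method}"
  integration_http_method = "GET"
  type                    = "HTTP"
  uri                     = "http://${lookup(var.demo-map, var.environment)}/demo/hello/world"
  connection_type         = "VPC_LINK"
  connection_id           = "${data.aws_api_gateway_vpc_link.vpclink.id}"

  # No request_parameters block here: per the fix above, mapping the query
  # string again in the integration is what caused it to be appended twice.
}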

Related

Terraform Azure CDN Custom Domain Certificate not supported for this profile

I am trying to enable HTTPS for a CDN endpoint custom domain. When trying to submit the code, I get the following error:
CertificateType value provided is not supported for this profile for enabling https.
The custom domain code:
resource "azurerm_cdn_endpoint_custom_domain" "endpointfrontend" {
name = "mykappdev"
cdn_endpoint_id = azurerm_cdn_endpoint.cdnendpoint.id
host_name = "${azurerm_dns_cname_record.cnamefrontend.name}.${data.azurerm_dns_zone.dnszone.name}"
cdn_managed_https {
certificate_type = "Dedicated"
protocol_type = "ServerNameIndication"
}
}
The rest of the CDN code:
resource "azurerm_cdn_profile" "cdnprofile" {
name = "mycdn${var.environment}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = "Standard_Microsoft"
}
resource "azurerm_cdn_endpoint" "cdnendpoint" {
name = "${var.environment}-example"
profile_name = azurerm_cdn_profile.cdnprofile.name
location = azurerm_cdn_profile.cdnprofile.location
resource_group_name = data.azurerm_resource_group.rg.name
is_https_allowed = true
origin {
name = "${var.environment}-origin"
host_name = azurerm_storage_account.frontend.primary_web_host
}
depends_on = [
azurerm_cdn_profile.cdnprofile
]
}
data "azurerm_dns_zone" "dnszone" {
name = "my.app"
resource_group_name = "rg-my"
}
Everything works fine when doing it via the UI, so the problem has to be in the code.
Edit the azurerm_cdn_endpoint block:
resource "azurerm_cdn_endpoint" "cdnendpoint" {
name = "${var.environment}-example"
profile_name = azurerm_cdn_profile.cdnprofile.name
location = azurerm_cdn_profile.cdnprofile.location
resource_group_name = data.azurerm_resource_group.rg.name
is_https_allowed = true
origin {
name = "${var.environment}-origin"
host_name = azurerm_storage_account.frontend.primary_web_host
}
### Code added
delivery_rule {
name = "EnforceHTTPS"
order = "1"
request_scheme_condition {
operator = "Equal"
match_values = ["HTTP"]
}
url_redirect_action {
redirect_type = "Found"
protocol = "Https"
}
}
### End code added
depends_on = [
azurerm_cdn_profile.cdnprofile
]
}
Also, you can check this blog post: https://www.emilygorcenski.com/post/migrating-a-static-site-to-azure-with-terraform/
Hope this helps!
After enabling custom HTTPS once by hand in the Azure portal and then disabling it in the portal, it was possible to change it via Terraform.
I hope this helps!

Azure Apim. Create operation in schema from openapi definition

I have to create a schema so that I can then create operations in Azure APIM. The code that I know needs to be used is:
resource "azurerm_api_management_api_schema" "example" {
api_name = azurerm_api_management_api.sample-api.name
api_management_name = azurerm_api_management_api.sample-api.api_management_name
resource_group_name = azurerm_api_management_api.sample-api.resource_group_name
schema_id = "example-schema"
content_type = "application/vnd.oai.openapi.components+json"
value = <<JSON
{
"properties": {
"contentType": "application/vnd.oai.openapi.components+json",
"document": {
"components": {
"schemas": {
operation Get 1
operation Get 2
operation post 1.....
But how am I supposed to extract the GET and POST operations from this OpenAPI definition and include them in that schema?
https://github.com/RaphaelYoshiga/TerraformBasics/blob/master/conference-api.json
I haven't found a clear example so far.
Below is an example of an Azure APIM operation with the APIM service:
data "azurerm_api_management_api" "example" {
name = "search-api"
api_management_name = "search-api-management"
resource_group_name = "search-service"
revision = "2"
}
resource "azurerm_api_management_api_operation" "example" {
operation_id = "user-delete"
api_name = data.azurerm_api_management_api.example.name
api_management_name = data.azurerm_api_management_api.example.api_management_name
resource_group_name = data.azurerm_api_management_api.example.resource_group_name
display_name = "Delete User Operation"
method = "DELETE"
url_template = "/users/{id}/delete"
description = "This can only be done by the logged in user."
response {
status_code = 200
}
}
For further information check this.
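To connect such an operation to the schema from the question, one option is to reference the schema from a representation block on the operation. Below is a sketch, assuming the example-schema resource above and a hypothetical User type defined inside it:

resource "azurerm_api_management_api_operation" "get_user" {
  operation_id        = "user-get"
  api_name            = azurerm_api_management_api.sample-api.name
  api_management_name = azurerm_api_management_api.sample-api.api_management_name
  resource_group_name = azurerm_api_management_api.sample-api.resource_group_name
  display_name        = "Get User Operation"
  method              = "GET"
  url_template        = "/users/{id}"

  template_parameter {
    name     = "id"
    required = true
    type     = "string"
  }

  response {
    status_code = 200

    representation {
      content_type = "application/json"
      # Points at the azurerm_api_management_api_schema defined earlier;
      # "User" is a hypothetical type name assumed to exist in that schema.
      schema_id = azurerm_api_management_api_schema.example.schema_id
      type_name = "User"
    }
  }
}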

Unable to pass variable from .tf file to .json policy template

I'm a newbie to the Terraform world and am having a tough time passing variables from a .tf file to .json.
My sample Terraform Lambda function is as follows:
data "template_file" "task" {
template = file("./iam/grange_rest_dynlambda_policy.json")
vars = {
resource="${var.stage}_grange_dynamodb"
}
}
resource "aws_lambda_function" "grange_rest_dynlambda" {
function_name = "${var.stage}_grange_rest_dynlambda"
handler = "lambda/src/index.handler"
memory_size = "256"
timeout = 10
reserved_concurrent_executions = "-1"
filename = "${path.module}/../dist/lambda.zip"
role = aws_iam_role.grange_rest_dynlambda_iam_role.arn
runtime = "nodejs14.x"
publish = true
}
resource "aws_lambda_alias" "grange_rest_dynlambda_alias" {
depends_on = ["aws_lambda_function.grange_rest_dynlambda"]
name = var.stage
description = var.stage
function_name = aws_lambda_function.grange_rest_dynlambda.arn
function_version = aws_lambda_function.grange_rest_dynlambda.version
}
// Enable cloudwatch for lambda
resource "aws_cloudwatch_log_group" "example" {
name = "/aws/lambda/${var.stage}_grange_rest_dynlambda"
retention_in_days = 14
}
# See also the following AWS managed policy: AWSLambdaBasicExecutionRole
resource "aws_iam_policy" "lambda_logging" {
name = "lambda_logging"
path = "/"
description = "IAM policy for logging from a lambda"
policy = file("./iam/grange_rest_dynlambda_logging_policy.json")
}
// Lambda + DynamoDB
resource "aws_iam_role" "grange_rest_dynlambda_iam_role" {
name = "grange_rest_dynlambda_iam_role"
assume_role_policy = file("./iam/grange_rest_dynlambda_assume_policy.json")
}
resource "aws_iam_role_policy" "grange_rest_dynlambda_iam_policy" {
policy = file("./iam/grange_rest_dynlambda_policy.json")
role = aws_iam_role.grange_rest_dynlambda_iam_role.id
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.grange_rest_dynlambda_iam_role.name
policy_arn = aws_iam_policy.lambda_logging.arn
}
// API Gateway + Lambda
resource "aws_api_gateway_resource" "grange_rest_dynlambda_api" {
parent_id = aws_api_gateway_rest_api.grange_rest_api_gateway.root_resource_id
path_part = "grange_rest_dynlambda_api"
rest_api_id = aws_api_gateway_rest_api.grange_rest_api_gateway.id
}
resource "aws_api_gateway_method" "grange_rest_dynlambda_api_get" {
authorization = "NONE"
http_method = "GET"
resource_id = aws_api_gateway_resource.grange_rest_dynlambda_api.id
rest_api_id = aws_api_gateway_rest_api.grange_rest_api_gateway.id
}
resource "aws_api_gateway_method" "grange_rest_dynlambda_api_post" {
authorization = "NONE"
http_method = "POST"
resource_id = aws_api_gateway_resource.grange_rest_dynlambda_api.id
rest_api_id = aws_api_gateway_rest_api.grange_rest_api_gateway.id
}
resource "aws_lambda_permission" "apigw" {
action = "lambda:InvokeFunction"
statement_id = "AllowExecutionFromAPIGateway"
function_name = aws_lambda_function.grange_rest_dynlambda.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_api_gateway_rest_api.grange_rest_api_gateway.execution_arn}/*/*"
}
output "base_url" {
value = aws_api_gateway_deployment.apigwdeployment.invoke_url
}
I inject the policy from a JSON file and expect the "resource" variable to be passed into the JSON, but that's not how it works:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchWriteItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-2:741573820784:table/${resource}"
    }
  ]
}
What am I missing?
The template_file data source does not replace the variables in the actual file on disk. It reads the file, performs the substitution, and exposes the "rendered" output to the rest of your Terraform.
Therefore, you need to change your Terraform where you want to consume the "rendered" output:
Before:
resource "aws_iam_role_policy" "grange_rest_dynlambda_iam_policy" {
  policy = file("./iam/grange_rest_dynlambda_policy.json")
  role   = aws_iam_role.grange_rest_dynlambda_iam_role.id
}
After:
resource "aws_iam_role_policy" "grange_rest_dynlambda_iam_policy" {
  policy = data.template_file.task.rendered
  role   = aws_iam_role.grange_rest_dynlambda_iam_role.id
}
You need to access the rendered property of the template_file data source:
data.template_file.task.rendered
This will replace ${resource} with the value of "${var.stage}_grange_dynamodb".
Please note that the documentation recommends using the templatefile function instead of this data source.
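A minimal sketch of that approach, reusing the template path and variable from the question:

resource "aws_iam_role_policy" "grange_rest_dynlambda_iam_policy" {
  # templatefile renders the file and substitutes ${resource} in one step,
  # with no data source needed.
  policy = templatefile("./iam/grange_rest_dynlambda_policy.json", {
    resource = "${var.stage}_grange_dynamodb"
  })
  role = aws_iam_role.grange_rest_dynlambda_iam_role.id
}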

Terraform azurerm_api_management_api bumping revision does not apply all required changes

We are managing our API Management platform in Azure with Terraform. Sometimes we need to bump the revision, and when we do there are several issues:
The API that gets bumped is recreated (expected)
The recreated API is lost from all products it belonged to
The recreated API does not get any policies applied
So when the revision is bumped, the pipeline has to be run twice:
The API is recreated
The API is added to the product again and gets its policy applied
This is what the template looks like:
# Fetch existing API management instance
data "azurerm_api_management" "storemanager_api_management" {
  name                = local.api_management_name
  resource_group_name = local.api_management_resource_group_name
}

# Create the API
resource "azurerm_api_management_api" "api" {
  api_management_name   = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name   = local.api_management_resource_group_name
  name                  = "api"
  path                  = "api"
  display_name          = "api 1"
  protocols             = ["https"]
  revision              = var.api_revision
  subscription_required = true

  subscription_key_parameter_names {
    header = local.api_subscription_key_name
    query  = local.api_subscription_key_name
  }

  import {
    content_format = "swagger-link-json"
    content_value = format("https://%s.blob.core.windows.net/%s/%s/%s",
      data.azurerm_storage_account.storage_account.name,
      data.azurerm_storage_container.open_api_definition_storage_container.name,
      var.api_api_version,
      local.api_api_definition_name
    )
  }
}

# Create the product
resource "azurerm_api_management_product" "product" {
  api_management_name   = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name   = local.api_management_resource_group_name
  product_id            = "product"
  display_name          = "Product"
  description           = "Collection of APIs"
  subscription_required = true
  subscriptions_limit   = 1
  approval_required     = true
  published             = true
}

# Associate the API with the product
resource "azurerm_api_management_product_api" "product_api" {
  api_management_name = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name = local.api_management_resource_group_name
  product_id          = azurerm_api_management_product.product.product_id
  api_name            = azurerm_api_management_api.api.name
}

# Apply policy to the API
resource "azurerm_api_management_api_policy" "policy" {
  api_name            = azurerm_api_management_api.api.name
  api_management_name = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name = local.api_management_resource_group_name
  xml_content         = templatefile("./policy.tmpl", { x_functions_key_value = var.function_key, backend_name = azurerm_api_management_backend.generic_function_app_backend.name })
}
Is this a bug, or am I not using terraform correctly? How do I re-add the recreated API to the product and get its policy applied in one run of the pipeline?

Terraform API Gateway HTTP API - Getting the error Insufficient permissions to enable logging

My Terraform script for deploying an HTTP API looks like the following. I am getting the following error when I run it:
error creating API Gateway v2 stage: BadRequestException: Insufficient permissions to enable logging
Do I need to add something else to make it work?
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
name = "/aws/apigateway/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
retention_in_days = 7
tags = var.custom_tags
}
resource "aws_apigatewayv2_api" "demo_publish_api" {
name = "${var.location}-${var.custom_tags.Layer}-demo-publish-api"
description = "API to publish event payloads"
protocol_type = "HTTP"
tags = var.custom_tags
}
resource "aws_apigatewayv2_vpc_link" "demo_vpc_link" {
name = "${var.location}-${var.custom_tags.Layer}-demo-vpc-link"
security_group_ids = local.security_group_id_list
subnet_ids = local.subnet_ids_list
tags = var.custom_tags
}
resource "aws_apigatewayv2_integration" "demo_apigateway_integration" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
integration_type = "HTTP_PROXY"
connection_type = "VPC_LINK"
integration_uri = var.alb_listener_arn
connection_id = aws_apigatewayv2_vpc_link.demo_vpc_link.id
integration_method = "POST"
timeout_milliseconds = var.api_timeout_milliseconds
}
resource "aws_apigatewayv2_route" "demo_publish_api_route" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
route_key = "POST /api/event"
target = "integrations/${aws_apigatewayv2_integration.demo_apigateway_integration.id}"
}
resource "aws_apigatewayv2_stage" "demo_publish_api_default_stage" {
depends_on = [aws_cloudwatch_log_group.api_gateway_log_group]
api_id = aws_apigatewayv2_api.demo_publish_api.id
name = "$default"
auto_deploy = true
tags = var.custom_tags
route_settings {
route_key = aws_apigatewayv2_route.demo_publish_api_route.route_key
throttling_burst_limit = var.throttling_burst_limit
throttling_rate_limit = var.throttling_rate_limit
}
default_route_settings {
detailed_metrics_enabled = true
logging_level = "INFO"
}
access_log_settings {
destination_arn = aws_cloudwatch_log_group.api_gateway_log_group.arn
format = jsonencode({ "requestId":"$context.requestId", "ip": "$context.identity.sourceIp"})
}
}
I was stuck on this for a couple of days before reaching out to AWS support. If you have been deploying a lot of HTTP APIs, you may have hit the same issue, where the CloudWatch Logs resource policy that grants log delivery grows very large.
Run this AWS CLI command to find the associated CloudWatch Logs resource policy:
aws logs describe-resource-policies
Look for AWSLogDeliveryWrite20150319. You'll notice this policy has a large number of associated LogGroup resources. You have three options:
Adjust this policy by removing some of the potentially unused entries.
Change the resource list to "*".
Add another policy and split the resource records between the two.
Apply updates via this AWS CLI command:
aws logs put-resource-policy
Here's the command I ran, which sets the resource list to "*" for that policy:
aws logs put-resource-policy --policy-name AWSLogDeliveryWrite20150319 --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"AWSLogDeliveryWrite\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"delivery.logs.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":[\"*\"]}]}"
@Marcin Your initial comment about aws_api_gateway_account was correct. I added the following resources and now it is working fine:
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = var.apigw_cloudwatch_role_arn
}
data "aws_iam_policy_document" "demo_apigw_allow_manage_resources" {
version = "2012-10-17"
statement {
actions = [
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents"
]
resources = [
"*"
]
}
statement {
actions = [
"logs:CreateLogDelivery",
"logs:PutResourcePolicy",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:CreateLogGroup",
"logs:DescribeResourcePolicies",
"logs:GetLogDelivery",
"logs:ListLogDeliveries"
]
resources = [
"*"
]
}
}
data "aws_iam_policy_document" "demo_apigw_allow_assume_role" {
version = "2012-10-17"
statement {
effect = "Allow"
actions = [
"sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["apigateway.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy" "demo_apigw_allow_manage_resources" {
policy = data.aws_iam_policy_document.demo_apigw_allow_manage_resources.json
role = aws_iam_role.demo_apigw_cloudwatch_role.id
name = var.demo-apigw-manage-resources_policy_name
}
resource "aws_iam_role" "demo_apigw_cloudwatch_role" {
name = "demo_apigw_cloudwatch_role"
tags = var.custom_tags
assume_role_policy = data.aws_iam_policy_document.demo_apigw_allow_assume_role.json
}
You can route the CloudWatch log group (aws_cloudwatch_log_group) to /aws/vendedlogs/* and it will resolve the issue, or create an aws_api_gateway_account as shown above.
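A minimal sketch of the vended-logs option, reusing the log group from the question:

resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
  # Log groups named under /aws/vendedlogs/ are covered by a wildcard entry
  # in the CloudWatch Logs resource policy, so that policy stops growing per
  # log group (the cause of the error above).
  name              = "/aws/vendedlogs/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
  retention_in_days = 7
  tags              = var.custom_tags
}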
