Error when creating Kinesis Delivery Streams with OpenSearch - terraform

I created an OpenSearch domain using Terraform with the OpenSearch_2.3 engine. I also managed to create Kinesis data streams without any issues, but when I add a delivery stream I need to configure elasticsearch_configuration because I want to send data to OpenSearch. I get an error and I am not sure what I am doing wrong: is something wrong with the aws_opensearch_domain resource, or is it Kinesis related?
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_2.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}
resource "aws_kinesis_stream" "stream" {
name = "terraform-kinesis-test"
shard_count = 1
retention_period = 48
stream_mode_details {
stream_mode = "PROVISIONED"
}
tags = {
Environment = "test"
}
}
resource "aws_kinesis_firehose_delivery_stream" "delivery_stream" {
name = "terraform-kinesis-firehose-delivery-stream"
destination = "elasticsearch"
s3_configuration {
role_arn = aws_iam_role.firehose_role.arn
bucket_arn = aws_s3_bucket.bucket.arn
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = aws_opensearch_domain.domain.arn
role_arn = aws_iam_role.firehose_role.arn
index_name = "test"
type_name = "test"
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
}
}
}
}
}
Error: elasticsearch domain `my-domain-arn` has an unsupported version: OpenSearch_2.3
How is it not supported? It is listed under Supported Versions.
I am new to Kinesis and OpenSearch, pardon my lack of understanding.

A few weeks ago, I had a similar problem as I thought 2.3 was supported. However, Kinesis Firehose actually does not support OpenSearch_2.3 (yet). I downgraded to OpenSearch_1.3 and it worked as expected. You can find more information in the upgrade guide.
Supported Upgrade Paths
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_1.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}
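If your AWS provider is recent enough, Firehose also exposes a native opensearch destination via an opensearch_configuration block, which may be worth checking before downgrading. A rough sketch only: the exact block layout differs between provider major versions, and whether a given engine version is accepted is decided by the Firehose service, not Terraform.
# Sketch only: assumes a recent AWS provider that supports destination = "opensearch".
# Verify the accepted engine versions against the current AWS documentation.
resource "aws_kinesis_firehose_delivery_stream" "delivery_stream" {
  name        = "terraform-kinesis-firehose-delivery-stream"
  destination = "opensearch"

  opensearch_configuration {
    domain_arn = aws_opensearch_domain.domain.arn
    role_arn   = aws_iam_role.firehose_role.arn
    index_name = "test"

    # In AWS provider v5+ the S3 backup configuration is nested here;
    # older 4.x versions used a top-level s3_configuration block instead.
    s3_configuration {
      role_arn           = aws_iam_role.firehose_role.arn
      bucket_arn         = aws_s3_bucket.bucket.arn
      buffering_size     = 10
      buffering_interval = 400
      compression_format = "GZIP"
    }
  }
}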

Related

Terraform plan not working for a long time with AWS S3

I am using Terraform to deploy the backend code to AWS. While configuring the Terraform environment, I ran terraform init and it works fine. However, the next command, terraform plan, hangs for a long time: it doesn't print anything, and no matter how long I wait I see no message in the CLI.
Here is my main.tf code.
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
default_tags {
tags = {
Owner = "Example Owner"
Project = "Example"
}
}
}
module "template_files" {
source = "hashicorp/dir/template"
base_dir = "react-app/build"
template_vars = {
vpc_id = "vpc-abc123123123"
}
}
resource "aws_s3_bucket" "test_tf_bucket" {
bucket = local.test_tf_creds.bucket
website {
index_document = "index.html"
}
tags = {
Bucket = "Example Terraform Bucket"
}
}
resource "aws_s3_bucket_object" "build_test_tf" {
for_each = module.template_files.files
bucket = local.test_tf_creds.bucket
key = each.key
content_type = each.value.content_type
source = each.value.source_path
content = each.value.content
etag = each.value.digests.md5
tags = {
Bucket-Object = "Example Bucket Object"
}
}
I would appreciate any help from you developers to solve this problem.
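One way to see where plan is actually stuck is to turn on Terraform's debug logging and watch which provider or API call it stalls on; a minimal sketch using Terraform's standard TF_LOG/TF_LOG_PATH environment variables:
# Write verbose provider and API logs to a file while planning,
# so you can see the last call made before the hang.
TF_LOG=DEBUG TF_LOG_PATH=./terraform-plan.log terraform plan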

Terraform- GCP Data Proc Component Gateway Enable Issue

I'm trying to create a Dataproc cluster in GCP using the Terraform resource google_dataproc_cluster, and I would like to enable the Component Gateway along with it. The documentation states that the snippet below should be used:
cluster_config {
  endpoint_config {
    enable_http_port_access = "true"
  }
}
When I run terraform plan, I get the error "Error: Unsupported block type". I also tried using override_properties; in the GCP Dataproc console I can see that the property is enabled, but the Component Gateway is still disabled. I want to understand whether there is an issue with the block given in the Terraform documentation, and whether there is an alternative I can use instead.
software_config {
  image_version = "${var.image_version}"
  override_properties = {
    "dataproc:dataproc.allow.zero.workers"       = "true"
    "dataproc:dataproc.enable_component_gateway" = "true"
  }
}
Below is the error from running terraform apply.
Error: Unsupported block type
on main.tf line 35, in resource "google_dataproc_cluster" "dataproc_cluster":
35: endpoint_config {
Blocks of type "endpoint_config" are not expected here.
RESOURCE BLOCK:
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "${var.cluster_name}"
region = "${var.region}"
graceful_decommission_timeout = "120s"
labels = "${var.labels}"
cluster_config {
staging_bucket = "${var.staging_bucket}"
/*endpoint_config {
enable_http_port_access = "true"
}*/
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true" /* Has Been Added as part of Component Gateway Enabled which is already enabled in the endpoint_config*/
}
}
gce_cluster_config {
// network = "${var.network}"
subnetwork = "${var.subnetwork}"
zone = "${var.zone}"
//internal_ip_only = true
tags = "${var.network_tags}"
service_account_scopes = [
"cloud-platform"
]
}
master_config {
num_instances = "${var.master_num_instances}"
machine_type = "${var.master_machine_type}"
disk_config {
boot_disk_type = "${var.master_boot_disk_type}"
boot_disk_size_gb = "${var.master_boot_disk_size_gb}"
num_local_ssds = "${var.master_num_local_ssds}"
}
}
}
depends_on = [google_storage_bucket.dataproc_cluster_storage_bucket]
timeouts {
create = "30m"
delete = "30m"
}
}
Below is the snippet that worked for me to enable the Component Gateway in GCP:
provider "google-beta" {
project = "project_id"
}
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "clustername"
provider = google-beta
region = us-east1
graceful_decommission_timeout = "120s"
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}
This issue is discussed in this Git thread.
You can enable the Component Gateway in Cloud Dataproc by using the google-beta provider in the Dataproc cluster resource and in the root Terraform configuration.
Sample configuration:
# Terraform configuration goes here
provider "google-beta" {
  project = "my-project"
}

resource "google_dataproc_cluster" "mycluster" {
  provider                      = "google-beta"
  name                          = "mycluster"
  region                        = "us-central1"
  graceful_decommission_timeout = "120s"

  labels = {
    foo = "bar"
  }
  ...
  ...
}
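On newer versions of the google provider, endpoint_config may also be accepted without the beta provider; a minimal sketch, assuming a recent GA provider version (check the provider changelog to confirm before dropping google-beta):
# Sketch only: assumes a google provider version in which endpoint_config
# is available in the GA provider; otherwise keep using google-beta as above.
resource "google_dataproc_cluster" "dataproc_cluster" {
  name   = "clustername"
  region = "us-east1"

  cluster_config {
    endpoint_config {
      enable_http_port_access = true
    }
  }
}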

Terraform order of ip filter rules for IoT Hub

I want to deploy multiple Azure cloud resources with Terraform. My problem is with the Terraform script for an Azure IoT Hub, especially the IP restriction rules. According to the documentation I can do something like this:
resource "azurerm_iothub" "iothubname" {
name = "somename"
resource_group_name = azurerm_resource_group.someresourcegroup
location = azurerm_resource_group.somelocation
sku {
name = "B2"
capacity = "2"
}
fallback_route {
enabled = true
}
ip_filter_rule {
action = "Accept"
ip_mask ="some_ip_range_1"
name = "some_name_1"
}
ip_filter_rule {
action = "Accept"
ip_mask ="some_ip_range_2"
name = "some_name_2" }
ip_filter_rule {
action = "Accept"
ip_mask ="some_ip_range_3"
name = "some_name_3"
}
ip_filter_rule {
action = "Reject"
ip_mask ="0.0.0.0/0"
name = "everything_else"
}
}
Everything works fine, except that the ordering of the IP rules ends up different from the order above, and in my case I definitely want the last rule to have the lowest priority on Azure, since Azure IoT Hub applies the filter rules in order.
How can I enforce a certain ordering of the ip_filter_rule blocks?
You can try to use dynamic blocks
https://www.terraform.io/docs/configuration/expressions/dynamic-blocks.html
File main.tf
resource "azurerm_iothub" "iothubname" {
name = "somename"
resource_group_name = azurerm_resource_group.someresourcegroup
location = azurerm_resource_group.somelocation
sku {
name = "B2"
capacity = "2"
}
fallback_route {
enabled = true
}
dynamic "ip_filter_rule" {
for_each = var.ip_filter_rule_list
content {
action = ip_filter_rule.value.action
ip_mask = ip_filter_rule.value.ip_mask
name = ip_filter_rule.value.name
}
}
}
File variables.tf
variable "ip_filter_rule_list" {
type = list
default = []
}
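The dynamic block emits the rules in the order they appear in the list, so you control the ordering by how you write the variable value. For example, the rules from the question could be passed in like this (same placeholder names and ranges as above):
# File terraform.tfvars (placeholder values matching the question)
ip_filter_rule_list = [
  { action = "Accept", ip_mask = "some_ip_range_1", name = "some_name_1" },
  { action = "Accept", ip_mask = "some_ip_range_2", name = "some_name_2" },
  { action = "Accept", ip_mask = "some_ip_range_3", name = "some_name_3" },
  { action = "Reject", ip_mask = "0.0.0.0/0",       name = "everything_else" },
]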
Update: this bug is fixed in the azurerm provider v2.57.0:
https://github.com/terraform-providers/terraform-provider-azurerm/pull/11390

Terraform API Gateway HTTP API - Getting the error Insufficient permissions to enable logging

My Terraform script for deploying an HTTP API looks like the following. I am getting this error when I run it:
error creating API Gateway v2 stage: BadRequestException: Insufficient permissions to enable logging
Do I need to add something else to make it work?
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
name = "/aws/apigateway/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
retention_in_days = 7
tags = var.custom_tags
}
resource "aws_apigatewayv2_api" "demo_publish_api" {
name = "${var.location}-${var.custom_tags.Layer}-demo-publish-api"
description = "API to publish event payloads"
protocol_type = "HTTP"
tags = var.custom_tags
}
resource "aws_apigatewayv2_vpc_link" "demo_vpc_link" {
name = "${var.location}-${var.custom_tags.Layer}-demo-vpc-link"
security_group_ids = local.security_group_id_list
subnet_ids = local.subnet_ids_list
tags = var.custom_tags
}
resource "aws_apigatewayv2_integration" "demo_apigateway_integration" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
integration_type = "HTTP_PROXY"
connection_type = "VPC_LINK"
integration_uri = var.alb_listener_arn
connection_id = aws_apigatewayv2_vpc_link.demo_vpc_link.id
integration_method = "POST"
timeout_milliseconds = var.api_timeout_milliseconds
}
resource "aws_apigatewayv2_route" "demo_publish_api_route" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
route_key = "POST /api/event"
target = "integrations/${aws_apigatewayv2_integration.demo_apigateway_integration.id}"
}
resource "aws_apigatewayv2_stage" "demo_publish_api_default_stage" {
depends_on = [aws_cloudwatch_log_group.api_gateway_log_group]
api_id = aws_apigatewayv2_api.demo_publish_api.id
name = "$default"
auto_deploy = true
tags = var.custom_tags
route_settings {
route_key = aws_apigatewayv2_route.demo_publish_api_route.route_key
throttling_burst_limit = var.throttling_burst_limit
throttling_rate_limit = var.throttling_rate_limit
}
default_route_settings {
detailed_metrics_enabled = true
logging_level = "INFO"
}
access_log_settings {
destination_arn = aws_cloudwatch_log_group.api_gateway_log_group.arn
format = jsonencode({ "requestId":"$context.requestId", "ip": "$context.identity.sourceIp"})
}
}
I was stuck on this for a couple of days before reaching out to AWS support. If you have been deploying a lot of HTTP APIs, you might have run into the same issue: the CloudWatch Logs resource policy that allows log delivery grows very large.
Run this AWS CLI command to find the associated CloudWatch Logs resource policy:
aws logs describe-resource-policies
Look for AWSLogDeliveryWrite20150319. You'll notice this policy has a large number of associated LogGroup resources. You have three options:
Adjust this policy by removing some of the potentially unused entries.
Change the resource list to "*"
Add another policy and split the resource entries between the two policies.
Apply updates via this AWS CLI command:
aws logs put-resource-policy
Here's the command I ran to set the policy's resources to "*":
aws logs put-resource-policy --policy-name AWSLogDeliveryWrite20150319 --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"AWSLogDeliveryWrite\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"delivery.logs.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":[\"*\"]}]}"
@Marcin, your initial comment about aws_api_gateway_account was correct. I added the following resources and now it is working fine:
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = var.apigw_cloudwatch_role_arn
}
data "aws_iam_policy_document" "demo_apigw_allow_manage_resources" {
version = "2012-10-17"
statement {
actions = [
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents"
]
resources = [
"*"
]
}
statement {
actions = [
"logs:CreateLogDelivery",
"logs:PutResourcePolicy",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:CreateLogGroup",
"logs:DescribeResourcePolicies",
"logs:GetLogDelivery",
"logs:ListLogDeliveries"
]
resources = [
"*"
]
}
}
data "aws_iam_policy_document" "demo_apigw_allow_assume_role" {
version = "2012-10-17"
statement {
effect = "Allow"
actions = [
"sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["apigateway.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy" "demo_apigw_allow_manage_resources" {
policy = data.aws_iam_policy_document.demo_apigw_allow_manage_resources.json
role = aws_iam_role.demo_apigw_cloudwatch_role.id
name = var.demo-apigw-manage-resources_policy_name
}
resource "aws_iam_role" "demo_apigw_cloudwatch_role" {
name = "demo_apigw_cloudwatch_role"
tags = var.custom_tags
assume_role_policy = data.aws_iam_policy_document.demo_apigw_allow_assume_role.json
}
Alternatively, you can place your CloudWatch log group (aws_cloudwatch_log_group) under /aws/vendedlogs/* and that will resolve the issue, or create an aws_api_gateway_account resource.
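A minimal sketch of the vended-logs naming approach, reusing the log group from the question (the name suffix is just an example):
# AWS treats the /aws/vendedlogs/ prefix specially for vended log delivery,
# which avoids growing the account's CloudWatch Logs resource policy
# for every new access-log group.
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
  name              = "/aws/vendedlogs/apigateway/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
  retention_in_days = 7
  tags              = var.custom_tags
}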

Use for_each to create multiple disks and their snapshots using a list input

I am writing Terraform code to create multiple disks in GCP. The aim is to keep the code DRY and take a list as input.
My variable app_disks has the following definition:
variable "app_disks" {
type = list(object({
name = string
size = number
}))
}
And in my main.tf, I'm setting the variable like this:
app_disks = [
  {
    name = "loki"
    size = 200
  },
  {
    name = "repo"
    size = 100
  }
]
And in my module, my disk.tf looks like this
locals {
  app_disk_map = {
    for disk in var.app_disks : "${disk.name}" => disk
  }
}

resource "google_compute_resource_policy" "app_disk_backup" {
  for_each = local.app_disk_map

  name = "${each.value.name}-backup"

  snapshot_schedule_policy {
    schedule {
      hourly_schedule {
        hours_in_cycle = 8
        start_time     = "04:00"
      }
    }
    retention_policy {
      max_retention_days    = 14
      on_source_disk_delete = "APPLY_RETENTION_POLICY"
    }
  }
}
resource "google_compute_disk" "app_disk" {
for_each = local.app_disk_map
provider = google-beta
name = each.value.name
zone = "${var.region}-a"
size = each.value.size
resource_policies = [each.google_compute_resource_policy.app_disk_backup[${each.value.name}-backup].self_link]
}
What I'm not sure about is how to link the resource_policies of the disk to its relevant google_compute_resource_policy.
I've tried combinations like
each.google_compute_resource_policy.app_disk_backup[${each.value.name}-backup].self_link
each.google_compute_resource_policy.app_disk_backup."${each.value.name}-backup".self_link
But none of them seem to work.
I am not completely sure I understand the problem (as the error output is missing), but from what I understood you want the following reference: google_compute_resource_policy.app_disk_backup[each.key].self_link, so the resource would look something like this:
resource "google_compute_disk" "app_disk" {
for_each = local.app_disk_map
....
resource_policies = [google_compute_resource_policy.app_disk_backup[each.key].self_link]
}
This will reference the same key that was used to create the dependent resource, giving a 1:1 mapping between the two sets of resources.
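Putting it together with the rest of the original disk.tf, a minimal sketch of the corrected resource might look like this:
resource "google_compute_disk" "app_disk" {
  for_each = local.app_disk_map
  provider = google-beta

  name = each.value.name
  zone = "${var.region}-a"
  size = each.value.size

  # each.key is the disk name used to build app_disk_map, so this picks the
  # matching policy instance created by the for_each above.
  resource_policies = [google_compute_resource_policy.app_disk_backup[each.key].self_link]
}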
