Terraform - GCP Dataproc Component Gateway Enable Issue

I’m trying to create a Dataproc cluster in GCP using the Terraform resource google_dataproc_cluster, and I would like to enable the Component Gateway along with it. The documentation states that the snippet below should be used:
cluster_config {
  endpoint_config {
    enable_http_port_access = "true"
  }
}
Upon running terraform plan, I see the error "Error: Unsupported block type". I also tried using override_properties, and in the GCP Dataproc console I can see that the property is enabled, but the Component Gateway is still disabled. I want to understand whether there is an issue with the snippet given in the Terraform documentation, and whether there is an alternative I can use.
software_config {
  image_version = "${var.image_version}"
  override_properties = {
    "dataproc:dataproc.allow.zero.workers"       = "true"
    "dataproc:dataproc.enable_component_gateway" = "true"
  }
}
Below is the error from running terraform apply:
Error: Unsupported block type

  on main.tf line 35, in resource "google_dataproc_cluster" "dataproc_cluster":
  35:     endpoint_config {

Blocks of type "endpoint_config" are not expected here.
RESOURCE BLOCK:
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "${var.cluster_name}"
region = "${var.region}"
graceful_decommission_timeout = "120s"
labels = "${var.labels}"
cluster_config {
staging_bucket = "${var.staging_bucket}"
/*endpoint_config {
enable_http_port_access = "true"
}*/
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true" /* Has Been Added as part of Component Gateway Enabled which is already enabled in the endpoint_config*/
}
}
gce_cluster_config {
// network = "${var.network}"
subnetwork = "${var.subnetwork}"
zone = "${var.zone}"
//internal_ip_only = true
tags = "${var.network_tags}"
service_account_scopes = [
"cloud-platform"
]
}
master_config {
num_instances = "${var.master_num_instances}"
machine_type = "${var.master_machine_type}"
disk_config {
boot_disk_type = "${var.master_boot_disk_type}"
boot_disk_size_gb = "${var.master_boot_disk_size_gb}"
num_local_ssds = "${var.master_num_local_ssds}"
}
}
}
depends_on = [google_storage_bucket.dataproc_cluster_storage_bucket]
timeouts {
create = "30m"
delete = "30m"
}
}

Below is the snippet that worked for me to enable the Component Gateway in GCP:
provider "google-beta" {
project = "project_id"
}
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "clustername"
provider = google-beta
region = us-east1
graceful_decommission_timeout = "120s"
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}

This issue is discussed in this GitHub thread.
You can enable the Component Gateway in Cloud Dataproc by using the google-beta provider in both the Dataproc cluster resource and the root Terraform configuration.
Sample configuration:
# Terraform configuration goes here
provider "google-beta" {
  project = "my-project"
}

resource "google_dataproc_cluster" "mycluster" {
  provider                      = "google-beta"
  name                          = "mycluster"
  region                        = "us-central1"
  graceful_decommission_timeout = "120s"
  labels = {
    foo = "bar"
  }
  ...
  ...
}
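On Terraform 0.13 and later, the google-beta provider also has to be declared in the root module's required_providers block; a minimal sketch, with the version constraint omitted and the standard registry source address assumed:

terraform {
  required_providers {
    google-beta = {
      # Standard registry address for the beta Google provider.
      source = "hashicorp/google-beta"
    }
  }
}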

Related

Error when creating Kinesis Delivery Streams with OpenSearch

I created an OpenSearch domain using Terraform with the OpenSearch_2.3 engine. I also managed to create a Kinesis data stream without any issues, but when I want to add a delivery stream I need to configure elasticsearch_configuration for it, since I want to send data to OpenSearch. I get an error, so I am not sure what I am doing wrong: is something wrong with the aws_opensearch_domain resource, or is it Kinesis related?
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_2.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}
resource "aws_kinesis_stream" "stream" {
name = "terraform-kinesis-test"
shard_count = 1
retention_period = 48
stream_mode_details {
stream_mode = "PROVISIONED"
}
tags = {
Environment = "test"
}
}
resource "aws_kinesis_firehose_delivery_stream" "delivery_stream" {
name = "terraform-kinesis-firehose-delivery-stream"
destination = "elasticsearch"
s3_configuration {
role_arn = aws_iam_role.firehose_role.arn
bucket_arn = aws_s3_bucket.bucket.arn
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = aws_opensearch_domain.domain.arn
role_arn = aws_iam_role.firehose_role.arn
index_name = "test"
type_name = "test"
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
}
}
}
}
}
Error: elasticsearch domain `my-domain-arn` has an unsupported version: OpenSearch_2.3
How is it not supported? Supported Versions
I am new to Kinesis and OpenSearch, pardon my lack of understanding.
A few weeks ago, I had a similar problem as I thought 2.3 was supported. However, Kinesis Firehose actually does not support OpenSearch_2.3 (yet). I downgraded to OpenSearch_1.3 and it worked as expected. You can find more information in the upgrade guide.
Supported Upgrade Paths
resource "aws_opensearch_domain" "domain" {
domain_name = "test"
engine_version = "OpenSearch_1.3"
cluster_config {
instance_type = "r4.large.search"
}
tags = {
Domain = "TestDomain"
}
}

How to set deployment_mode when provisioning aws_mq_broker through terraform?

I am trying to provision an Amazon MQ broker through Terraform. I have written code for a multi-AZ deployment with the deployment type ACTIVE_STANDBY_MULTI_AZ. Now I want to provision the MQ broker in the test environment with the SINGLE_INSTANCE deployment type, hence I parameterized the deployment_mode field and pass the value in through variables.
This is my variables list:
variable "enviroment" {
default = "test"
}
variable "mq_multiAZ" {
default = "SINGLE_INSTANCE"
}
The code below works absolutely fine when I set the variable (mq_multiAZ) value to "ACTIVE_STANDBY_MULTI_AZ"; however, it does not work with the value "SINGLE_INSTANCE". Also note: "ACTIVE_STANDBY_MULTI_AZ" deployments require two subnets, so I cannot simply list a single subnet to make the "SINGLE_INSTANCE" deployment work.
mq_broker.tf:
resource "aws_mq_broker" "mymq_broker" {
broker_name = "${var.enviroment}-broker"
engine_type = "ActiveMQ"
engine_version = "5.15.9"
host_instance_type = "mq.t2.micro"
deployment_mode = "${var.mq_multiAZ}"
publicly_accessible = false
apply_immediately = false
security_groups = [aws_security_group.amazon_mq.id]
subnet_ids = [
data.aws_subnet.AppSubnetA.id,
data.aws_subnet.AppSubnetB.id,
]
user {
username = "${var.mq_master_user}"
password = "${var.mq_master_pwd}"
console_access = true
}
logs {
general = true
}
maintenance_window_start_time {
day_of_week = "SUNDAY"
time_of_day = "02:00"
time_zone = "UTC"
}
tags = {
Environment = "${var.enviroment}"
Name = "${var.enviroment}-broker"
}
}
The error I am getting for the "SINGLE_INSTANCE" deployment:
Error: BadRequestException: Specify a single subnet in [SINGLE_INSTANCE] deployment mode.
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "716aafdf-578a-4eb7-bfe4-f0f08998b6db"
  },
  ErrorAttribute: "subnetIds",
  Message_: "Specify a single subnet in [SINGLE_INSTANCE] deployment mode."
}

  with aws_mq_broker.empays_broker,
  on amazonMQ.tf line 1, in resource "aws_mq_broker" "empays_broker":
   1: resource "aws_mq_broker" "empays_broker" {
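Given the constraint described in the question (two subnets for ACTIVE_STANDBY_MULTI_AZ, exactly one for SINGLE_INSTANCE), one way to keep a single parameterized configuration is to make subnet_ids conditional on the same variable. This is only a sketch built from the data sources already shown, not a verified fix:

resource "aws_mq_broker" "mymq_broker" {
  # ... all other arguments as in mq_broker.tf above ...

  # Pass exactly one subnet for SINGLE_INSTANCE, and both subnets otherwise.
  subnet_ids = var.mq_multiAZ == "SINGLE_INSTANCE" ? [
    data.aws_subnet.AppSubnetA.id,
  ] : [
    data.aws_subnet.AppSubnetA.id,
    data.aws_subnet.AppSubnetB.id,
  ]
}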

Azure Kubernetes Services with Terraform load balancer shows "Internal Server Error"?

I'm trying to set up Azure Kubernetes Services with Terraform using the 'Azure Voting' app.
I'm using the code below, however I keep getting an "Internal Server Error" from the Load Balancer. Any idea what is going wrong here?
It seems like the Load Balancer to endpoint (pod) mapping is configured correctly, so I am not sure what is missing here.
main.tf
provider "azurerm" {
features {}
}
data "azurerm_kubernetes_cluster" "aks" {
name = "kubernetescluster"
resource_group_name = "myResourceGroup"
}
provider "kubernetes" {
host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
resource "kubernetes_namespace" "azurevote" {
metadata {
annotations = {
name = "azurevote-annotation"
}
labels = {
mylabel = "azurevote-value"
}
name = "azurevote"
}
}
resource "kubernetes_service" "example" {
metadata {
name = "terraform-example"
}
spec {
selector = {
app = kubernetes_pod.example.metadata.0.labels.app
}
session_affinity = "ClientIP"
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
}
}
resource "kubernetes_pod" "example" {
metadata {
name = "terraform-example"
labels = {
app = "azure-vote-front"
}
}
spec {
container {
image = "mcr.microsoft.com/azuredocs/azure-vote-front:v1"
name = "example"
}
}
}
variables.tf
variable "prefix" {
type = string
default = "ab"
description = "A prefix used for all resources in this example"
}
It seems that your infrastructure setup is OK; the only problem is the application itself. You create only the frontend app, but you also need to create the backend app.
You can see the deployment examples here.
You can also see here the exception that is raised when you run the frontend without the backend.
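For illustration, here is a minimal sketch of a backend in the same Terraform style as the question. The resource names, the Redis image, and the ALLOW_EMPTY_PASSWORD/REDIS settings are assumptions based on the upstream azure-vote sample manifests and may differ for your version of the app:

resource "kubernetes_pod" "backend" {
  metadata {
    name = "azure-vote-back"
    labels = {
      app = "azure-vote-back"
    }
  }
  spec {
    container {
      # Redis backend that the frontend expects to reach as "azure-vote-back" (assumed image).
      image = "mcr.microsoft.com/oss/bitnami/redis:6.0.8"
      name  = "azure-vote-back"
      env {
        name  = "ALLOW_EMPTY_PASSWORD"
        value = "yes"
      }
    }
  }
}

resource "kubernetes_service" "backend" {
  metadata {
    name = "azure-vote-back"
  }
  spec {
    selector = {
      app = kubernetes_pod.backend.metadata.0.labels.app
    }
    port {
      port        = 6379
      target_port = 6379
    }
  }
}

The frontend container then also needs an env block setting REDIS to "azure-vote-back" so it can reach this service.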

Unable to create a new Nutanix VM and assign it to a project

I'm trying to create a new machine on Nutanix with terraform v0.15.0 and assign the machine to an existing project I already have (created on the UI without terraform). Using the Nutanix documentation
(https://registry.terraform.io/providers/nutanix/nutanix/latest/docs/resources/virtual_machine#project_reference) , I was able to create a VM, but not assign it to the existing project.
Using this main.tf works; however, uncommenting the project_reference attribute results in the following error:
nutanix_virtual_machine.vm1[0]: Creating...

Error: error: {
  "api_version": "3.1",
  "code": 422,
  "message_list": [
    {
      "details": {
        "metadata": [
          "Additional properties are not allowed (u'project_reference' was unexpected)"
        ]
      },
      "message": "Request could not be processed.",
      "reason": "INVALID_REQUEST"
    }
  ],
  "state": "ERROR"
}

  on main.tf line 67, in resource "nutanix_virtual_machine" "vm1":
  67: resource "nutanix_virtual_machine" "vm1" {
Here's my code:
provider "nutanix" {
username = "user"
password = "pass"
port = 1234
endpoint = "ip"
insecure = true
wait_timeout = 10
}
data "nutanix_cluster" "cluster" {
name = "NTNXCluster"
}
data "nutanix_image" "ubuntu-clone" {
image_name = "Ubuntu-20.04-Server"
}
variable "counter" {
type = number
default = 1
}
resource "nutanix_virtual_machine" "vm1" {
name = "test-${count.index+1}"
count = var.counter
description = "desc"
num_vcpus_per_socket = 2
num_sockets = 2
memory_size_mib = 4096
guest_customization_sysprep = {}
cluster_uuid = "my_uuid"
nic_list {
subnet_uuid = "my_uuid"
}
#project_reference = {
# kind = "project"
# uuid = "my_uuid"
# name = "my_project"
#}
disk_list {
data_source_reference = {
kind = "image"
uuid = data.nutanix_image.ubuntu-clone.id
}
device_properties {
disk_address = {
device_index = 0
adapter_type = "SATA"
}
device_type = "DISK"
}
}
}
After Nutanix support asked me to run Terraform in debug mode, I found the issue.
In debug mode I saw that Terraform was using API calls that cannot be used against Nutanix Elements.
Creating a VM with a project can be done ONLY from Nutanix Prism, and I was using the Nutanix Elements endpoint instead.
Switching the provider from Nutanix Elements to Nutanix Prism, using the same Terraform main.tf, worked as expected.
Use:
project_reference = {
  kind = "project"
  uuid = your_id
}
Leave the project name out of the block; the ID alone is enough.
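For clarity, the change is in the provider block rather than in the VM resource; a minimal sketch, assuming Prism Central is reachable on the usual port 9440 (the endpoint value is a placeholder):

provider "nutanix" {
  username     = "user"
  password     = "pass"
  endpoint     = "prism-central-ip" # Prism Central, not a Prism Element cluster IP
  port         = 9440
  insecure     = true
  wait_timeout = 10
}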

Terraform API Gateway HTTP API - Getting the error Insufficient permissions to enable logging

My Terraform script for deploying an HTTP API looks like the following. I am getting the error below when I run it:
error creating API Gateway v2 stage: BadRequestException: Insufficient permissions to enable logging
Do I need to add something else to make it work?
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
name = "/aws/apigateway/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
retention_in_days = 7
tags = var.custom_tags
}
resource "aws_apigatewayv2_api" "demo_publish_api" {
name = "${var.location}-${var.custom_tags.Layer}-demo-publish-api"
description = "API to publish event payloads"
protocol_type = "HTTP"
tags = var.custom_tags
}
resource "aws_apigatewayv2_vpc_link" "demo_vpc_link" {
name = "${var.location}-${var.custom_tags.Layer}-demo-vpc-link"
security_group_ids = local.security_group_id_list
subnet_ids = local.subnet_ids_list
tags = var.custom_tags
}
resource "aws_apigatewayv2_integration" "demo_apigateway_integration" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
integration_type = "HTTP_PROXY"
connection_type = "VPC_LINK"
integration_uri = var.alb_listener_arn
connection_id = aws_apigatewayv2_vpc_link.demo_vpc_link.id
integration_method = "POST"
timeout_milliseconds = var.api_timeout_milliseconds
}
resource "aws_apigatewayv2_route" "demo_publish_api_route" {
api_id = aws_apigatewayv2_api.demo_publish_api.id
route_key = "POST /api/event"
target = "integrations/${aws_apigatewayv2_integration.demo_apigateway_integration.id}"
}
resource "aws_apigatewayv2_stage" "demo_publish_api_default_stage" {
depends_on = [aws_cloudwatch_log_group.api_gateway_log_group]
api_id = aws_apigatewayv2_api.demo_publish_api.id
name = "$default"
auto_deploy = true
tags = var.custom_tags
route_settings {
route_key = aws_apigatewayv2_route.demo_publish_api_route.route_key
throttling_burst_limit = var.throttling_burst_limit
throttling_rate_limit = var.throttling_rate_limit
}
default_route_settings {
detailed_metrics_enabled = true
logging_level = "INFO"
}
access_log_settings {
destination_arn = aws_cloudwatch_log_group.api_gateway_log_group.arn
format = jsonencode({ "requestId":"$context.requestId", "ip": "$context.identity.sourceIp"})
}
}
I was stuck on this for a couple of days before reaching out to AWS support. If you have been deploying a lot of HTTP APIs, then you might have run into the same issue, where the CloudWatch Logs resource policy grows very large.
Run this AWS CLI command to find the associated CloudWatch Logs resource policy:
aws logs describe-resource-policies
Look for AWSLogDeliveryWrite20150319. You'll notice this policy has a large number of associated LogGroup resources. You have three options:
1. Adjust this policy by removing some of the potentially unused entries.
2. Change the resource list to "*".
3. Add another policy and split the log group resources between the two.
Apply the update via this AWS CLI command:
aws logs put-resource-policy
Here's the command I ran to set the resource list to "*":
aws logs put-resource-policy --policy-name AWSLogDeliveryWrite20150319 --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"AWSLogDeliveryWrite\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"delivery.logs.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":[\"*\"]}]}"
@Marcin Your initial comment about aws_api_gateway_account was correct. I added the following resources and now it is working fine:
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = var.apigw_cloudwatch_role_arn
}
data "aws_iam_policy_document" "demo_apigw_allow_manage_resources" {
version = "2012-10-17"
statement {
actions = [
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents"
]
resources = [
"*"
]
}
statement {
actions = [
"logs:CreateLogDelivery",
"logs:PutResourcePolicy",
"logs:UpdateLogDelivery",
"logs:DeleteLogDelivery",
"logs:CreateLogGroup",
"logs:DescribeResourcePolicies",
"logs:GetLogDelivery",
"logs:ListLogDeliveries"
]
resources = [
"*"
]
}
}
data "aws_iam_policy_document" "demo_apigw_allow_assume_role" {
version = "2012-10-17"
statement {
effect = "Allow"
actions = [
"sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["apigateway.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy" "demo_apigw_allow_manage_resources" {
policy = data.aws_iam_policy_document.demo_apigw_allow_manage_resources.json
role = aws_iam_role.demo_apigw_cloudwatch_role.id
name = var.demo-apigw-manage-resources_policy_name
}
resource "aws_iam_role" "demo_apigw_cloudwatch_role" {
name = "demo_apigw_cloudwatch_role"
tags = var.custom_tags
assume_role_policy = data.aws_iam_policy_document.demo_apigw_allow_assume_role.json
}
You can route the CloudWatch log group (aws_cloudwatch_log_group) under /aws/vendedlogs/* and it will resolve the issue, or create an aws_api_gateway_account as described above.
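As an illustration of the first option, a minimal sketch that only changes the log group name prefix from the configuration in the question (everything else stays the same):

resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
  # The /aws/vendedlogs/ prefix keeps the CloudWatch Logs resource policy from growing with each new log group.
  name              = "/aws/vendedlogs/apigateway/${var.location}-${var.custom_tags.Layer}-demo-publish-api"
  retention_in_days = 7
  tags              = var.custom_tags
}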
