terraform create global rds instance won't work - terraform

I’m trying to get this tutorial to work: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_global_cluster#attributes-reference
The configuration I’m currently using looks like this, and it fails when I run terraform plan.
resource "aws_rds_global_cluster" "example" {
global_cluster_identifier = "global-test"
database_name = "example"
engine = "aurora-postgresql"
engine_version = "12.6"
}
resource "aws_rds_cluster" "primary" {
# provider = aws.primary
count = "${local.resourceCount == "2" ? "1" : "0"}"
identifier = "kepler-example-global-cluster-${lookup(var.kepler-env-name, var.env)}-${count.index}"
database_name = "example"
engine = "aurora-postgresql"
engine_version = "12.6"
vpc_security_group_ids = ["${var.rds_security_group_survey_id}"]
# cluster_identifier = aws_rds_global_cluster.example_global_cluster.id
master_username = "root"
master_password = "somepass123"
# master_password = var.credential
global_cluster_identifier = aws_rds_global_cluster.example.id
db_subnet_group_name = var.db_subnet_group_id
}
This is the error I’m currently getting:
Error: Error loading modules: module openworld: Error parsing .terraform/modules/d375d2d1997599063f4fb9e7587fec26/main.tf: At 63:31: Unknown token: 63:31 IDENT aws_rds_global_cluster.example.id
I’m confused about why this error comes up.
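For what it's worth, an "Unknown token: ... IDENT" message is an HCL1 parse error, which usually means an older Terraform (0.11.x) binary is parsing the module: HCL1 only understands resource references wrapped in interpolation syntax, while the bare aws_rds_global_cluster.example.id reference is 0.12+ syntax. A minimal sketch of the 0.11-compatible form, assuming that is the version in play:
resource "aws_rds_cluster" "primary" {
  # ... other arguments as above ...
  global_cluster_identifier = "${aws_rds_global_cluster.example.id}"
  db_subnet_group_name      = "${var.db_subnet_group_id}"
}
If you are on Terraform 0.12 or later the bare reference parses fine, so it is worth checking which Terraform version is actually running terraform plan against this module.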

Related

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, we get tenant="" in the ADF linked service JSON, which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, its JSON has no tenant="" field, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behaviour: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, and connectivity should then work.
I tried to reproduce the scenario in my environment.
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory just after the Terraform deployment is done.
Please also check this known issue, mentioned by #chgenzel in terraform-provider-azurerm issues | GitHub.
ADF: Managed Identity / Linked service: Azure SQL
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry

Unable to create Service Bus Authorization Rule in Azure

We are using Terraform version 0.12.19 and azurerm provider version 2.10.0 to deploy a Service Bus namespace, its queues, and authorization rules. When we ran terraform apply it created the Service Bus namespace and queue, but it threw the error below when creating the authorization rules.
When we checked the Azure portal, the authorization rules were present, and in the tf state file we could also find entries for both resources, each with a Status of "tainted". We ran apply again to see whether it would recreate/replace the existing resources, but it failed with the same error. Now we cannot proceed further: even when we run plan to create new resources, it fails at this point.
We even tried to untaint the resources and run apply, but we still hit the issue even though the resources no longer have the tainted status in the tf state. Can you please help us with a solution? (We can't move to a newer version of the Terraform CLI because so many modules depend on it, and it would impact our production deployments as well.)
Error: Error making Read request on Azure ServiceBus Queue Authorization Rule "" (Queue "sample-check-queue" / Namespace "sample-check-bus" / Resource Group "My-RG"): servicebus.QueuesClient#GetAuthorizationRule: Invalid input: autorest/validation: validation failed: parameter=authorizationRuleName constraint=MinLength value="" details: value length must be greater than or equal to 1
azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr: Refreshing state... [id=/subscriptions//resourcegroups/My-RG/providers/Microsoft.ServiceBus/namespaces/sample-check-bus/queues/sample-check-queue/authorizationrules/lsr]
Below is the service_bus.tf file code:
provider "azurerm" {
version = "=2.10.0"
features {}
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = "My-RG"
location = "West Europe"
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
name = "lsr"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
resource_group_name = "My-RG"
namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
I have tried the Terraform code below to create the authorization rules and could create them successfully.
I followed azurerm_servicebus_queue_authorization_rule | Resources | hashicorp/azurerm | Terraform Registry, using the latest version of the hashicorp/azurerm provider.
This may also be related to the queue_name argument: these resources' arguments changed to queue_id in the 3.x.x versions.
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
}
resource "azurerm_resource_group" "example" {
name = "xxxx"
location = "xx"
}
provider "azurerm" {
features {}
alias = "cloud_operations"
}
resource "azurerm_servicebus_namespace" "service_bus" {
name = "sample-check-bus"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
sku = "Premium"
capacity = 1
zone_redundant = true
tags = {
source = "terraform"
}
}
resource "azurerm_servicebus_queue" "que-sample-check" {
name = "sample-check-queue"
#resource_group_name = "My-RG"
namespace_id = azurerm_servicebus_namespace.service_bus.id
#namespace_name =
azurerm_servicebus_namespace.service_bus.name
dead_lettering_on_message_expiration = true
requires_duplicate_detection = false
requires_session = false
enable_partitioning = false
default_message_ttl = "P15D"
lock_duration = "PT2M"
duplicate_detection_history_time_window = "PT15M"
max_size_in_megabytes = 1024
max_delivery_count = 05
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr"
{
name = "lsr"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check- AsyncReportBG-AsncRprt" {
name = "AsyncReportBG-AsncRprt"
#resource_group_name = "My-RG"
#namespace_name = azurerm_servicebus_namespace.service_bus.name
queue_id = azurerm_servicebus_queue.que-sample-check.id
#queue_name = azurerm_servicebus_queue.que-sample-check.name
listen = true
send = true
manage = false
}
The authorization rules were created without error.
Please try changing the name of the authorization rule "lsr" to something longer, and also try creating one rule at a time in your case, e.g. with a targeted apply as sketched below.
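A minimal example of applying one rule at a time with Terraform's -target flag (using the resource addresses from the configuration above):
terraform apply -target=azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr
terraform apply -target=azurerm_servicebus_queue_authorization_rule.que-sample-check-AsyncReportBG-AsncRprt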
Thanks all for your inputs and suggestions.
The code is working fine now with terraform provider version 2.56.0 and Terraform CLI version 0.12.19. Please let me know if there are any concerns.

Error in creating resource in Open Telekom cloud using terraform

The code snippet below should work as per the documentation:
resource "opentelekomcloud_compute_instance_v2" "ecs_1" {
region = var.region
availability_zone = var.availability_zone
name = "${var.ecs_name}-notags"
image_id = var.image_id
flavor_id = var.flavor_id
key_pair = var.key_name
security_groups = var.security_groups
network {
uuid = var.subnet_id
}
}
Output:
Error: error fetching OpenTelekomCloud CloudServers tags: Resource not found: [GET https://ecs.eu-de.otc.t-systems.com/v1
I'm not sure why this error appears despite having all the required permissions.

Terraform eks datasource vpc subnets security group

I have a Terraform script which creates an EKS cluster.
I have another Terraform script which creates an RDS instance, and I want this RDS instance to be created in the same VPC as the EKS cluster.
data "aws_eks_cluster" "example" {
name = "example"
}
output "subnets" {
value = "${data.aws_eks_cluster.example.vpc_config.vpc_id}"
}
here is my rds.tf
resource "aws_db_instance" "rds" {
allocated_storage = "${var.rds_allocated_storage}"
storage_type = "${var.rds_storage_type}"
engine = "${var.rds_engine}"
engine_version = "${var.rds_engine_version}"
instance_class = "${var.rds_instance_class}"
name = "${var.project_name}_${var.env}_data_rds${var.rds_engine}"
username = "dbadmin"
password = "${var.rds_db_password}"
multi_az = false
skip_final_snapshot = true
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet.name}"
vpc_security_group_ids = "${var.rds_vpc_security_group_ids}"
identifier = "${var.project_name}-${var.env}-data-rds${var.rds_engine}"
I want to get db_subnet_group_name and vpc_security_group_ids from my EKS cluster and not from variables.tf.
I believe you need something like
vpc_security_group_ids = "${data.aws_eks_cluster.example.vpc_config.0.security_group_ids}"

what's better way to store master_username and master_password in terraform rds configuration?

I need help with a better way to store the master_password in https://www.terraform.io/docs/providers/aws/r/rds_cluster.html. Currently I mask it with XXX before committing to GitHub. Could you please advise a better way to store this? Thanks
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "bar"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
Here is my solution.
(In any case, you can't hide the password in the *.tfstate file, so you should store the tfstate files in S3 with encryption enabled.)
terraform plan -var "master_username=xxxx" -var "master_password=xxxx"
Your tf file can then define them as variables.
variable "master_username" {}
variable "master_password" {}
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
database_name = "mydb"
master_username = "${var.master_username}"
master_password = "${var.master_password}"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
When running terraform plan/apply in a CI/CD pipeline, I used to put the passwords in S3 (encrypted) or SSM (now you can also put them in AWS Secrets Manager), and write a wrapper script that fetches these key/values first and feeds them to the terraform command.
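As an alternative to a wrapper script, the same values can be read directly with a data source; here is a rough sketch assuming the credentials were stored in SSM under hypothetical parameter names:
# Hypothetical parameter names; adjust to wherever the credentials are stored
data "aws_ssm_parameter" "master_username" {
  name = "/rds/master_username"
}

data "aws_ssm_parameter" "master_password" {
  name = "/rds/master_password"
}

# then in the aws_rds_cluster resource:
#   master_username = "${data.aws_ssm_parameter.master_username.value}"
#   master_password = "${data.aws_ssm_parameter.master_password.value}"
Keep in mind the values still end up in the state file either way, so the note above about encrypting the state still applies.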
Preferably, use AWS Secrets Manager and don't store any sensitive data in the state file at all (even though your state file might be encrypted, it's good practice to avoid such implementations).
Here I've created a secrets.tf that's responsible for generating a random password and storing it in AWS Secrets Manager.
resource "random_password" "master"{
length = 16
special = true
override_special = "_!%^"
}
resource "aws_secretsmanager_secret" "password" {
name = "${var.environment}-${var.service_name}-primary-cluster-password"
}
resource "aws_secretsmanager_secret_version" "password" {
secret_id = aws_secretsmanager_secret.password.id
secret_string = random_password.master.result
}
Now, to get the value during terraform apply, I use a data source (this fetches the data from AWS on the fly during the apply):
data "aws_secretsmanager_secret" "password" {
name = "${var.environment}-${var.service_name}-primary-cluster-password"
}
data "aws_secretsmanager_secret_version" "password" {
secret_id = data.aws_secretsmanager_secret.password.id
}
I then reference them in my Aurora cluster as follows:
resource "aws_rds_global_cluster" "main" {
global_cluster_identifier = "${var.environment}-${var.service_name}-global-cluster"
engine = "aurora"
engine_version = "5.6.mysql_aurora.1.22.2"
database_name = "${var.environment}-${var.service_name}-global-cluster"
}
resource "aws_rds_cluster" "primary" {
provider = aws
engine = aws_rds_global_cluster.main.engine
engine_version = aws_rds_global_cluster.main.engine_version
cluster_identifier = "${var.environment}-${var.service_name}-primary-cluster"
master_username = "dbadmin"
master_password = data.aws_secretsmanager_secret_version.password
database_name = "example_db"
global_cluster_identifier = aws_rds_global_cluster.main.id
db_subnet_group_name = "default"
}
Hopefully this helps.
