Creating a Cosmos DB account via Terraform with "Replicate data globally" enabled times out after one hour with this status code:
StatusCode=202 -- Original Error: context deadline exceeded
Are there any solutions so Terraform can complete successfully?
We tried adding a timeouts block to the Terraform code; however, our provider version rejects it as unsupported.
Terraform code that is timing out:
resource "azurerm_resource_group" "resource_group" {
  name     = "${local.name}"
  location = "${var.azure_location}"
  tags     = "${var.tags}"
}

resource "azurerm_cosmosdb_account" "db" {
  name                      = "${local.name}"
  location                  = "${var.azure_location}"
  resource_group_name       = "${azurerm_resource_group.resource_group.name}"
  offer_type                = "Standard"
  kind                      = "GlobalDocumentDB"
  tags                      = "${var.tags}"
  enable_automatic_failover = false

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    location          = "${var.failover_azure_location}"
    failover_priority = 1
  }

  geo_location {
    location          = "${azurerm_resource_group.resource_group.location}"
    failover_priority = 0
  }
}
I'd expect Terraform to complete successfully, since the Cosmos DB account is created without error shortly after the Terraform timeout.
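For reference, the timeouts override we tried looked like the sketch below. Recent azurerm provider releases do accept a timeouts block on azurerm_cosmosdb_account (so upgrading the provider may be an option); the three-hour values here are illustrative, not a confirmed fix:

```hcl
resource "azurerm_cosmosdb_account" "db" {
  # ... same arguments as above ...

  # Supported in recent azurerm releases; older provider versions
  # reject this block. Values are illustrative.
  timeouts {
    create = "3h"
    update = "3h"
    delete = "3h"
  }
}
```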
Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure Data Factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
  count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0

  depends_on = [
    azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
    azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint,
  ]

  name                     = "AzureSqlDatabase10102022"
  data_factory_id          = azurerm_data_factory.datafactory.id
  integration_runtime_name = "AutoResolveIntegrationRuntime"
  use_managed_identity     = true
  connection_string = format(
    "Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
    var.subResourcesInfo.sqlServerName,
    var.subResourcesInfo.sqlDatabaseName,
  )
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, the ADF linked-service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, its JSON has no tenant field at all, and using that linked service in a dataflow/pipeline works from ADF to the DB.
The expected behavior: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which would make connectivity work.
I tried to reproduce the scenario in my environment.
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
  name                            = "kaADFexample"
  location                        = data.azurerm_resource_group.example.location
  resource_group_name             = data.azurerm_resource_group.example.name
  managed_virtual_network_enabled = true
}

resource "azurerm_storage_account" "example" {
  name                     = "kaaaexample"
  resource_group_name      = data.azurerm_resource_group.example.name
  location                 = data.azurerm_resource_group.example.location
  account_kind             = "BlobStorage"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_data_factory_managed_private_endpoint" "example" {
  name               = "example"
  data_factory_id    = azurerm_data_factory.example.id
  target_resource_id = azurerm_storage_account.example.id
  subresource_name   = "blob"
}

resource "azurerm_user_assigned_identity" "main" {
  depends_on          = [data.azurerm_resource_group.example]
  name                = "kasupports01-mid"
  resource_group_name = data.azurerm_resource_group.example.name
  location            = data.azurerm_resource_group.example.location
}

resource "azurerm_data_factory_integration_runtime_azure" "test" {
  name                    = "AutoResolveIntegrationRuntime"
  data_factory_id         = azurerm_data_factory.example.id
  location                = "AutoResolve"
  virtual_network_enabled = true
}

resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
  name                     = "kaexampleLS"
  data_factory_id          = azurerm_data_factory.example.id
  connection_string        = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
  use_managed_identity     = true
  integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name

  depends_on = [
    azurerm_data_factory_integration_runtime_azure.test,
    azurerm_data_factory_managed_private_endpoint.example,
  ]
}

output "id" {
  value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan, then terraform apply
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory just after the Terraform deployment completes.
Please check this known issue mentioned by #chgenzel in terraform-provider-azurerm issues | Github
ADF: Managed Identity
Linked service: Azure SQL
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry
I am trying to figure out a way to add tags to the failover DBs created via the Terraform registry resource azurerm_mssql_failover_group.
If I use the tags field mentioned in the Terraform documentation, it adds tags to the failover group, but it does not set tags on the failover databases it creates.
My code for the failover group is below:
resource "azurerm_mssql_failover_group" "sql-database-failover" {
  name      = "sqldatabasefailover1"
  server_id = "Id of Primary SQL Server"
  databases = [
    "Id of primary SQL dbs"
  ]

  partner_server {
    id = "Id of secondary (failover) SQL server"
  }

  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }

  tags = local.common_tags
}
I tried looking for a Terraform resource that could add tags to the already-present DB but could not find any. Any help will be appreciated.
Regards Tarun
I tried to reproduce the same in my environment.
I created local tags as below and tried to add them to the failover group.
Code referred from: azurerm_sql_failover_group | Resources | hashicorp/azurerm | Terraform Registry
Code:
locals {
  resource_tags = {
    project_name = "failovergroup"
    category     = "devbackupresource"
  }
}

resource "azurerm_mssql_server" "primary" {
  name                         = "ka-sql-primary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "southeastasia"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}

resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}

resource "azurerm_mssql_database" "db1" {
  name      = "kadb1"
  server_id = azurerm_mssql_server.primary.id
}

resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  tags      = local.resource_tags

  partner_server {
    id = azurerm_mssql_server.secondary.id
  }

  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
}
But the tags are not added to the secondary DB that the failover group creates.
You can instead set tags in the secondary server's "azurerm_mssql_server" resource block, as below.
locals {
  resource_tags = {
    ...
  }
}

resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "pa$$w0rd"
  tags                         = local.resource_tags
}

resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  ...
}
This added the tags to my secondary SQL server in Azure.
Edit:
tag-support-microsoftsql
See Auto-failover groups limitations. One may need to create tags manually.
Also note: database restore operations don't restore the tags of the original database.
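If the tags on the secondary database itself really are required, one option (my own suggestion, not from the thread) is the Azure/azapi provider's azapi_update_resource, which can patch properties onto a resource that Terraform does not manage directly. The API version string and the database ID construction below are illustrative assumptions:

```hcl
# Hypothetical sketch: patch tags onto the secondary database created by
# the failover group. Requires the Azure/azapi provider; check the API
# version against the Microsoft.Sql REST documentation before use.
resource "azapi_update_resource" "secondary_db_tags" {
  type        = "Microsoft.Sql/servers/databases@2021-11-01"
  resource_id = "${azurerm_mssql_server.secondary.id}/databases/kadb1"

  body = jsonencode({
    tags = local.resource_tags
  })

  # Ensure the failover group has replicated the database first.
  depends_on = [azurerm_mssql_failover_group.example]
}
```

Note that a subsequent restore or failover may still drop these tags, per the limitation above.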
Objective: creating a Cosmos DB account and a SQL DB in Azure with Terraform
What I tried:
resource "azurerm_cosmosdb_account" "cosmosaccount" {
  name                              = "cosmosdb"
  location                          = var.location
  resource_group_name               = var.rg_name
  offer_type                        = var.cosmosdb_offer_type
  kind                              = var.cosmosdb_kind
  is_virtual_network_filter_enabled = true
  ip_range_filter                   = var.ip_range_filter

  capabilities {
    name = "EnableTable"
  }

  enable_automatic_failover = false

  consistency_policy {
    consistency_level       = var.cosmosdb_consistancy_level
    max_interval_in_seconds = 5
    max_staleness_prefix    = 100
  }

  geo_location {
    location          = var.location
    failover_priority = 0
  }

  virtual_network_rule {
    id                                   = var.subnet_id
    ignore_missing_vnet_service_endpoint = true
  }
}

resource "azurerm_cosmosdb_sql_database" "comosdbsqldb" {
  name                = "driving"
  resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosaccount.name
  throughput          = 500

  depends_on = [azurerm_cosmosdb_account.cosmosaccount]
}

resource "azurerm_cosmosdb_sql_container" "mdhistcontainer" {
  name                  = "metadata_history"
  resource_group_name   = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
  account_name          = azurerm_cosmosdb_account.cosmosaccount.name
  database_name         = azurerm_cosmosdb_sql_database.comosdbsqldb.name
  partition_key_path    = "/definition/id"
  partition_key_version = 1
  throughput            = 500

  indexing_policy {
    indexing_mode = "consistent"

    included_path {
      path = "/*"
    }

    included_path {
      path = "/included/?"
    }

    excluded_path {
      path = "/excluded/?"
    }
  }

  unique_key {
    paths = ["/definition/idlong", "/definition/idshort"]
  }

  depends_on = [azurerm_cosmosdb_sql_database.comosdbsqldb]
}
Issue I am facing:
It creates the Cosmos DB account successfully but fails at creating the SQL DB.
Error I am getting:
Error: checking for presence of Sql Database: (Name "driving" / Database Account Name "cosmosaccount" /
Resource Group "xxxxxxxxxxxx"): documentdb.SQLResourcesClient#GetSQLDatabase:
Failure responding to request: StatusCode=405 -- Original Error: autorest/azure: Service returned an error.
Status=405 Code="MethodNotAllowed" Message="Requests for API sql are not supported for this account.
\r\nActivityId: 2d79ca83-9534-46e8-a7cf-cef1fd76e752, Microsoft.Azure.Documents.Common/2.14.0"
│
│ with module.cosmosdb.azurerm_cosmosdb_sql_database.comosdbsqldb,
│ on ../modules/cosmosdb/main.tf line 46, in resource "azurerm_cosmosdb_sql_database" "comosdbsqldb":
│ 46: resource "azurerm_cosmosdb_sql_database" "comosdbsqldb"
I have also given the DocumentDBContributor role to the service principal, but I still get this error. I am following the documentation below for the syntax, and it seems I am following it exactly:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_sql_database
Please suggest how to fix this error.
I believe you are getting this error because you are creating a Cosmos DB account targeting the Table API and then trying to create a SQL database with a custom name in that account:
capabilities {
  name = "EnableTable"
}
With the Table API, you cannot create a database with an arbitrary name. There can only be a single database in the account, and its name is fixed: it must be TablesDB.
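For a Table API account, the matching provider resource is azurerm_cosmosdb_table rather than azurerm_cosmosdb_sql_database/azurerm_cosmosdb_sql_container; a minimal sketch (the table name is illustrative):

```hcl
# Sketch: with capabilities { name = "EnableTable" }, create tables
# instead of SQL databases/containers. They land in the fixed TablesDB.
resource "azurerm_cosmosdb_table" "metadata_history" {
  name                = "metadatahistory"
  resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosaccount.name
  throughput          = 500
}
```

Alternatively, drop the EnableTable capability and set kind = "GlobalDocumentDB" if the intent really was a SQL (Core) API database named "driving".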
We are using Terraform version 0.12.19 and azurerm provider version 2.10.0 for deploying a Service Bus namespace, its queues, and their authorization rules. When we ran terraform apply, it created the Service Bus namespace and queue, but it threw the error below while creating the authorization rules.
When we checked the Azure portal, the authorization rules were present, and we could also find entries for both resources in the tf state file, with a status parameter of "tainted". We ran apply again to see if it would recreate/replace the existing resources, but it failed with the same error. We are now unable to proceed: even running plan for new resources fails at this point.
We even tried untainting them and running apply, but we still get this issue, even though the resources no longer have the tainted status in the tf state. Can you please help us with a solution? (We can't move to a newer version of the Terraform CLI, as many modules depend on it and it would also impact our production deployments.)
Error: Error making Read request on Azure ServiceBus Queue Authorization Rule "" (Queue "sample-check-queue" / Namespace "sample-check-bus" / Resource Group "My-RG"): servicebus.QueuesClient#GetAuthorizationRule: Invalid input: autorest/validation: validation failed: parameter=authorizationRuleName constraint=MinLength value="" details: value length must be greater than or equal to 1
azurerm_servicebus_queue_authorization_rule.que-sample-check-lsr: Refreshing state... [id=/subscriptions//resourcegroups/My-RG/providers/Microsoft.ServiceBus/namespaces/sample-check-bus/queues/sample-check-queue/authorizationrules/lsr]
Below is the service_bus.tf file code:
provider "azurerm" {
  version  = "=2.10.0"
  features {}
}

provider "azurerm" {
  features {}
  alias = "cloud_operations"
}

resource "azurerm_servicebus_namespace" "service_bus" {
  name                = "sample-check-bus"
  resource_group_name = "My-RG"
  location            = "West Europe"
  sku                 = "Premium"
  capacity            = 1
  zone_redundant      = true

  tags = {
    source = "terraform"
  }
}

resource "azurerm_servicebus_queue" "que-sample-check" {
  name                                    = "sample-check-queue"
  resource_group_name                     = "My-RG"
  namespace_name                          = azurerm_servicebus_namespace.service_bus.name
  dead_lettering_on_message_expiration    = true
  requires_duplicate_detection            = false
  requires_session                        = false
  enable_partitioning                     = false
  default_message_ttl                     = "P15D"
  lock_duration                           = "PT2M"
  duplicate_detection_history_time_window = "PT15M"
  max_size_in_megabytes                   = 1024
  max_delivery_count                      = 5
}

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
  name                = "lsr"
  resource_group_name = "My-RG"
  namespace_name      = azurerm_servicebus_namespace.service_bus.name
  queue_name          = azurerm_servicebus_queue.que-sample-check.name
  listen              = true
  send                = true
}

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
  name                = "AsyncReportBG-AsncRprt"
  resource_group_name = "My-RG"
  namespace_name      = azurerm_servicebus_namespace.service_bus.name
  queue_name          = azurerm_servicebus_queue.que-sample-check.name
  listen              = true
  send                = true
  manage              = false
}
I tried the Terraform code below to create the authorization rules and could create them successfully.
I followed azurerm_servicebus_queue_authorization_rule |
Resources | hashicorp/azurerm | Terraform Registry, using the latest
version of the hashicorp/azurerm provider.
This may even be related to the queue_name argument: the resource's
arguments changed from queue_name to queue_id in the 3.x.x versions.
provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
}

resource "azurerm_resource_group" "example" {
  name     = "xxxx"
  location = "xx"
}

provider "azurerm" {
  features {}
  alias = "cloud_operations"
}

resource "azurerm_servicebus_namespace" "service_bus" {
  name                = "sample-check-bus"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Premium"
  capacity            = 1
  zone_redundant      = true

  tags = {
    source = "terraform"
  }
}

resource "azurerm_servicebus_queue" "que-sample-check" {
  name = "sample-check-queue"
  # resource_group_name and namespace_name were replaced by namespace_id in 3.x
  namespace_id                            = azurerm_servicebus_namespace.service_bus.id
  dead_lettering_on_message_expiration    = true
  requires_duplicate_detection            = false
  requires_session                        = false
  enable_partitioning                     = false
  default_message_ttl                     = "P15D"
  lock_duration                           = "PT2M"
  duplicate_detection_history_time_window = "PT15M"
  max_size_in_megabytes                   = 1024
  max_delivery_count                      = 5
}

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-lsr" {
  name = "lsr"
  # resource_group_name/namespace_name/queue_name were replaced by queue_id in 3.x
  queue_id = azurerm_servicebus_queue.que-sample-check.id
  listen   = true
  send     = true
  manage   = false
}

resource "azurerm_servicebus_queue_authorization_rule" "que-sample-check-AsyncReportBG-AsncRprt" {
  name     = "AsyncReportBG-AsncRprt"
  queue_id = azurerm_servicebus_queue.que-sample-check.id
  listen   = true
  send     = true
  manage   = false
}
The authorization rules were created without error.
In your case, please also try changing the name of the "lsr" authorization rule to something longer, and try creating one rule at a time.
Thanks all for your inputs and suggestions.
The code is working fine now with terraform provider version 2.56.0 and Terraform CLI version 0.12.19. Please let me know if there are any concerns.
I have had to re-develop my infrastructure build pipeline to use local agent pools and to change the Ubuntu (bash) code to Windows (PowerShell) code.
I am now in the position where I am building out my infrastructure, and it is failing on the most basic of tasks.
I have created my SQL Server, and that seems to be OK. I have also got my logging-system infra done OK, but I am really struggling to build a DB on my SQL Server.
At its most basic, here is my code. The server builds OK.
resource "azurerm_mssql_server" "main" {
  name                         = local.sqlServerName
  resource_group_name          = local.resourceGroupName
  location                     = var.location
  version                      = "12.0"
  minimum_tls_version          = "1.2"
  administrator_login          = var.sql_administrator_login
  administrator_login_password = var.sql_administrator_login_password
  tags                         = var.tags
}

resource "azurerm_sql_active_directory_administrator" "main" {
  server_name         = azurerm_mssql_server.main.name
  resource_group_name = local.resourceGroupName
  login               = local.sql_ad_login
  tenant_id           = data.azurerm_client_config.current.tenant_id
  object_id           = local.object_id
}

resource "azurerm_sql_firewall_rule" "main" {
  name                = var.sql_firewall_rule
  resource_group_name = local.resourceGroupName
  server_name         = azurerm_mssql_server.main.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_mssql_database" "main" {
  name                        = "${local.raptSqlDatabaseName}-${var.environment}"
  server_id                   = azurerm_mssql_server.main.id
  min_capacity                = 0.5
  max_size_gb                 = 100
  zone_redundant              = false
  collation                   = "SQL_Latin1_General_CP1_CI_AS"
  sku_name                    = "GP_S_Gen5_2"
  auto_pause_delay_in_minutes = 60
  create_mode                 = "Default"
}
I get an error saying:
Error: waiting for creation of MsSql Database "xxx-xxx-xxx-xxx-Prod" (MsSql Server Name "xxx-xxx-xxx-prod" / Resource Group "rg-xxx-xxx-xxx"): Code="InternalServerError" Message="An unexpected error occured while processing the request.
Before I had to redesign it all in PowerShell instead of bash and use a local pool, I never had a problem and this worked fine.
I found the issue below. It reports the same error, but nothing else about it seems to match my situation. It is odd, because I can build the rest of my infra fine in the same main.tf file.
https://github.com/hashicorp/terraform-provider-azurerm/issues/13194
I am also noticing that Terraform output is not working:
Here is my output file:
output "sql_server_name" {
  value = azurerm_mssql_server.main.fully_qualified_domain_name
}

output "sql_server_user" {
  value = azurerm_mssql_server.main.administrator_login
}

output "sql_server_password" {
  value     = azurerm_mssql_server.main.administrator_login_password
  sensitive = true
}

#output "cl_sql_database_name" {
#  value = azurerm_mssql_database.cl.name
#}

#output "rapt_sql_database_name" {
#  value = azurerm_mssql_database.rapt.name
#}

output "app_insights_instrumentation_key" {
  value = azurerm_application_insights.main.instrumentation_key
}
Is there any chance this is linked?
Please use the latest Terraform version, i.e. 1.1.13, and azurerm provider, i.e. 2.92.0, as there were bugs in a previous Azure API version that resulted in a 500 error code; these have been fixed in the recent versions, as mentioned in this GitHub issue. Also note that outputs are only stored after a successful apply; if there is an error, they won't be stored.
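Pinning those versions can be expressed with a required_providers block, as sketched below (the constraints simply reflect the versions mentioned above):

```hcl
terraform {
  # Versions from the suggestion above; adjust constraints as needed.
  required_version = ">= 1.1.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.92.0"
    }
  }
}
```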
I tested the same code with the latest versions in both PowerShell and bash, as below:
provider "azurerm" {
  features {}
}

data "azurerm_client_config" "current" {}

locals {
  resourceGroupName   = "ansumantest"
  sqlServerName       = "ansumantestsql"
  sql_ad_login        = "sqladmin"
  object_id           = data.azurerm_client_config.current.object_id
  raptSqlDatabaseName = "ansserverdb"
}

variable "location" {
  default = "eastus"
}

variable "sql_administrator_login" {
  default = "4dm1n157r470r"
}

variable "sql_administrator_login_password" {
  default = "4-v3ry-53cr37-p455w0rd"
}

variable "sql_firewall_rule" {
  default = "ansumantestfirewall"
}

variable "environment" {
  default = "test"
}

resource "azurerm_mssql_server" "main" {
  name                         = local.sqlServerName
  resource_group_name          = local.resourceGroupName
  location                     = var.location
  version                      = "12.0"
  minimum_tls_version          = "1.2"
  administrator_login          = var.sql_administrator_login
  administrator_login_password = var.sql_administrator_login_password
}

resource "azurerm_sql_active_directory_administrator" "main" {
  server_name         = azurerm_mssql_server.main.name
  resource_group_name = local.resourceGroupName
  login               = local.sql_ad_login
  tenant_id           = data.azurerm_client_config.current.tenant_id
  object_id           = local.object_id
}

resource "azurerm_sql_firewall_rule" "main" {
  name                = var.sql_firewall_rule
  resource_group_name = local.resourceGroupName
  server_name         = azurerm_mssql_server.main.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}

resource "azurerm_mssql_database" "main" {
  name                        = "${local.raptSqlDatabaseName}-${var.environment}"
  server_id                   = azurerm_mssql_server.main.id
  min_capacity                = 0.5
  max_size_gb                 = 100
  zone_redundant              = false
  collation                   = "SQL_Latin1_General_CP1_CI_AS"
  sku_name                    = "GP_S_Gen5_2"
  auto_pause_delay_in_minutes = 60
  create_mode                 = "Default"
}
Output.tf
output "sql_server_name" {
  value = azurerm_mssql_server.main.fully_qualified_domain_name
}

output "sql_server_user" {
  value = azurerm_mssql_server.main.administrator_login
}

output "sql_server_password" {
  value     = azurerm_mssql_server.main.administrator_login_password
  sensitive = true
}

output "cl_sql_database_name" {
  value = azurerm_mssql_database.main.name
}
Output:
Updating Terraform to the latest version didn't really help...
I turned on logging with TF_LOG (Terraform's logging environment variable).
Watching the activity logs in the resource group, I noticed it was building the DB server, then there was an error with AAD, and then the DB server build failed. So I removed the AAD work from my main.tf, re-ran the pipeline, and it worked fine. Phew...
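For reference, if the AAD administrator is still needed later, recent azurerm releases allow setting it inline on the server instead of via the separate azurerm_sql_active_directory_administrator resource, which may avoid the ordering problem; a sketch reusing the locals above:

```hcl
resource "azurerm_mssql_server" "main" {
  # ... other arguments as above ...
  name                         = local.sqlServerName
  resource_group_name          = local.resourceGroupName
  location                     = var.location
  version                      = "12.0"
  administrator_login          = var.sql_administrator_login
  administrator_login_password = var.sql_administrator_login_password

  # Inline AAD admin block (supported in recent azurerm releases);
  # created atomically with the server rather than as a second resource.
  azuread_administrator {
    login_username = local.sql_ad_login
    object_id      = local.object_id
    tenant_id      = data.azurerm_client_config.current.tenant_id
  }
}
```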