Objective: Create a Cosmos DB account and a SQL database in Azure with Terraform
What I tried:
resource "azurerm_cosmosdb_account" "cosmosaccount" {
name = "cosmosdb"
location = var.location
resource_group_name = var.rg_name
offer_type = var.cosmosdb_offer_type
kind = var.cosmosdb_kind
is_virtual_network_filter_enabled = "true"
ip_range_filter = var.ip_range_filter
capabilities {
name = "EnableTable"
}
enable_automatic_failover = false
consistency_policy {
consistency_level = var.cosmosdb_consistancy_level
max_interval_in_seconds = 5
max_staleness_prefix = 100
}
geo_location {
location = var.location
failover_priority = 0
}
virtual_network_rule {
id = var.subnet_id
ignore_missing_vnet_service_endpoint = true
}
}
resource "azurerm_cosmosdb_sql_database" "comosdbsqldb" {
name = "driving"
resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
account_name = azurerm_cosmosdb_account.cosmosaccount.name
throughput = 500
depends_on = [azurerm_cosmosdb_account.cosmosaccount]
}
resource "azurerm_cosmosdb_sql_container" "mdhistcontainer" {
name = "metadata_history"
resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
account_name = azurerm_cosmosdb_account.cosmosaccount.name
database_name = azurerm_cosmosdb_sql_database.comosdbsqldb.name
partition_key_path = "/definition/id"
partition_key_version = 1
throughput = 500
indexing_policy {
indexing_mode = "consistent"
included_path {
path = "/*"
}
included_path {
path = "/included/?"
}
excluded_path {
path = "/excluded/?"
}
}
unique_key {
paths = ["/definition/idlong", "/definition/idshort"]
}
depends_on = [azurerm_cosmosdb_sql_database.comosdbsqldb]
}
Issue I am facing:
It creates the Cosmos DB account successfully but fails when creating the SQL database.
Error I am getting:
Error: checking for presence of Sql Database: (Name "driving" / Database Account Name "cosmosaccount" /
Resource Group "xxxxxxxxxxxx"): documentdb.SQLResourcesClient#GetSQLDatabase:
Failure responding to request: StatusCode=405 -- Original Error: autorest/azure: Service returned an error.
Status=405 Code="MethodNotAllowed" Message="Requests for API sql are not supported for this account.
\r\nActivityId: 2d79ca83-9534-46e8-a7cf-cef1fd76e752, Microsoft.Azure.Documents.Common/2.14.0"
│
│ with module.cosmosdb.azurerm_cosmosdb_sql_database.comosdbsqldb,
│ on ../modules/cosmosdb/main.tf line 46, in resource "azurerm_cosmosdb_sql_database" "comosdbsqldb":
│ 46: resource "azurerm_cosmosdb_sql_database" "comosdbsqldb"
I have also given the DocumentDBContributor role to the service principal, but I still get this error. I am following the documentation below for the syntax, and as far as I can tell my code matches it exactly:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_sql_database
Please suggest how to fix this error.
I believe you are getting this error because you are creating a Cosmos DB account targeting the Table API and then trying to create a SQL database with a custom name in that account:

capabilities {
  name = "EnableTable"
}

With the Table API, you cannot create a database with an arbitrary name. There can only be a single database in the account, and its name is fixed: it must be TablesDB.
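A minimal sketch of the two ways out of this, reusing the variable and resource names from the question (an illustration, not verified drop-in code): either drop the EnableTable capability so the account uses the SQL (Core) API and azurerm_cosmosdb_sql_database works, or keep the Table API and create a table instead of a SQL database.

# Option A (assumption: you actually want the SQL/Core API):
# omit the EnableTable capability so SQL database/container resources are accepted.
resource "azurerm_cosmosdb_account" "cosmosaccount" {
  name                = "cosmosdb"
  location            = var.location
  resource_group_name = var.rg_name
  offer_type          = var.cosmosdb_offer_type
  kind                = "GlobalDocumentDB" # SQL (Core) API

  consistency_policy {
    consistency_level = var.cosmosdb_consistancy_level
  }

  geo_location {
    location          = var.location
    failover_priority = 0
  }
}

# Option B (assumption: you want to keep the Table API account):
# create a table; the backing database is always the fixed TablesDB.
resource "azurerm_cosmosdb_table" "metadata_history" {
  name                = "metadata_history"
  resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosaccount.name
  throughput          = 400
}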
Related
Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure Data Factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, we get tenant="" in the ADF linked service JSON file, which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON file, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behavior is: if we don't provide the tenant_id parameter in the Terraform code, the JSON should also not contain tenant="", which would then allow connectivity to work.
I tried to reproduce the scenario in my environment:
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the Data Factory just after the Terraform deployment is done.
Please check this known issue mentioned by chgenzel in the terraform-provider-azurerm issues on GitHub.
ADF: Managed Identity
Linked service: Azure SQL
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry
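If the provider keeps writing the empty tenant property, one workaround worth sketching (an assumption on my side, not something verified against this exact issue) is to define the linked service as a custom linked service so the JSON payload is fully under your control and contains no tenant field:

# Hypothetical workaround sketch: a custom linked service whose typeProperties
# contain only the connection string, so no empty "tenant" property is emitted.
resource "azurerm_data_factory_linked_custom_service" "sqldatabase_linked_service" {
  name            = "AzureSqlDatabase10102022"
  data_factory_id = azurerm_data_factory.datafactory.id
  type            = "AzureSqlDatabase"

  type_properties_json = jsonencode({
    connectionString = format(
      "Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
      var.subResourcesInfo.sqlServerName,
      var.subResourcesInfo.sqlDatabaseName
    )
  })

  integration_runtime {
    name = "AutoResolveIntegrationRuntime"
  }
}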
I am trying to figure out a way to add tags to the failover DBs created with the Terraform resource azurerm_mssql_failover_group.
If I use the tags field mentioned in the Terraform documentation, it adds tags to the failover group, but it does not set tags on the failover databases that are created.
My code for the failover group is below:
resource "azurerm_mssql_failover_group" "sql-database-failover" {
name = "sqldatabasefailover1"
server_id = "Id of Primary SQL Server"
databases = [
"Id of primary SQL dbs"
]
partner_server {
id = "Id of secondary (failover) SQL server"
}
read_write_endpoint_failover_policy {
mode = "Automatic"
grace_minutes = 60
}
tags = local.common_tags
}
I tried looking for a Terraform resource that can help us add tags to the already existing DB, but could not find any. Any help will be appreciated.
Regards, Tarun
I tried to reproduce the same in my environment.
I created local tags as below and tried to add them to the failover group.
Code referred from: azurerm_sql_failover_group | Resources | hashicorp/azurerm | Terraform Registry
Code:
locals {
  resource_tags = {
    project_name = "failovergroup",
    category     = "devbackupresource"
  }
}

resource "azurerm_mssql_server" "primary" {
  name                         = "ka-sql-primary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "southeastasia"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}

resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}

resource "azurerm_mssql_database" "db1" {
  name      = "kadb1"
  server_id = azurerm_mssql_server.primary.id
}

resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  tags      = local.resource_tags

  partner_server {
    id = azurerm_mssql_server.secondary.id
  }

  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
}
But the tags are not added to the secondary DB that is created for the failover group.
You can set tags in the secondary server's "azurerm_mssql_server" resource block as below.
locals {
  resource_tags = {
    ...
  }
}

resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "pa$$w0rd"
  tags                         = local.resource_tags
}

resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  ...
}
This added the tags to my secondary SQL server in Azure.
Edit:
tag-support-microsoftsql
See the Auto-failover group limitations; one may need to create the tags manually.
Also note: database restore operations don't restore the tags of the original database.
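If the secondary databases themselves must carry the tags, one possible sketch (an assumption using the azapi provider to patch tags onto the secondary copy after the failover group has created it; not verified end to end) is:

# Hypothetical sketch: merge tags onto the secondary database created by the
# failover group, via the azapi provider (assumed to be configured already).
resource "azapi_update_resource" "secondary_db_tags" {
  # Adjust the API version to one available in your subscription.
  type        = "Microsoft.Sql/servers/databases@2021-11-01"
  resource_id = "${azurerm_mssql_server.secondary.id}/databases/${azurerm_mssql_database.db1.name}"

  body = jsonencode({
    tags = local.resource_tags
  })

  depends_on = [azurerm_mssql_failover_group.example]
}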
An error occurs when I try to create an AKS cluster with Terraform. The cluster is created, but the error still appears at the end, which is ugly.
│ Error: retrieving Access Profile for Cluster: (Managed Cluster Name
"aks-1" / Resource Group "pengine-aks-rg"):
containerservice.ManagedClustersClient#GetAccessProfile: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400
Code="BadRequest" Message="Getting static credential is not allowed because this cluster
is set to disable local accounts."
This is my terraform code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.96.0"
    }
  }
}

resource "azurerm_resource_group" "aks-rg" {
  name     = "aks-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "aks-1" {
  name                   = "aks-1"
  location               = azurerm_resource_group.aks-rg.location
  resource_group_name    = azurerm_resource_group.aks-rg.name
  dns_prefix             = "aks1"
  local_account_disabled = "true"

  default_node_pool {
    name       = "nodepool1"
    node_count = 3
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Test"
  }
}
Is this a Terraform bug? Can I avoid the error?
If you disable local accounts, you need to enable AKS-managed Azure Active Directory integration, as you no longer have local accounts to authenticate against AKS.
This example enables RBAC, Azure AAD & Azure RBAC:
resource "azurerm_kubernetes_cluster" "aks-1" {
...
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
tenant_id = data.azurerm_client_config.current.tenant_id
admin_group_object_ids = ["OBJECT_IDS_OF_ADMIN_GROUPS"]
azure_rbac_enabled = true
}
}
}
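Note that the snippet above references data.azurerm_client_config.current, so the corresponding data source needs to be declared somewhere in the configuration:

# Provides the tenant ID used above.
data "azurerm_client_config" "current" {}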
If you don't want AAD integration, you need to set local_account_disabled = "false".
We are managing our API Management platform in Azure with terraform. Sometimes we need to bump the revision, and when we do that there are several issues:
The API that gets bumped is recreated (expected)
The recreated API is lost from all products it belonged to
The recreated API does not get any policies applied
So when the revision is bumped, the pipeline has to be run twice:
The API is recreated
The API is added to the product again and gets its policy applied
This is what the template looks like:
# Fetch existing API management instance
data "azurerm_api_management" "storemanager_api_management" {
  name                = local.api_management_name
  resource_group_name = local.api_management_resource_group_name
}

# Create the API
resource "azurerm_api_management_api" "api" {
  api_management_name   = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name   = local.api_management_resource_group_name
  name                  = "api"
  path                  = "api"
  display_name          = "api 1"
  protocols             = ["https"]
  revision              = var.api_revision
  subscription_required = true

  subscription_key_parameter_names {
    header = local.api_subscription_key_name
    query  = local.api_subscription_key_name
  }

  import {
    content_format = "swagger-link-json"
    content_value = format("https://%s.blob.core.windows.net/%s/%s/%s",
      data.azurerm_storage_account.storage_account.name,
      data.azurerm_storage_container.open_api_definition_storage_container.name,
      var.api_api_version,
      local.api_api_definition_name
    )
  }
}

# Create the product
resource "azurerm_api_management_product" "product" {
  api_management_name   = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name   = local.api_management_resource_group_name
  product_id            = "product"
  display_name          = "Product"
  description           = "Collection of APIs"
  subscription_required = true
  subscriptions_limit   = 1
  approval_required     = true
  published             = true
}

# Associate the API with the product
resource "azurerm_api_management_product_api" "product_api" {
  api_management_name = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name = local.api_management_resource_group_name
  product_id          = azurerm_api_management_product.product.product_id
  api_name            = azurerm_api_management_api.api.name
}

# Apply policy to the API
resource "azurerm_api_management_api_policy" "policy" {
  api_name            = azurerm_api_management_api.api.name
  api_management_name = data.azurerm_api_management.storemanager_api_management.name
  resource_group_name = local.api_management_resource_group_name
  xml_content         = templatefile("./policy.tmpl", { x_functions_key_value = var.function_key, backend_name = azurerm_api_management_backend.generic_function_app_backend.name })
}
Is this a bug, or am I not using terraform correctly? How do I re-add the recreated API to the product and get its policy applied in one run of the pipeline?
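One approach that may be worth trying (a sketch assuming Terraform 1.2 or later, where replace_triggered_by is available; not verified against this exact provider behaviour) is to force the product association and the policy to be recreated in the same run whenever the API resource is replaced:

# Sketch: recreate the product association and the policy in the same apply
# whenever the API resource is replaced (e.g. on a revision bump).
resource "azurerm_api_management_product_api" "product_api" {
  # ... attributes as above ...

  lifecycle {
    replace_triggered_by = [azurerm_api_management_api.api.id]
  }
}

resource "azurerm_api_management_api_policy" "policy" {
  # ... attributes as above ...

  lifecycle {
    replace_triggered_by = [azurerm_api_management_api.api.id]
  }
}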
Creating a Cosmos DB via Terraform with Replicate data globally enabled times out after one hour with a status code:
StatusCode=202 -- Original Error: context deadline exceeded
Are there any solutions so Terraform can complete successfully?
We tried adding a timeouts block to the Terraform code; however, it is not supported.
Terraform code that is timing out:
resource "azurerm_resource_group" "resource_group" {
name = "${local.name}"
location = "${var.azure_location}"
tags = "${var.tags}"
}
resource "azurerm_cosmosdb_account" "db" {
name = "${local.name}"
location = "${var.azure_location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
offer_type = "Standard"
kind = "GlobalDocumentDB"
tags = "${var.tags}"
enable_automatic_failover = false
consistency_policy {
consistency_level = "Session"
}
geo_location {
location = "${var.failover_azure_location}"
failover_priority = 1
}
geo_location {
location = "${azurerm_resource_group.resource_group.location}"
failover_priority = 0
}
}
I'd expect Terraform to complete successfully, since the Cosmos DB is in fact created without error after the Terraform timeout.
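For what it's worth, more recent provider versions document a timeouts block on azurerm_cosmosdb_account; a minimal sketch, assuming a provider version that supports it:

resource "azurerm_cosmosdb_account" "db" {
  # ... existing configuration from above ...

  # Assumption: the azurerm provider version in use supports custom timeouts here.
  timeouts {
    create = "3h"
    update = "3h"
    delete = "3h"
  }
}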