The code snippet below should work according to the documentation:
resource "opentelekomcloud_compute_instance_v2" "ecs_1" {
region = var.region
availability_zone = var.availability_zone
name = "${var.ecs_name}-notags"
image_id = var.image_id
flavor_id = var.flavor_id
key_pair = var.key_name
security_groups = var.security_groups
network {
uuid = var.subnet_id
}
}
Output
Error: error fetching OpenTelekomCloud CloudServers tags: Resource not found: [GET https://ecs.eu-de.otc.t-systems.com/v1
I'm not sure why this error appears despite my having all the required permissions.
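One thing worth trying, as a minimal sketch: the tags lookup against the ECS v1 endpoint is made by the provider itself, so assuming the error stems from the provider build rather than from permissions, pinning a more recent opentelekomcloud provider release and re-initializing may help (the version constraint below is illustrative, not a confirmed fix):
terraform {
  required_providers {
    opentelekomcloud = {
      source = "opentelekomcloud/opentelekomcloud"
      # Illustrative constraint; pick a current release of the provider.
      version = ">= 1.31.0"
    }
  }
}
After updating the constraint, run terraform init -upgrade so the newer provider binary is actually downloaded.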
Related
terraform init works fine, but when I run terraform plan I get the error below.
Error: Failed to decode resource from state
│
│ Error decoding "azurerm_mssql_database.db" from previous state: unsupported attribute "extended_auditing_policy"
If I comment out this particular resource, the same error appears for another resource.
Can someone please help me?
I tried this in my environment and got the results below.
Initially, I tried the inline extended_auditing_policy block with the new Terraform provider version and got the same error:
extended_auditing_policy {
  storage_endpoint                        = module.storageaccount.storage_account.self.primary_blob_endpoint
  storage_account_access_key              = module.storageaccount.storage_account.self.primary_access_key
  storage_account_access_key_is_secondary = false
  retention_in_days                       = 30
}
This problem occurs when loading state with a provider version older than the one that was used to create the current state. If an attribute was added in a newer provider version, the earlier provider cannot decode that unknown attribute while loading the state file during the import.
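Before restructuring the configuration, it is usually worth aligning the provider version with the one that wrote the state. A minimal sketch, assuming the state was written by a newer azurerm 3.x release (the exact constraint is illustrative):
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Illustrative constraint; match or exceed the version that wrote the state.
      version = ">= 3.13.0"
    }
  }
}
Then run terraform init -upgrade so the newer provider binary is installed before retrying terraform plan or the import.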
I then used the new azurerm_mssql_database_extended_auditing_policy resource to solve this problem.
Terraform.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "vs" {
name = "<rg name>"
location = "West Europe"
}
resource "azurerm_mssql_server" "ex" {
name = "demosqlserver3261"
resource_group_name = azurerm_resource_group.vs.name
location = azurerm_resource_group.vs.location
version = "12.0"
administrator_login = "missadministrator"
administrator_login_password = "AdminPassword123!"
}
resource "azurerm_mssql_database" "ext" {
name = "demodb3261"
server_id = azurerm_mssql_server.ex.id
}
resource "azurerm_storage_account" "vst" {
name = "venkat678"
resource_group_name = azurerm_resource_group.vs.name
location = azurerm_resource_group.vs.location
account_tier = "Standard"
account_replication_type = "GRS"
}
resource "azurerm_mssql_database_extended_auditing_policy" "example" {
database_id = azurerm_mssql_database.ext.id
storage_endpoint = azurerm_storage_account.vst.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.vst.primary_access_key
storage_account_access_key_is_secondary = false
retention_in_days = 6
}
Reference: Import fails with "Error: Invalid resource instance data in state" – HashiCorp Help Center
Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure Data Factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we created this linked service using Terraform, the ADF linked-service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behavior: if we don't provide the tenant_id parameter in the Terraform code, the JSON should likewise not contain tenant="", which then works for connectivity.
I tried to reproduce the scenario in my environment:
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan and terraform apply
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory just after the Terraform deployment is done.
Please check this known issue mentioned by @chgenzel in terraform-provider-azurerm issues | GitHub.
ADF portal (screenshot): the Azure SQL linked service using Managed Identity authentication.
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry
I am trying to create some resources in Azure with Terraform.
What I have:
resource "azurerm_log_analytics_workspace" "logws" {
name = lower("log-${var.env}-${local.location_prefix[coalesce(var.location)]}-${random_string.postfix.result}")
resource_group_name = azurerm_resource_group.rg[0].name
location = azurerm_resource_group.rg[0].location
sku = var.log_analytics_workspace_sku
retention_in_days = var.log_analytics_logs_retention_in_days
tags = local.common_tags
}
resource "azurerm_monitor_private_link_scoped_service" "logscopelink" {
name = "scoped-${azurerm_log_analytics_workspace.logws.name}"
resource_group_name = azurerm_resource_group.rg[0].name
scope_name = azurerm_log_analytics_workspace.logws.name
linked_resource_id = azurerm_log_analytics_workspace.logws.id
depends_on = [azurerm_log_analytics_workspace.logws]
}
The Log Analytics workspace is created, but when it tries to create the private link scoped service, it fails saying the parent resource was not found.
Error I get:
│ Error: creating/updating Private Link Scoped Service: (Scoped Resource Name "scoped-log-sbx-we-oe728m" / Private Link Scope Name "log-sbx-we-oe728m" / Resource Group "hub"): insights.PrivateLinkScopedResourcesClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource 'log-sbx-we-oe728m' not found."
I verified via the Azure portal that logws does exist.
Can someone suggest what is wrong here?
You need to create a new azurerm_monitor_private_link_scope resource, then reference it in the scope_name attribute of the azurerm_monitor_private_link_scoped_service resource. For example:
resource "azurerm_log_analytics_workspace" "logws" {
name = lower("log-${var.env}-${local.location_prefix[coalesce(var.location)]}-${random_string.postfix.result}")
resource_group_name = azurerm_resource_group.rg[0].name
location = azurerm_resource_group.rg[0].location
sku = var.log_analytics_workspace_sku
retention_in_days = var.log_analytics_logs_retention_in_days
tags = local.common_tags
}
# New resource required
resource "azurerm_monitor_private_link_scope" "example" {
name = var.private_link_scope_name
resource_group_name = azurerm_resource_group.rg[0].name
}
resource "azurerm_monitor_private_link_scoped_service" "logscopelink" {
name = "scoped-${azurerm_log_analytics_workspace.logws.name}"
resource_group_name = azurerm_resource_group.rg[0].name
scope_name = azurerm_monitor_private_link_scope.example.name
linked_resource_id = azurerm_log_analytics_workspace.logws.id
}
Note that I've removed the explicit depends_on attribute, as Terraform can infer dependencies on its own when you reference one resource's attribute in another resource block.
Objective: Creating a Cosmos DB account and a SQL DB in Azure with Terraform.
What I tried:
resource "azurerm_cosmosdb_account" "cosmosaccount" {
name = "cosmosdb"
location = var.location
resource_group_name = var.rg_name
offer_type = var.cosmosdb_offer_type
kind = var.cosmosdb_kind
is_virtual_network_filter_enabled = "true"
ip_range_filter = var.ip_range_filter
capabilities {
name = "EnableTable"
}
enable_automatic_failover = false
consistency_policy {
consistency_level = var.cosmosdb_consistancy_level
max_interval_in_seconds = 5
max_staleness_prefix = 100
}
geo_location {
location = var.location
failover_priority = 0
}
virtual_network_rule {
id = var.subnet_id
ignore_missing_vnet_service_endpoint = true
}
}
resource "azurerm_cosmosdb_sql_database" "comosdbsqldb" {
name = "driving"
resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
account_name = azurerm_cosmosdb_account.cosmosaccount.name
throughput = 500
depends_on = [azurerm_cosmosdb_account.cosmosaccount]
}
resource "azurerm_cosmosdb_sql_container" "mdhistcontainer" {
name = "metadata_history"
resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
account_name = azurerm_cosmosdb_account.cosmosaccount.name
database_name = azurerm_cosmosdb_sql_database.comosdbsqldb.name
partition_key_path = "/definition/id"
partition_key_version = 1
throughput = 500
indexing_policy {
indexing_mode = "consistent"
included_path {
path = "/*"
}
included_path {
path = "/included/?"
}
excluded_path {
path = "/excluded/?"
}
}
unique_key {
paths = ["/definition/idlong", "/definition/idshort"]
}
depends_on = [azurerm_cosmosdb_sql_database.comosdbsqldb]
}
Issue I am facing:
It created the Cosmos DB account successfully but fails at creating the SQL DB.
Error I am getting:
Error: checking for presence of Sql Database: (Name "driving" / Database Account Name "cosmosaccount" /
Resource Group "xxxxxxxxxxxx"): documentdb.SQLResourcesClient#GetSQLDatabase:
Failure responding to request: StatusCode=405 -- Original Error: autorest/azure: Service returned an error.
Status=405 Code="MethodNotAllowed" Message="Requests for API sql are not supported for this account.
\r\nActivityId: 2d79ca83-9534-46e8-a7cf-cef1fd76e752, Microsoft.Azure.Documents.Common/2.14.0"
│
│ with module.cosmosdb.azurerm_cosmosdb_sql_database.comosdbsqldb,
│ on ../modules/cosmosdb/main.tf line 46, in resource "azurerm_cosmosdb_sql_database" "comosdbsqldb":
│ 46: resource "azurerm_cosmosdb_sql_database" "comosdbsqldb"
I have also given the DocumentDBContributor role to the service principal, but I still get this error. I am following the documentation below on syntax, and it seems I am following it exactly:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_sql_database
Please suggest how to fix this error.
I believe you are getting this error because you are creating a Cosmos DB account targeting the Table API and then trying to create a SQL database with a custom name in that account:
capabilities {
  name = "EnableTable"
}
With the Table API, you cannot create a database with an arbitrary name. There can only be a single database in the account, and its name is fixed: it must be TablesDB.
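As a sketch of what the Table API path looks like instead (the resource name below is hypothetical): with an EnableTable account you create tables directly with azurerm_cosmosdb_table rather than azurerm_cosmosdb_sql_database and azurerm_cosmosdb_sql_container, and the fixed TablesDB database is created implicitly.
resource "azurerm_cosmosdb_table" "mdhisttable" {
  # Hypothetical table name; it is created inside the implicit TablesDB database.
  name                = "metadata_history"
  resource_group_name = azurerm_cosmosdb_account.cosmosaccount.resource_group_name
  account_name        = azurerm_cosmosdb_account.cosmosaccount.name
  throughput          = 500
}
If you actually need the SQL (Core) API, drop the EnableTable capability from the account instead.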
Creating a Cosmos DB via Terraform with "Replicate data globally" enabled times out after one hour with this status code:
StatusCode=202 -- Original Error: context deadline exceeded
Are there any solutions so Terraform can complete successfully?
We tried adding a timeout operation to the Terraform code; however, it is not supported.
Terraform code that is timing out:
resource "azurerm_resource_group" "resource_group" {
name = "${local.name}"
location = "${var.azure_location}"
tags = "${var.tags}"
}
resource "azurerm_cosmosdb_account" "db" {
name = "${local.name}"
location = "${var.azure_location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
offer_type = "Standard"
kind = "GlobalDocumentDB"
tags = "${var.tags}"
enable_automatic_failover = false
consistency_policy {
consistency_level = "Session"
}
geo_location {
location = "${var.failover_azure_location}"
failover_priority = 1
}
geo_location {
location = "${azurerm_resource_group.resource_group.location}"
failover_priority = 0
}
}
I'd expect Terraform to complete successfully, since the Cosmos DB is in fact created after the Terraform timeout, without error.
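One option, sketched under the assumption that the Terraform and azurerm provider versions in use are recent enough to support a timeouts block on azurerm_cosmosdb_account (older versions did not, which may be why adding one was rejected); the two-hour value is illustrative:
resource "azurerm_cosmosdb_account" "db" {
  # ... arguments as above ...

  timeouts {
    # Give multi-region account creation more time than the default.
    create = "2h"
  }
}
If the version in use rejects the block, upgrading Terraform and the azurerm provider is the usual prerequisite.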