I am setting up an alias record in an Azure-hosted DNS zone to point to the public (egress) IP of a K8s cluster, like this:
data "azurerm_dns_zone" "example" {
name = "example.com"
}
locals {
egress_id = tolist(azurerm_kubernetes_cluster.k8s.network_profile.0.load_balancer_profile.0.effective_outbound_ips)[0]
egress_name = reverse(split("/", local.egress_id))[0]
egress_resource_group = reverse(split("/", local.egress_id))[4]
}
resource "azurerm_dns_a_record" "k8s" {
name = var.dns_prefix
zone_name = data.azurerm_dns_zone.example.name
resource_group_name = data.azurerm_dns_zone.example.resource_group_name
ttl = 300
target_resource_id = local.egress_id
}
output "ptr_command" {
value = "az network public-ip update --name ${local.egress_name} --resource-group ${local.egress_resource_group} --reverse-fqdn ${var.dns_prefix}.example.com --dns-name ${var.dns_prefix}-${local.egress_name}"
}
This works, and (just to prove that it works) I can also add a PTR record for reverse lookup with the explicit CLI command produced by the output block. But can I get Terraform to do that as part of apply? (One complication is that it would have to happen after the creation of the A record, since Azure checks that the reverse FQDN points at the correct IP.)
(A k8s egress does not need a PTR record, I hear you say, but something like an outgoing SMTP server does need correct reverse lookup).
What I ended up doing was to add a local-exec provisioner to the DNS record resource, one that modifies the public IP with an explicit CLI command. Not a great solution, because a provisioner on the DNS record is not where you'd look for it, but at least the ordering is right. Also, I think the way I do it only works if you ran az login to give Terraform access to your Azure account, though you can presumably configure az to use the same credentials as Terraform in other setups.
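For reference, here is a rough, untested sketch of how the same command could run without relying on a prior interactive az login, by first logging in with the same service-principal environment variables the azurerm provider reads (ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID). The null_resource and its trigger are my own naming, not part of the setup above:
resource "null_resource" "ptr_record" {
  # Re-run the command whenever the A record changes, and only after it exists.
  triggers = {
    a_record_id = azurerm_dns_a_record.k8s.id
  }
  provisioner "local-exec" {
    command = <<-EOT
      az login --service-principal -u "$ARM_CLIENT_ID" -p "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID" > /dev/null
      az network public-ip update --name ${local.egress_name} --resource-group ${local.egress_resource_group} --reverse-fqdn ${var.dns_prefix}.example.com
    EOT
  }
}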
Here is a worked example with an explicit azurerm_public_ip resource, illustrating another Catch-22: on the next apply, Terraform will see the reverse_fqdn attribute and attempt to remove it, unless you tell it that this is OK. (In the OP, the public IP was created by an azurerm_kubernetes_cluster resource, so Terraform does not store its full state.)
data "azurerm_dns_zone" "example" {
name = "example.com"
}
resource "random_id" "domain_label" {
byte_length = 31 # 62 hex digits.
}
resource "azurerm_public_ip" "example" {
lifecycle {
# reverse_fqdn can only be set after a DNS record has
# been created, so we do it in a provisioner there.
# Do not try to change it back, please.
ignore_changes = [reverse_fqdn]
}
name = "example"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
allocation_method = "Static"
domain_name_label = "x${random_id.domain_label.hex}"
}
resource "azurerm_dns_a_record" "example" {
name = "example"
zone_name = data.azurerm_dns_zone.example.name
resource_group_name = data.azurerm_dns_zone.example.resource_group_name
ttl = 300
target_resource_id = azurerm_public_ip.example.id
provisioner "local-exec" {
command = "az network public-ip update --name ${azurerm_public_ip.example.name} --resource-group ${azurerm_resource_group.example.name} --reverse-fqdn example.example.com"
}
}
I still have a problem: the target_resource_id attribute on the A record seems to disappear if Terraform replaces the VM that uses the network interface the public IP is associated with. Another apply fixes it.
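If the disappearing target_resource_id becomes annoying, one possible workaround (a sketch, not something I have adopted) is to drop the alias-style reference and point the A record at the literal address, since azurerm_dns_a_record accepts either records or target_resource_id:
resource "azurerm_dns_a_record" "example" {
  name                = "example"
  zone_name           = data.azurerm_dns_zone.example.name
  resource_group_name = data.azurerm_dns_zone.example.resource_group_name
  ttl                 = 300
  # Plain records instead of an alias: the record no longer follows the
  # resource automatically, but the reference cannot disappear either.
  records = [azurerm_public_ip.example.ip_address]
}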
This works for me when I skip the explicit reverse_fqdn and work around it with azurerm_dns_a_record...
resource "azurerm_dns_zone" "example" {
name = local.subdomain
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_public_ip_prefix" "example" {
name = local.aks_public_ip_prefix_name
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
prefix_length = 31
tags = {
environment = "Production"
}
}
resource "azurerm_public_ip" "aks_ingress" {
name = local.aks_public_ip_name
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Static"
sku = "Standard"
domain_name_label = local.subdomain_prefix
public_ip_prefix_id = azurerm_public_ip_prefix.example.id
tags = {
environment = "Production"
}
}
resource "azurerm_dns_a_record" "example" {
name = "@" # "@" is the record name for the zone apex
zone_name = azurerm_dns_zone.example.name
resource_group_name = azurerm_resource_group.example.name
ttl = 300
target_resource_id = azurerm_public_ip.aks_ingress.id
}
Azure Database for PostgreSQL Flexible Server automatically backs up its databases. In case of accidental deletion of a database, we can restore it by creating a new flexible server from the backup. I know how to do this from the Azure portal. Can Terraform also configure "backup and restore" for PostgreSQL Flexible Server (restore server)?
The manual task is documented in the Azure doc: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-restore-server-portal. I just want to do the same task using Terraform, and in addition ensure the appropriate login and database-level permissions.
I really appreciate any support and help.
It is possible to configure backups for Azure Database for PostgreSQL Flexible Server using Terraform.
Please use the Terraform code below to create the server with backups configured.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "RG_NAME"
location = "EASTUS"
}
resource "azurerm_virtual_network" "example" {
name = "example-vn"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "example-sn"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "fs"
service_delegation {
name = "Microsoft.DBforPostgreSQL/flexibleServers"
actions = [
"Microsoft.Network/virtualNetworks/subnets/join/action",
]
}
}
}
resource "azurerm_private_dns_zone" "example" {
name = "example.postgres.database.azure.com"
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "exampleVnetZone.com"
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.example.id
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_postgresql_flexible_server" "example" {
name = "example-psqlflexibleserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12"
delegated_subnet_id = azurerm_subnet.example.id
private_dns_zone_id = azurerm_private_dns_zone.example.id
administrator_login = "psqladmin"
administrator_password = "H#Sh1CoR3!"
zone = "1"
storage_mb = 32768
backup_retention_days = 30
geo_redundant_backup_enabled = true
sku_name = "GP_Standard_D4s_v3"
depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
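The configuration above only covers the backup settings (retention and geo-redundancy). For the actual restore that the question asks about, I believe the flexible server resource also supports a point-in-time restore via create_mode and source_server_id; below is a minimal sketch, with a placeholder timestamp, assuming the source server above still exists (check the provider documentation for the exact argument names; most other settings are inherited from the source server):
resource "azurerm_postgresql_flexible_server" "restored" {
  name                = "example-psqlflexibleserver-restored"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  # Restore a new flexible server from the source server's automatic backups.
  create_mode                       = "PointInTimeRestore"
  source_server_id                  = azurerm_postgresql_flexible_server.example.id
  point_in_time_restore_time_in_utc = "2023-01-01T00:00:00Z" # placeholder restore point

  delegated_subnet_id = azurerm_subnet.example.id
  private_dns_zone_id = azurerm_private_dns_zone.example.id
  zone                = "1"

  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}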
Here I have specified the resource group name, subnet, VNet, database server name, password and backup settings.
I have set backup_retention_days to 30; the retention period must be between 1 and 35 days, and the default is 7 days.
Before running the script, check that you have the appropriate login and server details.
Then follow the steps below to apply the configuration:
terraform init
This initializes the working directory.
terraform plan
This creates an execution plan and previews the changes Terraform will make to the infrastructure.
terraform apply
This creates or updates the infrastructure according to the configuration.
Previously geo_redundant_backup_enabled defaulted to false; I have set it to true, and the backup retention is 30 days.
For reference you can use this documentation.
In my main terraform file I have:
resource "azurerm_resource_group" "rg" {
name = var.rg_name
location = var.location
}
resource "azurerm_public_ip" "public_ip" {
name = "PublicIP"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
domain_name_label = var.domain_name_label
allocation_method = "Dynamic"
}
And in my outputs file I have:
data "azurerm_public_ip" "public_ip" {
name = "PublicIP"
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_resource_group.rg, azurerm_public_ip.public_ip]
}
output "public_ip" {
value = data.azurerm_public_ip.public_ip.ip_address
}
All the resources including IP get created, however the output is blank. How can I fix this?
Make sure output.tf contains only output blocks and main.tf contains the resource blocks.
The following works just fine for me:
Main.tf
resource "azurerm_resource_group" "example" {
name = "resourceGroup1"
location = "West US"
}
resource "azurerm_public_ip" "example" {
name = "acceptanceTestPublicIp1"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
allocation_method = "Static"
tags = {
environment = "Production"
}
}
Output.tf
output "azurerm_public_ip" {
value = azurerm_public_ip.example.ip_address
}
In case you want to have a dependency between resources, use depends_on inside the resource block.
For example:
depends_on = [azurerm_resource_group.example]
Steps to reproduce:
terraform init
terraform plan
terraform apply
terraform output
Update:
The reason you get a blank public IP is that you declared allocation_method = "Dynamic".
From the docs:
Note: Dynamic Public IP Addresses aren't allocated until they're assigned to a resource (such as a Virtual Machine or a Load Balancer) by design within Azure.
Full working example with dynamic allocation.
I had the same issue. The actual problem seems to be the dynamic allocation. The IP address is not known until it is actually used by a VM.
In my case, I could solve the issue by adding the VM (azurerm_linux_virtual_machine.testvm) to the depends_on list in the data source:
data "azurerm_public_ip" "public_ip" {
name = "PublicIP"
resource_group_name = azurerm_resource_group.rg.name
depends_on = [ azurerm_public_ip.public_ip, azurerm_linux_virtual_machine.testvm ]
}
Unfortunately, this seems not to be documented in https://www.terraform.io/docs/providers/azurerm/d/public_ip.html
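For completeness, the "used by a VM" part is the NIC association; something along these lines (the subnet and NIC names here are made up) is what actually triggers allocation of the dynamic address:
resource "azurerm_network_interface" "testvm" {
  name                = "testvm-nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id # assumed existing subnet
    private_ip_address_allocation = "Dynamic"
    # Attaching the public IP here (and the NIC to the VM) is what makes
    # Azure allocate the dynamic public IP address.
    public_ip_address_id = azurerm_public_ip.public_ip.id
  }
}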
I'm building a Terraform config for my infrastructure deployment, and trying to connect an azurerm_mariadb_server resource to an azurerm_subnet, using an azurerm_mariadb_virtual_network_rule, as per documentation.
The vnet, subnet, mariadb-server etc are all created, but I get the following when trying to create the vnet_rule.
Error: Error waiting for MariaDb Virtual Network Rule "vnet-rule" (MariaDb Server: "server", Resource Group: "rg")
to be created or updated: couldn't find resource (21 retries)
on main.tf line 86, in resource "azurerm_mariadb_virtual_network_rule" "vnet_rule":
86: resource "azurerm_mariadb_virtual_network_rule" "mariadb_vnet_rule" {
I can't determine which resource can't be found - all resources except the azurerm_mariadb_virtual_network_rule are created, according to both the bash shell output and Azure portal.
My config is below - details of some resources are omitted for brevity.
provider "azurerm" {
version = "~> 2.27.0"
features {}
}
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group_name}-rg"
location = var.location
}
resource "azurerm_virtual_network" "vnet" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}Vnet"
address_space = ["10.0.0.0/16"]
location = var.location
}
resource "azurerm_subnet" "backend" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}backendSubnet"
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
service_endpoints = ["Microsoft.Sql"]
}
resource "azurerm_mariadb_server" "server" {
# DB server name can contain lower-case letters, numbers and dashes, NOTHING ELSE
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}-mariadb-server"
location = var.location
sku_name = "B_Gen5_2"
version = "10.3"
ssl_enforcement_enabled = true
}
resource "azurerm_mariadb_database" "mariadb_database" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}_mariadb_database"
server_name = azurerm_mariadb_server.server.name
charset = "utf8"
collation = "utf8_general_ci"
}
## Network Service Endpoint (add DB to subnet)
resource "azurerm_mariadb_virtual_network_rule" "vnet_rule" {
resource_group_name = azurerm_resource_group.rg.name
name = "${var.prefix}-mariadb-vnet-rule"
server_name = azurerm_mariadb_server.server.name
subnet_id = azurerm_subnet.backend.id
}
The issue looks to arise within 'func resourceArmMariaDbVirtualNetworkRuleCreateUpdate', but I don't know Go, so can't follow exactly what's causing it.
If anyone can see an issue, or knows how to get around this, please let me know!
Also, I'm not able to do it via the portal - step 3 here shows a section for configuring VNET rules, which is not present on my page for 'Azure database for mariaDB server'. I have the Global administrator role, so I don't think it's permissions-related.
From Create and manage Azure Database for MariaDB VNet service endpoints and VNet rules by using the Azure portal:
The key point is that
Support for VNet service endpoints is only for General Purpose and
Memory Optimized servers.
So change sku_name = "B_Gen5_2" to sku_name = "GP_Gen5_2" or another eligible sku_name.
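For example, in the server resource from the question (only sku_name changes; any General Purpose or Memory Optimized SKU should do):
resource "azurerm_mariadb_server" "server" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "${var.prefix}-mariadb-server"
  location            = var.location

  # General Purpose SKU: required for VNet service endpoint / VNet rule support.
  sku_name                = "GP_Gen5_2"
  version                 = "10.3"
  ssl_enforcement_enabled = true
}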
sku_name - (Required) Specifies the SKU Name for this MariaDB Server.
The name of the SKU, follows the tier + family + cores pattern (e.g.
B_Gen4_1, GP_Gen5_8). For more information see the product
documentation.
It takes a few minutes to deploy.
Trying to create a Databricks workspace using Terraform, but I get unsupported-argument errors:
resource "azurerm_databricks_workspace" "workspace" {
name = "testdata"
resource_group_name = "cloud-terraform"
location = "east us"
sku = "premium"
virtual_network_id = azurerm_virtual_network.vnet.id
public_subnet_name = "databrickpublicsubnet"
public_subnet_cidr = "10.0.0.0/22"
private_subnet_name = "databrickprivatesubnet"
private_subnet_cidr = "10.0.0.0/22"
tags = {
Environment = "terraformtest"
}
}
Error: An argument named "virtual_network_id" is not expected here. An argument named "public_subnet_name" is not expected here. An argument named "public_subnet_cidr" is not expected here.
I haven't tried to set up Databricks via Terraform, but I believe (per the docs) you need to add those properties in a custom_parameters block:
resource "azurerm_databricks_workspace" "workspace" {
name = "testdata"
resource_group_name = "cloud-terraform"
location = "east us"
sku = "premium"
custom_parameters {
virtual_network_id = azurerm_virtual_network.vnet.id
public_subnet_name = "databrickpublicsubnet"
private_subnet_name = "databrickprivatesubnet"
}
tags = {
Environment = "terraformtest"
}
}
The two cidr entries aren't part of the TF documentation.
True. You can add Terraform configuration to create the subnets: assuming the vnet already exists, you can use a data azurerm_virtual_network block (sketched below), create the two new subnets, then reference the names of the new public/private subnets.
Then you run into what seems to be a chicken-and-egg issue, though.
You get Error: you must define a value for 'public_subnet_network_security_group_association_id' if 'public_subnet_name' is set.
The problem is that the network security group is normally auto-generated when the Databricks workspace is created (with a name like databricksnsgrandomstring). That works when creating the workspace in the portal, but via Terraform I would have to reference the NSG to create the workspace, and it doesn't exist until the workspace is created. The fix is to not let the workspace generate its own NSG name, but to name it yourself with an azurerm_network_security_group resource block.
Below is the code I use (dbname means the Databricks name!). Here I'm adding to an existing resource group 'qa' and an existing vnet, and only showing the public subnet and NSG association; you can easily add the private ones (see the sketch after the code). Just copy and modify this in your own tf file(s), and you'll definitely need to change the address prefix to CIDR values that fit within your vnet and don't stomp on existing subnets.
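The data sources referenced in the code are assumed to look roughly like this (the resource group and vnet names are placeholders for your existing ones):
data "azurerm_resource_group" "qa" {
  name = "qa-rg"
}

data "azurerm_virtual_network" "vnet" {
  name                = "qa-vnet"
  resource_group_name = data.azurerm_resource_group.qa.name
}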
resource "azurerm_subnet" "public" {
name = "${var.dbname}-public-subnet"
resource_group_name = data.azurerm_resource_group.qa.name
virtual_network_name = data.azurerm_virtual_network.vnet.name
address_prefixes = ["1.2.3.4/24"]
delegation {
name = "databricks_public"
service_delegation {
name = "Microsoft.Databricks/workspaces"
}
}
}
resource "azurerm_network_security_group" "nsg" {
name = "${var.dbname}-qa-databricks-nsg"
resource_group_name = data.azurerm_resource_group.qa.name
location = data.azurerm_resource_group.qa.location
}
resource "azurerm_subnet_network_security_group_association" "nsga_public" {
network_security_group_id = azurerm_network_security_group.nsg.id
subnet_id = azurerm_subnet.public.id
}
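And a matching private subnet plus association might look like this (a sketch mirroring the public block above; use your own non-overlapping CIDR):
resource "azurerm_subnet" "private" {
  name                 = "${var.dbname}-private-subnet"
  resource_group_name  = data.azurerm_resource_group.qa.name
  virtual_network_name = data.azurerm_virtual_network.vnet.name
  address_prefixes     = ["1.2.4.0/24"] # placeholder CIDR
  delegation {
    name = "databricks_private"
    service_delegation {
      name = "Microsoft.Databricks/workspaces"
    }
  }
}

resource "azurerm_subnet_network_security_group_association" "nsga_private" {
  network_security_group_id = azurerm_network_security_group.nsg.id
  subnet_id                 = azurerm_subnet.private.id
}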
Then in your databricks_workspace block, replace your custom parameters with
custom_parameters {
public_subnet_name = azurerm_subnet.public.name
public_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.nsga_public.id
private_subnet_name = azurerm_subnet.private.name
private_subnet_network_security_group_association_id = azurerm_subnet_network_security_group_association.nsga_private.id
virtual_network_id = data.azurerm_virtual_network.vnet.id
}
I am creating Azure App Services via Terraform, following their documentation located at this site:
https://www.terraform.io/docs/providers/azurerm/r/app_service.html
Here is the snippet from my Terraform script:
resource "azurerm_app_service" "app" {
name = "app-name"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "ommitted"
site_config {
java_version = "1.8"
java_container = "TOMCAT"
java_container_version = "8.5"
}
}
I also need a subdomain for my App Service, for which I am not able to find any help in Terraform.
As of now the URL for the App Service is:
https://abc.azure-custom-domain.cloud
and I want my URL to be:
https://*.abc.azure-custom-domain.cloud
I know this can be done via the portal, but is there any way we can do it via Terraform?
This is now possible using app_service_custom_hostname_binding (since PR#1087 on 6th April 2018)
resource "azurerm_app_service_custom_hostname_binding" "test" {
hostname = "www.mywebsite.com"
app_service_name = "${azurerm_app_service.test.name}"
resource_group_name = "${azurerm_resource_group.test.name}"
}
This is not possible. You can check the link you provided: if a parameter is not listed there, it is not supported by Terraform.
You need to do it in the Azure Portal.
I have found it to be a tiny bit more complicated...
DNS Zone (then set name servers at the registrar)
App Service
Domain verification TXT record
CNAME record
Hostname binding
resource "azurerm_dns_zone" "dns-zone" {
name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
}
resource "azurerm_linux_web_app" "app-service" {
name = "some-service"
resource_group_name = var.azure_resource_group_name
location = var.azure_region
service_plan_id = "some-plan"
site_config {}
}
resource "azurerm_dns_txt_record" "domain-verification" {
name = "asuid.api.domain.com"
zone_name = var.azure_dns_zone
resource_group_name = var.azure_resource_group_name
ttl = 300
record {
value = azurerm_linux_web_app.app-service.custom_domain_verification_id
}
}
resource "azurerm_dns_cname_record" "cname-record" {
name = "domain.com"
zone_name = azurerm_dns_zone.dns-zone.name
resource_group_name = var.azure_resource_group_name
ttl = 300
record = azurerm_linux_web_app.app-service.default_hostname
depends_on = [azurerm_dns_txt_record.domain-verification]
}
resource "azurerm_app_service_custom_hostname_binding" "hostname-binding" {
hostname = "api.domain.com"
app_service_name = azurerm_linux_web_app.app-service.name
resource_group_name = var.azure_resource_group_name
depends_on = [azurerm_dns_cname_record.cname-record]
}
I had the same issue and had to use PowerShell to overcome it in the short term. Maybe you could get Terraform to trigger the PowerShell script (see the sketch at the end of this answer); I haven't tried that yet!
PowerShell as follows:
$fqdn="www.yourwebsite.com"
$webappname="yourwebsite.azurewebsites.net"
Set-AzureRmWebApp -Name <YourAppServiceName> -ResourceGroupName <TheResourceGroupOfYourAppService> -HostNames @($fqdn,$webappname)
IMPORTANT: Make sure you configure DNS first (i.e. the CNAME or TXT record for the custom domain you're trying to set), otherwise PowerShell and even the Azure Portal manual method will fail.
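If you do want Terraform to trigger that script, one option (untested; same placeholder names as above) is a null_resource with a local-exec provisioner using PowerShell as the interpreter:
resource "null_resource" "custom_hostname" {
  provisioner "local-exec" {
    interpreter = ["PowerShell", "-Command"]
    # Same cmdlet as above, with the placeholders to be replaced by real values.
    command = "Set-AzureRmWebApp -Name <YourAppServiceName> -ResourceGroupName <TheResourceGroupOfYourAppService> -HostNames @('www.yourwebsite.com','yourwebsite.azurewebsites.net')"
  }
}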