Azure webapp security group access - azure

I have an Azure web app and I want to restrict access to its URL so that it can be reached exclusively through my application gateway. One of the options is to use "Access Restrictions", but I would like to achieve this using a network security group, as it gives me more freedom and customisation since I have a lot of App Services.
Using Terraform I configured the application gateway, the app gateway subnet and the app service subnet as follows:
resource "azurerm_virtual_network" "VNET" {
  address_space       = ["VNET-CIDR"]
  location            = var.location
  name                = "hri-prd-VNET"
  resource_group_name = azurerm_resource_group.rg-hri-prd-eur-app-gate.name
}
resource "azurerm_subnet" "app-gate" {
  name                 = "app-gateway-subnet"
  resource_group_name  = azurerm_resource_group.app-gate.name
  virtual_network_name = azurerm_virtual_network.VNET.name
  address_prefixes     = ["SUBNET-CIDR"]
}
resource "azurerm_subnet" "app-service" {
  name                 = "app-service-subnet"
  resource_group_name  = azurerm_resource_group.app-gate.name
  virtual_network_name = azurerm_virtual_network.VNET.name
  address_prefixes     = ["APP_CIDR"]
  delegation {
    name = "app-service-delegation"
    service_delegation {
      name    = "Microsoft.Web/serverFarms"
      actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
    }
  }
}
while in my security group I configured the mapping as follows:
resource "azurerm_network_security_group" "app-service-sg" {
  location            = var.app-service-loc
  name                = "app-service-sg"
  resource_group_name = azurerm_resource_group.app-service.name
  security_rule {
    access                       = "Allow"
    direction                    = "Inbound"
    name                         = "application_gateway_access"
    priority                     = 100
    protocol                     = "Tcp"
    destination_port_range       = "80"
    source_port_range            = "*"
    source_address_prefixes      = ["app-gate-CIDR"]
    destination_address_prefixes = ["app-service-CIDR"]
  }
}
resource "azurerm_subnet_network_security_group_association" "app-service-assoc" {
  network_security_group_id = azurerm_network_security_group.app-service-sg.id
  subnet_id                 = azurerm_subnet.app-service.id
}
The configuration runs without any issue in Terraform, but when I hit the web app URL directly I am still able to access it.
What am I doing wrong at this stage? I would like to be able to reach the web app URL only through my application gateway.
Thank you so much for any help guys

You have just created the networks and security groups. You need to use Application Gateway integration with service endpoints, and some additional configuration is needed on top of that.
Here is a diagram of how your solution should look:
https://learn.microsoft.com/en-us/azure/app-service/networking/app-gateway-with-service-endpoints
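Part of that additional configuration is enabling the Microsoft.Web service endpoint on the application gateway subnet. A minimal sketch, reusing the app-gate subnet from the question (only the service_endpoints line is new; see the linked doc for the full set of steps):
resource "azurerm_subnet" "app-gate" {
  name                 = "app-gateway-subnet"
  resource_group_name  = azurerm_resource_group.app-gate.name
  virtual_network_name = azurerm_virtual_network.VNET.name
  address_prefixes     = ["SUBNET-CIDR"]
  service_endpoints    = ["Microsoft.Web"]
}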
Create App Service using Terraform code and add IP restrictions.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service#ip_restriction
resource "azurerm_app_service_plan" "example" {
  name                = "example-app-service-plan"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku {
    tier = "Standard"
    size = "S1"
  }
}
resource "azurerm_app_service" "example" {
  name                = "example-app-service"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  app_service_plan_id = azurerm_app_service_plan.example.id
  site_config {
    ip_restriction {
      ip_address = "0.0.0.0/0"
    }
  }
}
Link App Service to your Network
resource "azurerm_app_service_virtual_network_swift_connection" "example" {
  app_service_id = azurerm_app_service.example.id
  subnet_id      = azurerm_subnet.app-service.id
}
Create the access restriction using service endpoints.
https://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions#set-a-service-endpoint-based-rule
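Putting it together, the example app service above ends up with an ip_restriction that points at the gateway subnet rather than an IP address. A minimal sketch using the same names as the example above (virtual_network_subnet_id support depends on your azurerm provider version):
resource "azurerm_app_service" "example" {
  name                = "example-app-service"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  app_service_plan_id = azurerm_app_service_plan.example.id
  site_config {
    ip_restriction {
      # allow only traffic arriving through the application gateway subnet
      virtual_network_subnet_id = azurerm_subnet.app-gate.id
    }
  }
}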

Related

Terraform deletes Azure resources in subsequent 'apply' without any config change

I was trying to test the scenario of handling external changes to existing resources and then syncing my HCL config to the current state in the next apply. I could achieve that using 'taint' for the modified resource, but TF deleted other resources which were deployed during the first 'apply'. Here is the module code for a VNet with 3 subnets (prod, dmz and app) and 3 associated NSGs. I tested by modifying one of the NSGs, but TF deleted all of the subnets:
VNET-
resource "azurerm_virtual_network" "BP-VNet" {
  name                = var.Vnetname
  location            = var.location
  resource_group_name = var.rgname
  address_space       = var.vnetaddress
  subnet {
    name           = "GatewaySubnet"
    address_prefix = "10.0.10.0/27"
  }
}
Subnet -
resource "azurerm_subnet" "subnets" {
  count                = var.subnetcount
  name                 = "snet-prod-${lookup(var.snettype, count.index, "default")}-001"
  address_prefixes     = ["10.0.${count.index+1}.0/24"]
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
}
NSGs-
resource "azurerm_network_security_group" "nsgs" {
  count               = var.subnetcount
  name                = "nsg-prod-${lookup(var.snettype, count.index, "default")}"
  resource_group_name = var.rgname
  location            = var.location
  --------
}
BastionSubnet-
resource "azurerm_subnet" "bastionsubnet" {
  name                 = "AzureBastionSubnet"
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  resource_group_name  = var.rgname
  address_prefixes     = ["10.0.5.0/27"]
}
The end result of the second apply is a VNet with just the GatewaySubnet. It should not have deleted the rest of the 4 subnets. Why is this happening?
The solution may surprise you: move the GatewaySubnet out of the azurerm_virtual_network block and into its own azurerm_subnet block. The code looks like this:
resource "azurerm_subnet" "gateway" {
  name                 = "GatewaySubnet"
  resource_group_name  = var.rgname
  virtual_network_name = azurerm_virtual_network.BP-VNet.name
  address_prefixes     = ["10.0.10.0/27"]
}
I'm not certain of the exact reason, but it solves your issue. The likely cause is that defining subnets inline inside azurerm_virtual_network makes that block authoritative for the VNet's subnets, so a later apply removes any subnets that exist only as standalone azurerm_subnet resources; the two styles shouldn't be mixed.
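For completeness, a minimal sketch of the adjusted VNet block (the same arguments as in the question, just without the inline subnet block):
resource "azurerm_virtual_network" "BP-VNet" {
  name                = var.Vnetname
  location            = var.location
  resource_group_name = var.rgname
  address_space       = var.vnetaddress
}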

Getting "Error waiting for Virtual Network Rule "" (server, rg) to be created or updated..." for azurerm_mariadb_virtual_network_rule

I'm building a Terraform config for my infrastructure deployment, and trying to connect an azurerm_mariadb_server resource to an azurerm_subnet, using an azurerm_mariadb_virtual_network_rule, as per documentation.
The vnet, subnet, mariadb-server etc are all created, but I get the following when trying to create the vnet_rule.
Error: Error waiting for MariaDb Virtual Network Rule "vnet-rule" (MariaDb Server: "server", Resource Group: "rg")
to be created or updated: couldn't find resource (21 retries)
on main.tf line 86, in resource "azurerm_mariadb_virtual_network_rule" "vnet_rule":
86: resource "azurerm_mariadb_virtual_network_rule" "mariadb_vnet_rule" {
I can't determine which resource can't be found - all resources except the azurerm_mariadb_virtual_network_rule are created, according to both the bash shell output and Azure portal.
My config is below - details of some resources are omitted for brevity.
provider "azurerm" {
  version  = "~> 2.27.0"
  features {}
}
resource "azurerm_resource_group" "rg" {
  name     = "${var.resource_group_name}-rg"
  location = var.location
}
resource "azurerm_virtual_network" "vnet" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "${var.prefix}Vnet"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
}
resource "azurerm_subnet" "backend" {
  resource_group_name  = azurerm_resource_group.rg.name
  name                 = "${var.prefix}backendSubnet"
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
}
resource "azurerm_mariadb_server" "server" {
  # DB server name can contain lower-case letters, numbers and dashes, NOTHING ELSE
  resource_group_name     = azurerm_resource_group.rg.name
  name                    = "${var.prefix}-mariadb-server"
  location                = var.location
  sku_name                = "B_Gen5_2"
  version                 = "10.3"
  ssl_enforcement_enabled = true
}
resource "azurerm_mariadb_database" "mariadb_database" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "${var.prefix}_mariadb_database"
  server_name         = azurerm_mariadb_server.server.name
  charset             = "utf8"
  collation           = "utf8_general_ci"
}
## Network Service Endpoint (add DB to subnet)
resource "azurerm_mariadb_virtual_network_rule" "vnet_rule" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "${var.prefix}-mariadb-vnet-rule"
  server_name         = azurerm_mariadb_server.server.name
  subnet_id           = azurerm_subnet.backend.id
}
The issue looks to arise within 'func resourceArmMariaDbVirtualNetworkRuleCreateUpdate', but I don't know Go, so I can't follow exactly what's causing it.
If anyone can see an issue, or knows how to get around this, please let me know!
Also, I'm not able to do it via the portal - step 3 here shows a section for configuring VNET rules, which is not present on my page for 'Azure database for mariaDB server'. I have the Global administrator role, so I don't think it's permissions-related.
From Create and manage Azure Database for MariaDB VNet service endpoints and VNet rules by using the Azure portal, the key point is that:
Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
So change sku_name = "B_Gen5_2" to sku_name = "GP_Gen5_2" or another eligible sku_name.
sku_name - (Required) Specifies the SKU Name for this MariaDB Server.
The name of the SKU, follows the tier + family + cores pattern (e.g.
B_Gen4_1, GP_Gen5_8). For more information see the product
documentation.
It takes a few minutes to deploy.
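For reference, a minimal sketch of the adjusted server block from the question, with only the SKU changed:
resource "azurerm_mariadb_server" "server" {
  resource_group_name     = azurerm_resource_group.rg.name
  name                    = "${var.prefix}-mariadb-server"
  location                = var.location
  sku_name                = "GP_Gen5_2"
  version                 = "10.3"
  ssl_enforcement_enabled = true
}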

azure function via terraform: how to connect to service bus

I am stuck when trying to deploy an Azure function via Azure DevOps pipelines and Terraform.
Running terraform apply works fine and the Service Bus looks good and works. In the Azure portal the function seems to be running, but it complains that it can not find the ServiceBusConnection.
I defined it via the following Terraform declaration:
resource "azurerm_resource_group" "rg" {
  name     = "rg-sb-westeurope"
  location = "westeurope"
}
resource "azurerm_servicebus_namespace" "sb" {
  name                = "ns-sb"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Standard"
}
resource "azurerm_servicebus_queue" "sbq" {
  name                = "servicebusqueue"
  resource_group_name = azurerm_resource_group.rg.name
  namespace_name      = azurerm_servicebus_namespace.sb.name
  enable_partitioning = true
}
resource "azurerm_servicebus_namespace_authorization_rule" "sb-ar" {
  name                = "servicebus_auth_rule"
  namespace_name      = azurerm_servicebus_namespace.sb.name
  resource_group_name = azurerm_resource_group.rg.name
  listen              = false
  send                = true
  manage              = false
}
In the function app I declare:
resource "azurerm_function_app" "fa" {
  name                       = "function-app"
  location                   = azurerm_resource_group.rg.location
  resource_group_name        = azurerm_resource_group.rg.name
  app_service_plan_id        = azurerm_app_service_plan.asp.id
  storage_account_name       = azurerm_storage_account.sa.name
  storage_account_access_key = azurerm_storage_account.sa.primary_access_key
  app_settings = {
    ServiceBusConnection = azurerm_servicebus_namespace_authorization_rule.sb-ar.name
  }
}
This .tf will not work out of the box, as I have not copied the full declaration here.
I think I am setting the connection environment variables wrong, but I have no idea how to do it correctly.
EDIT
With the hint from #Heye I got it working. This is the correct snippet, replacing name with primary_connection_string.
resource "azurerm_function_app" "fa" {
  name                       = "function-app"
  location                   = azurerm_resource_group.rg.location
  resource_group_name        = azurerm_resource_group.rg.name
  app_service_plan_id        = azurerm_app_service_plan.asp.id
  storage_account_name       = azurerm_storage_account.sa.name
  storage_account_access_key = azurerm_storage_account.sa.primary_access_key
  app_settings = {
    ServiceBusConnection = azurerm_servicebus_namespace_authorization_rule.sb-ar.primary_connection_string
  }
}
You are setting the ServiceBusConnection value to the name of the authorization rule. However, you probably want to set it to the primary_connection_string, as that contains the key along with all the information needed to connect to the Service Bus.

How to reference an Azure network security group ID in a module in Terraform?

I'm not sure how to reference an azure network security group in a module. I created a module that I can reuse for any VM I create which works to an extent except I'm not sure how to assign the network security group ID to it. The below is an example (slightly amended, I don't have it on me) that is very close to what I have and is based on.
main.tf at root
module "vm1" {
  source = "/modules/vm/"
  NSG    = ????
}
tfvars
nic_name = "apache_vm_nic"
location = "West Europe"
........
modules/vm/main.tf
.........
resource "azurerm_network_interface" "myterraformnic" {
  name                      = "${var.nic_name}"
  location                  = "${var.location}"
  resource_group_name       = "${azurerm_resource_group.myterraformgroup.name}"
  network_security_group_id = { WHAT DO I PUT HERE? }
  ip_configuration {
    name                          = "myNicConfiguration"
    subnet_id                     = "${azurerm_subnet.myterraformsubnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.myterraformpublicip.id}"
  }
}
resource "azurerm_network_security_group" "apache-nsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}
resource "azurerm_network_security_group" "nginx-nsg" {
  name                = "myNetworkSecurityGroup"
  location            = "eastus"
  resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}
In the modules/vm/main.tf file, under network_security_group_id, I can't exactly put ${azurerm_network_security_group.apache-nsg.id} or ${azurerm_network_security_group.nginx-nsg.id}. So what can I put so I can reuse this module for all VMs?
Thanks
Your question isn't quite clear to me but I am going to assume you want to create a generic network security group that you want to assign to multiple instances of your VM module.
If you want to pass the ID of a security group from main.tf at root, you'd do this:
Create a network security group resource outside your module, e.g. inside main.tf at root, just like you created a few inside your VM module (for Apache and Nginx), so that main.tf at root looks like this:
resource "azurerm_network_security_group" "some_generic_vm_nsg" {
  ....
}
module "vm1" {
  source = "/modules/vm/"
  NSG    = "${azurerm_network_security_group.some_generic_vm_nsg.id}"
}
Note that we are now passing the ID of the nsg to your VM module instance.
However, your VM module has not declared the NSG variable yet. So create the file modules/vm/variables.tf and put this in it:
variable "NSG" {
  type = "string"
}
And inside your module, network_security_group_id = { WHAT DO I PUT HERE? } becomes:
network_security_group_id = "${var.NSG}"
This way, you can assign the same network security group to multiple VM module instances.
You can study this documentation for more elaborate information.
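To make the reuse concrete, here is a minimal sketch with a hypothetical second module instance ("vm2"), assuming the apache-nsg and nginx-nsg resources from the question have been moved into main.tf at root:
module "vm1" {
  source = "/modules/vm/"
  NSG    = "${azurerm_network_security_group.apache-nsg.id}"
}
module "vm2" {
  source = "/modules/vm/"
  NSG    = "${azurerm_network_security_group.nginx-nsg.id}"
}
Each instance receives a different NSG ID through the same NSG variable, so the module itself stays generic.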

How can I add a VM into a Load Balancer within Azure using Terraform?

I have been looking through the Terraform.io docs and it's not really clear.
I know how to add a VM to an LB through the Azure portal; I'm just trying to figure out how to do it with Terraform.
I do not see an option in the azurerm_availability_set or azurerm_lb to add a VM.
Please let me know if anyone has any ideas.
Devon
I'd take a look at this example I created. After you've created the LB, when creating each NIC make sure you add a backlink to the LB.
load_balancer_backend_address_pool_ids = ["${azurerm_lb_backend_address_pool.webservers_lb_backend.id}"]
Terraform load balanced server
resource "azurerm_lb_backend_address_pool" "backend_pool" {
  resource_group_name = "${azurerm_resource_group.rg.name}"
  loadbalancer_id     = "${azurerm_lb.lb.id}"
  name                = "BackendPool1"
}
resource "azurerm_lb_nat_rule" "tcp" {
  resource_group_name            = "${azurerm_resource_group.rg.name}"
  loadbalancer_id                = "${azurerm_lb.lb.id}"
  name                           = "RDP-VM-${count.index}"
  protocol                       = "tcp"
  frontend_port                  = "5000${count.index + 1}"
  backend_port                   = 3389
  frontend_ip_configuration_name = "LoadBalancerFrontEnd"
  count                          = 2
}
You can get the whole file at this link. I think the code above is the most important thing. For more details about the Load Balancer NAT rule, see azurerm_lb_nat_rule.
Might be late to answer this, but here goes. Once you have created the LB and VM, you can use this snippet to associate the NIC with the LB backend pool:
resource "azurerm_network_interface_backend_address_pool_association" "vault" {
  network_interface_id    = "${azurerm_network_interface.nic.id}"
  ip_configuration_name   = "nic_ip_config"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.nic.id}"
}
Ensure that the VMs sit in an availability set. Otherwise you won't be able to register the VMs into the LB.
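As a hedged sketch of that requirement (hypothetical names, using the same old-style interpolation syntax as the snippets above), the availability set is created separately and then referenced from each VM:
resource "azurerm_availability_set" "web" {
  name                = "web-avset"
  location            = "${azurerm_resource_group.rg.location}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  managed             = true
}
Each azurerm_virtual_machine in the backend pool then sets availability_set_id = "${azurerm_availability_set.web.id}".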
I think this is what you need. You then need to create the association between the network interface and the virtual machine through the subnet.
All of the answers here appear to be outdated. azurerm_network_interface_nat_rule_association should be used as of 2021-08-22:
resource "azurerm_lb_nat_rule" "example" {
  resource_group_name            = azurerm_resource_group.example.name
  loadbalancer_id                = azurerm_lb.example.id
  name                           = "RDPAccess"
  protocol                       = "Tcp"
  frontend_port                  = 3389
  backend_port                   = 3389
  frontend_ip_configuration_name = "primary"
}
resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}
resource "azurerm_network_interface_nat_rule_association" "example" {
  network_interface_id  = azurerm_network_interface.example.id
  ip_configuration_name = "testconfiguration1"
  nat_rule_id           = azurerm_lb_nat_rule.example.id
}
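If the goal is membership in a backend pool rather than a NAT rule, the matching association resource works the same way. A sketch using the example NIC above and a hypothetical backend pool (argument names per recent azurerm provider versions):
resource "azurerm_lb_backend_address_pool" "example" {
  loadbalancer_id = azurerm_lb.example.id
  name            = "BackendPool1"
}
resource "azurerm_network_interface_backend_address_pool_association" "example" {
  network_interface_id    = azurerm_network_interface.example.id
  ip_configuration_name   = "testconfiguration1"
  backend_address_pool_id = azurerm_lb_backend_address_pool.example.id
}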
