Unable to connect to SonarQube private IP with port 9000 - azure

I am having some trouble and could use some assistance.
I have set up a SonarQube instance on a machine in Azure, and I am trying to connect to it through its private IP address and port 9000. However, I am unable to connect and get a "connection timed out" error.
Here are the steps I have taken so far:
Checked the firewall rules: The firewall on the machine is not blocking incoming traffic on port 9000.
Checked the IP address: The private IP address of the machine is correct.
Checked the port: Port 9000 is the correct port for my SonarQube instance.
Checked the logs: There are no error messages related to the connection issue in the logs.
Restarted the SonarQube instance: Restarting the instance did not resolve the issue.
What else can I do to resolve this issue and connect to my SonarQube instance?
Note: I am using a Linux machine and bash commands.
Here is my Terraform code in case I did something incorrectly.
provider "azurerm" {
features {}
}
locals {
sonarqube_image_name = "sonarqube:9.9-community"
sonarqube_container_name = "sonarqube-container"
postgres_container_name = "postgres-container"
}
resource "azurerm_resource_group" "examplegroup" {
name = "example-rg"
location = "South Central US"
}
resource "azurerm_network_security_group" "nsg-example-sonargroup" {
name = "nsg-example-sonargroup"
location = azurerm_resource_group.sonargroup.location
resource_group_name = azurerm_resource_group.sonargroup.name
}
resource "azurerm_virtual_network" "example-sonar-vnet" {
name = "example-sonar-vnet"
location = azurerm_resource_group.sonargroup.location
resource_group_name = azurerm_resource_group.sonargroup.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "example-sonar-subnet" {
name = "sonar-subnet"
resource_group_name = azurerm_resource_group.sonargroup.name
virtual_network_name = azurerm_virtual_network.example-sonar-vnet.name
address_prefixes = ["10.0.0.0/16"]
delegation {
name = "delegation"
service_delegation {
name = "Microsoft.ContainerInstance/containerGroups"
actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
}
}
}
resource "azurerm_container_group" "sonarqube" {
name = "sonarqube-group"
location = azurerm_resource_group.sonargroup.location
resource_group_name = azurerm_resource_group.sonargroup.name
ip_address_type = "Private"
os_type = "Linux"
subnet_ids = [azurerm_subnet.example-sonar-subnet.id]
container {
name = local.sonarqube_container_name
image = local.sonarqube_image_name
cpu = 1
memory = 1.5
ports {
port = 9000
}
environment_variables = {
SONARQUBE_JDBC_URL = "jdbc:postgresql://postgres-container:5432/sonarqube_db"
SONARQUBE_JDBC_USERNAME = "example_user"
SONARQUBE_JDBC_PASSWORD = "example_password"
}
}
container {
name = local.postgres_container_name
image = "postgres:11"
cpu = 1
memory = 2
ports {
port = 5432
}
environment_variables = {
POSTGRES_DB = "example_db"
POSTGRES_USER = "example_user"
POSTGRES_PASSWORD = "example_password"
}
}
}
output "private_ip_address" {
value = azurerm_container_group.sonarqube.ip_address
}
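One thing worth noting about the posted code: the network security group nsg-example-sonargroup has no rules and is never associated with the subnet, so as written it does not affect traffic to the container group; also, a container group with ip_address_type = "Private" is only reachable from inside the virtual network (or from a peered network/VPN). If you want the NSG to govern the subnet, here is a minimal sketch of how it could be wired up, reusing the resource names above (the rule values are assumptions, not a verified fix):
# Hypothetical sketch: allow TCP 9000 into the SonarQube subnet and attach the NSG to it.
resource "azurerm_network_security_rule" "allow_sonarqube_9000" {
  name                        = "allow-sonarqube-9000"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9000"
  source_address_prefix       = "VirtualNetwork" # assumption: clients connect from inside the VNet
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.sonargroup.name
  network_security_group_name = azurerm_network_security_group.nsg-example-sonargroup.name
}
resource "azurerm_subnet_network_security_group_association" "sonar_subnet_nsg" {
  subnet_id                 = azurerm_subnet.example-sonar-subnet.id
  network_security_group_id = azurerm_network_security_group.nsg-example-sonargroup.id
}
Also double-check that the machine you are testing from can actually route to 10.0.0.0/16; a timeout from outside the VNet is expected with a private-only container group.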

Related

Azure Cosmos DB Error with Private Link and Private Endpoint "Failed to refresh the collection list. Please try again later"

I have enabled Private Endpoint for my Azure Cosmos DB. Every time I go to Cosmos, I see a red flag on top that says: "Failed to refresh the collection list. Please try again later."
We use Terraform to deploy code.
Also, I don't see any container being created even though I have the below code in Terraform:
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
database_name = azurerm_cosmosdb_sql_database.default.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Any idea what I can do to fix this? I don't see these issues when Cosmos is not behind a Private Endpoint and Private Link.
My TF code is provided below:
resource "azurerm_cosmosdb_account" "default" {
resource_group_name = module.resourcegroup.resource_group.name
location = var.location
name = module.name_cosmosdb_account.location.cosmosdb_account.name_unique
tags = module.resourcegroup.resource_group.tags
public_network_access_enabled = false
network_acl_bypass_for_azure_services = true
enable_automatic_failover = true
is_virtual_network_filter_enabled = true
offer_type = "Standard"
kind = "GlobalDocumentDB"
consistency_policy {
consistency_level = "Session"
max_interval_in_seconds = 5
max_staleness_prefix = 100
}
geo_location {
location = module.resourcegroup.resource_group.location
failover_priority = 0
}
geo_location {
location = "eastus2"
failover_priority = 1
}
}
resource "azurerm_cosmosdb_sql_database" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
name = "cosmosdb_db"
throughput = 400
}
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = module.resourcegroup.resource_group.name
account_name = azurerm_cosmosdb_account.default.name
database_name = azurerm_cosmosdb_sql_database.default.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Even with the error in the portal, the container and resources are being created by Terraform. You can use Data Explorer to see the database and container created by Terraform.
Test:
Terraform code:
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "rg" {
name = "resourcegroup"
}
resource "azurerm_virtual_network" "example" {
name = "cosmos-network"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
}
resource "azurerm_subnet" "example" {
name = "cosmos-subnet"
resource_group_name = data.azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_cosmosdb_account" "example" {
name = "ansuman-cosmosdb"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
offer_type = "Standard"
kind = "GlobalDocumentDB"
consistency_policy {
consistency_level = "BoundedStaleness"
max_interval_in_seconds = 10
max_staleness_prefix = 200
}
geo_location {
location = data.azurerm_resource_group.rg.location
failover_priority = 0
}
}
resource "azurerm_private_endpoint" "example" {
name = "cosmosansuman-endpoint"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
subnet_id = azurerm_subnet.example.id
private_service_connection {
name = "cosmosansuman-privateserviceconnection"
private_connection_resource_id = azurerm_cosmosdb_account.example.id
subresource_names = [ "SQL" ]
is_manual_connection = false
}
}
resource "azurerm_cosmosdb_sql_database" "example" {
name = "ansuman-cosmos-mongo-db"
resource_group_name = data.azurerm_resource_group.rg.name
account_name = azurerm_cosmosdb_account.example.name
throughput = 400
}
resource "azurerm_cosmosdb_sql_container" "default" {
resource_group_name = data.azurerm_resource_group.rg.name
account_name = azurerm_cosmosdb_account.example.name
database_name = azurerm_cosmosdb_sql_database.example.name
name = "cosmosdb_container"
partition_key_path = "/definition/id"
throughput = 400
}
Update: As per the discussion, the error "Failed to refresh the collection list. Please try again later." is expected in your case because you have disabled public network access to the Cosmos DB account at creation time. If it is set to disabled, public network traffic is blocked even before the private endpoint is created.
So, for this error the possible solutions are:
Enable public network traffic to access the account when creating the Cosmos DB account from Terraform. Even if you set it to true after the private endpoint has been created for Cosmos DB, public access will be automatically disabled; if you go to Firewalls and virtual networks you can see that "Allow access from all networks" is grayed out. So you can check "Allow access" from the portal and add your current IP there to get access only from your own public network, as shown in the sketch after this list. (Note: since it defaults to true, you don't need to add public_network_access_enabled = true in code.)
You can use Data Explorer to check the containers, which you have already verified.
You can create a VM in the same VNet where the endpoint resides and connect to Cosmos DB from inside the VM through the portal itself. You can refer to this Microsoft document for more details.
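A minimal sketch of the first option, shown as a tweak to the azurerm_cosmosdb_account.example block from the test code above; the ip_range_filter value is a placeholder for your own public IP, and this is an illustration rather than a verified fix:
# Hypothetical sketch: keep public network access enabled (the default), but restrict it
# to one client IP so the portal's Data Explorer can still reach the account.
resource "azurerm_cosmosdb_account" "example" {
  name                = "ansuman-cosmosdb"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
  offer_type          = "Standard"
  kind                = "GlobalDocumentDB"
  # Placeholder IP: replace with the public IP you browse the portal from.
  ip_range_filter = "203.0.113.10"
  consistency_policy {
    consistency_level       = "BoundedStaleness"
    max_interval_in_seconds = 10
    max_staleness_prefix    = 200
  }
  geo_location {
    location          = data.azurerm_resource_group.rg.location
    failover_priority = 0
  }
}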

Create azure application gateway with static private ip address via terraform

I can't find a way to create an application gateway via Terraform with a private IP without manually inserting a hard-coded private IP address.
I tried:
Creating a private IP in the application gateway subnet - failed because Azure blocks it (error attached from the UI, but Terraform raises the same error)
Creating a dynamic private IP in the application gateway subnet - failed
It only works when I create the application gateway with a hard-coded IP address.
This solution is not good enough for me, because we handle many environments and we don't want to rely on developers remembering to add a private IP.
Is there a good solution?
Application Gateway v2 SKU supports the static VIP type exclusively, whereas the V1 SKU can be configured to support static or dynamic internal IP address and dynamic public IP address.
Refer: Application Gateway frontend-ip-addresses
Application Gateway v2 currently does not support a private-IP-only mode. The Azure Application Gateway v2 SKU can be configured to support either both a static internal IP address and a static public IP address, or only a static public IP address. It cannot be configured to support only a static internal IP address.
Refer: Application gateway v2 with only private-ip
When deploying with Terraform, we should define two frontend_ip_configuration blocks: one for the public IP configuration and another for the private IP configuration.
Scenario 1: When trying to create a new application gateway with a dynamic private IP and a dynamic public IP using Terraform, it gets created for the Standard (v1) SKU only.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
  required_version = ">= 0.14.9"
}
provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "test" {
  name     = "Terraformtest"
  location = "West Europe"
}
resource "azurerm_virtual_network" "test" {
  name                = "terraformvnet"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  address_space       = ["10.254.0.0/16"]
}
resource "azurerm_subnet" "frontend" {
  name                 = "frontend"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.254.0.0/24"]
}
resource "azurerm_subnet" "backend" {
  name                 = "backend"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.254.2.0/24"]
}
resource "azurerm_public_ip" "test" {
  name                = "test-pip"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  allocation_method   = "Dynamic"
}
locals {
  backend_address_pool_name      = "${azurerm_virtual_network.test.name}-beap"
  frontend_port_name             = "${azurerm_virtual_network.test.name}-feport"
  frontend_ip_configuration_name = "${azurerm_virtual_network.test.name}-feip"
  http_setting_name              = "${azurerm_virtual_network.test.name}-be-htst"
  listener_name                  = "${azurerm_virtual_network.test.name}-httplstn"
  request_routing_rule_name      = "${azurerm_virtual_network.test.name}-rqrt"
  redirect_configuration_name    = "${azurerm_virtual_network.test.name}-rdrcfg"
}
resource "azurerm_application_gateway" "network" {
  name                = "test-appgateway"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "${azurerm_resource_group.test.location}"
  sku {
    name     = "Standard_Small"
    tier     = "Standard"
    capacity = 2
  }
  gateway_ip_configuration {
    name      = "my-gateway-ip-configuration"
    subnet_id = "${azurerm_subnet.frontend.id}"
  }
  frontend_port {
    name = "${local.frontend_port_name}"
    port = 80
  }
  frontend_ip_configuration {
    name                 = "${local.frontend_ip_configuration_name}"
    public_ip_address_id = "${azurerm_public_ip.test.id}"
  }
 frontend_ip_configuration {
    name                 = "${local.frontend_ip_configuration_name}-private"
    subnet_id = "${azurerm_subnet.frontend.id}"
    private_ip_address_allocation = "Dynamic"
  }
  backend_address_pool {
    name = "${local.backend_address_pool_name}"
  }
  backend_http_settings {
    name                  = "${local.http_setting_name}"
    cookie_based_affinity = "Disabled"
    path                  = "/path1/"
    port                  = 80
    protocol              = "Http"
    request_timeout       = 1
  }
  http_listener {
    name                           = "${local.listener_name}"
    frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}-private"
    frontend_port_name             = "${local.frontend_port_name}"
    protocol                       = "Http"
  }
  request_routing_rule {
    name                       = "${local.request_routing_rule_name}"
    rule_type                  = "Basic"
    http_listener_name         = "${local.listener_name}"
    backend_address_pool_name  = "${local.backend_address_pool_name}"
    backend_http_settings_name = "${local.http_setting_name}"
  }
}
Scenario 2: When creating a Standard_v2 application gateway we can use a private IP, but it does not support dynamic allocation yet, so it must be static and you must specify the IP address you want to use. To do that, you must also use the Standard SKU and static allocation for the public IP.
So, after updating to private_ip_address_allocation = "Static" and private_ip_address = "10.254.0.10", it gets created successfully.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
  required_version = ">= 0.14.9"
}
provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "test" {
  name     = "Terraformtest"
  location = "West Europe"
}
resource "azurerm_virtual_network" "test" {
  name                = "terraformvnet"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  address_space       = ["10.254.0.0/16"]
}
resource "azurerm_subnet" "frontend" {
  name                 = "frontend"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.254.0.0/24"]
}
resource "azurerm_subnet" "backend" {
  name                 = "backend"
  resource_group_name  = azurerm_resource_group.test.name
  virtual_network_name = azurerm_virtual_network.test.name
  address_prefixes     = ["10.254.2.0/24"]
}
resource "azurerm_public_ip" "test" {
  name                = "test-pip"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  allocation_method   = "Static"
  sku                 = "Standard"
}
locals {
  backend_address_pool_name      = "${azurerm_virtual_network.test.name}-beap"
  frontend_port_name             = "${azurerm_virtual_network.test.name}-feport"
  frontend_ip_configuration_name = "${azurerm_virtual_network.test.name}-feip"
  http_setting_name              = "${azurerm_virtual_network.test.name}-be-htst"
  listener_name                  = "${azurerm_virtual_network.test.name}-httplstn"
  request_routing_rule_name      = "${azurerm_virtual_network.test.name}-rqrt"
  redirect_configuration_name    = "${azurerm_virtual_network.test.name}-rdrcfg"
}
resource "azurerm_application_gateway" "network" {
  name                = "test-appgateway"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "${azurerm_resource_group.test.location}"
  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 2
  }
  gateway_ip_configuration {
    name      = "my-gateway-ip-configuration"
    subnet_id = "${azurerm_subnet.frontend.id}"
  }
  frontend_port {
    name = "${local.frontend_port_name}"
    port = 80
  }
  frontend_ip_configuration {
    name                 = "${local.frontend_ip_configuration_name}"
    public_ip_address_id = "${azurerm_public_ip.test.id}"
  }
  frontend_ip_configuration {
    name                          = "${local.frontend_ip_configuration_name}-private"
    subnet_id                     = "${azurerm_subnet.frontend.id}"
    private_ip_address_allocation = "Static"
    private_ip_address            = "10.254.0.10"
  }
  backend_address_pool {
    name = "${local.backend_address_pool_name}"
  }
  backend_http_settings {
    name                  = "${local.http_setting_name}"
    cookie_based_affinity = "Disabled"
    path                  = "/path1/"
    port                  = 80
    protocol              = "Http"
    request_timeout       = 1
  }
  http_listener {
    name                           = "${local.listener_name}"
    frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
    frontend_port_name             = "${local.frontend_port_name}"
    protocol                       = "Http"
  }
  request_routing_rule {
    name                       = "${local.request_routing_rule_name}"
    rule_type                  = "Basic"
    http_listener_name         = "${local.listener_name}"
    backend_address_pool_name  = "${local.backend_address_pool_name}"
    backend_http_settings_name = "${local.http_setting_name}"
  }
}
Note: two application gateways cannot use the same subnet, so if you are creating a new application gateway you have to create a new subnet for it.
Can you paste your terraform code?
For the latest Terraform version, the documentation says that the frontend_ip_configuration block supports the private_ip_address_allocation parameter, which can hold the value Dynamic.
Also remember that the application gateway has to have a separate subnet with only application gateways in it. I am not sure, but I suppose it is one gateway per subnet, so two gateways in one subnet is impossible.

two frontend ports of application gateway are using the same port 443 - Azure application gateway in terraform

I am configuring an Azure application gateway using Terraform.
Following is the module that I wrote:
locals {
backend_address_pool_name = format("appgwbeap-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
frontend_port_name = format("appgwfeport-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
frontend_ip_configuration_name = format("appgwfeip-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
http_setting_name = format("appgwhtst-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
listener_name = format("appgwhttplstnr-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
request_routing_rule_name = format("appgwrqrt-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
redirect_configuration_name = format("appgwrdrcfg-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
}
resource "azurerm_application_gateway" "appgw" {
name = format("appgw-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
resource_group_name = var.rg_name
location = var.location
sku {
name = var.sku_name
tier = var.sku_tier
capacity = var.sku_capacity
}
gateway_ip_configuration {
name = format("appgwipcfg-%[1]s-%[2]s%[3]sweb-gw",var.project_code,var.env,var.zone)
subnet_id = var.subnet_id
}
frontend_port {
name = "appgwfeport-app1-uatizweb-gw"
port = "443"
}
frontend_port {
name = "appgwfeport-app2-uatizweb-gw"
port = "443"
}
ssl_certificate {
name = "UAT-APP1-APPGW-SSL-CERT-SGCORE-12Jan21-12Jan23"
data = filebase64("./certificates/web.app1.sso.gwwu.xxx.com.de-12Jan2021.pfx")
password = "${var.app1_pfx_password}"
}
authentication_certificate {
name = "UAT-APP1-APPGW-SSL-CERT-SGCORE-12Jan21-12Jan23"
data = file("./certificates/web_app1_sso_gwwu_xxx_com_de-12Jan21.cer")
}
ssl_certificate {
name = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
data = filebase64("./certificates/selfsigned-app2-uat-01Mar21.pfx")
password = "${var.app1_pfx_password}"
}
authentication_certificate {
name = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
data = file("./certificates/selfsigned-app2-uat-01Mar21.cer")
}
frontend_ip_configuration {
name = "${local.frontend_ip_configuration_name}"
subnet_id = var.subnet_id
private_ip_address = var.frontend_private_ip
private_ip_address_allocation = "Static"
}
backend_address_pool {
name = "beap-path-app1-app"
#fqdns = var.fqdn_list
ip_addresses = ["10.xxx.xxx.36"]
}
backend_address_pool {
name = "beap-path-app2-app"
#fqdns = var.fqdn_list
ip_addresses = ["10.xxx.xxx.37"]
}
backend_http_settings {
name = "behs-path-app1-app"
cookie_based_affinity = var.backend_cookie_based_affinity
affinity_cookie_name = "ApplicationGatewayAffinity"
path = var.backend_path
port = "443"
#probe_name = "probe-app1"
protocol = "Https"
request_timeout = var.backend_request_timeout
authentication_certificate {
name = "UAT-APP1-APPGW-SSL-CERT-SGCORE-12Jan21-12Jan23"
}
}
backend_http_settings {
name = "behs-path-app2-app"
cookie_based_affinity = var.backend_cookie_based_affinity
affinity_cookie_name = "ApplicationGatewayAffinity"
path = var.backend_path
port = "443"
#probe_name = "probe-app2"
protocol = "Https"
request_timeout = var.backend_request_timeout
authentication_certificate {
name = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
}
}
http_listener {
name = "appgwhttplsnr-app1-uatizweb-gw"
frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
frontend_port_name = "appgwfeport-app1-uatizweb-gw"
protocol = "Https"
ssl_certificate_name = "UAT-APP1-APPGW-SSL-CERT-SGCORE-12Jan21-12Jan23"
require_sni = true
host_name = "web.app1.sso.gwwu.xxx.com.de"
}
http_listener {
name = "appgwhttplsnr-app2-uatizweb-gw"
frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
frontend_port_name = "appgwfeport-app2-uatizweb-gw"
ssl_certificate_name = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
require_sni = true
protocol = "Https"
host_name = "web.app2.sso.gwwu.xxx.com.de"
}
request_routing_rule {
name = "appgwrqrt-app2-uatizweb-gw"
rule_type = var.backend_rule_type
http_listener_name = "appgwhttplsnr-app2-uatizweb-gw"
backend_address_pool_name = "beap-path-app2-app"
backend_http_settings_name = "behs-path-app2-app"
}
request_routing_rule {
name = "appgwrqrt-app1-uatizweb-gw"
rule_type = var.backend_rule_type
http_listener_name = "appgwhttplsnr-app1-uatizweb-gw"
backend_address_pool_name = "beap-path-app1-app"
backend_http_settings_name = "behs-path-app1-app"
}
}
Below is the main.tf that calls the module:
module "app_gateway" {
source = "../../../modules/appgateway"
rg_name = var.rg_name
agency = local.agency
project_code = local.project_code
env = var.env
zone = var.zone
tier = "appgw"
location = local.location
vnet_name = var.vnet_name
subnet_id = module.agw_subnet.subnet_id
sku_name = var.appgw_sku_name
sku_capacity = var.appgw_sku_capacity
frontend_private_ip = var.appgw_frontend_ip
frontend_port = var.frontend_port
frontend_protocol = var.frontend_protocol
app1_pfx_password = "${var.app1_pfx_password}"
backend_protocol = var.backend_protocol
backend_port = var.backend_port
backend_path = "/"
providers = {
azurerm = azurerm.corpapps
}
}
I have used multi-site listeners; however, when I deploy I get the following error:
two frontend ports of application gateway are using the same port number 443.
When I change one of the ports to 5443, it does get deployed and works from Terraform.
Also, I can create two frontend ports with 443 (multi-site) from the portal, but I can't do this from Terraform.
What am I missing in Terraform?
Any light on this will help!
We ran into the same error when updating an App Gateway via a PowerShell script.
Scenario:
There was an existing multi-site listener using the FrontendPort for 80. When the script tried to add a second multi-site listener on that same port, we got the same error message.
It turned out that the original listener was on the public frontend IP while the second one being added was using the private frontend IP. I didn't realize this, but you can NOT use the same frontend port for both a public listener and a private listener, even if they are both multi-site.
The original listener shouldn't have been on the public IP anyway, so once I tweaked the original listener to use the private IP, the script executed without error.
I found the explanation about Private and Public IP's not being able to share the same port here:
https://github.com/MicrosoftDocs/azure-docs/issues/23652
Maybe this will help someone else.
We could use the same frontend configuration (frontend IP, protocol, port, and name) for multi-site listeners instead of creating two frontend_port names.
For example, change the related code:
resource "azurerm_application_gateway" "appgw" {
#..
frontend_port {
name = "appgwfeport-app1-uatizweb-gw"
port = "443"
}
# frontend_port {
# name = "appgwfeport-app2-uatizweb-gw"
# port = "443"
# }
#..
http_listener {
name = "appgwhttplsnr-app1-uatizweb-gw"
frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
frontend_port_name = "appgwfeport-app1-uatizweb-gw"
protocol = "Https"
ssl_certificate_name = "UAT-APP1-APPGW-SSL-CERT-SGCORE-12Jan21-12Jan23"
require_sni = true
host_name = "web.app1.sso.gwwu.xxx.com.de"
}
http_listener {
name = "appgwhttplsnr-app2-uatizweb-gw"
frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
frontend_port_name = "appgwfeport-app1-uatizweb-gw" #change here
ssl_certificate_name = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
require_sni = true
protocol = "Https"
host_name = "web.app2.sso.gwwu.xxx.com.de"
}
}
For more information, read https://learn.microsoft.com/en-us/azure/application-gateway/tutorial-multiple-sites-powershell and https://learn.microsoft.com/en-us/azure/application-gateway/create-multiple-sites-portal#configuration-tab
Maybe this link will be helpful: https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#can-i-use-the-same-port-for-both-public-facing-and-private-facing-listeners
The short answer is that it is not possible to use the same port for private and public listeners.
As a workaround I used another port, such as 10443, for the HTTPS private listener configuration. In my case it worked fine because users did not use the private listener; a sketch of that workaround follows below.
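For illustration, a minimal sketch of that workaround against the module from the question; the 10443 port is arbitrary and the blocks below would sit inside the azurerm_application_gateway resource (an assumption of how it could look, not a verified configuration):
# Hypothetical sketch: give the second listener its own frontend port so it no
# longer clashes with the 443 listener on the other frontend IP configuration.
frontend_port {
  name = "appgwfeport-app2-uatizweb-gw"
  port = 10443
}
http_listener {
  name                           = "appgwhttplsnr-app2-uatizweb-gw"
  frontend_ip_configuration_name = "${local.frontend_ip_configuration_name}"
  frontend_port_name             = "appgwfeport-app2-uatizweb-gw"
  protocol                       = "Https"
  ssl_certificate_name           = "UAT-APP2-APPGW-SSL-CERT-01Mar21"
  require_sni                    = true
  host_name                      = "web.app2.sso.gwwu.xxx.com.de"
}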
azure-cli was outdated in our case. After upgrade it all started to work like a charm.
We had an Application Gateway set up by Terraform with two multi-site public listeners, both using the same 443 port. The mentioned error Two Http Listeners of Application Gateway <..> and <..> are using the same Frontend Port <..> and FrontendIpConfiguration <..> was happening when the outdated az cli tried to run az network application-gateway ssl-cert update --key-vault-secret-id <..>. azure-cli initial: 2.2.0, final: 2.39.0. After the upgrade, az network application-gateway ssl-cert update started to update the gateway's cert as expected.

Terraform: SSH authentication failed (user@:22): ssh: handshake failed

I wrote some Terraform code to create a new VM and want to execute a command on it via remote-exec but it throws an SSH connection error:
Error: timeout - last error: SSH authentication failed (admin@:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
My Terraform code:
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "rg" {
name = "${var.deployment}-mp-rg"
location = "${var.azure_environment}"
tags = {
environment = "${var.deployment}"
}
}
# Create virtual network
resource "azurerm_virtual_network" "vnet" {
name = "${var.deployment}-mp-vnet"
address_space = ["10.0.0.0/16"]
location = "${var.azure_environment}"
resource_group_name = "${azurerm_resource_group.rg.name}"
tags = {
environment = "${var.deployment}"
}
}
# Create subnet
resource "azurerm_subnet" "subnet" {
name = "${var.deployment}-mp-subnet"
resource_group_name = "${azurerm_resource_group.rg.name}"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.1.0/24"
}
# Create public IPs
resource "azurerm_public_ip" "publicip" {
name = "${var.deployment}-mp-publicip"
location = "${var.azure_environment}"
resource_group_name = "${azurerm_resource_group.rg.name}"
allocation_method = "Dynamic"
tags = {
environment = "${var.deployment}"
}
}
# Create Network Security Group and rule
resource "azurerm_network_security_group" "nsg" {
name = "${var.deployment}-mp-nsg"
location = "${var.azure_environment}"
resource_group_name = "${azurerm_resource_group.rg.name}"
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
environment = "${var.deployment}"
}
}
# Create network interface
resource "azurerm_network_interface" "nic" {
name = "${var.deployment}-mp-nic"
location = "${var.azure_environment}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.nsg.id}"
ip_configuration {
name = "${var.deployment}-mp-nicconfiguration"
subnet_id = "${azurerm_subnet.subnet.id}"
private_ip_address_allocation = "Dynamic"
public_ip_address_id = "${azurerm_public_ip.publicip.id}"
}
tags = {
environment = "${var.deployment}"
}
}
# Generate random text for a unique storage account name
resource "random_id" "randomId" {
keepers = {
# Generate a new ID only when a new resource group is defined
resource_group = "${azurerm_resource_group.rg.name}"
}
byte_length = 8
}
# Create storage account for boot diagnostics
resource "azurerm_storage_account" "storageaccount" {
name = "diag${random_id.randomId.hex}"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${var.azure_environment}"
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "${var.deployment}"
}
}
# Create virtual machine
resource "azurerm_virtual_machine" "vm" {
name = "${var.deployment}-mp-vm"
location = "${var.azure_environment}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
vm_size = "Standard_DS1_v2"
storage_os_disk {
name = "${var.deployment}-mp-disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
os_profile {
computer_name = "${var.deployment}-mp-ansible"
admin_username = "${var.ansible_user}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/${var.ansible_user}/.ssh/authorized_keys"
key_data = "${var.public_key}"
}
}
boot_diagnostics {
enabled = "true"
storage_uri = "${azurerm_storage_account.storageaccount.primary_blob_endpoint}"
}
tags = {
environment = "${var.deployment}"
}
}
resource "null_resource" "ssh_connection" {
connection {
host = "${azurerm_public_ip.publicip.ip_address}"
type = "ssh"
private_key = "${file(var.private_key)}"
port = 22
user = "${var.ansible_user}"
agent = false
timeout = "1m"
}
provisioner "remote-exec" {
inline = ["sudo apt-get -qq install python"]
}
}
I have tried to SSH into the new VM manually with admin@xx.xx.xx.xx:22 and it works. Looking at the error message, I then output the parameter ${azurerm_public_ip.publicip.ip_address}, but it is null, so I think this is why the SSH authentication failed, though I don't know the underlying reason. If I want to SSH to the server via the Terraform script, how can I modify the code?
Your issue is that Terraform has built a dependency graph that tells it that the only dependency for the null_resource.ssh_connection is the azurerm_public_ip.publicip resource and so it's starting to try to connect before the instance has been created.
This in itself isn't a massive issue, as the provisioner would normally attempt to retry in case SSH isn't yet available, but the connection details are determined as soon as the null resource starts. And with the azurerm_public_ip set to an allocation_method of Dynamic, it won't get its IP address until after it has been attached to a resource:
Note Dynamic Public IP Addresses aren't allocated until they're assigned to a resource (such as a Virtual Machine or a Load Balancer) by design within Azure - more information is available below.
There are a few different ways you can solve this. You could make the null_resource depend on the azurerm_virtual_machine.vm resource via interpolation or via depends_on:
resource "null_resource" "ssh_connection" {
connection {
host = "${azurerm_public_ip.publicip.ip_address}"
type = "ssh"
private_key = "${file(var.private_key)}"
port = 22
user = "${var.ansible_user}"
agent = false
timeout = "1m"
}
provisioner "remote-exec" {
inline = [
"echo ${azurerm_virtual_machine.vm.id}",
"sudo apt-get -qq install python",
]
}
}
or
resource "null_resource" "ssh_connection" {
depends_on = ["azurerm_virtual_machine.vm"]
connection {
host = "${azurerm_public_ip.publicip.ip_address}"
type = "ssh"
private_key = "${file(var.private_key)}"
port = 22
user = "${var.ansible_user}"
agent = false
timeout = "1m"
}
provisioner "remote-exec" {
inline = ["sudo apt-get -qq install python"]
}
}
A better approach here would be to run the provisioner as part of the azurerm_virtual_machine.vm resource instead of a null_resource. The normal reasons to use a null_resource to launch a provisioner are when you need to wait until after something else has happened to a resource, such as attaching a disk, or when there is no appropriate resource to attach it to, but neither really applies here. So instead of your existing null_resource you'd move the provisioner into the azurerm_virtual_machine.vm resource:
resource "azurerm_virtual_machine" "vm" {
# ...
provisioner "remote-exec" {
connection {
host = "${azurerm_public_ip.publicip.ip_address}"
type = "ssh"
private_key = "${file(var.private_key)}"
port = 22
user = "${var.ansible_user}"
agent = false
timeout = "1m"
}
inline = ["sudo apt-get -qq install python"]
}
}
For many resources this also allows you to refer to the outputs of the resource you are provisioning by using the self keyword. Unfortunately, the azurerm_virtual_machine resource doesn't seem to easily expose the IP address of the VM, since that is determined by the attached network_interface_ids.
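If the allocated address is needed afterwards (for an output, for example), one possible pattern, sketched here as an assumption rather than part of the original answer, is to read the public IP back through a data source once the VM exists:
# Hypothetical sketch: look up the dynamic public IP only after the VM has been
# created, so the allocated address is actually populated when it is read.
data "azurerm_public_ip" "vm_ip" {
  name                = "${azurerm_public_ip.publicip.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  depends_on          = ["azurerm_virtual_machine.vm"]
}
output "vm_public_ip" {
  value = "${data.azurerm_public_ip.vm_ip.ip_address}"
}
The looked-up value could then also serve as the connection host instead of azurerm_public_ip.publicip.ip_address.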

Terraform private azure load balancer issue

I am trying to deploy an infrastructure with a private load balancer:
.....
resource "azurerm_lb" "private" {
name = "${var.name}-${var.live}-private-lb"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = var.sku
frontend_ip_configuration {
name = "frontend"
subnet_id = var.subnet_id != "" ? var.subnet_id : null
private_ip_address = (var.subnet_id != "" && var.private_ip != "") ? var.private_ip : null
private_ip_address_allocation = var.subnet_id != "" ? (var.subnet_id == "" ? "Static" : "Dynamic") : null
}
}
......
But I got the error message:
..../frontendIPConfigurations/frontend must reference either a Subnet, Public IP Address or Public IP Prefix." Details=[]
Why is this happening, and how can I tackle this issue? I don't know which configuration is missing.
Thanks.
An internal load balancer differs from a public load balancer: it is assigned to a subnet and does not have a public IP address. As the error says, the frontend should reference either a subnet, a public IP address, or a public IP prefix, and the subnet must already exist when you reference it. You could use the subnet data source to access an existing subnet, or create your own subnet and VNet for the load balancer.
For example, the following works for me.
data "azurerm_resource_group" "rg" {
name = "mytestrg"
}
variable "sku" {
default = "basic"
}
variable "private_ip" {
default = "172.19.0.100"
}
variable "env" {
default="Static"
}
data "azurerm_subnet" "test" {
name = "default"
virtual_network_name = "vnet1"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
}
resource "azurerm_lb" "test" {
name = "mytestlb"
location = "${data.azurerm_resource_group.rg.location}"
resource_group_name = "${data.azurerm_resource_group.rg.name}"
sku = "${var.sku}"
frontend_ip_configuration {
name = "frontend"
subnet_id = "${data.azurerm_subnet.test.id}"
private_ip_address = "${var.env=="Static"? var.private_ip: null}"
private_ip_address_allocation = "${var.env=="Static"? "Static": "Dynamic"}"
}
}
