Has anyone had success mapping an Azure file share path to a container volume path? I'm specifically looking to mount the Allure Docker container in Azure and map the container volume path to an Azure file share path.
I have used ARM templates and also YAML files, but nowhere in the Azure docs online could I find volume path mounting defined or explained.
I also saw an option where you can build your own container image, host it in Azure Container Registry, and then use a docker-compose file to map the volume paths. That is not what I'm after: I don't want to host the container in ACR, since I'm always using a third-party container.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"containerGroups_tst_tf_allure_report_api_aci_name": {
"defaultValue": "tst-tf-allure-report-api-aci",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2019-12-01",
"name": "[parameters('containerGroups_tst_tf_allure_report_api_aci_name')]",
"location": "[resourceGroup().location]",
"properties": {
"sku": "Standard",
"containers": [
{
"name": "[parameters('containerGroups_tst_tf_allure_report_api_aci_name')]",
"properties": {
"image": "frankescobar/allure-docker-service",
"ports": [
{
"protocol": "TCP",
"port": 5050
}
],
"volumeMounts": [
{
"name": "filesharevolume",
"mountPath": "/mnt/acishare/projects"
}
],
"environmentVariables": [
{
"name": "CHECK_RESULTS_EVERY_SECONDS",
"value": 1
},
{
"name": "KEEP_HISTORY",
"value": 1
},
{
"name": "KEEP_HISTORY_LATEST",
"value": 25
}
],
"resources": {
"requests": {
"memoryInGB": 1,
"cpu": 1
}
}
}
}
],
"initContainers": [],
"restartPolicy": "OnFailure",
"osType": "Linux",
"ipAddress": {
"ports": [
{
"protocol": "TCP",
"port": 5050
}
],
"type": "Public"
},
"volumes": [
{
"name": "filesharevolume",
"azureFile": {
"shareName": "acishare",
"storageAccountName": "acistoragev1",
"storageAccountKey": "zzzxxxxxxxxxddddddddddddddd"
}
}
]
}
}
]
}
I don't see any problems with your ARM template, and it works fine on my side. When you map the Azure file share into the container, you can see the files in that path from both sides. But note that the mount path needs to be a new one, or at least a path with no pre-existing files, because the Azure file share will hide any files that existed there before mapping. Here is an example of mapping an Azure file share to ACI.
Alright, after a lot of debugging with the Azure container and file share, I finally got the answer.
The "mountPath" is basically the container volume path that you want to map back to the Azure file share directory. The path does not need to exist in the file share directory; just the root name /acishare should match your file share directory.
In my case above it is /acishare/projects. After fixing this I can see the volume correctly copying files to the /acishare/projects directory, and I can restart or re-create the container and the files are retained and re-synced back to the container.
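For completeness, the same Azure Files mount can also be created straight from the Azure CLI instead of an ARM template. A minimal sketch, reusing the share, storage account, and image names from the template above (the resource group name and the account key are placeholders, and the environment variables from the template are omitted for brevity):
az container create \
    --resource-group myResourceGroup \
    --name tst-tf-allure-report-api-aci \
    --image frankescobar/allure-docker-service \
    --ports 5050 \
    --ip-address Public \
    --azure-file-volume-account-name acistoragev1 \
    --azure-file-volume-account-key "<storage-account-key>" \
    --azure-file-volume-share-name acishare \
    --azure-file-volume-mount-path /mnt/acishare/projects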
I have a container that runs in ACI and uses mounted storage account file shares. In the template I have this section:
"type": "Microsoft.ContainerInstance/containerGroups",
...
"volumeMounts": [
{
"mountPath": "/aci/sra/
"name": "acisra",
"readOnly": false
},
{
"mountPath": "/aci/fastq
"name": "acifastq",
"readOnly": false
},
{
"mountPath": "/aci/cache
"name": "acicache",
"readOnly": false
}
],
...
"volumes": [
{
"name": "acisra",
"azureFile": {
"readOnly": false,
"shareName": "acisra",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acicache",
"azureFile": {
"readOnly": false,
"shareName": "acicache",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acifastq",
"azureFile": {
"readOnly": false,
"shareName": "acifastq",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
}
],
This allows me to run bash scripts referencing paths such as:
/aci/cache/$ID
Unfortunately, I am finding that ACI has poor networking performance. I am also running an Azure Function that makes use of the same file shares on an always-on App Service plan, and its performance is excellent.
I found this article explaining how to run a container on an App Service plan. This works great; the Log Stream shows my container is working.
https://learn.microsoft.com/en-us/azure/app-service/quickstart-custom-container?tabs=dotnet&pivots=container-linux-vscode
However, my container is not able to access the file shares.
Is there a way to tell the App Service to make the file share connections available to the container, in a manner similar to the "Microsoft.ContainerInstance/containerGroups" template?
Is there a way to tell the App Service to run n instances of the container?
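For what it's worth, App Service for Linux containers can expose Azure Files shares to the container as path mappings, and the instance count is a setting on the App Service plan. A hedged CLI sketch, using hypothetical app, plan, and storage account names rather than anything from the setup above:
# Mount an Azure Files share into the container at /aci/cache (one mapping per share)
az webapp config storage-account add \
    --resource-group myResourceGroup \
    --name my-container-webapp \
    --custom-id acicache \
    --storage-type AzureFiles \
    --account-name mystorageaccount \
    --share-name acicache \
    --access-key "<storage-account-key>" \
    --mount-path /aci/cache
# Run n instances of the container by scaling the App Service plan (here 3 workers)
az appservice plan update \
    --resource-group myResourceGroup \
    --name my-app-service-plan \
    --number-of-workers 3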
This is kind of a newbie question on ARM templates.
I'm trying to add a private endpoint to an existing ADLS v2 storage account.
The problem is that I don't have the existing code and if I export the template I may miss something, like networking and firewall information.
Any advice on how to add a private endpoint to an existing storage account using an ARM template?
Thanks.
I tried this in my environment and got the below results:
Add a private endpoint to an existing storage account using an ARM template?
Yes, you can create a private endpoint for an Azure ADLS storage account using an ARM template.
Template:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"privateEndpoints_venkat345_name": {
"defaultValue": "venkat345",
"type": "String"
},
"storageAccounts_venkat326_externalid": {
"defaultValue": "/subscriptions/xxxxxx/resourceGroups/v-venkat-rg/providers/Microsoft.Storage/storageAccounts/venkat326",
"type": "String"
},
"virtualNetworks_imr_externalid": {
"defaultValue": "/subscriptions/xxxxx/resourceGroups/v-venkat-rg/providers/Microsoft.Network/virtualNetworks/venkat",
"type": "String"
},
"privateDnsZones_privatelink_blob_core_windows_net_externalid": {
"defaultValue": "/subscriptions/xxxxxxxxxxx/resourceGroups/v-venkat-rg/providers/Microsoft.Network/privateDnsZones/privatelink.blob.core.windows.net",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Network/privateEndpoints",
"apiVersion": "2022-05-01",
"name": "[parameters('privateEndpoints_venkat345_name')]",
"location": "eastus",
"tags": {
"Reason": "Repro",
"CreatedDate": "1/24/2023 4:31:05 AM",
"CreatedBy": "NA",
"OwningTeam": "NA"
},
"properties": {
"privateLinkServiceConnections": [
{
"name": "[parameters('privateEndpoints_venkat345_name')]",
"id": "[concat(resourceId('Microsoft.Network/privateEndpoints', parameters('privateEndpoints_venkat345_name')), concat('/privateLinkServiceConnections/', parameters('privateEndpoints_venkat345_name')))]",
"properties": {
"privateLinkServiceId": "[parameters('storageAccounts_venkat326_externalid')]",
"groupIds": [
"blob"
],
"privateLinkServiceConnectionState": {
"status": "Approved",
"description": "Auto-Approved",
"actionsRequired": "None"
}
}
}
],
"manualPrivateLinkServiceConnections": [],
"customNetworkInterfaceName": "[concat(parameters('privateEndpoints_venkat345_name'), '-nic')]",
"subnet": {
"id": "[concat(parameters('virtualNetworks_venkat_externalid'), '/subnets/default')]"
},
"ipConfigurations": [],
"customDnsConfigs": []
}
},
{
"type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
"apiVersion": "2022-05-01",
"name": "[concat(parameters('privateEndpoints_venkat345_name'), '/default')]",
"dependsOn": [
"[resourceId('Microsoft.Network/privateEndpoints', parameters('privateEndpoints_venkat345_name'))]"
],
"properties": {
"privateDnsZoneConfigs": [
{
"name": "privatelink-blob-core-windows-net",
"properties": {
"privateDnsZoneId": "[parameters('privateDnsZones_privatelink_blob_core_windows_net_externalid')]"
}
}
]
}
}
]
}
You can deploy the template through the portal using a custom template deployment.
Portal -> Template deployments -> Custom deployments -> Build your own deployments.
The above template deployed successfully, and the private endpoint is reflected in both the resource group and the ADLS storage account.
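If you prefer to script the deployment instead of going through the portal, the same template can also be deployed with the Azure CLI; a minimal sketch, assuming the template is saved locally as template.json (the file name is illustrative):
az deployment group create \
    --resource-group v-venkat-rg \
    --template-file template.json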
Reference:
Use private endpoints - Azure Storage | Microsoft Learn
I created a Container Instance in Azure and used a Hello World image.
However, I do not get any IP to access the webserver!
TL;DR. You might want to use the CLI (or other methods) to deploy ACI.
Be sure to set either --ip-address Public or --dns-name-label <blah> in order to create a public IP address.
Example:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
Ref: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart
Investigation details:
I was able to reproduce the same issue when deploying ACI via the Azure portal. My guess is that there is a bug in the Azure portal: it is not sending the public IP / DNS label to the Azure control plane, so the public IP address was not created.
In the portal I also provided the DNS name label and set the IP address to Public, and we can check the ARM template the portal uses.
The ARM template looks like the following, where you can see dnsNameLabel is defined as a parameter but is never referenced or applied in the resource definition:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string"
},
"containerName": {
"type": "string"
},
"imageType": {
"type": "string",
"allowedValues": [
"Public",
"Private"
]
},
"imageName": {
"type": "string"
},
"osType": {
"type": "string",
"allowedValues": [
"Linux",
"Windows"
]
},
"numberCpuCores": {
"type": "string"
},
"memory": {
"type": "string"
},
"restartPolicy": {
"type": "string",
"allowedValues": [
"OnFailure",
"Always",
"Never"
]
},
"ports": {
"type": "array"
},
"dnsNameLabel": {
"type": "string"
}
},
"resources": [
{
"location": "[parameters('location')]",
"name": "[parameters('containerName')]",
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2021-07-01",
"properties": {
"containers": [
{
"name": "[parameters('containerName')]",
"properties": {
"image": "[parameters('imageName')]",
"resources": {
"requests": {
"cpu": "[int(parameters('numberCpuCores'))]",
"memoryInGB": "[float(parameters('memory'))]"
}
},
"ports": "[parameters('ports')]"
}
}
],
"restartPolicy": "[parameters('restartPolicy')]",
"osType": "[parameters('osType')]"
},
"tags": {}
}
]
}
I have tested this in my environment.
I followed this document Quickstart - Deploy Docker container to container instance - Portal - Azure Container Instances | Microsoft Docs to create a container instance via the Azure portal.
The issue with creating the container from the portal is that the IP address and FQDN are blank after the container instance is created; there appears to be an issue with creating the container instance via the portal.
I created another container instance using the CLI by following this document Quickstart - Deploy Docker container to container instance - Azure CLI - Azure Container Instances | Microsoft Docs.
That container instance was successfully created with an IP address and FQDN.
So, as a workaround, we can create the container instance with the CLI.
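After creating the instance with the CLI, you can confirm that the public IP and FQDN were actually assigned. A quick check, assuming the resource group and container names from the quickstart:
az container show \
    --resource-group myResourceGroup \
    --name mycontainer \
    --query "{IP:ipAddress.ip, FQDN:ipAddress.fqdn, State:provisioningState}" \
    --output table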
I built out an Azure Batch Account via the UI (Portal) and exported the template after I got everything working the way I wanted it.
Now I'm trying to deploy this ARM template via Visual Studio 2019 and keep getting the following error:
The specified application package does not exist.
The ARM template looks good, and I've reconciled it with the Microsoft.Batch batchAccounts/pools template reference to verify that the template allows for the applicationPackages element.
The specific portion of the template causing my issue is as follows:
"applicationPackages": [
{
"id": "[concat(resourceId('Microsoft.Batch/batchAccounts', parameters('batchAccounts_baeast909_name')), '/applications/logparser')]",
"version": "2.2"
},
{
"id": "[concat(resourceId('Microsoft.Batch/batchAccounts', parameters('batchAccounts_baeast909_name')), '/applications/powershellscripts')]",
"version": "1.0"
}
]
I was hoping this would be as simple as placing the application zips in a directory called applications and running everything again. Alas, it wasn't, and the deployment failed with the same error.
One of the comments asked why I would be doing this. The answer is that I'm running a Custom Activity out of Azure Data Factory V2 (ADFv2). The custom activity transforms web logs via an executable called LogParser.exe. That executable is loaded as an application on the Batch account, as you see below. I also added the PowerShell scripts that tie everything together as an application.
I was hoping for a solution similar to deploying a Web App that is detailed here: Deploy Azure Web App Package using ARM
So my questions are:
Can the application zips be deployed at the same time as the ARM template?
If they cannot, when do I deploy them, and how do I automate that process?
application.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"configuration": {
"type": "object",
"metadata": {
"description": "Configuration for this resource"
}
},
"pools_1_password": {
"type": "SecureString"
},
"batchAccounts_baeast909_name": {
"defaultValue": "baeast909",
"type": "String"
},
"storageAccounts_storageaccount909_externalid": {
"defaultValue": "/subscriptions/subguid/resourceGroups/resourcegroup909/providers/Microsoft.Storage/storageAccounts/storageaccount909",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Batch/batchAccounts",
"apiVersion": "2017-09-01",
"name": "[parameters('batchAccounts_baeast909_name')]",
"location": "eastus2",
"tags": {
"displayname": "[parameters('configuration').displayName]",
"department": "[parameters('configuration').department]",
"group": "[parameters('configuration').group]",
"environment": "[parameters('configuration').environment]",
"primaryOwner": "[parameters('configuration').primaryOwner]",
"secondaryOwner": "[parameters('configuration').secondaryOwner]",
"version": "[parameters('configuration').version]",
"ms-resource-usage": "azure-cloud-shell"
},
"properties": {
"autoStorage": {
"storageAccountId": "[parameters('storageAccounts_storageaccount909_externalid')]"
},
"poolAllocationMode": "BatchService"
}
},
{
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2017-09-01",
"name": "[concat(parameters('batchAccounts_baeast909_name'), '/1')]",
"dependsOn": [
"[resourceId('Microsoft.Batch/batchAccounts', parameters('batchAccounts_baeast909_name'))]"
],
"properties": {
"vmSize": "STANDARD_A1",
"interNodeCommunication": "Disabled",
"maxTasksPerNode": 1,
"taskSchedulingPolicy": {
"nodeFillType": "Spread"
},
"deploymentConfiguration": {
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "microsoftwindowsserver",
"offer": "windowsserver",
"sku": "2016-datacenter",
"version": "latest"
},
"nodeAgentSkuId": "batch.node.windows amd64",
"dataDisks": [
{
"lun": 0,
"caching": "ReadWrite",
"diskSizeGB": 100,
"storageAccountType": "Standard_LRS"
}
]
}
},
"scaleSettings": {
"fixedScale": {
"targetDedicatedNodes": 1,
"targetLowPriorityNodes": 0,
"resizeTimeout": "PT15M"
}
},
"userAccounts": [
{
"name": "jborn",
"elevationLevel": "NonAdmin",
"password": "[parameters('pools_1_password')]"
}
],
"applicationPackages": [
{
"id": "[concat(resourceId('Microsoft.Batch/batchAccounts', parameters('batchAccounts_baeast909_name')), '/applications/logparser')]",
"version": "2.2"
},
{
"id": "[concat(resourceId('Microsoft.Batch/batchAccounts', parameters('batchAccounts_baeast909_name')), '/applications/powershellscripts')]",
"version": "1.0"
}
]
}
}
]
}
application.parameters.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"configuration": {
"value": {
"displayName": "A Batch Account",
"department": "IT",
"group": "Development",
"environment": "dev",
"primaryOwner": "user1#fred.com",
"secondaryOwner": "user2#fred.com",
"version": "1.0"
}
},
"pools_1_password": {
"reference": {
"keyVault": {
"id": "/subscriptions/subguid/resourceGroups/rgn00119/providers/Microsoft.KeyVault/vaults/keyvault909"
},
"secretName": "azureAdmin"
}
},
"batchAccounts_jc00mdpbageu2d99_name": {
"value": "jc00mdpbageu2d99"
},
"storageAccounts_jc00mdpstgeud99_externalid": {
"value": "/subscriptions/subguid/resourceGroups/rgn00119/providers/Microsoft.Storage/storageAccounts/storageAccount909"
}
}
}
Please follow the below steps to download and deploy an ARM template using Visual Studio 2019:
Fill in the details for creating an Azure Batch account and click on "Download a template for automation"
Download the zip
Deploy ARM template using Visual Studio 2019
https://learn.microsoft.com/en-us/azure/azure-resource-manager/vs-azure-tools-resource-groups-deployment-projects-create-deploy
In step 4 in the above document use a blank template instead of a WebApp
Now paste the contents from the downloaded zip
Copy contents from template.json to azuredeploy.json
Copy contents from parameters.json to azuredeploy.parameters.json
Now deploy your ARM template using https://learn.microsoft.com/en-us/azure/azure-resource-manager/vs-azure-tools-resource-groups-deployment-projects-create-deploy#azurerm-module-script
Edit: In order to create a Batch pool using an ARM template, you first have to create an application package using the Azure CLI and then reference it from the ARM template that creates the Batch pool.
# Upload and register your archive as application package
az batch application package create \
--resource-group testrg01 \
--name test01 \
--application-id app01 \
--package-file myapp-exe.zip \
--version 1.0
# Set this version of package as default version
az batch application set \
--resource-group testrg01 \
--name test01 \
--application-id app01 \
--default-version 1.0
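Before running the ARM deployment, you can confirm that the applications registered on the account; a quick check using the same account names as above (the output shape may vary by CLI version):
az batch application list \
    --resource-group testrg01 \
    --name test01 \
    --output table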
References:
https://tsmatz.wordpress.com/2017/12/12/essential-azure-batch-with-azure-cli/
https://learn.microsoft.com/bs-latn-ba/cli/azure/batch/application/package?view=azure-cli-latest
https://learn.microsoft.com/en-us/azure/batch/batch-cli-templates
Hope this helps!
I am developing an ARM template to deploy an App Service Environment v2 configured with an Internal Load Balancer (ILB ASE). Is there a way to grab the Virtual IP (VIP) address that the Internal Load Balancer gets from the VNet it is attached to as an output? When I look at the properties of the ASE via PowerShell after it is provisioned, I do not see a property for the IP address or for the load balancer.
After much research and testing, there is currently no way to do this as an output from the ARM template. Here are the ways the value can be collected:
Via Resource Explorer. This is not very helpful for doing it programmatically, but it did help me figure out the other two ways.
Using PowerShell to query the management.azure.com API. You have to publish an app with the appropriate permissions and assign the app permissions in the subscription whose resources you are querying.
Using the Azure CLI. This method turned out to be the easiest.
I needed this value to fully automate the deployment of an App Gateway sitting in front of an ILB ASE. I use Terraform for deployment automation, and I run the Terraform configs from Azure Cloud Shell. I kick off my deployments with a shell script in which I dynamically get the key for the storage account where I store state files. I then query the ILB ASE to get the IP address and set it to a variable that I pass into Terraform.
Below is a copy of the shell script I use:
#!/bin/bash
set -eo pipefail
# The block below will grab the access key for the storage account that is used
# to store state files
subscription_name="<my_subscription_name>"
tfstate_storage_resource_group="terraform-state-rg"
tfstate_storage_account="<name_of_statefile_storage_account>"
subscription_id="my_subscription_id>"
ilbase_rg_name="<name_of_resourcegroup_where_ase_is_deployed>"
ilbase_name="<name_of_ase>"
az account set --subscription "$subscription_name"
tfstate_storage_access_key=$(
az storage account keys list \
--resource-group "$tfstate_storage_resource_group" \
--account-name "$tfstate_storage_account" \
--query '[0].value' -o tsv
)
echo ""
echo "Terraform state storage account access key:"
echo $tfstate_storage_access_key
echo ""
# The block below will get the Virtual IP of the ASE Internal Load Balancer
# which will be used to create the App GW
ilbase_virtual_ip=$(
az resource show \
--ids "/subscriptions/$subscription_id/resourceGroups/$ilbase_rg_name/providers/Microsoft.Web/hostingEnvironments/$ilbase_name/capacities/virtualip" \
--query "additionalProperties.internalIpAddress"
)
echo ""
echo "ASE internal load balancer IP:"
echo $ilbase_virtual_ip
echo ""
terraform plan \
-var "tfstate_access_key=$tfstate_storage_access_key" \
-var "ilbase_virtual_ip=$ilbase_virtual_ip"
You can use an output like this:
"outputs": {
"privateIp": {
"type": "string",
"value": "[reference(parameters('lbname')).frontendIPConfigurations[0].properties.privateIPAddress]"
}
}
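Once the deployment finishes, the output value can also be read back from the CLI; a sketch, assuming the template was deployed into resource group myResourceGroup under the deployment name lb-deploy (both names are placeholders):
az deployment group show \
    --resource-group myResourceGroup \
    --name lb-deploy \
    --query properties.outputs.privateIp.value \
    --output tsv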
Here is my template; it creates one VNet and one internal load balancer:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vnetName": {
"type": "string",
"defaultValue": "VNet1",
"metadata": {
"description": "VNet name"
}
},
"vnetAddressPrefix": {
"type": "string",
"defaultValue": "10.0.0.0/16",
"metadata": {
"description": "Address prefix"
}
},
"subnet1Prefix": {
"type": "string",
"defaultValue": "10.0.0.0/24",
"metadata": {
"description": "Subnet 1 Prefix"
}
},
"subnet1Name": {
"type": "string",
"defaultValue": "Subnet1",
"metadata": {
"description": "Subnet 1 Name"
}
},
"subnet2Prefix": {
"type": "string",
"defaultValue": "10.0.1.0/24",
"metadata": {
"description": "Subnet 2 Prefix"
}
},
"subnet2Name": {
"type": "string",
"defaultValue": "Subnet2",
"metadata": {
"description": "Subnet 2 Name"
}
},
"lbname": {
"defaultValue": "jasonlbb",
"type": "String"
}
},
"variables": {
"virtualnetworkname" : "vnet1",
"apiVersion": "2015-06-15",
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualnetworkname'))]",
"subnetRef": "[concat(variables('vnetID'),'/subnets/',parameters('subnet1Name'))]"
},
"resources": [
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Network/virtualNetworks",
"name": "[parameters('vnetName')]",
"location": "[resourceGroup().location]",
"properties": {
"addressSpace": {
"addressPrefixes": [
"[parameters('vnetAddressPrefix')]"
]
},
"subnets": [
{
"name": "[parameters('subnet1Name')]",
"properties": {
"addressPrefix": "[parameters('subnet1Prefix')]"
}
},
{
"name": "[parameters('subnet2Name')]",
"properties": {
"addressPrefix": "[parameters('subnet2Prefix')]"
}
}
]
}
},
{
"apiVersion": "2015-05-01-preview",
"type": "Microsoft.Network/loadBalancers",
"name": "[parameters('lbname')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[variables('vnetID')]"
],
"properties": {
"frontendIPConfigurations": [
{
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic"
},
"name": "LoadBalancerFrontend"
}
],
"backendAddressPools": [
{
"name": "BackendPool1"
}
],
"loadBalancingRules": [
{
"properties": {
"frontendIPConfiguration": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('lbname')), '/frontendIpConfigurations/LoadBalancerFrontend')]"
},
"backendAddressPool": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('lbname')), '/backendAddressPools/BackendPool1')]"
},
"probe": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('lbname')), '/probes/lbprobe')]"
},
"protocol": "Tcp",
"frontendPort": 80,
"backendPort": 80,
"idleTimeoutInMinutes": 15
},
"Name": "lbrule"
}
],
"probes": [
{
"properties": {
"protocol": "Tcp",
"port": 80,
"intervalInSeconds": 15,
"numberOfProbes": 2
},
"name": "lbprobe"
}
]
}
}
],
"outputs": {
"privateIp": {
"type": "string",
"value": "[reference(parameters('lbname')).frontendIPConfigurations[0].properties.privateIPAddress]"
}
}
}
Here is a screenshot of the result:
Hope this helps.
If you're using Terraform, here's how I got it working. I had to use the external data source in Terraform, coupled with the Azure CLI and jq, to get around bugs in Azure and the Terraform external data provider.
# As of writing, the ASE ARM deployment doesn't return the IP address of the ILB
# ASE. This workaround queries Azure's API to get the values we need for use
# elsewhere in the script.
# See this https://stackoverflow.com/a/49436100
data "external" "app_service_environment_ilb_ase_ip_address" {
  # This calls the Azure CLI, then passes the value to jq to return the JSON as a single
  # string so that the external provider can parse it properly. Otherwise you get an
  # error. See this bug https://github.com/terraform-providers/terraform-provider-external/issues/23
  program = ["bash", "-c", "az resource show --ids ${local.app_service_environment_id}/capacities/virtualip --query '{internalIpAddress: internalIpAddress}' | jq -c"]

  # Explicit dependency on the ASE ARM deployment because this command will fail
  # if that resource isn't built yet.
  depends_on = [azurerm_template_deployment.ase]
}