Azure ARM template - vnet example

I am new to Azure. I am trying to create an environment that should be fairly straightforward - App Service and Azure SQL with a Private Endpoint. I decided to try this ARM template:
https://azure.microsoft.com/en-us/resources/templates/web-app-regional-vnet-private-endpoint-sql-storage/
https://github.com/Azure/azure-quickstart-templates/tree/master/demos/web-app-regional-vnet-private-endpoint-sql-storage
I am not quite sure what to put for the V Nets (sic) entry. I have tried for hours and read through all the documentation I can find, which is lacking. Can someone please provide advice or an example? Thanks.
UPDATE
This is what I ended up putting in the VNets parameter:
[
  {
    "name": "hub-vnet",
    "addressPrefixes": [ "10.1.0.0/16" ],
    "subnets": [
      {
        "name": "PrivateLinkSubnet",
        "addressPrefix": "10.1.1.0/24",
        "udrName": null,
        "nsgName": null,
        "delegations": null,
        "privateEndpointNetworkPolicies": "Disabled",
        "privateLinkServiceNetworkPolicies": "Enabled"
      }
    ]
  },
  {
    "name": "spoke-vnet",
    "addressPrefixes": [ "10.2.0.0/16" ],
    "subnets": [
      {
        "name": "AppSvcSubnet",
        "addressPrefix": "10.2.1.0/24",
        "udrName": null,
        "nsgName": null,
        "privateEndpointNetworkPolicies": "Enabled",
        "privateLinkServiceNetworkPolicies": "Enabled",
        "delegations": [
          {
            "name": "appservice",
            "properties": { "serviceName": "Microsoft.Web/serverFarms" }
          }
        ]
      }
    ]
  }
]

In the same repo, check out the azuredeploy.parameters.json file, and you'll see an example of the vNets object it's looking for.
You can use the parameters file as-is and deploy it to your subscription, or feel free to customize the names, address spaces, etc. by adjusting the properties.
snippet
"parameters": {
    "vNets": {
        "value": [
            {
                "name": "hub-vnet",
                // ...
            },
            {
                "name": "spoke-vnet",
                // ...
            }
        ]
    }
}

Related

Azure Data Factory ARM Template has empty parameter for linked service connection string

I have some problems that I am hoping someone can help me with.
I created a bunch of resources - a few linked services, a few datasets and a few pipelines - in the data factory "DevDataFactory".
One of the linked services, connecting to a SQL Database, is configured this way.
Then in the JSON for that linked service the connection string is seen in this snippet:
"typeProperties": {
"connectionString": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultLink",
"type": "LinkedServiceReference"
},
"secretName": "DBPassword"
},
"alwaysEncryptedSettings": {
"alwaysEncryptedAkvAuthType": "ManagedIdentity"
}
}
All parameters are in place, default values are set, and when the pipeline is run it asks for values.
The problem is that when I then go and export the ARM template, so as to use the data factory in another environment, there is a parameter there for this linked service's connection string, and this parameter value is blank. EVERY other parameter used in the ARM template has some default value - why does this one not have one?
The parameters used inside the connection string are present and do have their own default values.
If I use the Azure portal to import that ARM template, I can go ahead and enter the connection string from the JSON snippet above:
Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}
...and everything will be imported, and all my pipelines will automatically work in the newly created data factory.
The PROBLEM IS that I need to do this from Azure DevOps pipelines, which automatically pick up the Git repo's "adf-publish" branch, and this is where I don't know what to do. When the ADO pipeline runs automatically, I can't just substitute the connection string on the fly.
I am stuck, please help!
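One workaround sketch (not from the original thread): the connection string is treated as a secure value in the generated template, so no default is exported, and the usual approach is to supply the value at deployment time through the ARM deployment task's overrideParameters. The file paths and the parameter name DevDataFactory_connectionString below are assumptions - copy the exact parameter name from the generated ARMTemplateParametersForFactory.json in the adf-publish branch:

```yaml
# Azure DevOps pipeline step (sketch). The parameter name and file paths
# are placeholders - check your generated ARM template in adf-publish.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: '$(serviceConnection)'
    resourceGroupName: '$(resourceGroup)'
    location: '$(location)'
    csmFile: 'ARMTemplateForFactory.json'
    csmParametersFile: 'ARMTemplateParametersForFactory.json'
    overrideParameters: >-
      -factoryName "TestDataFactory"
      -DevDataFactory_connectionString "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}"
```

With this, the blank parameter is filled in per environment without editing the exported template itself.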

Azure logic app - How can I share integration account across multiple resource groups

I am trying to deploy my logic app to multiple environments using a CI/CD pipeline. I am getting an error: "The client 'guid' with object id 'guid' has permission to perform action 'Microsoft.Logic/workflows/write' on scope 'Test Resource group'; however, it does not have permission to perform action 'Microsoft.Logic/integrationAccounts/join/action' on the linked scope(s) 'Development Resource group integration account' or the linked scope(s) are invalid."
Creating another integration account for the test resource group doesn't come under the free tier. Is there a way to share an integration account across multiple resource groups?
Not sure what the permissions issue is - you might need to give more information around this.
But try the below first in your pipeline. It works for us with three different resource groups and two integration accounts.
"parameters": {
  "IntegrationAccountName": {
    "type": "string",
    "minLength": 1,
    "defaultValue": "inter-account-name"
  },
  "Environment": {
    "type": "string",
    "minLength": 1,
    "defaultValue": "dev"
  }
},
"variables": {
  "resourceGroupName": "[if(equals(parameters('Environment'), 'prd'), 'rg-resources-production', 'rg-resources-staging')]",
  "LogicAppIntegrationAccount": "[concat('/subscriptions/3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4/resourceGroups/', variables('resourceGroupName'), '/providers/Microsoft.Logic/integrationAccounts/', parameters('IntegrationAccountName'))]"
},
In the above sample, we had two different integration accounts, one for testing and one for production. This is why I have set the integration account name as a parameter, as it changes between environments.
I have created a variable "resourceGroupName". This is important because the resource ID is a direct link to the integration account, which is stored in a known resource group. In this sample I have included an if statement using the value set in the "Environment" parameter. This helps select which resource group is going to be used.
I then create another variable which stores the full resource ID. Replace the subscription GUID (3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4) with your own.
Once that is created you need to change the ARM template to use the variable you just created. To set it, place it in the properties object.
"properties": {
  "state": "Enabled",
  "integrationAccount": {
    "id": "[variables('LogicAppIntegrationAccount')]"
  }
}
So for your pipeline it should just be a normal ARM template deployment, but with the two parameters above being set.
Let me know if you have more questions around this.

Retrieve regional code for azure static website in ARM template

We are developing an application which uses the following two resources:
Azure Functions for the backend
Azure Storage as a static website for the front end
This is being deployed automatically by our CI pipeline using ARM templates. However, for the application to work we need to set the CORS rules on the Azure function to allow the static website to perform the api calls.
This is now performed by the following resource configuration:
{
  "type": "Microsoft.Web/sites/config",
  "name": "[concat(variables('functionAppName'), '/web')]",
  "apiVersion": "2016-08-01",
  "location": "[parameters('location')]",
  "properties": {
    "cors": {
      "allowedOrigins": [
        "[concat('https://', variables('storageAccountName'), '.z21', '.web.core.windows.net')]"
      ]
    }
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
  ]
}
However, we are hardcoding the .z21 part since we know that the location parameter will be South Central US. This is not something we want to hardcode, since the application could be deployed to another location.
Reading upon the documentation for static website hosting it states that
The URL of your site contains a regional code. For example the URL https://contosoblobaccount.z22.web.core.windows.net/ contains regional code z22.
While that code must remain in the URL, it is only for internal use, and you won't have to use that code in any other way.
However, I couldn't find a reference listing which regional codes Azure uses. Is there a way to know them, so that we can stop hardcoding this value into our ARM template?
Another approach would be to dynamically access the storage account properties in the ARM template through a template function, however I am unsure which function could help us retrieve the regional code for the storage account.
Thanks in advance!
Ugly solution to extract the regional code with PowerShell. Note that TrimStart/TrimEnd trim character sets rather than substrings, so parsing the host name is safer:
$stgname = '0708static'
# PrimaryEndpoints.Web looks like https://0708static.z21.web.core.windows.net/
$web = (Get-AzStorageAccount -Name $stgname -ResourceGroupName 'deleteme').PrimaryEndpoints.Web
# The regional code is the second dot-separated label of the host name
$result = ([uri]$web).Host.Split('.')[1]
$result
If you want the entire URI then you can use:
[reference(variables('storageAccountId'), '2019-04-01').primaryEndpoints.web]
If you want just the region code you can use:
[split(reference(variables('storageAccountId'), '2019-04-01').primaryEndpoints.web, '.')[1]]
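Combining that with the CORS resource from the question removes the hardcoded .z21. A sketch, assuming variables('storageAccountId') holds the storage account's resourceId and the account is deployed in the same template (add it to dependsOn):

```json
"properties": {
  "cors": {
    "allowedOrigins": [
      "[concat('https://', variables('storageAccountName'), '.', split(reference(variables('storageAccountId'), '2019-04-01').primaryEndpoints.web, '.')[1], '.web.core.windows.net')]"
    ]
  }
}
```

This rebuilds the origin from the regional code instead of hardcoding it, so the template works in any region.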

Copy the Logic APP from one resource group to another resource group using Azure CLI

I have created one logic app, and now I want to copy it to another resource group so that I can use it for the testing environment.
Can someone help me with either an Azure CLI command or a direct option in the Azure portal to copy the logic app from one resource group to another?
I checked the Azure portal and can see only the "Move" option; when I use that, it just moves my logic app from resource group 1 to resource group 2. But my requirement is that it should be present in both resource groups.
Thanks in advance.
Regards,
Manikanta
From the Azure portal, you can easily copy your logic app using the Clone button.
You could also download the logic app and its connections as an ARM template using the Logic App tools for Visual Studio; this way it contains all the connections you set.
Then you can edit it - if you use Visual Studio, just replace LogicApp.json with the one you downloaded.
If your selected connectors need input from you, a PowerShell window opens in the background and prompts for any necessary passwords or secret keys. After you enter this information, deployment continues.
You could also deploy the template with the Azure CLI.
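For the CLI route, the deployment itself is one command (a sketch - the resource group and file names are placeholders for your exported template and parameters file):

```
# Deploy the exported logic app template into the target (test) resource group.
az deployment group create \
  --resource-group rg-test \
  --template-file LogicApp.json \
  --parameters @LogicApp.parameters.json
```

Running the same command against different resource groups gives you a copy in each, which the portal's "Move" option cannot do.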
This might make a little mess with the connections, but I found this approach faster for large LAs than manually recreating the same LA in a new resource group:
Open the logic app (LA01), click the clone button and save it with a different name (LA02) in the same resource group.
Open LA02 and near the resource group click change.
Choose new resource group to which you want to move it. You can also choose related resources if needed, but you probably want to make a copy of them too. Please make sure that you understand that all tools and scripts associated with moved resources will not work until you update them to use new resource IDs. This operation might take some time.
Optional. You might want to use the same name (LA01) as in the previous resource group. Sadly, I think you cannot rename items in Azure, so perform step #1 again to make a copy with the LA01 name and remove LA02 from the new resource group.
After those steps open copied LA and recreate all the connections.
I have also found another pretty neat way to update an already existing LA in other resource groups. It might look a little messy, but once you have done it a few times it is much faster than always cloning the LA. When you open an LA and click Code view, notice that each LA has a structure like the example below. You can take all the code in LA1 (resource group 1) from the top down to "outputs" and copy-paste it into the new LA2 (resource group 2), but some changes need to be made in LA2 the first time you do this:
SomeActions - this can be copy-pasted as is.
$connections - this must be left as is; it is a pointer to the LA connection definition.
OtherParameters - these are the parameters you pass to the LA. Different resource groups usually use different parameter values, so keep this in mind and change them accordingly.
SomeTrigger - usually you should leave this as it was defined in LA2.
SomeConnection - the most important part is to make sure both LAs use the same connection reference. If that is not the case, retrieve the connection reference name from the SomeActions part and update SomeConnection, but leave connectionId and connectionName as they were defined in LA2, so only the connection name matches between both LAs.
Next time you want to do an update you just take the code, and copy everything from top till outputs.
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {
      ...SomeActions
    },
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "parameters": {
      "$connections": {
        "defaultValue": {},
        "type": "Object"
      },
      "OtherParameters": {
        "defaultValue": "SomeValue",
        "type": "String"
      }
    },
    "triggers": {
      "manual": {
        "inputs": {
          "schema": {
            ...SomeTrigger
          }
        },
        "kind": "Http",
        "type": "Request"
      }
    }
  },
  "parameters": {
    "$connections": {
      "value": {
        "SomeConnection": {
          "connectionId": "SomeId",
          "connectionName": "SomeName",
          "id": "SomeId"
        }
      }
    }
  }
}

Output an IotHub endpoint with Azure Resource Manager template

I am about to script the deployment of our Azure solution. For that reason I create an Azure IoT Hub with a Resource Manager template. This works very well. But the problem is, I need the Event Hub-compatible endpoint string for further deployments.
See: https://picload.org/image/rrdopcia/untitled.png
I think the solution would be to output it in the template, but I can't get it to work.
The outputs section of my template.json currently looks like this:
"outputs": {
"clusterProperties": {
"value": "[reference(parameters('clusterName'))]",
"type": "object"
},
"iotHubHostName": {
"type": "string",
"value": "[reference(variables('iotHubResourceId')).hostName]"
},
"iotHubConnectionString": {
"type": "string",
"value": "[concat('HostName=', reference(variables('iotHubResourceId')).hostName, ';SharedAccessKeyName=', variables('iotHubKeyName'), ';SharedAccessKey=', listkeys(variables('iotHubKeyResource'), variables('iotHubVersion')).primaryKey)]"
}
}
And here are the variables I used:
"variables": {
  "iotHubVersion": "2016-02-03",
  "iotHubResourceId": "[resourceId('Microsoft.Devices/Iothubs', parameters('iothubname'))]",
  "iotHubKeyName": "iothubowner",
  "iotHubKeyResource": "[resourceId('Microsoft.Devices/Iothubs/Iothubkeys', parameters('iothubname'), variables('iotHubKeyName'))]"
},
You can read the endpoint from the provisioned IoT Hub within the ARM template and build a connection string like this:
"EventHubConnectionString": "[concat('Endpoint=',reference(resourceId('Microsoft.Devices/IoTHubs',parameters('iothub_name'))).eventHubEndpoints.events.endpoint,';SharedAccessKeyName=iothubowner;SharedAccessKey=',listKeys(resourceId('Microsoft.Devices/IotHubs',parameters('iothub_name')),variables('devices_provider_apiversion')).value[0].primaryKey)]"
The important bit to get the Event Hub-compatible endpoint was: reference(resourceId('Microsoft.Devices/IoTHubs', parameters('iothub_name'))).eventHubEndpoints.events.endpoint
That was ripped out of my working ARM template. For clarity, here are some details about the variables/parameters in the above:
variables('devices_provider_apiversion') is "2016-02-03"
parameters('iothub_name') is the name of the IoT Hub that same ARM template is provisioning elsewhere in the template
The output of "listKeys" returns an array of key objects, where in my case the first item was "iothubowner". (...although I like the approach for getting this described in the question better. :)
One helpful trick that helped me learn what is available for me to read from the resources during execution of the ARM template is to output the entire resource and then find the property I am interested in. Here is how I output all details of the IoT Hub from running the ARM template:
"outputs": {
"iotHub": {
"value": "[reference(resourceId('Microsoft.Devices/IoTHubs',parameters('iothub_name')))]",
"type": "object"
}
}
You can also use this method to output the endpoint (among other things) to be used as input to other templates.
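As a sketch of that last point (the deployment name 'iothubDeployment', the variable, and the parameter name here are hypothetical), a parent template can feed the output into a linked deployment:

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2019-10-01",
  "name": "downstreamDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": { "uri": "[variables('downstreamTemplateUri')]" },
    "parameters": {
      "eventHubConnectionString": {
        "value": "[reference('iothubDeployment').outputs.iotHubConnectionString.value]"
      }
    }
  }
}
```

The reference() here targets the nested deployment resource, so its outputs become ordinary parameter values for the next template.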
