Nested Copy on Azure ARM template

What I need to do is:
Create a routing rule for each of my (HTTP) routes, each assigned just 2 frontends
Create one more routing rule, but with all available frontends assigned to it.
Currently I am using copy like this:
"copy": [
  {
    "name": "routingRules",
    "count": mylengthvariable,
    "input": {
      "name": mynamearraywithsomeconcats,
      "properties": {
        "frontendEndpoints": the frontendEndpoint for this rule,
        etc.. etc..
      }
    }
  }
]
This is all well and good, but I need to add yet another routing rule with more frontendEndpoints, which would look like this (this is placed outside of the copy above):
"name": "routingRules",
"input": {
"name": "extrarule",
"properties": {
"copy":[
{
"name" : "frontEndpoints",
"count": frontendpointcount,  
"input" : {
"id": a list of frontendpoints
}         
}
],
etc.. etc..
}
},
When I try this, I get an error, because (I think) I am trying to add one more routing rule outside of the copy loop.
I am seeking help on how to implement such a scenario.
Thanks in advance.

It is not possible to perform nested copies in ARM templates (unfortunately). There are some workarounds you can perform, such as expanding the sub-item in a variable, then referencing the variable.
Details and examples can be found here: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-resources
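For the scenario above, that workaround would look roughly like the sketch below: build the complete list of frontend endpoints in a variable copy loop, then reference the variable from the extra routing rule. This is only a sketch; the parameter names (frontDoorName, frontendNames) and the resourceId format are assumptions for illustration, not taken from your template.
"variables": {
  "copy": [
    {
      "name": "allFrontendEndpoints",
      "count": "[length(parameters('frontendNames'))]",
      "input": {
        // assumes Front Door (classic) child resources; adjust the resourceId to your resource type
        "id": "[resourceId('Microsoft.Network/frontDoors/frontendEndpoints', parameters('frontDoorName'), parameters('frontendNames')[copyIndex('allFrontendEndpoints')])]"
      }
    }
  ]
}
...
{
  "name": "extrarule",
  "properties": {
    "frontendEndpoints": "[variables('allFrontendEndpoints')]",
    etc.. etc..
  }
}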

Related

Multiple VMs ARM template with different VM names, sizes and the same custom image

I have seen many templates that create multiple VMs in a loop using the copy function, e.g. vm1, vm2, etc. But this is not how we do it in practice, as each VM has a different function and that naming convention doesn't help.
I am trying to create a template with different VM names and sizes and a single custom image.
Can anyone please help?
I suggest using an array of name/value pairs, either in the parameters or variables section of your template, for example,
"parameters": {
"vms": {
"type": "array",
"defaultValue": [
{
"name": "vm1",
"size": "Standard_DS1_v2"
},
{
"name": "vm2",
"size": "Standard_A1_v2"
}
]
}
}
Then you can dereference the array with
"copy": {
"name": "vmCopy",
"count": "[length(parameters('vms'))]"
}
and
parameters('vms')[copyIndex()].name
parameters('vms')[copyIndex()].size
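Putting those pieces together, one VM resource with the copy loop might be sketched like this (the apiVersion and the elided profile sections are assumptions; the storageProfile would point at your single custom image):
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2022-08-01",
  "name": "[parameters('vms')[copyIndex()].name]",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "vmCopy",
    "count": "[length(parameters('vms'))]"
  },
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('vms')[copyIndex()].size]"
    },
    ... osProfile, networkProfile and a storageProfile referencing the shared custom image ...
  }
}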
By using parameters, we can differentiate the VM names, sizes etc:
"parameters": {
"org": {
"type": "array",
"defaultValue": [
"contoso",
"fabrikam",
"coho"
]
}
},
The copy loop can then pull the correct values from the parameter array and set the count automatically with length().
Refer to the MS Docs to learn more.
Also have a look at this answer; thanks to SamaraSoucy for the explanation.
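As a sketch of how such a parameter array drives a loop, the resource below loosely follows the storage-account example in those docs (it is illustrative only and not specific to the VM question):
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2022-09-01",
  "name": "[concat(parameters('org')[copyIndex()], uniqueString(resourceGroup().id))]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2",
  "copy": {
    "name": "storagecopy",
    "count": "[length(parameters('org'))]"
  },
  "properties": {}
}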

Azure CORS Variable Allowed Origins

Whenever a new release pipeline is run in Azure DevOps, the URL changes. Currently my ARM template has a hard-coded URL, which can be annoying to keep adding in manually.
"cors": {
"allowedOrigins": [
"[concat('https://',parameters('storage_account_name'),'.z10.web.core.windows.net')]"
}
The only thing that changes is the 10 part in z10, so essentially I want it to be something like
[concat('https://',parameters('storage_account_name'),'.z', '*', '.web.core.windows.net')] I don't know if something like that is valid, but essentially the idea is that the CORS policy will accept the URL regardless of the z number.
Basically speaking, this is not possible because of the CORS standard (see docs), which allows only exact origins, a wildcard, or null.
For instance, the ARM schema for Azure Storage also follows this pattern, allowing you to put in a list of exact origins or a wildcard (see ARM docs).
However, if you know your website name, in your ARM you can receive the full host and use it in your CORS:
"[reference(resourceId('Microsoft.Web/sites', parameters('SiteName')), '2018-02-01').defaultHostName]"
The same works for a static website (which I guess is your case) if you know the storage account name:
"[reference(concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2019-06-01', 'Full').properties.primaryEndpoints.web]"
Advanced reference output manipulation
Answering the comment: if you would like to replace some characters in the output of the reference function, the easiest way is to use the built-in replace function (see docs).
In case you need a more advanced scenario, I am pasting my solution, which introduces a custom function that removes https:// and the / from the end, so https://contoso.com/ is transformed to contoso.com:
"functions": [
{
"namespace": "lmc",
"members": {
"replaceUri": {
"parameters": [
{
"name": "uriString",
"type": "string"
}
],
"output": {
"type": "string",
"value": "[replace(replace(parameters('uriString'), 'https://',''), '/','')]"
}
}
}
}
],
# ...(some code)...
"resources": [
# ... (some resource)...:
"properties": {
"hostName": "[lmc.replaceUri(reference(variables('storageNameCdn')).primaryEndpoints.blob)]"
}
]
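Applied back to the static-website CORS case from the question, the same function can rebuild a clean origin from primaryEndpoints.web, which includes the scheme and a trailing slash (a sketch reusing the question's storage_account_name parameter):
"cors": {
  "allowedOrigins": [
    "[concat('https://', lmc.replaceUri(reference(resourceId('Microsoft.Storage/storageAccounts', parameters('storage_account_name')), '2019-06-01').primaryEndpoints.web))]"
  ]
}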

Looping through complex JSON variables in ARM templates

In my ARM template I have a variable called "subnets" which can be of 3 types.
If it is of typeA then I want 4 subnets of the given names and addresses; if it's typeB then 2 subnets, and so on.
"variables": {
"subnets" : {
"typeA" : {
"network" : "3.0/24",
"directory" : "5.0/24",
"documents" : "8.0/24",
"security" : "10.0/24",
},
"typeB" : {
"directory" : "10.0/24",
"database" : "11.0/24",
},
"dmz" : {
"directory" : "12.0/24",
"database" : "15.0/24", }
}
}
In the ARM template I have a parameter which tells me what type to use. So I have a segment like the below which uses a condition to match on the subnetType being typeA and creates a virtual network accordingly.
{
  "type": "Microsoft.Network/virtualNetworks",
  "condition": "[contains(parameters('subnetType'), 'typeA')]",
  "apiVersion": "2018-10-01",
  ...
  "copy": [
    {
      "name": "subnets",
      "count": "[length(array(variables('subnets').typeA))]",
      "input": {
        "name": "...",
        "properties": {
          "addressPrefix": "..."
        }
      }
    }
  ]
}
As you can see above, I have a copy block within this virtualNetworks resource, and I want to create the various subnets for the typeA network. I figure I could convert subnets.typeA to an array and loop over its length (that's the idea; I don't know if it actually works), but I am not clear how to extract the subnet name and addressPrefix from my variable above.
So there are 2 issues here:
no way to loop over object keys in ARM templates
use of different resources in the template to create subnets
There is no way to work around the first limitation that I know of, whereas the second one exists mostly because you are trying to work around the first. I'd go for a completely different approach:
"networks": [
{
"name": "typeA",
"subnets": [
{
"name": "network",
"addressSpace": "3.0/24"
},
{
"name": "directory",
"addressSpace": "5.0/24"
},
{
"name": "documents",
"addressSpace": "8.0/24"
},
{
"name": "security",
"addressSpace": "10.0/24"
}
]
},
{
// second virtual network
},
{
// x virtual network
}
]
The main downside here: you'd have to use a nested deployment, because you cannot actually iterate an array inside an array, so you'd have to feed each object in the array into a deployment that creates one virtual network containing its various subnets.
You can consult this link for an example of this exact approach, or the official Azure Building Blocks way of doing this (which is quite similar in approach, but the implementation is different).
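A minimal sketch of that outer loop, assuming the networks array is passed as a parameter and the per-network template lives at a linked-template URI (vnetTemplateUri is hypothetical):
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "[concat('vnet-', parameters('networks')[copyIndex()].name)]",
  "copy": {
    "name": "vnetCopy",
    "count": "[length(parameters('networks'))]"
  },
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('vnetTemplateUri')]"
    },
    "parameters": {
      "network": {
        "value": "[parameters('networks')[copyIndex()]]"
      }
    }
  }
}
The inner template then takes the single network object as a parameter and runs an ordinary copy over its subnets array to build one virtual network.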
You could get away with different resources instead of iteration, but that means you are less flexible, and each time you change the input something breaks or just doesn't work the way you think it would (your way of doing this would fall apart if dmz doesn't exist in that variable, you'd get a compilation error; similarly, if you add another key to the object, say applicationgateway, it will work, but that virtual network won't get created).

Azure API Management: Discriminate operations by both path and query parameters

I have a backend API (that implements ApiController) which I'd like to put behind an APIM API. ApiController allows us to discriminate between two different GET operations based on the query parameters that are passed in. When I attempt to define these endpoints in APIM, I get the following error:
The message suggests an endpoint is defined solely by the path and operation. But that seems to contradict documentation I found here which suggests there's a way to differentiate between operations based on query parameters:
Required parameters across both path and query must have unique names.
(In OpenAPI a parameter name only needs to be unique within a
location, for example path, query, header. However, in API Management
we allow operations to be discriminated by both path and query
parameters (which OpenAPI doesn't support). That's why we require
parameter names to be unique within the entire URL template.)
I have an ApiController that defines two different Get operations, differing only by the query parameters. How do I represent that in my APIM API?
The problem comes from multiple operation objects with the same operationId, which is invalid Swagger. In my case the title in the Swagger file did not match the name of the selected API; changing the title attribute of the doc tag to match the destination API made it work.
Here is a similar SO thread you could refer to.
I got my answer from Azure support, sharing the info here:
APIM endpoints are defined by the path, method, and the name you assign to the operation. To differentiate between two GET endpoints to the same controller, differing only by query parameters, you need to hardcode required query parameters into the path. See the following two images:
In the latter image, the hardcoded query parameter is classified by the UI as a template parameter, but it still behaves like a regular query parameter. Query arguments defined in this way:
Are required
Can appear anywhere in a request's list of query arguments
Are not case-sensitive
Are listed as a "Request Parameter" alongside all other path parameters and query arguments in the developer portal
Edit:
There's a typo in the screenshots: the URLs are case-sensitive, and the casing of "blah" was different in each case. Here's what the OpenAPI specification looks like when the casing is consistent. The overloaded path (with the query parameter hardcoded into the path template) appears in a section called x-ms-paths:
{
  "swagger": "2.0",
  "info": {
    "title": "Echo API",
    "version": "1.0"
  },
  "host": "<hostUrl>",
  "basePath": "/echo",
  "schemes": ["https"],
  "securityDefinitions": {
    "apiKeyHeader": {
      "type": "apiKey",
      "name": "Ocp-Apim-Subscription-Key",
      "in": "header"
    },
    "apiKeyQuery": {
      "type": "apiKey",
      "name": "subscription-key",
      "in": "query"
    }
  },
  "security": [{
    "apiKeyHeader": []
  }, {
    "apiKeyQuery": []
  }],
  "paths": {
    "/Blah": {
      "get": {
        "operationId": "blah",
        "summary": "Blah",
        "responses": {}
      }
    }
  },
  "tags": [],
  "x-ms-paths": {
    "/Blah?alpha={alpha}": {
      "get": {
        "operationId": "blah2",
        "summary": "Blah2",
        "parameters": [{
          "name": "alpha",
          "in": "query",
          "required": true,
          "type": "string"
        }],
        "responses": {}
      }
    }
  }
}

How to get the Azure Data Factory parameters into the ARM template parameters file (ARMTemplateParametersForFactory.json) after publishing

I am trying to create my Azure DevOps release pipeline for Azure Data Factory.
I have followed the rather cryptic guide from Microsoft (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment ) regarding adding additional parameters to the ARM template that gets generated when you do a publish (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#use-custom-parameters-with-the-resource-manager-template )
I created an arm-template-parameters-definition.json file in the root of the master branch. When I do a publish, the ARMTemplateParametersForFactory.json in the adf_publish branch remains completely unchanged. I have tried many configurations.
I have defined some Pipeline Parameters in Data Factory and want them to be configurable in my deployment pipeline. Seems like an obvious requirement to me.
Have I missed something fundamental? Help please!
The JSON is as follows:
{
  "Microsoft.DataFactory/factories/pipelines": {
    "*": {
      "properties": {
        "parameters": {
          "*": "="
        }
      }
    }
  },
  "Microsoft.DataFactory/factories/integrationRuntimes": {
    "*": "="
  },
  "Microsoft.DataFactory/factories/triggers": {},
  "Microsoft.DataFactory/factories/linkedServices": {},
  "Microsoft.DataFactory/factories/datasets": {}
}
I've been struggling with this for a few days and did not find a lot of info, so here is what I've found out. You have to put the arm-template-parameters-definition.json in the configured root folder of your collaboration branch:
So in my example, it has to look like this:
If you work in a separate branch, you can test your configuration by downloading the ARM templates from the Data Factory. When you make a change in the parameter definition, you have to reload your browser screen (F5) to refresh the configuration.
If you really want to parameterize all of the parameters in all of the pipelines, the following should work:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"*":{
"defaultValue":"="
}
}
}
}
I prefer specifying the parameters that I want to parameterize:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"LogicApp_RemoveFileFromADLSURL":{
"defaultValue":"=:-LogicApp_RemoveFileFromADLSURL:"
},
"LogicApp_RemoveBlob":{
"defaultValue":"=:-LogicApp_RemoveBlob:"
}
}
}
}
Just to clarify on top of Simon's great answer: if you have a non-standard git hierarchy (i.e. you move the root to a sub-folder, like I have done below with "Source"), it can be confusing when the doc refers to the "repo root". Hopefully this diagram helps.
You've got the right idea, but the arm-template-parameters-definition.json file needs to follow the hierarchy of the element you want to parameterize.
Here is the pipeline activity I want to parameterize. The "url" should change based on the environment it's deployed in.
{
  "name": "[concat(parameters('factoryName'), '/ExecuteSPForNetPriceExpiringContractsReport')]",
  "type": "Microsoft.DataFactory/factories/pipelines",
  "apiVersion": "2018-06-01",
  "properties": {
    "description": "",
    "activities": [
      {
        "name": "NetPriceExpiringContractsReport",
        "description": "Passing values to the Logic App to generate the CSV file.",
        "type": "WebActivity",
        "typeProperties": {
          "url": "[parameters('ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties')]",
          "method": "POST",
          "headers": {
            "Content-Type": "application/json"
          },
          "body": {
            "resultSet": "#activity('NetPriceExpiringContractsReportLookup').output"
          }
        }
      }
    ]
  }
}
Here is the arm-template-parameters-definition.json file that turns that URL into a parameter.
{
  "Microsoft.DataFactory/factories/pipelines": {
    "properties": {
      "activities": [{
        "typeProperties": {
          "url": "-::string"
        }
      }]
    }
  },
  "Microsoft.DataFactory/factories/integrationRuntimes": {},
  "Microsoft.DataFactory/factories/triggers": {},
  "Microsoft.DataFactory/factories/linkedServices": {
    "*": "="
  },
  "Microsoft.DataFactory/factories/datasets": {
    "*": "="
  }
}
So basically in the pipelines of the ARM template, it looks for properties -> activities -> typeProperties -> url in the JSON and parameterizes it.
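For reference, after a publish the generated ARMTemplateParametersForFactory.json then carries an entry for that URL, which the release pipeline can override per environment. A sketch of what that file might contain (the factory name and URL values here are hypothetical):
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "factoryName": {
      "value": "my-dev-datafactory"
    },
    "ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties": {
      "value": "https://dev-logicapp.example.com/workflows/trigger"
    }
  }
}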
Here are the necessary steps to clear up confusion:
Add the arm-template-parameters-definition.json to your master branch.
Close and re-open your Dev ADF portal
Do a new Publish
Your ARMTemplateParametersForFactory.json will then be updated.
I have experienced similar problems with the ARMTemplateParametersForFactory.json file not being updated after I change arm-template-parameters-definition.json and publish.
I figured out that I can force-update the publish branch by doing the following:
Update the custom parameter definition file as you wish.
Delete ARMTemplateParametersForFactory.json from the Publish branch.
Refresh (F5) the Data Factory portal.
Publish.
The easiest way to validate your custom parameter .json syntax seems to be by exporting the ARM template, just as Simon mentioned.
