I'm developing an Azure Blueprint with three artifacts:
A log analytics workspace
Two policy assignments that reference this log analytics workspace
To reference the Log Analytics workspace in the policy assignments, I exported the resource ID of the workspace as an artifact output:
"outputs": {
"id": {
"type": "string",
"value": "[resourceId('Microsoft.OperationalInsights/workspaces', variables('workspaces_manage_log_analytics_name'))]"
}
}
In the policy assignments, I tried to refer to the Log Analytics workspace ID with [artifacts('LogAnalyticsWorkspace').outputs.id], but ended up with an error saying the reference is invalid. I checked the documentation for the artifacts function, but had no luck solving this problem.
In the portal, users can assign a "display name" to an artifact, but not a "name". If you want to assign a "name" to an artifact, you need to use PowerShell to export the blueprint, rename the artifact's JSON file, and import the modified blueprint.
For this question, "LogAnalyticsWorkspace" is not a "name", only a "display name". You need to assign the "name" yourself.
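For example, if you export the blueprint and save the workspace template artifact as LogAnalyticsWorkspace.json (on import, the file name becomes the artifact name), a policy assignment artifact can then reference the output roughly like this. This is only a minimal sketch: the policy definition ID, display name, and the parameter name "logAnalytics" are placeholders, not values from the question.

{
    "kind": "policyAssignment",
    "properties": {
        "displayName": "Configure diagnostic settings",
        "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<policy-definition-guid>",
        "parameters": {
            "logAnalytics": {
                // Resolves because the artifact *name* (not display name) is LogAnalyticsWorkspace
                "value": "[artifacts('LogAnalyticsWorkspace').outputs.id]"
            }
        }
    }
}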
I'm having issues using a parameter within a storage event trigger for the typeProperty "blobPathBeginsWith". By default, when using a storage event trigger, the typeProperty "scope" appears in the ARMTemplateParametersForFactory.json and can be set correctly in a CI/CD process for different environments.
However, as I use the standard integration "Export to data lake" from Power Apps to Data Lake, the container name in the Data Lake is different (and cannot be changed) depending on the environment. E.g.
Environment | ContainerName
----------- | ---------------------------
dev         | dataverse-researchdwhd-xxx1
test        | dataverse-researchdwhd-xxx2
Now, when I create a storage event trigger and manually fill out all required information, including subscription, storage account name, container name, blob path begins with, and blob path ends with, the following typeProperties are created automatically:
"typeProperties": {
"blobPathBeginsWith": "/dataverse-researchdwhd-xxx1/blobs/apss_project/Snapshot",
"blobPathEndsWith": ".csv",
"ignoreEmptyBlobs": true,
"scope": "/subscriptions/6fxxxb5a/resourceGroups/rdwh-dev/providers/Microsoft.Storage/storageAccounts/datalakerdwhdev",
"events": [
"Microsoft.Storage.BlobCreated"
]
}
Once the trigger is published, the following parameter is available in the ARMTemplateParametersForFactory.json and can therefore be set in the release pipeline.
"trigger_snapshot_project_properties_typeProperties_scope": {
"value": "/subscriptions/6fxxxb5a/resourceGroups/rdwh-dev/providers/Microsoft.Storage/storageAccounts/datalakerdwhdev"
}
In my use case, not only the typeProperty "scope" is environment dependent but also the typeProperty "blobPathBeginsWith", since the container auto-created by the "Export to data lake" integration has a unique name in each environment. Therefore, I must somehow be able to parameterize the typeProperty "blobPathBeginsWith" as well, so it can be set in a release pipeline depending on the environment it is deployed to.
What I tried so far:
Created a global parameter called "container_name" and tried to manually update the trigger JSON to use this global parameter.
"blobPathBeginsWith": "parameters('container_name')",
However, regardless of whether the parameter contains only the container name (/dataverse-researchdwhd-xxx1/) or the whole "begins with" path (/dataverse-researchdwhd-xxx1/blobs/apss_project/Snapshot/), once I saved the JSON and opened the trigger in the UI, the message "The container name is not written in an accepted format" appeared below the container name dropdown.
The format should be correct based on the Microsoft doc "Examples of storage event triggers" (https://learn.microsoft.com/en-us/azure/data-factory/how-to-create-event-trigger), but it seems global parameters cannot be referenced within a trigger.
Can any experts out there lead me to the correct way to parameterize typeProperties within the trigger JSON besides "scope"?
Thanks in advance!
The use case is described in the Microsoft documentation, I just didn't look in the correct place: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters
On the Azure Data Factory where Git is enabled, you can navigate to Manage > ARM template > Edit parameter configuration.
This opens arm-template-parameters-definition.json, where you can add properties that are not parameterized by default. For my use case, I added "blobPathBeginsWith" under "typeProperties" for triggers:
"Microsoft.DataFactory/factories/triggers": {
"properties": {
"pipelines": [
{
"parameters": {
"*": "="
}
},
"pipelineReference.referenceName"
],
"pipeline": {
"parameters": {
"*": "="
}
},
"typeProperties": {
"scope": "=",
"blobPathBeginsWith": "="
}
}
}
After the changes were published, this automatically updated the file "ARMTemplateParametersForFactory.json" in the adf_publish branch and added, for every trigger, a new parameter following the pattern below, which can then be used in the release pipeline.
"trigger_name_properties_typeProperties_blobPathBeginsWith": {
"value": "/container/blobs/folder/Snapshot"
}
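For illustration, the test stage of the release pipeline could then override both values with a parameters file along these lines. This is only a sketch: the test resource group and storage account names are hypothetical, while the container name comes from the table above.

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "trigger_snapshot_project_properties_typeProperties_scope": {
            "value": "/subscriptions/6fxxxb5a/resourceGroups/rdwh-test/providers/Microsoft.Storage/storageAccounts/datalakerdwhtest"
        },
        "trigger_snapshot_project_properties_typeProperties_blobPathBeginsWith": {
            "value": "/dataverse-researchdwhd-xxx2/blobs/apss_project/Snapshot"
        }
    }
}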
How to set legal hold on Azure storage account container in ARM template?
When setting an immutable blob storage policy, the Azure portal allows you to choose between legal hold and time-based retention. According to the docs, ARM templates support immutable blob storage. However, only requests with immutabilityPeriodSinceCreationInDays are accepted. When trying without setting it, I am getting:
Missing at least one of the following properties 'immutabilityPeriodSinceCreationInDays,allowProtectedAppendWrites'
Or:
immutabilityPeriodSinceCreationInDays must be set before setting allowProtectedAppendWrites
Weirdest of all: without the properties block in immutabilityPolicies (as below), the request fails with an InternalServerError:
{
    "status": "Failed",
    "error": {
        "code": "UnexpectedException",
        "message": "The server was unable to complete your request."
    }
}
{
    "name": "testsa/default/testcontainer/default",
    "type": "Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies",
    "apiVersion": "2019-06-01"
    // ,
    // "properties": {
    //     // "immutabilityPeriodSinceCreationInDays" : 10,
    //     // "allowProtectedAppendWrites": false
    // }
}
According to my research, the resource type Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies can only be used to create time-based retention policies. Meanwhile, when creating a time-based retention policy, the parameter immutabilityPeriodSinceCreationInDays is required. For more details, please refer to here and here.
Besides, at the moment, ARM templates do not provide any resource type to set a legal hold policy. For more details, please refer to here and here. So I suggest you use deployment scripts in your template to implement it.
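For example, a deploymentScripts resource could call the Azure CLI to set the legal hold after the container is created. This is a minimal sketch, assuming a pre-existing user-assigned identity (here called script-identity, a hypothetical name) with sufficient RBAC on the storage account (e.g. Storage Account Contributor); the account and container names come from the question, and the tag name hold1 is made up.

{
    "type": "Microsoft.Resources/deploymentScripts",
    "apiVersion": "2020-10-01",
    "name": "setLegalHold",
    "location": "[resourceGroup().location]",
    "kind": "AzureCLI",
    "identity": {
        "type": "UserAssigned",
        "userAssignedIdentities": {
            "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'script-identity')]": {}
        }
    },
    "properties": {
        "azCliVersion": "2.30.0",
        // Sets a legal hold tag on the container; the script runs under the identity above
        "scriptContent": "az storage container legal-hold set --account-name testsa --container-name testcontainer --tags hold1",
        "retentionInterval": "PT1H",
        "cleanupPreference": "OnSuccess"
    }
}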
I am trying to deploy my logic app to multiple environments using a CI/CD pipeline. I am getting an error: "The client 'guid' with object id 'guid' has permission to perform action 'Microsoft.Logic/workflows/write' on scope 'Test Resource group'; however, it does not have permission to perform action 'Microsoft.Logic/integrationAccounts/join/action' on the linked scope(s) 'Development Resource group integration account' or the linked scope(s) are invalid."
Creating another integration account for the test resource group doesn't come under the free tier. Is there a way to share an integration account across multiple resource groups?
I'm not sure what the permissions issue is, but you might need to give more information around this.
But try the below first in your pipeline; it works for us with three different resource groups and two integration accounts.
"parameters": {
"IntegrationAccountName": {
"type": "string",
"minLength": 1,
"defaultValue": "inter-account-name"
},
"Environment": {
"type": "string",
"minLength": 1,
"defaultValue": "dev"
}
},
"variables": {
"resourceGroupName": "[if(equals(parameters('Environment'), 'prd'),'rg-resources-production','rg-resources-staging')]",
"LogicAppIntegrationAccount": "[concat('/subscriptions/3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4/resourceGroups/',variables('resourceGroupName'),'/providers/Microsoft.Logic/integrationAccounts/',parameters('IntegrationAccount'))]",
},
In the above sample, we had two different integration accounts, one for testing and one for production. This is why I have set the integration account name as a parameter: it changes between environments.
I have created a variable "resourceGroupName". This is important because the URL sets up a direct link to the integration account, which is stored in a known resource group. In this sample I have included an if statement using the value set in the "Environment" parameter; this selects which resource group is going to be used.
I then create another variable which stores the new URL. Replace the subscription GUID with your own: 3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4.
Once that is created, you need to change the ARM template to use the variable you just created. To set it, place it in the properties object.
"properties": {
"state": "Enabled",
"integrationAccount": {
"id": "[variables('LogicAppIntegrationAccount')]"
},
So for your pipeline it should just be a normal ARM template, but with the two parameters above being set.
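For example, the production stage could pass a parameters file along these lines (a sketch; "inter-account-prod" is a hypothetical account name, not from the question):

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "IntegrationAccountName": {
            "value": "inter-account-prod"
        },
        "Environment": {
            "value": "prd"
        }
    }
}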
Let me know if you have more questions around this.
We are using an ARM JSON template which has this:
"outputs": {
"gatewayurl": {
"type": "string",
"value": "[reference('Microsoft.ApiManagement/service/uat1api'), '2018-01-01', 'Full').properties.gatewayUrl]"
}
What exactly is Microsoft.ApiManagement/service/uat1api?
How can I go into Microsoft.ApiManagement/service/uat1api and view the properties?
I can see that the value ends up being https://uat1api.azure-api.net/, but I'd like to go in and see where that value is coming from.
Microsoft.ApiManagement/service is the resource type, while uat1api is the name of the resource (in your case, your API Management service instance).
Properties are specific to each resource type and are defined in the template reference (link below for API Management Service).
Another great resource is the Azure Resource Explorer. Not all properties are documented in the template reference, while the resource explorer seems to provide more accurate visibility into the properties available on each resource.
Microsoft.ApiManagement service template reference
Azure Resource Explorer
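One quick way to see where the value comes from, without leaving the template, is to add a temporary output that returns the whole reference object; the deployment's outputs will then show every property the 'Full' reference exposes, gatewayUrl among them. A sketch based on the outputs block from the question; the output name "apimFull" is made up:

"outputs": {
    "gatewayurl": {
        "type": "string",
        "value": "[reference('Microsoft.ApiManagement/service/uat1api', '2018-01-01', 'Full').properties.gatewayUrl]"
    },
    "apimFull": {
        "type": "object",
        // Temporary: dumps the entire resource object so its properties can be inspected
        "value": "[reference('Microsoft.ApiManagement/service/uat1api', '2018-01-01', 'Full')]"
    }
}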
On this page:
https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-usql-activity
there is a template for using Azure Data Lake Analytics in Azure Data Factory with a service principal (instead of authorizing manually for each use).
The template looks like this:
{
    "name": "AzureDataLakeAnalyticsLinkedService",
    "properties": {
        "type": "AzureDataLakeAnalytics",
        "typeProperties": {
            "accountName": "adftestaccount",
            "dataLakeAnalyticsUri": "azuredatalakeanalytics.net",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": "<service principal key>",
            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
            "subscriptionId": "<optional, subscription id of ADLA>",
            "resourceGroupName": "<optional, resource group name of ADLA>"
        }
    }
}
This template does not work in Azure Data Factory: validation insists that for the type "AzureDataLakeAnalytics" it is not possible to have "servicePrincipalId", and it still requires "authorization" as a property.
My question is: what is the correct JSON template for configuring an AzureDataLakeAnalyticsLinkedService with a service principal?
OK, sorry for asking a question that I figured out myself in the end.
While it is true that the Azure portal complains about the template, it does allow you to deploy it. I had of course tried this, but since the Azure portal does not show the error message, only an error flag, I did not realize the error came from the service principal's lack of permissions and not from the template it complained about.
So after adding more permissions to the service principal and deploying the JSON, disregarding the validator's complaints, it did work. Sorry for bothering.