How to set friendlyName for an Azure Data Catalog Asset via REST API?

I am registering assets to Azure Data Catalog via the REST API. I can register my assets without any problem, but when I want to add a "friendlyName" to my assets, I get an error. I am using the exact syntax that is shown here. Here is the JSON that I am sending:
"annotations": {
"schema": {
"properties": {
"fromSourceSystem": false,
"columns": [{"name": "com.xxx.xx.claim ", "type": " VARCHAR"}, {"name": "com.xx.xx.requirement ", "type": " VARCHAR"}]
}
},
"tableDataProfiles": [{"properties": {"dataModifiedTime": "2020-05-12 17:26:37.706521", "schemaModifiedTime": "2020-05-12 17:26:37.706537", "fromSourceSystem": false, "key": "tableDataProfiles"}}],
"columnsDataProfiles": [{"properties": {"columns": [{"columnName": "com.xx.xx.claim ", "type": " VARCHAR"}, {"columnName": "com.xx.xx.requirement ", "type": " VARCHAR"},],
"tags": [{"properties": {"tag": "uploadedByScript", "key": "tag", "fromSourceSystem": false}}],
"experts": [{"properties": {"expert": {"upn": "Berkan#xx.de"}, "fromSourceSystem": false, "key": "expert"}}],
"friendlyName": {"properties": {"friendlyName": "Requirements", "fromSourceSystem": false}}
}
I have cut the irrelevant parts of the JSON to make it readable. Notice the "friendlyName" annotation is under "annotations", as described in the sample code. Can someone point out what is wrong with my JSON?

Apparently the syntax was correct. I realized that some of my input strings were either null or not trimmed. That was the problem. I can confirm the syntax above is correct.
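For reference, a minimal sketch of just the friendlyName part of the payload (same structure as above, nothing new): strings like "com.xxx.xx.claim " and " VARCHAR" elsewhere in the payload must be trimmed to "com.xxx.xx.claim" and "VARCHAR" before sending.

"annotations": {
  "friendlyName": {
    "properties": {
      "friendlyName": "Requirements",
      "fromSourceSystem": false
    }
  }
}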

Related

Azure Data Factory with SP Activity - Debug and Publish fails

I've created an Azure Data Factory pipeline with one simple Stored Procedure Activity which is supposed to fetch data from a Stored Procedure residing in Azure SQL DB. The Stored Procedure accepts one input parameter. I've published these changes already.
When I click on Validate, I get the below error, which gives me hardly any information:
{
  "code": "BadRequest",
  "message": null,
  "target": "pipeline//runid/dcb92f70-0a4b-4be1-943b-5ggn68365tyc",
  "details": null,
  "error": null
}
When I click on Trigger now, it just says 'Failed to run pipeline' without any more details.
My pipeline JSON is given below:
{
  "name": "GetPopulationRecordsForAnalysis",
  "properties": {
    "description": "Gets Population Records",
    "activities": [
      {
        "name": "GetPopulationRecords",
        "type": "SqlServerStoredProcedure",
        "dependsOn": [],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "userProperties": [],
        "typeProperties": {
          "storedProcedureName": "[dbo].[usp_GetPopulationRecords]",
          "storedProcedureParameters": {
            "#countryID": {
              "value": "48",
              "type": "Int64"
            }
          }
        },
        "linkedServiceName": {
          "referenceName": "AzureSqlLinkedService",
          "type": "LinkedServiceReference"
        }
      }
    ],
    "annotations": [],
    "lastPublishTime": "2022-08-02T13:37:27Z"
  },
  "type": "Microsoft.DataFactory/factories/pipelines"
}
What am I doing wrong here?
I have figured out the issue now. The first mistake I was making is that I was giving the complete SP name with schema, '[' characters and all; plain usp_GetPopulationRecords works just fine. The second is that I was adding an extra '#' character before my input parameter, like we do when running it in SQL Server. That is not required here; plain countryID works fine. Hope my answer helps.
When we connect to the linked service in the Settings tab, the “Stored Procedure name” dropdown is populated with the names of the stored procedures present in the database.
We should select the required stored procedure, and upon clicking Import it will display each parameter's Name, Type and Value (supply the Value).
We need not give '#' before the parameter name.
The corresponding JSON will look like the sketch below:
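(A hand-written sketch using the names from the question; the exact JSON ADF generates may differ slightly.)

"typeProperties": {
  "storedProcedureName": "usp_GetPopulationRecords",
  "storedProcedureParameters": {
    "countryID": {
      "value": "48",
      "type": "Int64"
    }
  }
}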
This will then validate successfully in ADF.

how to get multiple folder path on the azure logic app

I have a scenario: I want to build an Azure Logic App where I have to get documents from various folders in SharePoint via the Get process and send an email notification. My confusion is how I can give multiple input folder paths.
I'm going to make an assumption about your architecture in my answer: I'm assuming you want to process multiple files in different sites within the same SharePoint tenant, so not across tenants.
To achieve what you're asking for, I created a Parse JSON action which takes in the following structure (as an example; obviously the structure is the key point here, not the data) ...
Scenario 1 - Specific Files
[
  {
    "SiteName": "ExampleSolution",
    "FileName": "/Shared Documents/General/Book.xlsx"
  },
  {
    "SiteName": "TestSite",
    "FileName": "/Shared Documents/Test Folder/Document.docx"
  }
]
The SP tenant needs to be authenticated to with the appropriate user.
Then, in a For Each action, loop through each item and retrieve the contents of each document using the Get file content using path action.
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Path = File Name (from Dynamic Content)
It will then retrieve the contents dynamically using those expressions.
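As a rough illustration only (hand-written, not exported from a real Logic App, so treat the connector name, operation path and query parameters as placeholders to be confirmed against a designer export), the loop portion of the workflow definition takes approximately this shape:

"For_each": {
  "type": "Foreach",
  "foreach": "@body('Parse_JSON')",
  "actions": {
    "Get_file_content_using_path": {
      "type": "ApiConnection",
      "inputs": {
        "host": {
          "connection": {
            "name": "@parameters('$connections')['sharepointonline']['connectionId']"
          }
        },
        "method": "get",
        "path": "/datasets/@{encodeURIComponent(encodeURIComponent(concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])))}/GetFileContentByPath",
        "queries": {
          "path": "@items('For_each')?['FileName']"
        }
      }
    }
  }
}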
(Screenshots omitted: the retrieved contents of File 1, an Excel document, and File 2, a Word document.)
Scenario 2 - All Files
If you want to do it for all files, just change it up slightly ...
[
  {
    "FolderName": "/Shared Documents/General",
    "SiteName": "ExampleSolution"
  },
  {
    "FolderName": "/Shared Documents/Test Folder",
    "SiteName": "TestSite"
  }
]
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Identifier = Folder Name (from Dynamic Content)
Output - Folder 1
[
  {
    "Id": "%252fShared%2bDocuments%252fGeneral%252fBook.xlsx",
    "Name": "Book.xlsx",
    "DisplayName": "Book.xlsx",
    "Path": "/Shared Documents/General/Book.xlsx",
    "LastModified": "2021-12-24T02:56:14Z",
    "Size": 15330,
    "MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "IsFolder": false,
    "ETag": "\"{23948609-0DA0-43E0-994C-2703FEEC8567},7\"",
    "FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uLnNoYXJlcG9pbnQuY29tL3NpdGVzL0V4YW1wbGVTb2x1dGlvbg==,id=JTI1MmZTaGFyZWQlMmJEb2N1bWVudHMlMjUyZkdlbmVyYWwlMjUyZkJvb2sueGxzeA==",
    "LastModifiedBy": null
  },
  {
    "Id": "%252fShared%2bDocuments%252fGeneral%252fTest%2bDocument.docx",
    "Name": "Test Document.docx",
    "DisplayName": "Test Document.docx",
    "Path": "/Shared Documents/General/Test Document.docx",
    "LastModified": "2021-12-30T11:49:28Z",
    "Size": 17959,
    "MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "IsFolder": false,
    "ETag": "\"{7A3C7133-02FC-4A63-9A58-E11A815AB351},8\"",
    "FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
    "LastModifiedBy": null
  },
  {
    "Id": "%252fShared%2bDocuments%252fGeneral%252fHierarchy.xlsx",
    "Name": "Hierarchy.xlsx",
    "DisplayName": "Hierarchy.xlsx",
    "Path": "/Shared Documents/General/Hierarchy.xlsx",
    "LastModified": "2022-01-07T02:49:38Z",
    "Size": 41719,
    "MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "IsFolder": false,
    "ETag": "\"{C919454C-48AB-4897-AD8C-E3F873B52E50},72\"",
    "FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uL etc",
    "LastModifiedBy": null
  }
]
Output - Folder 2
[
  {
    "Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fTest.xlsx",
    "Name": "Test.xlsx",
    "DisplayName": "Test.xlsx",
    "Path": "/Shared Documents/Test Folder/Test.xlsx",
    "LastModified": "2022-01-09T11:08:31Z",
    "Size": 17014,
    "MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "IsFolder": false,
    "ETag": "\"{CCF71CE7-89E7-4F89-B5CB-0F078E22C951},163\"",
    "FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
    "LastModifiedBy": null
  },
  {
    "Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fDocument.docx",
    "Name": "Document.docx",
    "DisplayName": "Document.docx",
    "Path": "/Shared Documents/Test Folder/Document.docx",
    "LastModified": "2022-01-09T11:08:16Z",
    "Size": 17293,
    "MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "IsFolder": false,
    "ETag": "\"{317C5767-04EC-4264-A58B-27A3FA8E4DF3},3\"",
    "FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG etc",
    "LastModifiedBy": null
  }
]
From here, just process each file individually using one of the file actions like in the first scenario above.
Note: You'll need to work through sub folders and recursion yourself. There doesn't appear to be a way to do that easily.
You've provided very little information, but this should be enough for you to adapt it accordingly.
Also, I strongly recommend you use a means other than a hardcoded JSON document in the action itself. There are far better means for housing that information which wouldn't require updating the action itself every time you want to add or delete a file.
The concept of the loop and the expressions is the most important part to grasp, as they will give you what you want.

AWS CDK Api Gateway Models Ref Dependency - Model reference must be in canonical form

I am trying to add multiple models at the same time using AWS CDK, where one of the models references the other one.
Ex:
"Gender": {
"contentType": "application/json",
"modelName": "GenderModel",
"schema": {
"type": "string",
"enum": [
"Not Specified",
"Male",
"Female",
"Non-Binary"
],
"schema": "http://json-schema.org/draft-04/schema#",
"title": "GenderModel"
}
},
and
"Requirements": {
"contentType": "application/json",
"modelName": "RequirementsModel",
"schema": {
"type": "object",
"properties": {
"gender": {
"ref": "https://apigateway.amazonaws.com/restapis/${Token[TOKEN.791]}/models/GenderModel"
}
},
"required": [
"gender",
],
"additionalProperties": false,
"schema": "http://json-schema.org/draft-04/schema#",
"title": "RequirementsModel"
}
},
When I deploy, this fails with:
Model reference must be in canonical form
From what I can see, this fails because the GenderModel does not exist yet. If I first add the GenderModel to the stack, then add the RequirementsModel and deploy again, it works just fine, because the GenderModel was previously created. If I want to create both models at the same time, it fails.
I tried to make sure the order of the addModel calls is correct, but it seems that does not help.
Solution Found
Seems like you have to explicitly specify the dependency:
modelB.node.addDependency(modelA)
This will avoid the error and add the models in the correct order.
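In context, that looks roughly like the sketch below (hand-written, with CDK v2-style imports; api is assumed to be the RestApi the models belong to, and the schema details are taken from the question):

import { JsonSchemaType, JsonSchemaVersion, Model, RestApi } from 'aws-cdk-lib/aws-apigateway';

declare const api: RestApi; // the RestApi the models belong to

// Create both models in the same stack.
const genderModel: Model = api.addModel('GenderModel', {
  contentType: 'application/json',
  modelName: 'GenderModel',
  schema: {
    schema: JsonSchemaVersion.DRAFT4,
    title: 'GenderModel',
    type: JsonSchemaType.STRING,
    enum: ['Not Specified', 'Male', 'Female', 'Non-Binary'],
  },
});

const requirementsModel: Model = api.addModel('RequirementsModel', {
  contentType: 'application/json',
  modelName: 'RequirementsModel',
  schema: {
    schema: JsonSchemaVersion.DRAFT4,
    title: 'RequirementsModel',
    type: JsonSchemaType.OBJECT,
    properties: {
      // Cross-model reference; resolves to the canonical model URL at deploy time.
      gender: { ref: `https://apigateway.amazonaws.com/restapis/${api.restApiId}/models/${genderModel.modelId}` },
    },
    required: ['gender'],
    additionalProperties: false,
  },
});

// The explicit dependency makes CloudFormation create GenderModel before
// RequirementsModel tries to reference it.
requirementsModel.node.addDependency(genderModel);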
The problem is https://apigateway.amazonaws.com/restapis/${Token[TOKEN.791]}/models/GenderModel, specifically the ${Token[TOKEN.791]} part. When the API has not been created yet, its id is not known at CloudFormation synthesis time and a placeholder value is used - https://docs.aws.amazon.com/cdk/latest/guide/tokens.html
You can use an intrinsic function (Fn.join) to compose the model reference:
const getModelRef = (api: RestApi, model: Model): string =>
  Fn.join('', [
    'https://apigateway.amazonaws.com/restapis/',
    api.restApiId,
    '/models/',
    model.modelId,
  ]);
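With that helper, the gender reference in the RequirementsModel schema becomes (usage fragment only, assuming the models are created with api.addModel as in the other answer):

gender: { ref: getModelRef(api, genderModel) },

Fn.join defers building the URL until the REST API id is resolvable, which keeps the reference in canonical form.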

Stream Analytics Job deployed as Azure Resource Manager (ARM) template

I am trying to set up an output Event Hub for a Stream Analytics Job defined as a JSON template. Without the output bit the template deploys successfully; however, when adding the output definition it fails with:
Deployment failed. Correlation ID: <SOME_UUID>. {
  "code": "BadRequest",
  "message": "The JSON provided in the request body is invalid. Property 'eventHubName' value 'parameters('eh_name')' is not acceptable.",
  "details": {
    "code": "400",
    "message": "The JSON provided in the request body is invalid. Property 'eventHubName' value 'parameters('eh_name')' is not acceptable.",
    "correlationId": "<SOME_UUID>",
    "requestId": "<SOME_UUID>"
  }
}
I've defined the ARM template as:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "type": "string",
      "defaultValue": "westeurope"
    },
    "hubName": {
      "type": "string",
      "defaultValue": "fooIotHub"
    },
    "eh_name": {
      "defaultValue": "fooEhName",
      "type": "String"
    },
    "eh_namespace": {
      "defaultValue": "fooEhNamespace",
      "type": "String"
    },
    "streamAnalyticsJobName": {
      "type": "string",
      "defaultValue": "fooStreamAnalyticsJobName"
    }
  },
  "resources": [{
    "type": "Microsoft.StreamAnalytics/StreamingJobs",
    "apiVersion": "2016-03-01",
    "name": "[parameters('streamAnalyticsJobName')]",
    "location": "[resourceGroup().location]",
    "properties": {
      "sku": {
        "name": "standard"
      },
      "outputErrorPolicy": "Drop",
      "eventsOutOfOrderPolicy": "adjust",
      "eventsOutOfOrderMaxDelayInSeconds": 0,
      "eventsLateArrivalMaxDelayInSeconds": 86400,
      "inputs": [{
        "Name": "IoTHubInputLable",
        "Properties": {
          "DataSource": {
            "Properties": {
              "iotHubNamespace": "[parameters('hubName')]",
              "sharedAccessPolicyKey": "[listkeys(resourceId('Microsoft.Devices/IotHubs/IotHubKeys',parameters('hubName'), 'iothubowner'),'2016-02-03').primaryKey]",
              "sharedAccessPolicyName": "iothubowner",
              "endpoint": "messages/events"
            },
            "Type": "Microsoft.Devices/IotHubs"
          },
          "Serialization": {
            "Properties": {
              "Encoding": "UTF8"
            },
            "Type": "Json"
          },
          "Type": "Stream"
        }
      }],
      "transformation": {
        "name": "Transformation",
        "properties": {
          "streamingUnits": 1,
          "query": "<THE SQL-LIKE CODE FOR THE JOB QUERY>"
        }
      },
      "outputs": [{
        "name": "EventHubOutputLable",
        "properties": {
          "dataSource": {
            "type": "Microsoft.ServiceBus/EventHub",
            "properties": {
              "eventHubName": "parameters('eh_name')",
              "serviceBusNamespace": "parameters('eh_namespace')",
              "sharedAccessPolicyName": "RootManageSharedAccessKey"
            }
          },
          "serialization": {
            "Properties": {
              "Encoding": "UTF8"
            }
          }
        }
      }]
    }
  }]
}
Checking here https://learn.microsoft.com/en-us/azure/templates/microsoft.streamanalytics/streamingjobs
it looks like the structure of the JSON for the output is the expected one (with the properties field along with the type).
I figured out those "Event Hub properties" in the Chrome browser using Developer Tools, checking the details of the HTTP request "GetOutputs"; otherwise I am not sure where I could see how to specify those properties. The structure looks quite similar to the one for the input IoT Hub (which is working), which in that case uses different labels for the properties related to the IoT Hub details.
Checking this blog post https://blogs.msdn.microsoft.com/david/2017/07/20/building-azure-stream-analytics-resources-via-arm-templates-part-2-the-template/
the output part relates to Power BI, and it looks like a different structure is used for the properties: outputPowerBISource. So for the Event Hub I tried the field outputEventHubSource (from the checks using Chrome Developer Tools) instead of properties, but then I get this error:
Deployment failed. Correlation ID: <SOME_UUID>. {
  "code": "BadRequest",
  "message": "The JSON provided in the request body is invalid. Required property 'properties' not found in JSON. Path '', line 1, position 1419.",
  "details": {
    "code": "400",
    "message": "The JSON provided in the request body is invalid. Required property 'properties' not found in JSON. Path '', line 1, position 1419.",
    "correlationId": "<SOME_UUID>",
    "requestId": "<SOME_UUID>"
  }
}
The command I am using to deploy this template is the Azure CLI (from a Debian Linux machine):
az group deployment create \
--name "deployStreamAnalyticsJobs" \
--resource-group "MyRGName" \
--template-file ./templates/stream-analytics-jobs.json
How do I specify an output in an Azure Resource Manager (ARM) template for a Stream Analytics Job?
Any property that contains a parameter (or any expression that needs to be evaluated) must be wrapped in square brackets, e.g.
"eventHubName": "[parameters('eh_name')]",
"serviceBusNamespace": "[parameters('eh_namespace')]",
Otherwise the literal value in the quotes is used.
That help?
I found out that all parameters need to be wrapped in square brackets (as pointed out in the other answer to this question).
Also, to dynamically retrieve the shared access policy key (or any other property of an existing resource like the Event Hub), a combination of functions like listKeys and resourceId must be used; see below for a full example of an Event Hub described as an output for a Stream Analytics Job.
In detail:
the parameters defined for eventHubName and serviceBusNamespace must be evaluated using square brackets (see how I defined those parameters in the JSON example in the body of the question above),
the shared access policy name could be either a hardcoded string or a parameter, like sharedAccessPolicyName, while the key can be dynamically retrieved using "sharedAccessPolicyKey": "[listKeys(resourceId('Microsoft.EventHub/namespaces/eventhubs/authorizationRules', parameters('eh_namespace'), parameters('eh_name'), 'RootManageSharedAccessKey'),'2017-04-01').primaryKey]" (this is sensitive data and it should be protected by avoiding hardcoding it as a plain string)
The following JSON configuration shows an existing Event Hub defined as an output defined for the Stream Analytics Job:
"outputs": [{
"Name": "EventHubOutputLable",
"Properties": {
"DataSource": {
"Type": "Microsoft.ServiceBus/EventHub",
"Properties": {
"eventHubName": "[parameters('eh_name')]",
"serviceBusNamespace": "[parameters('eh_namespace')]",
"sharedAccessPolicyKey": "[listKeys(resourceId('Microsoft.EventHub/namespaces/eventhubs/authorizationRules', parameters('eh_namespace'), parameters('eh_name'), 'RootManageSharedAccessKey'),'2017-04-01').primaryKey]",
"sharedAccessPolicyName": "RootManageSharedAccessKey"
}
},
"Serialization": {
"Properties": {
"Encoding": "UTF8"
},
"Type": "Json"
}
}
}]

How do I use Azure Key Vault secret in linked template

I'm trying to create an Automation variable from a Key Vault secret. I assume I can do the same thing that is currently done in the main template for retrieving the Windows password, but it fails with the non-descriptive error below. Not sure what to do next to troubleshoot.
Error
{
  "code": "BadRequest",
  "message": "{\"Message\":\"The request is invalid.\",\"ModelState\":{\"variable.properties.value\":[\"An error has occurred.\"]}}"
}
Template
{
  "name": "mystring",
  "type": "variables",
  "apiVersion": "2015-10-31",
  "dependsOn": [
    "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
  ],
  "properties": {
    "value": {
      "reference": {
        "keyVault": {
          "id": "[resourceId(subscription().subscriptionId, 'Utility-RG', 'Microsoft.KeyVault/vaults', 'MyKeyVault')]"
        },
        "secretName": "WindowsPasswordSecret"
      }
    },
    "description": "test var",
    "isEncrypted": false
  }
}
That error is not exactly helpful, and while I have no idea what went wrong there, I can tell you how to work around it: you need to pass the data from the Key Vault to the template (as an input parameter), not to the resource, and in the template use the parameter to assign the value to the object in question.
Reference: https://github.com/4c74356b41/bbbb-is-the-word/blob/master/_arm/parent.json#L151
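A minimal sketch of that pattern, reusing names from the question (the parameter name windowsPassword is illustrative). In the deployment parameters file, pull the secret from Key Vault:

"windowsPassword": {
  "reference": {
    "keyVault": {
      "id": "/subscriptions/<SUB_ID>/resourceGroups/Utility-RG/providers/Microsoft.KeyVault/vaults/MyKeyVault"
    },
    "secretName": "WindowsPasswordSecret"
  }
}

Then, in the template, declare windowsPassword as a securestring parameter and assign it to the variable. Note the concat of quotes: an Automation variable value is JSON-serialized, so a plain string needs embedded quotes.

"properties": {
  "value": "[concat('\"', parameters('windowsPassword'), '\"')]",
  "description": "test var",
  "isEncrypted": false
}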
