I have two integration runtimes (both are self-hosted). When I try to delete one, I get an error message.
Error: Failed to delete integration runtime.
Detail: The document cannot be deleted since it is referenced by AzureSqlDatabaseContoso.
But this is not true: at the moment there is no such thing as "AzureSqlDatabaseContoso". Perhaps it existed before. I also searched the source code; it is not present anywhere in the Git branch.
How can I delete it?
This happened to me before. I just recreated the phantom object with the same name, associated it with the IR to be deleted, and then deleted the newly-recreated object (AzureSqlDatabaseContoso, in this case).
After that, ADF let me delete the underlying IR. Weird, but it worked for me.
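For reference, a minimal sketch of such a placeholder linked service (assuming, as the name suggests, that the phantom object was an Azure SQL Database linked service; the connection string is a dummy value, and connectVia points at the IR you want to delete):
{
    "name": "AzureSqlDatabaseContoso",
    "properties": {
        "annotations": [],
        "type": "AzureSqlDatabase",
        "connectVia": {
            "referenceName": "<name of the IR you want to delete>",
            "type": "IntegrationRuntimeReference"
        },
        "typeProperties": {
            "connectionString": "Server=tcp:dummy.database.windows.net,1433;Database=dummy;User ID=dummy;Password=dummy"
        }
    }
}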
The answer is what JeffRamos posted. Another option is to rename the file and its 'name' field in the Git source, reload ADF, and delete it there.
Source/linkedService/AzureKeyVault.json
Rename this to
Source/linkedService/test.json
JSON content:
{
"name": "AzureKeyVault",
"properties": {
"annotations": [],
"type": "AzureKeyVault",
"typeProperties": {
"baseUrl": "https://mykv.vault.azure.net/"
}
}
}
Rename "name" field
{
"name": "test",
"properties": {
"annotations": [],
"type": "AzureKeyVault",
"typeProperties": {
"baseUrl": "https://mykv.vault.azure.net/"
}
}
}
I've created an Azure Data Factory pipeline with one simple Stored Procedure Activity which is supposed to fetch data from a Stored Procedure residing in Azure SQL DB. The Stored Procedure accepts one input parameter. I've published these changes already.
When I click on Validate, I get the error below, which gives me hardly any information:
{
"code": "BadRequest",
"message": null,
"target": "pipeline//runid/dcb92f70-0a4b-4be1-943b-5ggn68365tyc",
"details": null,
"error": null
}
When I click on Trigger now, it just says 'Failed to run pipeline' without any more details.
My pipeline JSON is given below:
{
"name": "GetPopulationRecordsForAnalysis",
"properties": {
"description": "Gets Population Records",
"activities": [
{
"name": "GetPopulationRecords",
"type": "SqlServerStoredProcedure",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"storedProcedureName": "[dbo].[usp_GetPopulationRecords]",
"storedProcedureParameters": {
"#countryID": {
"value": "48",
"type": "Int64"
}
}
},
"linkedServiceName": {
"referenceName": "AzureSqlLinkedService",
"type": "LinkedServiceReference"
}
}
],
"annotations": [],
"lastPublishTime": "2022-08-02T13:37:27Z"
},
"type": "Microsoft.DataFactory/factories/pipelines"
}
What am I doing wrong here?
I have figured out the issue now. The first mistake I was making was giving the complete SP name, with the schema, '[' characters and all; usp_GetPopulationRecords alone works just fine. The second was adding an extra '#' character before my input parameter, as we do when running it in SQL Server. That is not required here; just countryID works fine. Hope my answer helps.
When we connect to the linked service in the Settings tab, the "Stored procedure name" dropdown will populate with the names of the stored procedures present in the database.
Select the required stored procedure and click Import; it will display the parameter Name, Type and Value (supply the Value).
We need not give '#' before the parameter name.
This will then validate successfully in ADF, and the corresponding JSON will look something like the sketch below.
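This is only a sketch, reconstructed from the pipeline JSON in the question (stored procedure name without the schema and brackets, parameter without the '#' prefix):
{
    "typeProperties": {
        "storedProcedureName": "usp_GetPopulationRecords",
        "storedProcedureParameters": {
            "countryID": {
                "value": "48",
                "type": "Int64"
            }
        }
    },
    "linkedServiceName": {
        "referenceName": "AzureSqlLinkedService",
        "type": "LinkedServiceReference"
    }
}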
I'm trying to create a batch pool via the az CLI as follows: az batch pool create --json-file foo.json.
The contents of foo.json are
{
"id": "testpool2",
"vmSize": "standard_d2s_v3",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "microsoftwindowsserver",
"offer": "windowsserver",
"sku": "2019-datacenter-core-with-containers-smalldisk",
"version": "latest"
},
"nodeAgentSKUId": "batch.node.windows amd64",
"windowsConfiguration": {
"enableAutomaticUpdates": false
},
"containerConfiguration": {
"type": "dockerCompatible",
"containerImageNames": [
"mcr.microsoft.com/windows/servercore:10.0.17763.2928-amd64"
]
},
"nodePlacementConfiguration": {
"policy": "Zonal"
}
},
"resizeTimeout": "PT15M",
"targetDedicatedNodes": 1,
"targetLowPriorityNodes": 0,
"enableAutoScale": false,
"enableInterNodeCommunication": false,
"networkConfiguration": {
"subnetId": "/subscriptions/path/to/my/subnet",
"dynamicVNetAssignmentScope": "none",
"publicIPAddressConfiguration": {
"provision": "BatchManaged"
}
},
"taskSlotsPerNode": 1,
"taskSchedulingPolicy": {
"nodeFillType": "Pack"
},
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"/subscriptions/path/to/my/user/assigned/identity": {}
}
}
}
This successfully creates the pool, but with a null identity property. Not surprisingly, any authentication relying on that user-assigned identity being in place fails.
Per the documentation, the --json-file parameter accepts a JSON file that conforms to the REST API request body. However, that body does not contain a suitable identity block.
I looked at the JSON that's POSTed to the REST API when creating the pool through the portal, and it looks very similar to what I have, except it's structured like this:
"properties": {
"id": "id value",
...etc...
},
"identity": {
"type": "UserAssigned",
...etc...
}
Making my JSON match that request body results in a JSON parsing error. The JSON I'm providing is syntactically correct; it just seems the command expects the contents of the properties section only.
There's an existing question with a terrible link-only answer pointing to Microsoft Q&A, where the recommendation is to add an identity block that looks exactly like the one I'm providing. Note that, as far as I can tell, this question is not a duplicate of that one: they are receiving a different error, and they never explicitly stated that they are using the Azure CLI, just that they're trying to use "JSON".
There doesn't seem to be any definitive documentation or examples of how to use the --json-file parameter with the Azure CLI to create a batch pool that uses a user-assigned identity. If it is possible, some guidance on how to accomplish it would be most welcome.
After searching in vain for an answer to the same question, I posted a slight variation of it on the MS support page, and they came up with a working solution for our case, which seems near-identical to what is being asked here.
Edit:
Adding the following to the JSON file made it work in our case.
{
"type": "Microsoft.Batch/batchAccounts/pools",
"name": "TestPool",
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {"/subscriptions/<MySubscription>/resourceGroups/<MyResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<MyUserAssignedManagedIdentity>":{}}
},
"properties":{ All the remaining properties defining the pool itself }
}
Answer from MS support
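To make that concrete for the pool in the question, the restructured foo.json would presumably look something like the sketch below. It is abbreviated: the windowsConfiguration, containerConfiguration, networkConfiguration and other settings from the original file would go in the same properties block, and I'm assuming the pool id from the original file maps to the top-level name field, mirroring the answer above.
{
    "type": "Microsoft.Batch/batchAccounts/pools",
    "name": "testpool2",
    "identity": {
        "type": "UserAssigned",
        "userAssignedIdentities": {
            "/subscriptions/path/to/my/user/assigned/identity": {}
        }
    },
    "properties": {
        "vmSize": "standard_d2s_v3",
        "virtualMachineConfiguration": {
            "imageReference": {
                "publisher": "microsoftwindowsserver",
                "offer": "windowsserver",
                "sku": "2019-datacenter-core-with-containers-smalldisk",
                "version": "latest"
            },
            "nodeAgentSKUId": "batch.node.windows amd64"
        },
        "targetDedicatedNodes": 1,
        "targetLowPriorityNodes": 0,
        "enableAutoScale": false,
        "taskSlotsPerNode": 1
    }
}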
I am passing the required credentials and parameters, but I get this error:
The value of the property 'index' is invalid: 'Index was out of range.
Must be non-negative and less than the size of the collection.
Parameter name: index'. Index was out of range. Must be non-negative
and less than the size of the collection. Parameter name: index
Activity ID: 36a4265d-3607-4472-8641-332f5656661d.
I had the same issue; the password contained a ' and that was causing the trouble. I changed the password to one with no symbols and it works like a charm.
It seems the UI doesn't generate the linked service correctly. Using the Microsoft Docs example JSON, I received the same index error when attempting to create the linked service. If I remove the password from the connection string and add it as a separate property, I am able to successfully create the linked service.
Microsoft Docs Example (Doesn't Work)
{
"name": "SnowflakeLinkedService",
"properties": {
"type": "Snowflake",
"typeProperties": {
"connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&password=<password>&db=<database>&warehouse=<warehouse>&role=<myRole>"
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
Working Example
{
"name": "SnowflakeLinkedService",
"properties": {
"type": "Snowflake",
"typeProperties": {
"connectionString": "jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>",
"password": {
"type": "SecureString",
"value": "<password>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
We hit this same issue today; it was because our password had an ampersand (&) at the end. This seemed to mess up the connection string, as it contained this:
&password=abc123&&role=MyRole
Changing the password to not include an ampersand fixed it.
I have an Azure Data Factory Copy Activity that is using a REST request to elastic search as the Source and attempting to map the response to a SQL table as the Sink. Everything works fine except when it attempts to map the data field that contains the dynamic JSON. I get the following error:
{
"errorCode": "2200",
"message": "ErrorCode=UserErrorUnsupportedHierarchicalComplexValue,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The retrieved type of data JObject with value {\"name\":\"department\"} is not supported yet, please either remove the targeted column or enable skip incompatible row to skip them.,Source=Microsoft.DataTransfer.Common,'",
"failureType": "UserError",
"target": "CopyContents_Paged",
"details": []
}
Here's an example of my mapping configuration:
"type": "TabularTranslator",
"mappings": [
{
"source": {
"path": "['_source']['id']"
},
"sink": {
"name": "ContentItemId",
"type": "String"
}
},
{
"source": {
"path": "['_source']['status']"
},
"sink": {
"name": "Status",
"type": "Int32"
}
},
{
"source": {
"path": "['_source']['data']"
},
"sink": {
"name": "Data",
"type": "String"
}
}
],
"collectionReference": "$['hits']['hits']"
}
The JSON in the data object is dynamic, so I'm unable to do an explicit mapping for the nested fields within it. That's why I'm trying to store the entire JSON object under data in a single column of the SQL table.
How can I adjust my mapping configuration to allow this to work properly?
I posted this question on the MSDN forums and was told that if you are using a tabular sink, you can set the option "mapComplexValuesToString": true and it should allow complex JSON properties to be mapped correctly. This resolved my ADF copy activity issue.
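Applied to the mapping configuration from the question, the translator would look roughly like this (only the data column is shown for brevity; the other mappings stay as they were):
{
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "path": "['_source']['data']" },
            "sink": { "name": "Data", "type": "String" }
        }
    ],
    "collectionReference": "$['hits']['hits']",
    "mapComplexValuesToString": true
}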
I had the same problem a few days ago. You need to convert your JSON object to a JSON string; that will solve your mapping problem (UserErrorUnsupportedHierarchicalComplexValue).
Try it and tell me if it also resolves your error.
I am trying to create my Azure DevOps release pipeline for Azure Data Factory.
I have followed the rather cryptic guide from Microsoft (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment) regarding adding additional parameters to the ARM template that gets generated when you do a publish (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#use-custom-parameters-with-the-resource-manager-template).
I created an arm-template-parameters-definition.json file in the root of the master branch. When I do a publish, the ARMTemplateParametersForFactory.json in the adf_publish branch remains completely unchanged. I have tried many configurations.
I have defined some Pipeline Parameters in Data Factory and want them to be configurable in my deployment pipeline. Seems like an obvious requirement to me.
Have I missed something fundamental? Help please!
The JSON is as follows:
{
"Microsoft.DataFactory/factories/pipelines": {
"*": {
"properties": {
"parameters": {
"*": "="
}
}
}
},
"Microsoft.DataFactory/factories/integrationRuntimes": {
"*": "="
},
"Microsoft.DataFactory/factories/triggers": {},
"Microsoft.DataFactory/factories/linkedServices": {},
"Microsoft.DataFactory/factories/datasets": {}
}
I've been struggling with this for a few days and did not find a lot of info, so here is what I've found out: you have to put the arm-template-parameters-definition.json in the configured root folder of your collaboration branch. So in my example, it sits directly in that configured root folder.
If you work in a separate branch, you can test your configuration by downloading the ARM templates from the Data Factory. When you make a change to the parameters definition, you have to reload your browser screen (F5) to refresh the configuration.
If you really want to parameterize all of the parameters in all of the pipelines, the following should work:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"*":{
"defaultValue":"="
}
}
}
}
I prefer specifying the parameters that I want to parameterize:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"LogicApp_RemoveFileFromADLSURL":{
"defaultValue":"=:-LogicApp_RemoveFileFromADLSURL:"
},
"LogicApp_RemoveBlob":{
"defaultValue":"=:-LogicApp_RemoveBlob:"
}
}
}
}
Just to clarify on top of Simon's great answer: if you have a non-standard Git hierarchy (i.e. you move the root to a sub-folder, as I have done below with "Source"), it can be confusing when the doc refers to the "repo root". Hopefully the layout sketched below helps.
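Roughly, with "Source" as the configured root folder (the other folder names are just the standard ones ADF creates), the repository looks like this; the key point is that the parameters definition file sits inside Source/, not at the top of the repository:
Source/
    arm-template-parameters-definition.json
    dataset/
    linkedService/
    pipeline/
    trigger/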
You've got the right idea, but the arm-template-parameters-definition.json file needs to follow the hierarchy of the element you want to parameterize.
Here is the pipeline activity I want to parameterize. The "url" should change based on the environment it's deployed to:
{
"name": "[concat(parameters('factoryName'), '/ExecuteSPForNetPriceExpiringContractsReport')]",
"type": "Microsoft.DataFactory/factories/pipelines",
"apiVersion": "2018-06-01",
"properties": {
"description": "",
"activities": [
{
"name": "NetPriceExpiringContractsReport",
"description": "Passing values to the Logic App to generate the CSV file.",
"type": "WebActivity",
"typeProperties": {
"url": "[parameters('ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties')]",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"resultSet": "#activity('NetPriceExpiringContractsReportLookup').output"
}
}
}
]
}
}
Here is the arm-template-parameters-definition.json file that turns that URL into a parameter.
{
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"activities": [{
"typeProperties": {
"url": "-::string"
}
}]
}
},
"Microsoft.DataFactory/factories/integrationRuntimes": {},
"Microsoft.DataFactory/factories/triggers": {},
"Microsoft.DataFactory/factories/linkedServices": {
"*": "="
},
"Microsoft.DataFactory/factories/datasets": {
"*": "="
}
}
So basically in the pipelines of the ARM template, it looks for properties -> activities -> typeProperties -> url in the JSON and parameterizes it.
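For what it's worth, after publishing, the corresponding entry in ARMTemplateParametersForFactory.json should then look roughly like this sketch (the factory name and URL values are placeholders, and I'm assuming the generated parameter name matches the one referenced in the activity JSON above):
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "factoryName": {
            "value": "<your factory name>"
        },
        "ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties": {
            "value": "<url for this environment>"
        }
    }
}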
Here are the necessary steps to clear up confusion:
Add the arm-template-parameters-definition.json to your master branch.
Close and re-open your Dev ADF portal
Do a new Publish
Your ARMTemplateParametersForFactory.json will then be updated.
I have experienced similar problems with the ARMTemplateParametersForFactory.json file not being updated whenever I publish and have changed the arm-template-parameters-definition.json.
I figured that I can force update the Publish branch by doing the following:
Update the custom parameter definition file as you wish.
Delete ARMTemplateParametersForFactory.json from the Publish branch.
Refresh (F5) the Data Factory portal.
Publish.
The easiest way to validate your custom parameter .json syntax seems to be by exporting the ARM template, just as Simon mentioned.