I need to use multiple Blob Storage accounts in ADF. I am trying to create a single linked service for all storages with parameters, but I am unable to parameterize the managed private endpoint. When I hardcode the storage name, the managed private endpoint (which has been created in ADF) gets selected automatically. Is there a way to parameterize it through Advanced -> JSON or in any other way?
I am unable to parameterize the managed private endpoint and did not find any Microsoft documentation on this.
I created an Azure Data Factory and a storage account in the Azure portal. I set up the integration runtime as mentioned below.
I created a managed private endpoint in Azure Data Factory under:
Manage -> Security -> Managed private endpoints
Image for reference:
After creating the managed private endpoint, it needs to be approved in the storage account settings. For that, I click on Manage approval in the Azure portal, as mentioned above. It takes me to the page below, where I select the private endpoint and click Approve. This approves the private endpoint.
Image for reference:
The managed private endpoint is created and approved successfully.
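For reference, the managed private endpoint created under Manage -> Security is defined by a small JSON document roughly like the sketch below (the endpoint name, subscription, resource group, and storage account name are placeholders):
{
    "name": "BlobStorageMPE",
    "properties": {
        "privateLinkResourceId": "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Storage/storageAccounts/<storage account name>",
        "groupId": "blob"
    }
}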
We can parameterize the managed private endpoint in the Blob linked service in ADF using the JSON script below:
{
    "name": "DataLakeBlob",
    "type": "Microsoft.DataFactory/factories/linkedservices",
    "properties": {
        "parameters": {
            "StorageAccountEndpoint": {
                "type": "String",
                "defaultValue": "https://<storage AccountName>.blob.core.windows.net"
            }
        },
        "type": "AzureBlobStorage",
        "typeProperties": {
            "sasUri": "@{linkedService().StorageAccountEndpoint}"
        },
        "description": "Test Description"
    }
}
Check the Specify dynamic contents in JSON format option; when I test the connection, it connects successfully.
Image for reference:
This works on my end; please check from your end.
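As a follow-up, when you reference this parameterized linked service from a dataset, you can pass the endpoint per storage account. A minimal sketch, assuming a Binary dataset and placeholder names:
{
    "name": "BlobDataset",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "DataLakeBlob",
            "type": "LinkedServiceReference",
            "parameters": {
                "StorageAccountEndpoint": "https://<storage AccountName>.blob.core.windows.net"
            }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container name>"
            }
        }
    }
}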
Related
I have a requirement to update an ADF linked service configuration by API (or any other way through code, except using the UI). I need to add 'init scripts' to the job cluster configuration of a linked service.
I got some Microsoft documentation on this, but it is only for creating a linked service, not for editing it.
Please let me know if you have any leads on this.
You can update the ADF linked service configuration by API.
Sample Request
PUT https://management.azure.com/subscriptions/12345678-1234-1234-1234-12345678abc/resourceGroups/exampleResourceGroup/providers/Microsoft.DataFactory/factories/exampleFactoryName/linkedservices/exampleLinkedService?api-version=2018-06-01
Request body
{
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "DefaultEndpointsProtocol=https;AccountName=examplestorageaccount;AccountKey=<storage key>"
            }
        },
        "description": "Example description"
    }
}
The sample request and request body above are given in this link.
For example, if you want to update an AzureBlobStorage linked service, you can update the configurations given there.
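For instance, a request body to update an AzureBlobStorage linked service could look roughly like this (account name and key are placeholders; the exact typeProperties depend on the authentication method you use):
{
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<storage account name>;AccountKey=<storage key>;EndpointSuffix=core.windows.net"
        },
        "description": "Updated description"
    }
}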
We use the azure.datafactory.tools PowerShell module for deployments of ADF components.
It can replace a Linked Service with a new definition. Furthermore, you can test the deployed Linked Service with the module.
I've created the ARM template for Azure API Management deployment. In order to enable its REST API I need to select the Enable API Management REST API checkbox in Azure Portal as explained here. I'd like to activate this option within the ARM template but I'm unable to find which resource to add/modify in my template to achieve it.
This one: https://learn.microsoft.com/en-us/rest/api/apimanagement/2019-01-01/tenantaccess/update. In general, whatever the Azure portal does, it does through the same public API used by templates. So you can usually open the browser dev console and see what call is being made behind the scenes.
If anyone is still looking for an answer, the template below does the job of enabling the Management REST API in Azure APIM:
{
    "type": "Microsoft.ApiManagement/service/tenant",
    "apiVersion": "2020-06-01-preview",
    "name": "[concat(parameters('ApimServiceName'), '/access')]",
    "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service', parameters('ApimServiceName'))]"
    ],
    "properties": {
        "enabled": true
    }
}
We have multiple pipelines in Data Factory V1, one for each brand in our organization, and a common self-hosted gateway named “SQLServerGateway” for on-premises SQL Server shared by all these pipelines, which are running well on a scheduled basis.
Now, we are trying to create a single test pipeline in Data Factory V2 which does the same job as in Data Factory V1. Hence we are creating linked services in Data Factory V2 and trying to link the existing gateway “SQLServerGateway” in the V2 linked services. But we are not able to find that gateway (SQLServerGateway) in the dropdown while creating a new linked service for on-premises SQL Server.
Since the gateway is not populating in the dropdown, we coded the part below in the advanced (JSON) editor, but we still receive an error while testing the connection.
Hence we would like to know how to connect the existing gateway in a Data Factory V2 linked service.
{
    "name": "SQLConn_RgTest",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Data Source=XXXX;Initial Catalog=XXXX;Integrated Security=False;user id=XXXX;password=XXXX;"
            }
        },
        "connectVia": {
            "referenceName": "SQLServerGateway",
            "type": "SelfHostedIntegrationRuntime"
        }
    }
}
Currently you cannot reuse the V1 gateways in Data Factory V2.
For your scenario, you need to create a new self-hosted IR in Data Factory V2 and install it on another machine to run the test pipeline. You can follow this tutorial to set up the pipeline. And we will support sharing a self-hosted IR across V2 data factories soon.
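For reference, a self-hosted integration runtime in Data Factory V2 is defined by a minimal JSON like the sketch below (the name is a placeholder); after creating it and registering the machine with the generated authentication key, you reference it through connectVia as in your snippet:
{
    "name": "SQLServerSelfHostedIR",
    "properties": {
        "type": "SelfHosted",
        "description": "Self-hosted IR for on-premises SQL Server"
    }
}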
We are planning to build an SFTP connector using Logic Apps which will basically take a file uploaded to Azure Blob storage and upload it to an SFTP location.
We are a SaaS product dealing with multiple customers, and we have a storage account per customer or tenant.
My question is how this logic app should be deployed:
1. Should it be a single logic app which can listen to multiple storage accounts and upload the files? Right now I can't figure out how this can be done.
2. Should it be one logic app per tenant, configured one-to-one with the storage account of that tenant?
I would like to know the usual pattern followed in a multi-tenant environment and whether there are any pros/cons of deploying one logic app per tenant.
You don't need to create one Logic App per tenant. What you could do is create an API connection per storage account (per tenant). You could do it with ARM templates and the Azure CLI, and give the API connection an ID based on your tenant ID.
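A rough ARM template sketch for such a per-tenant blob API connection might look like this (the parameter names and the tenant-based naming are just an example):
{
    "type": "Microsoft.Web/connections",
    "apiVersion": "2016-06-01",
    "name": "[parameters('tenantId')]",
    "location": "[resourceGroup().location]",
    "properties": {
        "displayName": "[concat('blob-', parameters('tenantId'))]",
        "api": {
            "id": "[concat(subscription().id, '/providers/Microsoft.Web/locations/', resourceGroup().location, '/managedApis/azureblob')]"
        },
        "parameterValues": {
            "accountName": "[parameters('storageAccountName')]",
            "accessKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').keys[0].value]"
        }
    }
}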
Then, in your Logic App workflow, you can choose the API connection dynamically at run time, depending on the tenant ID.
e.g.
"Get_blob_content": {
"inputs": {
"host": {
"connection": {
"name": "/subscriptions/<id>/resourceGroups/<id>/providers/Microsoft.Web/connections/#{variables('tenantId')}"
}
},
"method": "get",
"path": "/datasets/default/files/.../content",
"queries": {
"inferContentType": true
}
},
"metadata": {
"...": "..."
},
"runAfter": {},
"type": "ApiConnection"
}
It would be up to you to decide how to name the API connection, how to get it at runtime, and how to get the full connection ID at runtime. But hopefully you get the idea of what you can do from the code above.
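For example, the full connection ID could be assembled into a variable at the start of the run, roughly like this (subscription and resource group IDs are placeholders, and the tenantId variable is assumed to be set earlier):
"Initialize_connectionId": {
    "type": "InitializeVariable",
    "inputs": {
        "variables": [
            {
                "name": "connectionId",
                "type": "string",
                "value": "/subscriptions/<id>/resourceGroups/<id>/providers/Microsoft.Web/connections/@{variables('tenantId')}"
            }
        ]
    },
    "runAfter": {}
}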
HTH
I want to use linked templates in my ARM deployment model. Every article I've read mentions that the linked template needs to be in an accessible location (such as blob storage).
This works OK if I manually upload the files to storage, but I'm looking for a mechanism to upload a template to storage as part of the build or deployment process.
I'd hoped to use the Artifact Storage Account option but it is unavailable when deploying infrastructure only.
Is there a built-in method to achieve this or would it require either an extra step such as powershell script or VSTS build step?
The Artifact Storage Account option becomes available as soon as you introduce the two parameters _artifactsLocation and _artifactsLocationSasToken into your deployment.
"parameters": {
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "Auto-generated container in staging storage account to receive post-build staging folder upload"
}
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "Auto-generated token to access _artifactsLocation"
}
}
}
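Once those parameters are in place, the linked template URIs are typically built from them, along the lines of this sketch (the nested template path is illustrative):
"templateLink": {
    "uri": "[concat(parameters('_artifactsLocation'), '/nestedtemplates/storage.json', parameters('_artifactsLocationSasToken'))]",
    "contentVersion": "1.0.0.0"
}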