Generating and storing JSON files from the run-time parameters passed to an Azure Data Factory v2 pipeline?

Can we create a file (preferably JSON) and store it in one of the supported storage sinks (like Blob Storage, Azure Data Lake Storage, etc.) using the parameters that are passed to an Azure Data Factory v2 pipeline at run-time? I suppose it can be done via Azure Batch, but that seems like overkill for such a trivial task. Is there a better way to do it?

Looking at the transformation activities ADFv2 currently offers, I'm afraid there isn't a direct way to create a file in ADFv2. You could use a Custom activity to achieve this by running your own code logic on an Azure Batch pool of virtual machines. Hope this helps a little.
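As a purely illustrative sketch of that approach (not an official sample), the Custom activity's command could run a small script on the Batch node that receives the pipeline parameters as command-line arguments and writes them out as a JSON blob; the container, blob name, and environment variable below are hypothetical:

```python
# write_params_blob.py - hypothetical script a Custom activity could run on an Azure
# Batch node, e.g. command: python write_params_blob.py @{pipeline().parameters.runId}
import json
import os
import sys
from datetime import datetime, timezone

from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

def main() -> None:
    # Capture whatever run-time values the pipeline handed over on the command line.
    payload = {
        "capturedAtUtc": datetime.now(timezone.utc).isoformat(),
        "parameters": sys.argv[1:],
    }
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

    service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
    blob = service.get_blob_client(container="pipeline-runs", blob=f"run-{stamp}.json")
    blob.upload_blob(json.dumps(payload, indent=2), overwrite=True)

if __name__ == "__main__":
    main()
```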

Related

How to remove special characters from XML stored in ADLS using Azure Data Factory or any other option?

I have a scenario where I need to remove some characters from XML tags stored in ADLS. I am looking for an option with ADF. Can someone help me with the approach I should follow?
This is not possible with ADF alone. You could write a piece of code to do this in
Azure Functions, since Azure Data Factory handles data movement and data
transformation only; manipulating individual XML tags does not fall under that.
You can use the Azure Function activity in a Data Factory pipeline to run Azure Functions. To call an Azure Function, you must first create a linked service connection and an activity that specifies the Azure Function you want to execute.
The Microsoft documentation on the Azure Function activity in ADF covers this in more depth.
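As a rough sketch of what such a function might look like (the connection string setting, file-system, path, and character set are all hypothetical), an HTTP-triggered Python Azure Function could read the XML from ADLS Gen2, strip the unwanted characters, and write it back:

```python
import os
import re

import azure.functions as func
from azure.storage.filedatalake import DataLakeServiceClient  # pip install azure-storage-file-datalake

# Hypothetical file-system and path; in practice these could come from the request body.
FILE_SYSTEM = "raw"
FILE_PATH = "input/sample.xml"

def main(req: func.HttpRequest) -> func.HttpResponse:
    service = DataLakeServiceClient.from_connection_string(os.environ["ADLS_CONNECTION_STRING"])
    file_client = service.get_file_system_client(FILE_SYSTEM).get_file_client(FILE_PATH)

    # Download the XML, drop a (hypothetical) set of unwanted characters, write it back.
    xml_text = file_client.download_file().readall().decode("utf-8")
    cleaned = re.sub(r"[#@~^]", "", xml_text)
    file_client.upload_data(cleaned.encode("utf-8"), overwrite=True)

    return func.HttpResponse("cleaned", status_code=200)
```

This assumes the usual HTTP trigger binding in function.json; the ADF Azure Function activity would then call it via the linked service described above.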

Is it possible to store variables in Azure Data Factory pipelines?

In my Azure Data Factory pipeline, I want to use a variable that is read and then updated on each run. At the moment, I am using a database to achieve that, but it would be much simpler if Azure Data Factory provided a way of storing variables. So, my question is: is there any such facility in Azure Data Factory?
As @Joel Cochran says, ADF doesn't support persisting a variable across pipeline runs. We need to write the data to storage, e.g. a database or Azure Storage, and use a Lookup activity to read the value back from the blob file or the database. :)
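A minimal sketch of that pattern, assuming the state lives in a small JSON blob that a Lookup activity reads at the start of each run and that some step at the end of the run (for example an Azure Function or Custom activity) updates; the container and blob names are hypothetical:

```python
import json

from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobClient  # pip install azure-storage-blob

# Hypothetical location of the persisted state that the Lookup activity points at.
state_blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="pipeline-state",
    blob_name="counter.json",
)

def read_state() -> dict:
    """Return the last persisted state, or a default on the very first run."""
    try:
        return json.loads(state_blob.download_blob().readall())
    except ResourceNotFoundError:
        return {"counter": 0}

def write_state(state: dict) -> None:
    """Persist the updated state so the next pipeline run can look it up."""
    state_blob.upload_blob(json.dumps(state), overwrite=True)

if __name__ == "__main__":
    state = read_state()
    state["counter"] += 1
    write_state(state)
```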

Logic Apps - Get Blob Content Using Path

I have an event-driven Logic App (blob event) which reads a block blob using the path and uploads the content to Azure Data Lake. I noticed the Logic App is failing with 413 (RequestEntityTooLarge) when reading a large file (~6 GB). I understand that Logic Apps has a limitation of 1024 MB - https://learn.microsoft.com/en-us/connectors/azureblob/ - but is there any workaround to handle this type of situation? The alternative solution I am working on is moving this step to an Azure Function and getting the content from the blob there. Thanks for your suggestions!
If you want to use an Azure Function, I would suggest you have a look at this article:
Copy data from Azure Storage Blobs to Data Lake Store
There is a standalone version of the AdlCopy tool that you can deploy to your Azure function.
So your Logic App will call this function, which runs a command to copy the file from Blob Storage to your Data Lake Store. I would suggest using a PowerShell function.
Another option would be to use Azure Data Factory to copy file to Data Lake:
Copy data to or from Azure Data Lake Store by using Azure Data Factory
You can create a job that copies the file from Blob Storage:
Copy data to or from Azure Blob storage by using Azure Data Factory
There is a connector to trigger a Data Factory run from a Logic App, so you may not need an Azure Function, but it seems there are still some limitations:
Trigger Azure Data Factory Pipeline from Logic App w/ Parameter
You should consider using the Azure Files connector: https://learn.microsoft.com/en-us/connectors/azurefile/
It is currently in preview; the advantage it has over Blob is that it doesn't have a size limit. The link above includes more information about it.
For the benefit of others who might be looking for a solution of this sort: I ended up creating an Azure Function in C#, since my design dynamically parses the blob name and creates the ADL structure based on it. I used chunked memory streaming for reading the blob and writing it to ADL, with multi-threading to address the Azure Functions time-out of 10 minutes.
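The poster's function was written in C#; as a language-agnostic illustration of the same chunked-streaming idea, here is a rough Python sketch that copies a blob into an ADLS Gen2 file chunk by chunk instead of loading it into memory in one go (connection strings and paths are hypothetical, and the multi-threading and time-out handling are left out):

```python
from azure.storage.blob import BlobClient                      # pip install azure-storage-blob
from azure.storage.filedatalake import DataLakeServiceClient   # pip install azure-storage-file-datalake

# Hypothetical source blob and destination ADLS Gen2 file.
source = BlobClient.from_connection_string(
    conn_str="<blob-connection-string>", container_name="landing", blob_name="big-file.csv"
)
lake = DataLakeServiceClient.from_connection_string("<adls-connection-string>")
destination = lake.get_file_system_client("curated").get_file_client("big-file.csv")

destination.create_file()                        # start a fresh file in the lake
offset = 0
for chunk in source.download_blob().chunks():    # stream the blob chunk by chunk
    destination.append_data(chunk, offset=offset, length=len(chunk))
    offset += len(chunk)
destination.flush_data(offset)                   # commit everything that was appended
```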

Invoking a Java method from an Azure Data Factory pipeline

I'm working on a project where I need to ingest EBCDIC files into Azure Blob Storage and transform them using Azure PaaS services.
The high-level design is as follows: ingest the file into Azure using AzCopy, then do the transformation using ADF.
Before doing the transformation I need to convert the EBCDIC file to ASCII, and the best solution is JRecord, which is written in Java.
I created a custom solution to convert my files using JRecord and deployed it as an app in Azure.
Can someone help me with how to invoke this app from an activity in an Azure Data Factory pipeline?
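One option, building on the Custom activity approach mentioned in the first answer above, would be to run the converter on an Azure Batch pool; as a purely illustrative sketch, the Batch command could call the JRecord-based jar through a small wrapper like this (the jar name and arguments are hypothetical):

```python
# run_conversion.py - hypothetical wrapper a Custom activity could execute on an Azure
# Batch node to call a JRecord-based EBCDIC-to-ASCII converter packaged as a jar.
import subprocess
import sys

def convert(input_path: str, output_path: str) -> None:
    # Placeholder jar name and arguments; only the invocation pattern is the point.
    subprocess.run(
        ["java", "-jar", "ebcdic-converter.jar", input_path, output_path],
        check=True,  # fail the Batch task (and the ADF activity) if the conversion fails
    )

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])
```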

Azure Data Factory - moving data from on-premises SQL to Azure SQL

A simple question: can this be achieved directly, i.e. without Azure Blob Storage in between (as shown in all the examples)? Can someone provide a code example, please?
Yes, you can do this directly. In fact, you can copy directly between any of our supported sources and sinks; you don't have to pass through Blob Storage. To go from on-premises SQL Server to Azure SQL, you will need to set up a Data Management Gateway on your on-premises server. Then you use a linked service of type AzureSqlDatabase and an output dataset of type AzureSqlTable, instead of the AzureBlob types shown in the example. The exact steps to set up the DMG and the JSON for the linked services, datasets, and pipelines can be found in our documentation. We are also improving our UI in the near future to make these kinds of copy setups an easy, code-free experience.
https://azure.microsoft.com/en-us/documentation/articles/data-factory-sqlserver-connector/
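For orientation only (the authoritative JSON is in the linked documentation), the output dataset this answer describes has roughly the following shape; it is built here as a Python dict so the swap from the AzureBlob type to AzureSqlTable is visible, and the names and table are hypothetical:

```python
import json

# Rough, illustrative shape of an output dataset that points at Azure SQL instead of
# blob storage; consult the linked documentation for the exact, current schema.
output_dataset = {
    "name": "AzureSqlOutputDataset",                    # hypothetical dataset name
    "properties": {
        "type": "AzureSqlTable",                        # instead of "AzureBlob"
        "linkedServiceName": "AzureSqlLinkedService",   # hypothetical linked service
        "typeProperties": {"tableName": "MyTargetTable"},
        "availability": {"frequency": "Hour", "interval": 1},
    },
}

print(json.dumps(output_dataset, indent=2))
```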
