Azure Data Factory user parameter - azure

In Azure Databricks I want to get the user who manually triggered a Notebook in a Data Factory pipeline. I don't think Data Factory has a dynamic parameter to pass the user to Databricks, only pipeline features and functions. Does anyone know a solution for this?

It does have dynamic parameters for a Databricks notebook! Follow this tutorial and it will guide you to do just that :D
https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-databricks-notebook
Hope this helped!
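As a rough sketch of what that setup looks like (the names below are illustrative, not built-in ADF values): in the Notebook activity you define a base parameter, give it a dynamic value, and the notebook reads it with dbutils.widgets. As far as I know ADF does not expose the triggering user as a system variable, so the value could come from a pipeline parameter (e.g. @pipeline().parameters.User) that the person triggering the run fills in manually.

```python
# Inside the Databricks notebook that the ADF Notebook activity runs.
# "triggering_user" is a hypothetical base parameter name, not a built-in
# ADF value -- you define it yourself on the Notebook activity and map it
# to a pipeline parameter or expression.
# (dbutils is available implicitly in Databricks notebooks.)
triggering_user = dbutils.widgets.get("triggering_user")

print(f"This run was triggered by: {triggering_user}")
```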

Related

What is meant by an 'integration dataset' in Azure Synapse Analytics?

I'm new to Synapse. I am using Azure Synapse and I have noticed that there is an option to import an 'integration dataset'.
I'm not sure what exactly it means and how it differs from some of the other options. I can't find anything in the Microsoft documentation. Can anyone please explain to me what it means?
Integration Datasets in Synapse are similar to Datasets in Azure Data Factory.
It is a reference that specifies the location and structure of your data within a data store, and it can be used in your pipeline activities and data flows. There are many data store options and connectors available for creating your datasets, which can be internal or external to Azure.
Please read this link to know more about datasets: https://learn.microsoft.com/en-us/azure/data-factory/concepts-datasets-linked-services?tabs=data-factory
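To make that a bit more concrete, here is a rough sketch of the shape of such a dataset definition, written as a Python dict rather than the JSON the service actually stores; the linked service, container and file names are made up for illustration.

```python
# Rough shape of a delimited-text dataset definition (illustrative only;
# the linked service, file system and file names below are made up).
integration_dataset = {
    "name": "SalesCsv",
    "properties": {
        "type": "DelimitedText",               # format / data store type
        "linkedServiceName": {                 # where the data lives
            "referenceName": "MyAdlsLinkedService",
            "type": "LinkedServiceReference",
        },
        "typeProperties": {                    # location and parsing details
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "raw",
                "fileName": "sales.csv",
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": True,
        },
    },
}
```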

How to remove special characters from XML stored in ADLS using Azure Data Factory or any other option?

I have a scenario where I need to remove some characters from XML tags stored in ADLS. I am looking for an option within ADF. Can someone help me with the approach I should follow?
This is not possible with ADF alone. You could write a piece of code to do this in Azure Functions, since Azure Data Factory only does data movement and data transformation; editing characters inside XML tags does not fall under that.
You can use the Azure Function activity in a Data Factory pipeline to run Azure Functions. To launch an Azure Function, you must first set up a linked service connection and an activity that specifies the Azure Function you want to execute.
There is a Microsoft document with deeper insight into the Azure Function activity in ADF: Here.
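If you go the Azure Functions route, the function itself can stay small. Below is a rough Python sketch (v1 programming model, HTTP trigger) that downloads the XML from storage, strips unwanted characters with a regex, and writes it back; the connection string, container and path are placeholders, and since ADLS Gen2 is blob-compatible it uses the blob SDK. The Data Factory pipeline would then call it via the Azure Function activity.

```python
import re

import azure.functions as func
from azure.storage.blob import BlobServiceClient  # ADLS Gen2 is blob-compatible

# Placeholder values -- replace with your own storage account details.
CONNECTION_STRING = "<storage-connection-string>"
CONTAINER = "raw"
BLOB_PATH = "input/sample.xml"


def main(req: func.HttpRequest) -> func.HttpResponse:
    blob = (
        BlobServiceClient.from_connection_string(CONNECTION_STRING)
        .get_blob_client(CONTAINER, BLOB_PATH)
    )

    xml_text = blob.download_blob().readall().decode("utf-8")

    # Strip the unwanted characters -- here, ASCII control characters that
    # are illegal in XML; adjust the pattern to your own case.
    cleaned = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", xml_text)

    blob.upload_blob(cleaned, overwrite=True)
    return func.HttpResponse("XML cleaned", status_code=200)
```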

Is it possible to store variables in Azure Data Factory pipelines?

In my Azure Data Factory pipeline, I want to use a variable, which gets updated on each run and which is also read on each run. At the moment, I am using a Database to achieve that. But it would be much simpler if Azure Data Factory provided a way of storing variables. So, my question is, is there any such facility in Azure Data Factory?
As @Joel Cochran says, ADF doesn't support persisting a variable across pipeline runs. We need to write the data to storage, e.g. a database or Azure Storage, and then use a Lookup activity to get the value from the blob storage file or the DB. :)
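To illustrate the storage side of that workaround, here is a small Python sketch that keeps a counter in a JSON blob. A Lookup activity at the start of the pipeline can read the same file (e.g. with an expression like @activity('LookupState').output.firstRow.counter), and a small piece of code like this writes the updated value back at the end of the run. All names here are placeholders.

```python
import json

from azure.storage.blob import BlobServiceClient

# Placeholder names for illustration only.
CONNECTION_STRING = "<storage-connection-string>"
blob = (
    BlobServiceClient.from_connection_string(CONNECTION_STRING)
    .get_blob_client("pipeline-state", "counter.json")
)

# Read the value written by the previous run.
state = json.loads(blob.download_blob().readall())

# Update it for the next run.
state["counter"] += 1
blob.upload_blob(json.dumps(state), overwrite=True)
```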

Azure data factory data flow task cannot take on prem as source

Hey,
I am having trouble creating a Data Flow task which uses an on-prem source. Is this not possible in the preview version?
I have created a self-hosted IR to connect ADF to my laptop, and that is what I want to use. In the pic below I am trying to create a dataset off the self-hosted IR. It works great in the Copy task, but for Data Flows it is greyed out.
For this question, I asked Azure support for help and they replied with this answer:
Answer:
This means an on-premises SQL Server is not supported as a dataset in Data Flows at the current stage.
Update:
Data Flows currently only support the Azure IR, so on-premises datasets are not supported.
Refer to Integration runtime types.
Hope this helps.
If your goal is to use visual data transformations in ADF using Mapping Data Flows with on-prem data, then build a pipeline with a Copy Activity first. Use the Self-Hosted Integration Runtime with the Copy Activity to stage your data in Blob Store. Then add a subsequent Execute Data Flow activity to transform that data.
I made a video on how to do this:
https://www.youtube.com/watch?v=IN-4v0e7UIs
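Roughly, that staging pattern looks like the sketch below, written as a Python dict mirroring the pipeline JSON (activity, dataset and data flow names are made up): the Copy activity uses datasets bound to the self-hosted IR to land the on-prem data in Blob Storage, and the Execute Data Flow activity depends on it.

```python
# Illustrative pipeline shape only -- activity, dataset and data flow
# names are placeholders, and the Copy activity's source/sink details
# are omitted for brevity.
pipeline = {
    "name": "StageThenTransform",
    "properties": {
        "activities": [
            {
                "name": "CopyOnPremToBlob",   # runs via the self-hosted IR's datasets
                "type": "Copy",
                "inputs": [{"referenceName": "OnPremSqlTable", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "StagingBlob", "type": "DatasetReference"}],
            },
            {
                "name": "TransformStagedData",
                "type": "ExecuteDataFlow",    # runs on an Azure IR against the staged copy
                "dependsOn": [
                    {"activity": "CopyOnPremToBlob", "dependencyConditions": ["Succeeded"]}
                ],
                "typeProperties": {
                    "dataFlow": {"referenceName": "CleanAndShape", "type": "DataFlowReference"}
                },
            },
        ]
    },
}
```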
I reproduced your issue on my side; however, this limitation is not claimed anywhere in the official documentation about Data Flows.
You could submit your feedback on this.
I also found a feedback item for Data Flows in ADF for your reference. If you want to push its progress, you could vote it up. I would also suggest referring to the comments in that link:
For access to the 80+ ADF connectors, use Copy Activity to stage data for transformation.
Data Flows will access data in your lake (Blob, ADB, ADW, ADLS) for transformation.
Think of Copy Activity as your data heavy-lifting activity and Data Flow as your data transformation engine.

Scheduling a U-SQL Job

I am trying to schedule a U-SQL job. Please let me know whether I can schedule a U-SQL job, and if so, how I can schedule it.
Thanks,
Vinoth
To my mind, the best way to orchestrate your U-SQL job along with the accompanying data management, such as getting source data, pushing output data, etc., is Azure Data Factory V2. ADF has a rich API. Basically, you can run your jobs using PowerShell, C#, or a trigger.
See my very simple example of the job and how to add a trigger below. In this example, I process the documents with my U-SQL job and then push the output file (CSV or Avro) into Azure SQL Server.
You could use Azure Automation (with the help of the Azure Data Lake Analytics Cmdlets) or Azure Data Factory to schedule a U-SQL script in the cloud.
You can get some guidance regarding creating an ADF pipeline here:
https://azure.microsoft.com/en-us/documentation/articles/data-factory-build-your-first-pipeline-using-editor/
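Whichever scheduler you pick (ADF or Azure Automation), the submission step itself can also be scripted. Below is a rough Python sketch following the pattern in Microsoft's Data Lake Analytics Python quickstart; the account name, credentials and script are placeholders, and the exact SDK surface may differ between versions of azure-mgmt-datalake-analytics.

```python
import uuid

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.datalake.analytics.job import DataLakeAnalyticsJobManagementClient
from azure.mgmt.datalake.analytics.job.models import JobInformation, USqlJobProperties

# Placeholder values for illustration.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-id>"
CLIENT_SECRET = "<app-secret>"
ADLA_ACCOUNT = "<adla-account-name>"

credentials = ServicePrincipalCredentials(
    client_id=CLIENT_ID, secret=CLIENT_SECRET, tenant=TENANT_ID
)
job_client = DataLakeAnalyticsJobManagementClient(
    credentials, "azuredatalakeanalytics.net"
)

# A trivial U-SQL script just to show the submission call.
script = (
    '@rows = SELECT * FROM (VALUES (1, "hello")) AS T(id, msg); '
    'OUTPUT @rows TO "/output/sample.csv" USING Outputters.Csv();'
)

# Submit the job; the scheduler (ADF or Automation) would wrap this step.
job_id = str(uuid.uuid4())
job_client.job.create(
    ADLA_ACCOUNT,
    job_id,
    JobInformation(
        name="scheduled-usql-job",
        type="USql",
        properties=USqlJobProperties(script=script),
    ),
)
```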
