Azure Data Factory V2 - Multiple Instances of the Same Pipeline Triggered in Parallel

We've created a reusable Azure Data Factory V2 pipeline, and we're planning to invoke it from several different master pipelines. These master pipelines may run in parallel. My concern is: will the reusable pipeline run as multiple independent instances, or could it deadlock?
Do I need to configure any settings to run the reusable pipeline with multiple instances (in case multiple instantiation is not supported by default)?
Thanks

As far as I know, there are no specific settings you need to configure; each invocation creates its own pipeline run. However, based on azure-data-factory-limits, Azure Data Factory V2 does have a number of service limits.
For example, concurrent pipeline runs per pipeline is limited to 100, and write API calls are limited to 2,500/hr. You need to design your workload around these limits. In addition, you could contact support about any custom requirements.
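To illustrate the point about multiple instances, here is a minimal sketch using the azure-mgmt-datafactory SDK. The subscription, resource group, factory, pipeline, and parameter names are placeholders, not values from the question; each call to create_run on the same pipeline returns its own run ID, which is effectively what two master pipelines invoking the reusable pipeline in parallel would do.

```python
# Minimal sketch: trigger the same reusable pipeline twice, as two master
# pipelines running in parallel would. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

for caller in ("master-pipeline-A", "master-pipeline-B"):
    run = client.pipelines.create_run(
        resource_group,
        factory_name,
        "reusable-pipeline",               # hypothetical pipeline name
        parameters={"invokedBy": caller},  # hypothetical pipeline parameter
    )
    # Each invocation gets a distinct run ID and executes as its own instance.
    print(caller, "->", run.run_id)
```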

Related

How to invoke Job/Task in Azure Databricks from Azure Function

I need to develop an event-driven pipeline that is triggered on file arrival in ADLS Gen2 (i.e. ABFS). On file arrival, I need to trigger 4 subsequent Spark jobs on an Azure Databricks cluster.
For orchestrating the Spark jobs I can use Databricks Jobs, so that the jobs run as a pipeline.
But the first job should be triggered only after the file arrives.
I am currently exploring ways to achieve this, but I need advice on designing it in the best possible manner with respect to cost.
One solution could be to use Azure Data Factory to orchestrate the entire flow with a Storage Event Trigger, but adopting ADF just for the event-based trigger doesn't seem sensible to me, since the rest of the application (the Spark jobs) can be pipelined with the Databricks Jobs feature. In terms of cost, ADF can also be expensive. Another solution could be to use an Azure Functions blob trigger to detect file arrival, but I am not able to work out how to trigger Azure Databricks jobs from Azure Functions. Going with Functions could be cost-effective, since the function would not be running/active until a file has arrived.
Note: there can be multiple files arriving in an hour, and there is no fixed schedule for file arrival.
Also, the trigger file is different from the data files, i.e. on arrival of a trigger file, the Spark pipeline would consume the actual data files.
Data files and trigger files have different extensions, and both arrive in ABFS.
Your worry about ADF cost is misplaced. The pipelines themselves are extremely cheap; the activities that actually move data and use CPU are where most of the cost is. For instance, Data Flows run on managed Spark clusters, which is reflected in the pricing. See Data Factory Pricing. Using a pipeline to orchestrate Databricks jobs is a common, simple, and (at least on the ADF side) very inexpensive pattern.
If you want to kick off a Databricks job from an Azure Function, there's an API for that (the Databricks Jobs API). Also check out Databricks Auto Loader, although keeping a cluster running continuously for it can be expensive.
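For the Azure Function route, here is a minimal sketch of the call a blob-triggered function could make when a trigger file arrives, using the Databricks Jobs API 2.1 run-now endpoint. The workspace URL, job ID, and notebook parameter name are assumptions, not values from the question.

```python
# Minimal sketch: start a Databricks job from a (blob-triggered) Azure Function
# via the Jobs API 2.1 "run-now" endpoint. Host, job ID, and the notebook
# parameter name are placeholders.
import os
import requests

DATABRICKS_HOST = "https://<workspace>.azuredatabricks.net"  # placeholder
JOB_ID = 12345                                               # hypothetical job ID

def start_databricks_job(trigger_file: str) -> int:
    """Kick off the first Spark job, passing the trigger file name along."""
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
        json={
            "job_id": JOB_ID,
            "notebook_params": {"trigger_file": trigger_file},  # hypothetical parameter
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]  # ID of the newly started job run
```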

Azure DevOps and YAML pipeline creation

How do I create an Azure DevOps YAML pipeline? I'm currently trying to create multiple build pipelines for my Angular app in Azure DevOps using the new YAML way. … As far as I can tell from the docs, it is not possible to define multiple pipelines in a single .yml file either. Is this scenario currently not supported in Azure DevOps?
To create a pipeline, the simplified steps are ...
Go to the project you want to create the pipeline in
Go to the 'Pipelines' menu
Click the blue 'New pipeline' button on the top right corner
Follow the wizard that will help you set up your YAML pipeline
You can also read Create your first pipeline
As far as multiple pipelines in one .yml file go: no, you define one pipeline per YAML file. But that doesn't mean you cannot have multiple stages in one pipeline.
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (for example, Build, QA, and production). Each stage contains one or more jobs. When you define multiple stages in a pipeline, by default, they run one after the other. You can specify the conditions for when a stage runs. When you are thinking about whether you need a stage, ask yourself:
Do separate groups manage different parts of this pipeline? For example, you could have a test manager that manages the jobs that relate to testing and a different manager that manages jobs related to production deployment. In this case, it makes sense to have separate stages for testing and production.
Is there a set of approvals that are connected to a specific job or set of jobs? If so, you can use stages to break your jobs into logical groups that require approvals.
Are there jobs that need to run a long time? If you have part of your pipeline that will have an extended run time, it makes sense to divide them into their own stage.
and
You can organize pipeline jobs into stages. Stages are the major divisions in a pipeline: "build this app", "run these tests", and "deploy to pre-production" are good examples of stages. They are logical boundaries in your pipeline where you can pause the pipeline and perform various checks.
Source for the last snippet and an interesting read: Add stages, dependencies, & conditions.

How to Trigger ADF Pipeline from Synapse Pipelines

Problem
Due to internal requirements, I need to run a Synapse pipeline and then trigger an ADF pipeline. It does not seem that there is a Microsoft-approved method of doing this. The pipelines run infrequently (every week or month) and the ADF pipeline must run after the Synapse pipeline.
Options
It seems that other answers pose several options:
Azure Functions. Create an Azure function that calls the CreatePipelineRun function on the ADF pipeline. At the end of the Synapse pipeline, insert a block that calls the Azure function.
Use the REST API and Web Activity. Use the REST API to make a call to run the ADF pipeline. Insert a Web Activity block at the end of the Synapse pipeline to make the API call (see the sketch after this list).
Tables and polling. Insert a record into a table in a managed database with data about the Synapse pipeline run. Have regular polling from the ADF pipeline to check for new records and run when ready.
Storage Event. Create a timestamped blob file at the end of the Synapse run. Use the "storage event trigger" within ADF to trigger the ADF pipeline.
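For reference, here is a minimal sketch of the REST call behind option 2; it is the same createRun request a Synapse Web Activity or an Azure Function would issue. The subscription, resource group, factory, and pipeline names are placeholders.

```python
# Minimal sketch: start an ADF pipeline run via the createRun REST endpoint.
# All names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"
RG = "<resource-group>"
FACTORY = "<data-factory-name>"
PIPELINE = "<adf-pipeline-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
    f"/pipelines/{PIPELINE}/createRun?api-version=2018-06-01"
)
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json={})
resp.raise_for_status()
print("ADF run started:", resp.json()["runId"])  # response contains the new run ID
```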
Question
Which of these would be closest to the "approved" option? Are there any clear disadvantages to any of these?
As you mentioned, there is no "approved" solution for this problem. All the approaches you mentioned have pros and cons and should work. For me, Option #3 has been very successful. We have built a Queue Manager based on Tables & Stored Procedures in Azure SQL. We use Logic Apps to process the Triggers which can be Scheduled, Blob Events, or REST calls. Those Logic Apps insert jobs in the Queue table via Stored Procedure. That Stored Procedure can be called directly by virtually any system, so your Synapse pipeline could insert a Queue job to execute the ADF pipeline. Other benefits include a log of all the pipeline runs, support for multiple Data Factories (and now Synapse Workspaces), and a web interface we wrapped around the database for management and tracking.
We have 2 other Logic Apps that process the Queue (a Status manager and an Executor). These run constantly (every 1 minute and every 3 minutes). The actions to check status and create pipeline runs are both implemented as .NET Azure Functions [you'll need different SDKs for Synapse vs. ADF]. This system runs thousands of pipelines a month, sometimes more, across numerous Data Factories and Synapse Workspaces.
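As an illustration of the queueing idea only, here is a minimal sketch of how a caller could insert a queue job via a stored procedure. The procedure dbo.usp_EnqueuePipelineRun, its parameters, and the connection details are hypothetical, not the actual objects from this system.

```python
# Minimal sketch: enqueue a pipeline run by calling a hypothetical stored
# procedure in the Azure SQL queue database. Connection details, procedure
# name, and parameters are all placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<queue-db>;"
    "Authentication=ActiveDirectoryMsi;"  # or whatever auth your environment uses
)

cur = conn.cursor()
# Ask the executor to run a specific ADF pipeline (hypothetical procedure signature).
cur.execute(
    "EXEC dbo.usp_EnqueuePipelineRun ?, ?, ?",
    ("adf", "<data-factory-name>", "<adf-pipeline-name>"),
)
conn.commit()
cur.close()
conn.close()
```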
The PROs here are many, but this disconnected approach permits facets of your system to operate in isolation. And it is flexible, in that you can tie virtually any system into the Queue. Your example of a pipeline that needs to execute another pipeline in a different system is a perfect example.
The CON here is that this is the most involved approach. If this is a one-off problem you are trying to solve, choose one of the other options.

Azure DevOps REST API - Get agent pools for jobs?

I'm using the Azure DevOps REST API to retrieve pipeline runs (aka "builds"). The build response has a bunch of good data, but it seems that the pool it reports only applies if the overall pipeline has a top-level pool defined.
For example, I have a pipeline that runs several parallel jobs, each one in a different self-hosted agent pool. But when I retrieve a build of this pipeline using the REST API, the only data available is for the pipeline's pool, which is the normal Hosted Ubuntu 1604 response you get for Microsoft-hosted builds - there's no mention of any of the self-hosted agent pools that did all the work.
I've tried drilling down into different sections (including the stage and task queries). The task level will eventually show the name of the agent used, but it's just a string, so it's not easy to infer the agent pool used unless you happen to name your agents in a specific way.
Is there any way to drill down into the individual "jobs" that ran as part of a pipeline and see what agent pools they were run on, using the REST API?
The Build response contains a timeline link (under the _links field).
You can form it like this: https://dev.azure.com/<org>/<project>/_apis/build/builds/<buildId>/Timeline
The JSON response has useful info for each job/task, including queueId.
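Here is a minimal sketch of calling that Timeline endpoint with a PAT and reading the agent name (workerName) for each job record. The organization, project, and build ID are placeholders, and mapping an agent name back to its pool still requires the agent pool APIs or a naming convention.

```python
# Minimal sketch: fetch the build timeline and print the agent that ran each job.
# Organization, project, and build ID are placeholders; auth uses a PAT.
import base64
import os
import requests

ORG = "<org>"
PROJECT = "<project>"
BUILD_ID = 1234  # placeholder build ID

pat = os.environ["AZDO_PAT"]
auth = base64.b64encode(f":{pat}".encode()).decode()

resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}/Timeline"
    "?api-version=7.0",
    headers={"Authorization": f"Basic {auth}"},
)
resp.raise_for_status()

for record in resp.json()["records"]:
    if record["type"] == "Job":
        # workerName is the agent that ran the job; the pool itself is not included here.
        print(record["name"], "->", record.get("workerName"))
```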
At present, the REST API cannot drill down into the individual "jobs" that ran as part of a pipeline and report which agent pools they were run on. That API only returns the default settings for your pipeline.
As a workaround, we can use the Builds - Get Build Log API and find the agent's name in the log output.

Setup Multiple Environments in Parallel Azure DevOps

I have a question about setting up multiple environments at a time so that we can test multiple projects discretely (in isolation) at once. Ideally we should be able to spin these environments up and down as necessary.
We have a microservice-based architecture and mostly use Azure PaaS services in our infrastructure.
Currently I have automated most of our infrastructure through Terraform, and that is almost done. The next step is deploying the code; the services are not containerized, so I tried using Azure Pipelines, but it's a huge task. Is there a better way to approach this?
You should look at leveraging Azure Pipeline templates. Once a template is defined, you can reuse it everywhere. For instance, with Terraform we created a template for running plan and apply that just needs to be fed the directory the Terraform code is located in. This saved time across all projects, as we just reference our template and the rest is taken care of.
As for your other question about the ability to spin environments up and down: this can be done easily if the application is architected with that in mind. Keep in mind that for deployment certain resources need globally unique names (e.g. storage accounts, app services), and some resources are potentially shared (e.g. the network).
The other piece to consider is how to ensure these ad hoc environments are actually being spun down. I would recommend something like a tagging strategy or a process that cleans up resources that haven't been deployed to in x days; a sketch of that idea follows.
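As a sketch of that clean-up idea, a scheduled job could delete ad hoc resource groups whose expiry date has passed. The environment and expiresOn tags are a hypothetical convention, not something Azure enforces, and the subscription ID is a placeholder.

```python
# Minimal sketch: delete ad hoc environment resource groups whose hypothetical
# "expiresOn" tag date has passed. Subscription ID and tag names are placeholders.
from datetime import date

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Only consider resource groups explicitly tagged as ad hoc test environments.
for rg in client.resource_groups.list(filter="tagName eq 'environment' and tagValue eq 'adhoc'"):
    expires_on = (rg.tags or {}).get("expiresOn")  # hypothetical tag, e.g. "2024-01-31"
    if expires_on and date.fromisoformat(expires_on) < date.today():
        print("Deleting expired environment:", rg.name)
        client.resource_groups.begin_delete(rg.name).wait()
```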
