Logic App running twice with alert monitoring - Azure

I have the following Logic app:
This logic app is triggered when my connection count goes above 120, and it runs a PowerShell script which reduces the number of connections. The problem I am facing is that once it runs, the alert fires again before the connections have dropped back below 120, so the logic app is triggered a second time, generally within minutes of the first run. Is there a way I can tweak this logic app to make sure it won't trigger again for, say, 10 minutes after it has been triggered, to stop my PowerShell script from running twice?

You could keep a persistent value in any one of the cloud storage services - let's take Azure Blob Storage as an example.
The currently running instance persists its run time to the Azure Blob storage.
When the next instance is triggered, it reads the last run time from the blob; if it is less than 10 minutes old, your logic skips the execution of the PowerShell.
The overall logic will look like below:
Note :
Logic Apps don't have a built-in concept of persistent storage. You can use Azure SQL, Cosmos DB, SharePoint, Azure Storage, etc. through their built-in connectors to achieve this persistence.
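If you'd rather wrap that check in code (for example in a small Azure Function the logic app calls before the PowerShell step), here is a minimal C# sketch of the same idea using the Azure.Storage.Blobs SDK. The container name, blob name, and connection-string setting are placeholders; the 10-minute window is the cool-down from the question.

```csharp
using System;
using Azure.Storage.Blobs;

class ConnectionAlertThrottle
{
    // Placeholders: pick your own container/blob names; the connection string
    // is read from an environment setting here.
    const string ContainerName = "logicapp-state";
    const string BlobName = "last-run.txt";
    static readonly TimeSpan CoolDown = TimeSpan.FromMinutes(10);

    static void Main()
    {
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"),
            ContainerName);
        container.CreateIfNotExists();
        var blob = container.GetBlobClient(BlobName);

        // Read the run time persisted by the previous execution, if any.
        var lastRun = DateTimeOffset.MinValue;
        if (blob.Exists().Value)
        {
            var content = blob.DownloadContent().Value.Content.ToString();
            DateTimeOffset.TryParse(content, out lastRun);
        }

        // Inside the cool-down window: skip the remediation step entirely.
        if (DateTimeOffset.UtcNow - lastRun < CoolDown)
        {
            Console.WriteLine("Last run was under 10 minutes ago - skipping the PowerShell step.");
            return;
        }

        // Otherwise run the remediation and persist the new timestamp for the next trigger.
        Console.WriteLine("Running the PowerShell remediation...");
        blob.Upload(BinaryData.FromString(DateTimeOffset.UtcNow.ToString("o")), overwrite: true);
    }
}
```

Within the Logic App itself, the same check is just a Blob "Get blob content" action followed by a Condition action, as the answer describes.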

Related

Task.Run() Inside of an Azure Function App

We have an Azure Function App (timer triggered) on a Consumption plan for testing purposes. The app first fires a bunch of stored procedures on a SQL Server. We use Task.Run(), and inside it there is just a synchronous operation that runs an SP on the server. It's a fire-and-forget task; exceptions/errors from SQL are logged to a table inside the SQL Server. This Function App is part of a plan to migrate our SQL Agent jobs to the cloud (as we are moving towards a PaaS database). Moreover, the Function App triggers an SP across multiple databases, so there is a single Task.Run for each DB.
The thing is, the execution of the SP can take around 20 minutes to complete. I see that around the 19-minute mark the connection is dropped. For example, an SP starts at, say, 5:00 AM and, with appropriate logging inside the SP, runs until 5:19 AM and then stops (no success log). So I believe the SqlConnection from C# is dropped. The Consumption plan default timeout is 5 minutes, so if this is a timeout issue, why can it continue until 19 minutes before being dropped? I have observed this behavior for some days now.
I cannot arrive at a feasible explanation for the above behavior.
The maximum timeout for Azure Functions on the Consumption plan is 10 minutes.
Change the plan to support a longer timeout, or use Durable Functions (intended for long-running tasks).
Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless compute environment. The extension lets you define stateful workflows by writing orchestrator functions and stateful entities by writing entity functions using the Azure Functions programming model. Behind the scenes, the extension manages state, checkpoints, and restarts for you, allowing you to focus on your business logic.
Refs:
https://learn.microsoft.com/pl-pl/azure/azure-functions/functions-scale#function-app-timeout-duration
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp
https://learn.microsoft.com/en-us/learn/modules/create-long-running-serverless-workflow-with-durable-functions/
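For what it's worth, here is a minimal Durable Functions sketch of that pattern in C#, assuming the Microsoft.Azure.WebJobs.Extensions.DurableTask extension, a per-database connection string in app settings, and a placeholder stored-procedure name, database list, and schedule. Note that each activity is still subject to the host's functionTimeout, so a 20-minute SP also needs the longer timeout of a Premium or Dedicated plan, as the answer says.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class SpMigrationOrchestration
{
    // Hypothetical list of target databases; in practice this could come from configuration.
    private static readonly string[] Databases = { "Db1", "Db2", "Db3" };

    // Orchestrator: fans out one activity per database and waits for all of them.
    // Durable Functions checkpoints its progress and replays it as activities complete.
    [FunctionName("RunAllSps")]
    public static async Task Orchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var tasks = new List<Task>();
        foreach (var db in Databases)
        {
            tasks.Add(context.CallActivityAsync("RunSp", db));
        }
        await Task.WhenAll(tasks);
    }

    // Activity: runs the stored procedure on one database. The SP name and the
    // per-database connection-string settings ("SqlConn_<db>") are placeholders.
    [FunctionName("RunSp")]
    public static async Task RunSp([ActivityTrigger] string database)
    {
        var connectionString = Environment.GetEnvironmentVariable($"SqlConn_{database}");
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.NightlyMaintenance", conn)
        {
            CommandType = CommandType.StoredProcedure,
            CommandTimeout = 0 // let the SP run as long as it needs on the SQL side
        })
        {
            await conn.OpenAsync();
            await cmd.ExecuteNonQueryAsync();
        }
    }

    // Timer-triggered starter, standing in for the original timer trigger (5:00 AM here).
    [FunctionName("RunAllSps_Timer")]
    public static async Task Start(
        [TimerTrigger("0 0 5 * * *")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient client)
    {
        await client.StartNewAsync("RunAllSps");
    }
}
```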

Azure Functions as Scheduler

Is Azure Functions a good alternative to Azure Data Factory to use as a scheduler? It has a blob trigger for monitoring and can use C# to trigger Databricks jobs via the API. But is it a viable alternative?
Edited to add more information: I want to trigger a Databricks job based on a trigger file, but I do not want to use Azure Data Factory or Databricks job scheduling.
I would probably use a simple Logic App with an Event Grid trigger on the blob storage "blob created" event. Based on the trigger data, I would call the Databricks Jobs REST API.
I got the entire demo below working in under 10 minutes, so it's fast to set up.
For this demo I used the setup shown below, with a Logic App configured as the trigger.
I strongly suggest adding a prefix filter like
/blobServices/default/containers/<container_name>
so you don't fire too many logic apps from different containers, since Event Grid reacts to all events in the entire storage account.
The HTTP call looks like this.
Of course, at this point simply change the clusters-list call to the job-submission REST call (a sketch of that call follows below).
The execution then looks like this.
Just make sure that the Event Grid resource provider is registered, or the logic app will never fire.
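As a sketch of what that job-submission call could look like outside the designer, here is a hedged C# example using the Databricks Jobs "run-now" endpoint. The workspace URL, the token setting, and the job id are placeholders; in the Logic App itself this is just the HTTP action with the same URL, Authorization header, and JSON body.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TriggerDatabricksJob
{
    static async Task Main()
    {
        // Placeholders: use your own workspace URL, a PAT stored securely, and your job id.
        var workspaceUrl = "https://adb-1234567890123456.7.azuredatabricks.net";
        var token = Environment.GetEnvironmentVariable("DATABRICKS_TOKEN");
        var jobId = 42;

        using var http = new HttpClient { BaseAddress = new Uri(workspaceUrl) };
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token);

        // Jobs API "run-now": starts an existing job definition by id.
        var body = new StringContent(
            $"{{\"job_id\": {jobId}}}", Encoding.UTF8, "application/json");
        var response = await http.PostAsync("/api/2.1/jobs/run-now", body);
        response.EnsureSuccessStatusCode();

        // The response contains the run_id you could poll for status if needed.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```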

Azure Blob Triggers sometimes taking too much time to get triggered

I am using an App Service plan for my Azure Function and have added blob triggers, but when a file is uploaded to the blob container, the functions are not triggering, or sometimes it takes too much time before they start triggering.
Any suggestion will be appreciated.
It should trigger the function as and when a new file is uploaded to the blob container.
This is likely a case of cold start.
As per the note here:
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well (a minimal sketch follows at the end of this answer).
Since you say that you are already running the functions on an App Service plan, it's likely that you don't have the Always On setting enabled. You can do this on the App from the Application Settings -> General Settings tab on the portal:
Note that Always On is only applicable to Az Functions bound to an App Service plan - it isn't available on the serverless Consumption plan.
Another possible cause is not clearing the blobs out of the container after you process them.
From here:
If the blob container being monitored contains more than 10,000 blobs (across all containers), the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created.
And when using the Consumption plan, here's another link warning about the potential for delays.
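For reference, an Event Grid-triggered function in C# could look like the sketch below, assuming the Microsoft.Azure.WebJobs.Extensions.EventGrid extension and the Azure.Messaging.EventGrid event type; the function name is a placeholder. The subscription delivers the Microsoft.Storage.BlobCreated event directly, so there is no container scan to wait for.

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobCreatedHandler
{
    // Fires as soon as Event Grid delivers a Microsoft.Storage.BlobCreated event,
    // instead of waiting for the blob trigger's periodic container scan.
    [FunctionName("BlobCreatedHandler")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        // The subject looks like /blobServices/default/containers/<container>/blobs/<name>,
        // so the blob path can be read straight from it.
        log.LogInformation("Blob created: {subject}", eventGridEvent.Subject);
        log.LogInformation("Event payload: {data}", eventGridEvent.Data.ToString());
    }
}
```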

Scalable Azure Function with blob trigger

I made an Azure Function on a Consumption plan with a blob trigger. Then I add lots of files to the blob container, and I expect the Azure Function to be invoked every time a file is added.
And because I use an Azure Function on a Consumption plan, I would expect that there is no scalability problem, right? WRONG.
I can easily add files to the blob container faster than the Azure Function can process them. A hundred users can add to the container, but there seems to be only one instance of the Azure Function working at any one time, meaning it can easily fall behind.
I thought the platform would just create more instances of the Azure Function as needed. Well, it seems it doesn't.
Any advice how I can configure my Azure Function to be truly scalable with a blob trigger?
This is because you are being affected by cold start.
As per the note here
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well.
When to consider Event Grid?
Use Event Grid instead of the Blob storage trigger for the following scenarios:
Blob storage accounts
High scale
Minimizing cold-start delay
Read more here
Update in 2020
Azure Functions has a new tier/plan called Premium where you can avoid the cold start.
See, for example, this YouTube video.

Azure Logic App - Recurring Trigger Not Recurring

I have a simple Azure logic app that starts with a trigger step, which is "When One or More BLOBS are Added" to a specific Azure storage container. It has an interval setting to check for new files every 3 minutes. It then calls one more step to send a message to a Service Bus queue. In my Logic App designer there is a "Run" button, and in the Overview panel of the Logic App there is a "Run Trigger" command. If I run either of these from the Azure portal, my job runs continually as expected.
My issue is that once I leave the portal, my logic app no longer runs. I assumed that the interval setting of every 3 minutes on the trigger step (step 1) meant just that. I read that I can put a Scheduler step as step 1, but how does that interact with the 3-minute interval setting of the blob trigger?
My goal is to have the logic app run every 3 minutes, looking for any new files placed in the blob container since the last run, and then sending the Service Bus message as step 2. Note that I also have an Azure Scheduler that I can use if that is a better solution (i.e. triggering the job with a webhook).
I appreciate the help!!
