Azure Blob Triggers sometimes taking too much time to get triggered

I am using an App Service plan for my Azure Function and have added a blob trigger, but when a file is uploaded to the blob container the function is not triggered, or it sometimes takes a long time before it starts triggering.
Any suggestions would be appreciated.
It should trigger the function as soon as a new file is uploaded to the blob container.

This looks like a case of cold start.
As per the note here
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well.
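To make that concrete, here is a minimal sketch of an Event Grid triggered function using the Python v2 programming model; the function name and the logging are mine, and the sketch assumes you also create an Event Grid subscription on the storage account that routes BlobCreated events to this function.

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# Fires as soon as Event Grid delivers a BlobCreated event, so there is no
# dependency on the blob-log polling that causes the cold-start delay.
@app.event_grid_trigger(arg_name="event")
def on_blob_created(event: func.EventGridEvent):
    data = event.get_json()  # BlobCreated events carry the blob URL in their data payload
    logging.info("Blob created: %s (event type: %s)", data.get("url"), event.event_type)
    # ... process the new blob here ...
```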

Since you say that you are already running the functions on an App Service plan, it's likely that you don't have the Always On setting enabled. You can enable it on the app from Application settings -> General settings in the portal.
Note that Always On is only applicable to Azure Functions bound to an App Service plan - it isn't available on the serverless Consumption plan.
Another possible cause is not clearing the blobs out of the container after you process them.
From here:
If the blob container being monitored contains more than 10,000 blobs (across all containers), the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created.
And when using the Consumption plan, here's another link warning about potential delays.
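If the watched container really does accumulate that many blobs, one mitigation is to move each blob out once it has been processed so the scanned set stays small. A rough sketch with the azure-storage-blob SDK, using made-up container names:

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string and container names, for illustration only.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")

def archive_processed_blob(blob_name: str) -> None:
    """Copy a processed blob into an archive container, then delete the original,
    keeping the watched container well under the 10,000-blob scanning threshold."""
    source = service.get_blob_client(container="incoming", blob=blob_name)
    target = service.get_blob_client(container="archive", blob=blob_name)
    target.start_copy_from_url(source.url)  # server-side copy; for large blobs,
    source.delete_blob()                    # wait for the copy to finish before deleting
```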

Related

Logic App running twice with alert monitoring

I have the following Logic app:
This logic app is triggered when my connection count goes above 120; it runs a PowerShell script which reduces the number of connections. The problem I am facing is that once it runs and the connections come back down from 120 or above, the logic app is triggered again because the alert fires again; this generally happens within minutes. Is there a way I can tweak this logic app to make sure it won't trigger again for, say, 10 minutes after it has been triggered, to stop my PowerShell script from running twice?
You could keep a persistent value stored in one of the cloud services; let me take Azure Blob storage as an example.
The currently running instance can persist its run time in Azure Blob storage.
When the next instance is triggered, it checks the last run time from the blob; if it is less than 10 minutes ago, your logic skips the execution of the PowerShell script.
The overall logic will look like below:
Note:
The Logic App doesn't have a built-in concept of persistent storage. You can use Azure SQL, Cosmos DB, SharePoint, Azure Storage, etc., using their built-in connectors to achieve this persistence.
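Expressed outside the Logic App designer, the check amounts to something like this sketch (the container and blob names are made up; the same pattern works with any of the stores mentioned above):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
state_blob = service.get_blob_client(container="logicapp-state", blob="last-run.txt")

def should_run(cooldown: timedelta = timedelta(minutes=10)) -> bool:
    """Return True only if the last recorded run is older than the cooldown window."""
    try:
        last_run = datetime.fromisoformat(state_blob.download_blob().readall().decode())
    except Exception:
        # No state blob yet (first run): allow the script to execute.
        last_run = datetime.min.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    if now - last_run < cooldown:
        return False  # skip: the PowerShell script already ran within the last 10 minutes
    state_blob.upload_blob(now.isoformat(), overwrite=True)  # record this run
    return True
```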

How can I find the source of my Hot LRS Write Operations on Azure Storage Account?

We are using an Azure Storage account to store some files that are downloaded by our app on demand.
Even though there should be no write operations (at least none I can think of), we are exceeding the included write operations just some days into the billing period (see image).
Regarding the price it's still within limits, but I'd still like to know whether this is normal and how I can analyze the matter. Besides the storage, we are using:
Functions and
App Service (mobile app)
but none of them should cause that many write operations. I've checked the logs of our functions, and none of those that access the queues or the blobs have been active lately. There are some functions that run every now and then, but only once every few minutes, and those do not access the storage at all.
I don't know if this is related, but there is a kind of periodic ingress on our blob storage (see the image below). The period is roughly 1 h, but there is a baseline of 100 kB per 5 min.
Analyzing the metrics of the storage account further, I found that there is a constant stream of 1.90k transactions per hour for blobs and 1.3k transactions per hour for queues, which seems quite exceptional to me. (Please note that the resolution of this graph is 1 h, while the former has a resolution of 5 minutes.)
Is there anything else I can do to analyze where the write operations come from? It kind of bothers me, since it does not seem as if it's supposed to be like that.
I've had the exact same problem; after enabling Storage Analytics and inspecting the $logs container, I found many log entries indicating that upon every request towards my Azure Functions, these write operations occur against the following container object:
https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease
In my Azure Functions code I do not explicitly write to any container or file as such, but I have the following two application settings configured:
AzureWebJobsDashboard
AzureWebJobsStorage
So I filed a support ticket with Azure with the following questions:
Are the write operations triggered by these application settings? I believe so, but could you please confirm.
Will the write operations stop if I delete these application settings?
Could you please describe, at a high level, in what context these operations occur (e.g. logging, resource locking, other)?
and I got the following answers from Azure support team, respectively:
Yes, you are right. According to the logs information, we can see “https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease”.
This azure-webjobs-hosts folder is associated with the function app and is created by default when the function app is created. When the function app is running, it records these logs in the storage account configured with AzureWebJobsStorage.
You can't stop the write operations because these operations record necessary logs to the storage account used by the Azure Functions runtime. Please do not remove the application setting AzureWebJobsStorage. The Azure Functions runtime uses this storage account connection string for all functions except for HTTP triggered functions. Removing this application setting will cause your function app to be unable to start. By the way, you can remove AzureWebJobsDashboard; that will stop the Monitor feature rather than the operations above.
These operations record runtime logs of the function app. They occur when the backend allocates an instance for running the function app.
Best place to find information about storage usage is to make use of Storage Analytics especially Storage Analytics Logging.
There's a special blob container called $logs in the same storage account which will have detailed information about every operation performed against that storage account. You can view the blobs in that blob container and find the information.
If you don't see this blob container in your storage account, then you will need to enable storage analytics on your storage account. However considering you can see the metrics data, my guess is that it is already enabled.
Regarding the source of these write operations, have you enabled diagnostics for your Functions and App Service? These write diagnostic logs to blob storage. Storage Analytics itself also writes to the same account, and that will also cause write operations.
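To narrow down which caller is responsible, you can skim the $logs blobs for write-type operations. A rough sketch with the azure-storage-blob SDK; the list of operation names below is illustrative, not exhaustive:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
logs = service.get_container_client("$logs")

# Each $logs blob is a semicolon-delimited text file; adjust this list of
# write-type operation names to whatever you are hunting for.
WRITE_OPS = ("PutBlob", "PutBlock", "PutBlockList", "SetBlobMetadata", "LeaseBlob")

for blob in logs.list_blobs():
    text = logs.get_blob_client(blob.name).download_blob().readall().decode("utf-8", "ignore")
    for line in text.splitlines():
        if any(op in line for op in WRITE_OPS):
            print(blob.name, line[:200])  # print the start of each matching log entry
```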
In my case, I had Azure Application Insights generating 10K transactions per minute on the storage account for Functions and App Services, even though there were only a few HTTP requests among them. I'm not sure what triggered them, but once I removed Application Insights, everything went back to normal.

Which plan to select for my Azure function : Consumption Plan or App Service Plan?

We have created a blob-triggered Azure Function to process files placed in blob storage. The load on this blob container will not be consistent.
For example, during some hours hundreds or even thousands of files will be placed in that container every minute. During other hours, there may not be even a single file.
Some files will be processed in a few seconds and some can take more than 10-15 minutes.
So my question is: In this type of unpredictable scenario which plan will be better for us? App service plan or Consumption plan?
If you can optimize your code so that the maximum processing time stays within 10 minutes, the Consumption plan is your best option from a cost perspective, considering your fluctuating workload. (On the Consumption plan the default timeout is 5 minutes and the maximum is 10 minutes, set via functionTimeout in host.json.)
As Peter Bons mentioned in the comments, this is your best reference.
Edit
According to the above document,
if your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle.
If you want to avoid that delay and still use the Consumption plan to benefit from its cost effectiveness, you can replace the blob trigger with an Event Grid trigger, although at the time of writing it was not yet fully supported by Azure Functions.

Azure function not triggered by blob events

I have an Azure Function created with an ARM template using PowerShell.
The function is a blob-trigger function running on the Consumption plan, which copies a blob from a source storage account to a destination storage account.
When I upload a blob to the source storage, it is not copied. That means the function is not executed.
When I browse to the function app through the portal, the function gets invoked and does the required things as expected. Thereafter it works fine. This only happens when the function app is initially deployed by a PowerShell script using ARM templates.
So my guess is that when I create the function app using an ARM template and deploy it using PowerShell, it is in idle mode and never triggered by blob events. Is my assumption correct, or could you please help me find the issue? Thanks.
Be careful here. According to the Blob storage documentation, there may be a delay for this trigger on the Consumption plan (emphasis mine):
When your function app runs in the default Consumption plan, there may be a delay of up to several minutes between the blob being added or updated and the function being triggered. If you need low latency in your blob triggered functions, consider running your function app in an App Service plan.
Perhaps the behavior you are seeing is a manifestation of the above. Try converting to an App Service Plan and see if you still see the delay in the trigger.
I suspect it has nothing to do with your deployment method.

Scalable Azure Function with blob trigger

I made an Azure Function on a Consumption plan with a blob trigger. Then I add lots of files to the blob container and I expect the Azure Function to be invoked every time a file is added.
And because I use Azure Functions and the Consumption plan, I would expect that there is no scalability problem, right? WRONG.
I can easily add files to the blob container faster than the Azure Function can process them. A hundred users can add to the container, but there seems to be only one instance of the Azure Function working at any one time, meaning it can easily fall behind.
I thought the platform would just create more instances of the Azure Function as needed. Well, it seems it doesn't.
Any advice how I can configure my Azure Function to be truly scalable with a blob trigger?
This is because you are affected by cold start.
As per the note here
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well.
When to consider Event Grid?
Use Event Grid instead of the Blob storage trigger for the following scenarios:
Blob storage accounts
High scale
Minimizing cold-start delay
Read more here
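To make the switch concrete, an Event Grid triggered function for this high-scale scenario can pull the blob URL out of the event payload and fetch the content itself. This is a sketch under assumed names, with credential handling omitted for brevity (in practice, pass a credential or use a SAS URL):

```python
import logging
import azure.functions as func
from azure.storage.blob import BlobClient

app = func.FunctionApp()

# Each BlobCreated event is delivered as a separate invocation, so the platform
# can fan out across instances instead of a single host polling the blob logs.
@app.event_grid_trigger(arg_name="event")
def process_blob(event: func.EventGridEvent):
    blob_url = event.get_json().get("url")     # BlobCreated events include the blob URL
    blob = BlobClient.from_blob_url(blob_url)  # add credential=... for private containers
    data = blob.download_blob().readall()
    logging.info("Processing %s (%d bytes)", blob_url, len(data))
    # ... do the per-file work here ...
```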
Update in 2020
Azure Functions has a new tier/plan called Premium where you can avoid the cold start.
E.g., YouTube Video
