I have an Azure function created with an ARM template, deployed using PowerShell.
The function is a blob trigger function running on a Consumption plan; it copies blobs from a source storage account to a destination storage account.
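For reference, a minimal sketch of what such a copy function typically looks like in C# (the container names and connection-setting names below are illustrative placeholders, not the actual code from my deployment):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CopyBlob
{
    // Fires when a blob lands in the source container and writes a copy
    // to the destination container through an output blob binding.
    [FunctionName("CopyBlob")]
    public static void Run(
        [BlobTrigger("source-container/{name}", Connection = "SourceStorage")] Stream source,
        [Blob("dest-container/{name}", FileAccess.Write, Connection = "DestStorage")] Stream destination,
        string name,
        ILogger log)
    {
        source.CopyTo(destination);
        log.LogInformation("Copied blob {name} to the destination account", name);
    }
}
```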
When I upload a blob to the source storage account, it is not copied, which means the function is not executed.
When I browse to the function app through the portal, the function gets invoked and does what it should, and thereafter it works fine. This only happens when the function app is initially deployed by a PowerShell script using ARM templates.
So my guess is that when I create the function app using an ARM template and deploy it using PowerShell, it sits in an idle state and is never triggered by blob events. Is my assumption correct, or could you please help me find the issue? Thanks.
Be careful here. The Blob storage documentation mentions that there may be a delay for this trigger on the Consumption plan (emphasis mine):
When your function app runs in the default Consumption plan, there may be a delay of up to several minutes between the blob being added or updated and the function being triggered. If you need low latency in your blob triggered functions, consider running your function app in an App Service plan.
Perhaps the behavior you are seeing is a manifestation of the above. Try converting to an App Service plan and see whether you still see the delay in the trigger.
I suspect it has nothing to do with your deployment method.
Related
I am using an App Service plan for my Azure function and have added blob triggers, but when a file is uploaded to the blob container, the functions are not triggered, or sometimes it takes a long time before they start triggering.
Any suggestions will be appreciated.
The function should be triggered as soon as a new file is uploaded to the blob container.
This sounds like a case of cold start.
As per the note here:
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well.
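As a rough illustration, here is a minimal Event Grid-triggered function in C# (the function name is a placeholder, and you would still need to create an Event Grid subscription on the storage account that routes BlobCreated events to this function):

```csharp
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobEventHandler
{
    // Runs as soon as Event Grid pushes a Microsoft.Storage.BlobCreated
    // event, instead of waiting for the runtime's blob-container scan.
    [FunctionName("BlobEventHandler")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        log.LogInformation("Event {type} for {subject}",
            eventGridEvent.EventType, eventGridEvent.Subject);
    }
}
```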
Since you say that you are already running the functions on an App Service plan, it's likely that you don't have the Always On setting enabled. You can enable it on the app under Application Settings -> General Settings in the portal.
Note that Always On is only applicable to Azure Functions bound to an App Service plan - it isn't available on the serverless Consumption plan.
Another possible cause is not clearing the blobs out of the container after you process them.
From here:
If the blob container being monitored contains more than 10,000 blobs (across all containers), the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created.
And when using the Consumption plan, here's another link warning about potential delays.
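If clearing the container is an option for you, one approach is to delete (or archive) each blob at the end of the function that processes it. A hedged sketch, with a made-up container name:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ProcessAndClear
{
    // Processes a new blob and then removes it from the monitored container,
    // keeping the blob count low so the runtime's log scan stays fast.
    [FunctionName("ProcessAndClear")]
    public static async Task Run(
        [BlobTrigger("incoming/{name}")] CloudBlockBlob blob,
        string name,
        ILogger log)
    {
        // ... process the blob contents here ...
        log.LogInformation("Processed {name}", name);

        // Delete (or move to an archive container) once processing succeeds.
        await blob.DeleteIfExistsAsync();
    }
}
```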
We are using an Azure Storage account to store some files that our app downloads on demand for the user.
Even though there should be no write operations (at least none I can think of), we are exceeding the included write operations just a few days into the billing period (see image).
Regarding the price, it's still within limits, but I'd still like to know whether this is normal and how I can analyze the matter. Besides the storage account, we are using Functions and an App Service (mobile app), but none of them should cause that many write operations. I've checked the logs of our functions, and none of those that access the queues or the blobs have been active lately. There are some functions that run every now and then, but only once every few minutes, and those do not access the storage at all.
I don't know if this is related, but there is a kind of periodic ingress on our blob storage (see the image below). The period is roughly 1 h, but there is a baseline of 100 kB per 5 min.
Analyzing the metrics of the storage account further, I found that there is a constant stream of 1.90k transactions per hour for blobs and 1.3k transactions per hour for queues, which seems quite excessive to me. (Please note that the resolution of this graph is 1 h, while the former has a resolution of 5 minutes.)
Is there anything else I can do to analyze where these write operations come from? It bothers me, since it doesn't seem like it's supposed to be this way.
I've had the exact same problem; after enabling Storage Analytics and inspecting the $logs container, I found many log entries indicating that upon every request towards my Azure Functions, these write operations occur against the following container object:
https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease
In my Azure Functions code I do not explicitly write to any container or file, but I have the following two application settings configured:
AzureWebJobsDashboard
AzureWebJobsStorage
So I filed a support ticket with Azure, asking the following questions:
Are the write operations triggered by these application settings? I believe so, but could you please confirm?
Will the write operations stop if I delete these application settings?
Could you please describe, at a high level, in what context these operations occur (e.g. logging, resource locking, other)?
and I got the following answers from the Azure support team, respectively:
Yes, you are right. According to the log information, we can see “https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease”.
The azure-webjobs-hosts folder is associated with the function app and is created by default when the function app is created. While the function app is running, it records these logs in the storage account configured in AzureWebJobsStorage.
You can't stop the write operations, because they record logs that the Azure Functions runtime needs in the storage account. Please do not remove the application setting AzureWebJobsStorage. The Azure Functions runtime uses this storage account connection string for all functions except HTTP-triggered ones; removing this application setting will leave your function app unable to start. You can, however, remove AzureWebJobsDashboard; that will stop the monitoring, but not the operations above.
These operations record the runtime logs of the function app. They occur when our backend allocates an instance for running the function app.
The best place to find information about storage usage is Storage Analytics, especially Storage Analytics Logging.
There's a special blob container called $logs in the same storage account that contains detailed information about every operation performed against that storage account. You can view the blobs in that container to find the information.
If you don't see this blob container in your storage account, you will need to enable Storage Analytics on it. However, considering you can see the metrics data, my guess is that it's already enabled.
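If you prefer to script this rather than use the portal, a sketch along these lines using the Azure.Storage.Blobs SDK should enable logging and let you browse $logs (the connection-string variable name is a placeholder):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class EnableAndReadAnalytics
{
    static async Task Main()
    {
        var service = new BlobServiceClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));

        // Turn on classic Storage Analytics logging for the blob service.
        BlobServiceProperties props = await service.GetPropertiesAsync();
        props.Logging = new BlobAnalyticsLogging
        {
            Version = "1.0",
            Read = true,
            Write = true,
            Delete = true,
            RetentionPolicy = new BlobRetentionPolicy { Enabled = true, Days = 7 }
        };
        await service.SetPropertiesAsync(props);

        // Once entries accumulate, the $logs container lists every
        // operation performed against the account.
        var logs = service.GetBlobContainerClient("$logs");
        await foreach (BlobItem item in logs.GetBlobsAsync())
            Console.WriteLine(item.Name);
    }
}
```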
Regarding the source of these write operations: have you enabled diagnostics for your Functions and App Service? Those write diagnostic logs to blob storage. Storage Analytics itself also writes to the same account, and that will cause write operations as well.
In my case, I had Azure Application Insights generating 10k transactions per minute on the storage used by my functions and app services, even though there were only a few HTTPS requests among them. I'm not sure what triggered them, but once I removed Application Insights, everything went back to normal.
I am using Azure Functions V2 with a Service Bus trigger, on version 1.0.23 of the C# Functions SDK. I'm using the following approach to get secrets from Key Vault and use them within the settings of the triggers: How to map Azure Functions secrets from Key Vault automatically.
The function, especially when it has done nothing for a while, doesn't fire when there are messages on the subscription. If I then go to the portal and execute it manually (yes, that particular execution fires with a null message), it kicks into life, picks up the other messages on the queue, and processes them correctly.
This obviously isn't ideal for our automated tests. Has anybody seen this, or does anyone know of anything that would help?
Also, the function app is running on a Consumption plan.
App Service Plan
If you're using an App Service plan, it's simple: just make use of Always On.
Consumption Plan
If you're using the Consumption plan, the issue could be that your triggers did not sync properly with the Azure infrastructure (the central listener). This can happen because of the way you deployed or edited your trigger-related settings, as explained in issue #210 below.
When you access the function directly from the portal, it might force your function app to come alive, but as you can see, that's only a workaround. Something similar is mentioned here.
Take a look at these issues:
Service Bus Topic Trigger goes to sleep - Consumption Plan
Issue #210
Issue #681
They also mention that the app wakes up only on accessing it via the portal or on calling an HTTP-triggered function in the same app, which is similar to the behavior you are seeing.
There are three suggested ways to resolve this, mentioned as part of issue #210 above:
In order to synchronize triggers when these deployment options are used, open the Azure portal and click the Refresh button, or make an API call to the sync triggers endpoint:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/FunctionsWebDeploy.json#L90
PowerShell sample:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/PowerShell/HelperFunctions.ps1#L360-L365
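For completeness, a hedged C# equivalent of that PowerShell helper (the endpoint shape follows the samples above; the subscription, resource group, app name, and bearer token are all placeholders you must supply):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class SyncTriggers
{
    // POSTs to the syncfunctiontriggers ARM endpoint, which asks the
    // Functions infrastructure to re-read the app's trigger metadata.
    static async Task SyncAsync(string subscriptionId, string resourceGroup,
        string functionApp, string accessToken)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken); // AAD token

        string url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                     $"/resourceGroups/{resourceGroup}/providers/Microsoft.Web" +
                     $"/sites/{functionApp}/syncfunctiontriggers?api-version=2016-08-01";

        HttpResponseMessage response = await client.PostAsync(url, content: null);
        response.EnsureSuccessStatusCode();
    }
}
```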
I've had a similar issue. The Service Bus connection was injected using a ServiceBus value in the ConnectionStrings section of the function's configuration. That is enough while the function is in a hot state, but after transitioning to a cold state, the AzureWebJobsServiceBus value is used to connect to Service Bus. So in my case, setting AzureWebJobsServiceBus to the Service Bus connection string in the function's configuration fixed it.
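To illustrate the binding side (the topic, subscription, and setting names below are made up): when the trigger declares an explicit Connection app setting, the listener should resolve that setting rather than relying on the AzureWebJobsServiceBus fallback described above.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    [FunctionName("OrderProcessor")]
    public static void Run(
        // "Connection" names an app setting holding the Service Bus
        // connection string, so the listener does not depend on the
        // AzureWebJobsServiceBus fallback.
        [ServiceBusTrigger("mytopic", "mysubscription", Connection = "ServiceBusConnection")]
        string message,
        ILogger log)
    {
        log.LogInformation("Received message: {message}", message);
    }
}
```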
I made an Azure Function on a Consumption plan with a blob trigger. Then I add lots of files to the blob container, and I expect the Azure Function to be invoked every time a file is added.
And because I use Azure Functions on a Consumption plan, I would expect there to be no scalability problem, right? WRONG.
I can easily add files to the blob container faster than the Azure Function can process them. A hundred users can add to the container, but there seems to be only one instance of the Azure Function working at any one time, meaning it can easily fall behind.
I thought the platform would just create more instances of the Azure Function as needed. Well, it seems it doesn't.
Any advice on how I can configure my Azure Function to be truly scalable with a blob trigger?
This is because you are affected by cold start.
As per the note here:
When you're using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs. This delay occurs when a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this cold-start delay, use an App Service plan with Always On enabled, or use the Event Grid trigger.
For your case, you should consider an Event Grid trigger instead of a blob trigger; the Event Grid trigger has built-in support for blob events as well.
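As a rough sketch of what that looks like: the BlobCreated event payload carries the blob URL, so the function knows exactly which blob to process without any container scanning (names below are illustrative):

```csharp
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class OnBlobCreated
{
    // Each Microsoft.Storage.BlobCreated event identifies the new blob
    // in its Data payload, so no polling of the container is needed.
    [FunctionName("OnBlobCreated")]
    public static void Run(
        [EventGridTrigger] EventGridEvent e,
        ILogger log)
    {
        var created = ((JObject)e.Data).ToObject<StorageBlobCreatedEventData>();
        log.LogInformation("New blob at {url}", created.Url);
    }
}
```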
When to consider Event Grid?
Use Event Grid instead of the Blob storage trigger for the following scenarios:
Blob storage accounts
High scale
Minimizing cold-start delay
Read more here.
Update in 2020
Azure Functions has a new tier/plan called Premium where you can avoid cold starts.
See, for example, this YouTube video.
Summary: I have two different Azure Function Apps (Node.js) sharing a single file storage account; however, if I go into the Kudu invocation logs for either of them, I see the entries from both apps.
Here is my setup:
1 File Storage (shared by both Function Apps)
Service Bus 1 (sb-prod), with a single queue (somequeue)
Service Bus 2 (sb-staging), with a single queue (somequeue)
Function App 1 (func-prod), with a single function (somefunc)
Function App 2 (func-staging), with a single function (somefunc)
Both func-prod and func-staging are set up for continuous deployment from the same Bitbucket repo, but from different branches
When a message is received in sb-prod it triggers somefunc in func-prod
When a message is received in sb-staging it triggers somefunc in func-staging
Note that the queue name and function name are the same in both prod and staging. That all seems to work fine. However, if I go into Kudu and look at the invocation logs for debugging, it shows the execution of functions across both function apps (prod and staging are shown in the logs for both). It does not respect the folder structure on the file storage, which would show only the logs from the appropriate app. As far as I can tell this is only a log-viewing issue; the functions aren't being run twice, and messages aren't being sent to the wrong function app. Any ideas on how to fix this? Or is this a bug, and would I need to add a second storage account so that Kudu doesn't get confused? Is there any risk with this setup that messages from the staging Service Bus end up in the prod app, or vice versa?
By 'Kudu', I assume you mean the WebJobs Dashboard (which is not related to Kudu). The behavior you are seeing is quirky, but it is in fact by design. See https://github.com/Azure/azure-webjobs-sdk/issues/1541 for more info.
Workarounds:
The best option is to use App Insights instead of the WebJobs Dashboard.
If you must use the WebJobs Dashboard, use distinct storage accounts.