Azure Functions Timer Trigger thread safety

I was wondering if anybody knows what happens if you have a cron setting on an Azure Function to run every 5 minutes and its task takes more than 5 minutes to execute. Does it back up? Or should I implement a locking feature that would prevent something, say in a loop, from handling data already being processed by a prior call?

An Azure Function with a timer trigger will only run one job at a time. If a job runs long, the next one is delayed.
Quoting from the documentation:
"If your function execution takes longer than the timer interval, another execution won't be triggered until after the current invocation completes. The next execution is scheduled after the current execution completes."
That is true even if you scale out.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer#scale-out
You may want to ensure that your function does not time out on you. See https://buildazure.com/2017/08/17/azure-functions-extend-execution-timeout-past-5-minutes/ on how to configure function timeout.
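For context, here is what those two settings look like. The 5-minute schedule from the question is a six-field NCRONTAB expression in the function's function.json, and the execution timeout is raised via functionTimeout in host.json (the binding name below is a placeholder; on the Consumption plan the timeout can be raised to at most 10 minutes):

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```

And in host.json:

```json
{
  "functionTimeout": "00:10:00"
}
```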

If a timer trigger occurs again before the previous run has completed, it will start a second parallel execution.
Ref: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference#parallel-execution
When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel.
Something else to note is that even using the new WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT setting, you cannot prevent multiple executions on the same instance.
Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular App Service hosting plan, might process concurrent function invocations in parallel using multiple threads.
The best advice is to reduce the duration of the execution, either through optimisation or by changing the approach to the problem. Splitting the task into smaller pieces and triggering them in a different way may help.
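That "split the task and re-trigger it" advice can be sketched in plain Python. This is only an illustration of the shape of the design, not Azure code: the in-memory queue.Queue stands in for an Azure Storage queue, split_and_enqueue stands in for the timer-triggered function, and worker stands in for a queue-triggered function; all names are made up for the example.

```python
import queue
import threading

def split_and_enqueue(task_items, work_queue):
    """Stand-in for the timer-triggered function: instead of doing all
    the work itself, it only enqueues small, independent work items."""
    for item in task_items:
        work_queue.put(item)

def worker(work_queue, results, lock):
    """Stand-in for a queue-triggered function: each invocation handles
    one small item, so no single run exceeds the timer interval."""
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            return
        processed = item * 2  # placeholder for the real per-item work
        with lock:
            results.append(processed)

work_queue = queue.Queue()
results, lock = [], threading.Lock()
split_and_enqueue(range(10), work_queue)

# Several workers drain the queue concurrently, mirroring scale-out.
threads = [threading.Thread(target=worker, args=(work_queue, results, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```

Because each item is processed independently, a slow batch just leaves items in the queue for the next invocations rather than blocking the schedule.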

Related

Job processing with a traditional worker or using a Node server?

I want to be able to process background jobs that will have multiple tasks associated with it. Tasks may consist of launching API requests (blocking operations) and manipulating and persisting responses. Some of these tasks could also have subtasks that must be executed asynchronously.
For languages such as Ruby, I might use a worker to execute the jobs. As I understand it, every time a new job gets to the queue, a new thread will execute it. As I mentioned before, sometimes a task could contain a series of subtasks to be executed asynchronously, so as I see it, I have two options:
Add the subtask executions to the worker queue (but a job could easily have lots of subtasks that would fill the queue fast and block new jobs from being processed).
What if I use an event-driven Node server to handle a job's execution? I would not need to add subtasks to a queue, as a single Node server could handle a job's entire execution asynchronously. Is there something wrong with doing this?
This is the first time I have encountered this kind of problem, and I want to know which approach is better suited to solve my issue.
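The second option described above (one event-driven process running all of a job's subtasks concurrently, rather than putting each subtask on the shared worker queue) can be sketched with asyncio. This is a hypothetical illustration; the subtask names and delays are invented, and asyncio.sleep stands in for a real non-blocking API request:

```python
import asyncio

async def subtask(name, delay):
    # Placeholder for a blocking API call; in real code this would be a
    # non-blocking HTTP request (aiohttp, httpx, or similar).
    await asyncio.sleep(delay)
    return f"{name}: done"

async def run_job(job_id):
    # All of a job's subtasks run concurrently inside one event loop, so
    # they never occupy slots on the shared worker queue.
    return await asyncio.gather(
        subtask(f"{job_id}/fetch", 0.01),
        subtask(f"{job_id}/transform", 0.01),
        subtask(f"{job_id}/persist", 0.01),
    )

results = asyncio.run(run_job("job-1"))
print(results)
```

The trade-off is the one the question hints at: the event-driven process handles a job's I/O-bound subtasks cheaply, but a worker queue still gives you per-job isolation and retry semantics that a single process does not.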

what will happen on Azure Functions during maintenance?

I know that Web Apps will be rebooted during maintenance without notice.
But how about the case of Functions?
During maintenance, does the current execution get stopped?
I think it is difficult to retry timer-, HTTP-, and Event Hub-triggered Functions.
But I hope the Functions runtime will retry my code after the maintenance finishes.
Your question has several parts, so:
Probably yes, Azure will stop routing requests to an instance which is about to get maintenance done. Because Function executions are short-lived (on Consumption Plan), that's relatively easy to do.
"Probably" - because this is not something they guarantee to you. Overall, Functions on Consumption Plan have no SLA, and host behavior details might change over time.
If stopping in the middle of function execution is a problem for your business case, you still need to handle it. Any instance can experience hardware failure at any time, including the least convenient time possible.
The observed behavior in case of such failure will differ per trigger type. E.g. HTTP call will just fail with 5xx code and the client is supposed to retry it. Queue-based triggers have a mechanism with locks, timeouts and retry counts. Event Hub will restart at the last checkpoint.
I might be wrong, but the whole point of serverless computing is that you don't have to worry about these things anymore. So I would trust Microsoft not to stop your function during maintenance. That's probably one of the reasons why a function can only run for a limited time period.

Why do Azure Functions take an excessive time to "wake up"?

We have a simple Azure Function that makes a DocumentDB query. It seems like the first time we call it there is a long wait to finish, and then successive calls are very fast.
For example, I just opened our app and the first Function call took 10760ms, definitely noticeable by any end user. After this, all Function calls take approximately 100ms to process and are nearly imperceptible.
It seems as though there is some "wake up" cycle in Azure Functions. Is there some way to minimize this, or better yet is this documented somewhere so we can understand what's really going on here?
Function apps running on a consumption plan do indeed have an idle time after which they effectively go to sleep. The next invocation is required to "wake them up" as you've observed and people have mentioned in the comments.
As to why this happens, it's so that Microsoft can most optimally distribute compute workloads in a multi-tenant environment while ensuring that you're only billed to the second for the time where your function is actually doing work. This is the beauty of serverless.
For workloads where this is not acceptable behavior, you could consider moving off of the consumption plan and on to the actual App Service plan. Alternatively, you could implement a timer triggered function that goes off every minute for example and use that as a "keep alive" mechanism by pinging the function that you don't want to go to sleep.
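The "keep alive" idea from the answer above can be sketched as a small periodic pinger. This is a generic Python sketch of the mechanism, not Azure-specific code: in a real setup the ping callable would issue an HTTP GET (e.g. with urllib.request) against the function app's URL, or you would simply use a timer-triggered function inside the same app.

```python
import threading
import time

class KeepAlive:
    """Calls `ping` every `interval` seconds until stopped, so the
    target never sits idle long enough to be unloaded."""

    def __init__(self, interval, ping):
        self.interval = interval
        self.ping = ping
        self._timer = None
        self._stopped = False

    def _tick(self):
        if self._stopped:
            return
        self.ping()
        # Reschedule after each ping completes, like a timer trigger.
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._tick()

    def stop(self):
        self._stopped = True
        if self._timer:
            self._timer.cancel()

calls = []
ka = KeepAlive(0.05, lambda: calls.append(1))  # lambda stands in for an HTTP GET
ka.start()
time.sleep(0.18)
ka.stop()
print(len(calls))  # several pings fired during the window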

Stop Multiple WebAPI requests from Azure Scheduler

I have a Web API which performs a task, and it currently takes a couple of minutes depending on the data. This can increase over time.
I have an Azure Scheduler job which calls this Web API every 10 minutes. I want to avoid the case where the call after 10 minutes overlaps with the first call because the execution time has grown. How can I put the smarts in the Web API so that it detects and rejects the second call while the first call is still running?
Can I use AutoResetEvent or lock statements? Or is keeping a storage flag to indicate busy/free a better option?
Persistent state is best managed via storage. Can your long-running activity persist through a role reset? (After all, a role may be reset at any time, as long as availability constraints are met.)
Ensure that you think through scenarios where your long running job terminates halfway through.
The Windows Azure Scheduler has a 30-second timeout, so we cannot have a long-running task called directly by the scheduler; an overlap of two subsequent calls is out of the question.
It also seems that running a long task from a Web API is bad design because of app pool recycling. I ended up using Azure Service Bus: when a task is requested, a message is posted to the queue. That way the time occupied by the Web API is limited.

Background job with a thread/process?

Technology used: EJB 3.1, Java EE 6, GlassFish 3.1.
I need to implement a background job that executes every 2 minutes to check the status of a list of servers. I have already implemented a timer, and my function updateStatus gets called every two minutes.
The problem is that I want to use a thread to do the update, because if the timer is triggered again while my function has not finished, I would like to kill the thread and start a new one.
I understand I cannot use threads with EJB 3.1, so how should I do this? I don't really want to introduce JMS either.
You should simply use an EJB Timer for this.
When the job finishes, simply have it reschedule itself. If you don't want the job to take more than some amount of time, then monitor the system time in the process, and when it runs too long, stop the job and reschedule it.
The other thing you need to manage is the fact that if the job is running when the server goes down, it will restart automatically when the server comes back up. You would be wise to have a startup process that scans the current jobs in the Timer system and, if yours is not there, submits a new one. After that the job should take care of itself until your next deployment (which erases existing Timer jobs).
The only other issue is that if the job depends on initialization code that runs at server startup, it is quite possible that the job will start BEFORE that code has run while the server is firing up. So you may need to manage that startup race condition (or simply ensure that the job fails fast and resubmits itself).
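The "reschedule on finish, with a cooperative deadline" pattern described above is language-agnostic, so here is a hedged Python sketch of its shape (in EJB you would use the container's TimerService rather than raw threads, since spawning threads is forbidden; the function names and intervals below are invented for the example):

```python
import threading
import time

def run_and_reschedule(job, interval, max_duration, stop_event):
    """Runs `job`, then schedules itself again `interval` seconds after
    completion, mirroring the 'reschedule when finished' timer pattern.
    The job checks `deadline` cooperatively instead of being killed."""
    if stop_event.is_set():
        return
    deadline = time.monotonic() + max_duration
    job(deadline)
    t = threading.Timer(interval, run_and_reschedule,
                        args=(job, interval, max_duration, stop_event))
    t.daemon = True
    t.start()

runs = []

def check_servers(deadline):
    # Poll servers until done or until the deadline passes, then return
    # so the next run can be scheduled cleanly rather than killed mid-work.
    while time.monotonic() < deadline:
        runs.append("checked")
        break  # stand-in for the real per-server status checks

stop = threading.Event()
run_and_reschedule(check_servers, 0.05, 1.0, stop)
time.sleep(0.18)
stop.set()
print(len(runs))  # the job ran several times in the window
```

Because each run only schedules the next one after it completes, runs can never overlap, which is exactly why the original question's "kill the thread and start a new one" approach becomes unnecessary.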