I have an Azure Function running on a timer every few minutes. After running for a varying amount of time it begins to fail on every run because of an external API, and hitting the restart button manually in the Azure portal fixes the problem so the job works again.
Is there a way to either get an Azure Function to restart itself, or to have something external restart it via a webhook, an API request, or a timer?
I have tried using Azure's API Management service, which can be used to restart other kinds of App Services in Azure, but it turns out there is no functionality in the API to request a restart of an Azure Function. I also looked into PowerShell, and it seems to have the same problem: you can restart various App Services, but not Azure Functions.
I have tried working with the REST API:
https://learn.microsoft.com/en-us/rest/api/azure/
Example API request that lists the functions within a function app:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{name}/functions?api-version=2016-08-01
but from what I have researched there is no functionality to restart an Azure Function.
Basically I want to restart the Azure Function as if I were hitting this button:
(Image: Azure Functions stop/start and restart buttons in the Azure portal)
because there is a case where the job gets into a bad state on every run due to an external API I have no control over, and hitting restart manually gets the job going again.
Another way to restart your function is by using the "watchDirectories" setting in the host.json file. If your host.json looks like this:
{
    "version": "2.0",
    "watchDirectories": [ "Toggle" ]
}
You can trigger a restart with the following statement in a function:
// Touch a file in the watched "Toggle" directory; the host detects the change and restarts
System.IO.File.WriteAllText("D:/home/site/wwwroot/Toggle/restart.conf", DateTime.Now.ToString());
Looking at the logs, the function reloads as it has detected the file change in the directory:
Watched directory change of type 'Changed' detected for 'D:\home\site\wwwroot\Toggle\restart.conf'
Host configuration has changed. Signaling restart
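For example, a rough sketch of how the failing timer function itself could force a restart when it detects the bad state (the IsExternalApiInBadState check is a placeholder, and the path assumes a Windows plan with the "Toggle" folder configured above):

using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FlakyJob
{
    [FunctionName("FlakyJob")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        if (IsExternalApiInBadState())
        {
            // Touching a file in the watched "Toggle" directory makes the host
            // signal a restart, as shown in the log output above.
            File.WriteAllText(@"D:\home\site\wwwroot\Toggle\restart.conf", DateTime.UtcNow.ToString("o"));
            return;
        }
        // ... normal work here ...
    }

    // Placeholder for whatever check detects the bad state caused by the external API.
    private static bool IsExternalApiInBadState() => false;
}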
Azure Functions are by their nature invoked by an event. That may be a timer, a trigger, or an invocation such as an HTTP request. They cannot be restarted per se, i.e. if a function throws an exception, you cannot find that specific instance and re-run it using out-of-the-box functionality.
However, you can engineer your way to a more reliable solution:
Replay the event that invoked the function (i.e. kick it off again).
For non-sensitive data, log the payload of the function and create another function that can be called on demand to re-run it, i.e. you create a proxy to "re-invoke" the function.
Harden your code by implementing a retry policy. See Polly (a sketch follows this list).
Add a service bus into your architecture. Have a simple function write the call payload to a message bus. Have another function pick up the payload and process it more extensively (where there may be unreliable integrations etc.). That way, if the call fails you can abandon and dead-letter failures for later reprocessing.
Consider using the Durable Functions extension and leveraging the durable patterns; these can help make your function code more robust and manage state.
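As a minimal Polly sketch (the ExternalApiClient class, the CallExternalApiAsync method, and the retry values are illustrative, not from the original post):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ExternalApiClient
{
    // Retry transient HTTP failures up to 3 times with exponential back-off.
    private static readonly IAsyncPolicy<HttpResponseMessage> RetryPolicy =
        Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .OrResult(r => !r.IsSuccessStatusCode)
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<HttpResponseMessage> CallExternalApiAsync(HttpClient client, string url) =>
        RetryPolicy.ExecuteAsync(() => client.GetAsync(url));
}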
Why don't you try the ARM API below? Since Azure Functions also fall under the App Service category, this may be helpful:
https://learn.microsoft.com/en-us/rest/api/appservice/webapps/restart
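The restart operation documented there is a POST to the site's /restart endpoint. A rough sketch of calling it from C# (the api-version matches the list-functions request above; obtaining a valid ARM bearer token, e.g. via a service principal, is assumed to happen elsewhere):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class FunctionAppRestarter
{
    // Issues the ARM restart call against the function app (function apps are Microsoft.Web/sites resources).
    public static async Task RestartAsync(HttpClient client, string subscriptionId,
        string resourceGroup, string functionAppName, string bearerToken)
    {
        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  $"/resourceGroups/{resourceGroup}/providers/Microsoft.Web/sites/{functionAppName}" +
                  "/restart?api-version=2016-08-01";
        var request = new HttpRequestMessage(HttpMethod.Post, url);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}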
I'm experiencing TimerTrigger not actually triggering in multiple of my Azure Functions. The flow always looks similar to this:
As shown in the log statements, this timer is configured to trigger every 5 minutes (0 */5 * * * *). It triggers at 5:10, 5:15, ... 5:40. But then at 5:45 there is no trigger. The same goes for 5:50. Then at 5:51 it "wakes up". I have RunOnStartup = true on my trigger, so this is probably caused by the function app being started.
My function app is consumption-based, which is why I would expect the app to simply run on another machine if the current machine is shut down or otherwise unavailable. The app is running on Azure Functions version 3.
Am I missing something here, or does anyone experience similar issues?
AFAIK, there is no single specific reason for the timer trigger not firing at the scheduled time.
Here are a few workarounds we can try:
I have not faced a similar issue myself, but I would suggest trying to restart/refresh your function app.
Or it may be caused by the function app's triggers not syncing properly.
As suggested by Anand Sowmithiran in the SO thread, MayankBargali-MSFT explains the singleton lock, i.e.:
TimerTrigger uses the Singleton feature of the WebJobs SDK to ensure that only a single instance of your triggered function is running at any given time. When the JobHost starts up, for each of your TimerTrigger functions a blob lease (the Singleton Lock) is taken. This distributed lock ensures that only a single instance of your scheduled function is running at any time. If the blob for that function is not currently leased, the function will acquire the lease and start running on schedule immediately. If the blob lease cannot be acquired, it generally means that another instance of that function is running, so the function is not started in the current host.
Also, please try setting RunOnStartup to false to check whether it behaves the same way, as described in the MS doc linked in the comments.
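A minimal sketch of a timer-triggered function with RunOnStartup disabled (the class and function names are illustrative):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScheduledJob
{
    // Fires every 5 minutes; RunOnStartup = false avoids an extra run whenever the host starts.
    [FunctionName("ScheduledJob")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *", RunOnStartup = false)] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation($"Timer fired. Past due: {timer.IsPastDue}");
    }
}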
For more information, please refer to the links below:
MS Q&A| Azure Timer Trigger not firing
GitHub | azure timer function not executing all of sudden
Our existing system uses App Services with API controllers.
This is not a good setup because our scaling support is poor; it's basically all or nothing.
I am looking at changing over to use Azure Functions.
So effectively each method in a controller would become a new function.
Let's say that we have a taxi booking system.
So we have the following:
Taxis
    GetTaxis
    GetTaxiDrivers
Drivers
    GetDrivers
    GetDriversAvailableNow
In the App Service approach we would simply have a TaxiController and a DriverController with the methods as routes.
How can I achieve the same thing with Azure Functions?
Ideally, I would have 2 function apps, Taxis and Drivers, with functions inside each.
The problem with that approach is that 2 function apps means 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now.
Some of our routes are already quite long, so I can't really add the "controller" name to my function name because I would exceed the 32-character limit.
Has anyone had similar issues migrating from App Services to Azure Functions?
Paul
The problem with that approach is that 2 function apps means 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now.
This is why application settings should be part of the release process. You should compile once and deploy as many times as you want, to different environments, using the same binaries from the build. If you're not there yet, I strongly recommend you start by automating the CI/CD pipeline.
Now, answering your question: the proper way (IMHO) is to decouple taxis and drivers. When a taxi is requested, your controller should add a message to a queue, and an Azure Function listening on that queue is triggered automatically to dequeue and process what needs to be processed.
Advantages:
Your controller's response time gets faster because it hands the processing off to another process.
The more messages in the queue, the more instances of the function are spun up to consume them, so it scales only when needed.
HTTP requests (from one controller to another) are not reliable unless you properly implement a circuit breaker and a retry policy. With the proposed architecture, if something goes wrong, the message either remains in the queue or, if the Azure Function fails to complete it, returns to the queue. (A sketch of the queue-triggered side follows this list.)
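For illustration, a minimal sketch of the queue-triggered side (the queue name taxi-bookings and the BookingRequest type are assumptions, not from the original post):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class BookingRequest
{
    public string TaxiId { get; set; }
    public string PickupAddress { get; set; }
}

public static class ProcessTaxiBooking
{
    // Triggered automatically whenever the controller enqueues a booking message.
    // If this function throws, the message returns to the queue and is retried;
    // after the maximum dequeue count it is moved to the poison queue.
    [FunctionName("ProcessTaxiBooking")]
    public static void Run(
        [QueueTrigger("taxi-bookings")] BookingRequest booking,
        ILogger log)
    {
        log.LogInformation($"Processing booking for taxi {booking.TaxiId} at {booking.PickupAddress}");
        // ... call the (potentially unreliable) downstream integrations here ...
    }
}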
I have two functions (HTTP trigger) in one Azure Function App project:
PUT
DELETE
Under certain conditions, I would like to call the DELETE function from the PUT function.
Is it possible to directly run the DELETE function, as both reside in the same function app project?
I wouldn't recommend trying to call the actual function directly, but you can certainly refactor the DELETE functionality into a normal method and then call that from both the DELETE and PUT functions.
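A minimal sketch of that refactoring (the names DeleteCore, PutItem, DeleteItem, and the route are illustrative):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ItemFunctions
{
    // Shared logic, callable from both HTTP-triggered functions.
    private static void DeleteCore(string id, ILogger log)
    {
        log.LogInformation($"Deleting item {id}");
        // ... actual delete logic here ...
    }

    [FunctionName("DeleteItem")]
    public static IActionResult DeleteItem(
        [HttpTrigger(AuthorizationLevel.Function, "delete", Route = "items/{id}")] HttpRequest req,
        string id, ILogger log)
    {
        DeleteCore(id, log);
        return new OkResult();
    }

    [FunctionName("PutItem")]
    public static IActionResult PutItem(
        [HttpTrigger(AuthorizationLevel.Function, "put", Route = "items/{id}")] HttpRequest req,
        string id, ILogger log)
    {
        // ... update logic; under certain conditions, reuse the delete logic directly ...
        DeleteCore(id, log);
        return new OkResult();
    }
}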
There are a few ways to call a function from another function:
HTTP request - it's simple: execute a normal HTTP request to your second function. It's not recommended, because it extends the function's execution time and creates additional problems, such as the possibility of a timeout, the unavailability of the service, and others.
Storage queues - communicate through queues (recommended), e.g. the first function (in your situation the "PUT" function) can insert a message into the queue, and the second function (the "DELETE" function) can listen on that queue and process the message.
Azure Durable Functions - this extension allows you to create rich, easy-to-understand workflows that are cheap and reliable. Another advantage is that they can retain their own internal state, which can be used for communication between functions. (A rough sketch follows this list.)
Read more about cross-function communication here.
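As a rough illustration of the Durable Functions option (the orchestration and activity names are placeholders, and this assumes the Microsoft.Azure.WebJobs.Extensions.DurableTask package):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class PutThenDeleteOrchestration
{
    // The orchestrator decides whether to invoke the delete step after the put step.
    [FunctionName("PutThenDelete")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var id = context.GetInput<string>();
        var shouldDelete = await context.CallActivityAsync<bool>("PutActivity", id);
        if (shouldDelete)
        {
            await context.CallActivityAsync("DeleteActivity", id);
        }
    }

    [FunctionName("PutActivity")]
    public static bool PutActivity([ActivityTrigger] string id, ILogger log)
    {
        log.LogInformation($"PUT logic for {id}");
        return true; // e.g. a condition meaning the item should also be deleted
    }

    [FunctionName("DeleteActivity")]
    public static void DeleteActivity([ActivityTrigger] string id, ILogger log)
    {
        log.LogInformation($"DELETE logic for {id}");
    }
}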
I have timer-triggered Azure functions running in production, but now I want to be notified if the function fails.
In my case, access to various connected services can cause crashes, and there are many to troubleshoot. The crash is the type of error I need notification for.
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
I know that blob and queue bindings, for instance, support the creation of poison queue entries, but timer trigger binding doesn't say anything about any trigger outputs of that nature.
I see that functions can pass their $return status as input to other functions, but that operation is not explained in depth in the docs. Also, in that case, I need to write another function to process the error status, and I was looking for something built-in.
I have inquired with @AzureSupport about this, but their answer had nothing to do with Azure Functions, instead referring me to DLL notification hooks and then recommending I file on UserVoice.
I'm sure there must be people here who have implemented some sort of error status notification. I prefer a solution that doesn't require code.
The recommended way to monitor and alert on failures is to use App Insights, which now integrates fully with Azure Functions:
https://blogs.msdn.microsoft.com/appserviceteam/2017/04/06/azure-functions-application-insights/
Since all the logs are available in App Insights, it's easy to monitor for failures and set up alerts based on your own criteria.
However, if you only care about alerting and not things like monitoring etc., you could use Azure Monitor instead: https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
...
I prefer a solution that doesn't require code.
This is a zero-code solution:
I poked @AzureFunctions once before on this topic, and a suggested response was to use Application Insights. It can handle the alerts upon failure and can also use webhooks.
See the Azure Functions App-Insights documentation on how to link your function app to App Insights. Then set up any alerts you want.
Unfortunately this hook doesn't exist.
Can you switch from a timer trigger to a queue trigger?
You can get retries (if you want them), and after the specified number of attempts the message is sent to a poison queue.
To schedule executions you can add queue messages with a visibility timeout to match your schedule.
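A minimal sketch of scheduling the next run by enqueuing a delayed message (this assumes the Azure.Storage.Queues package; the queue name and delay are illustrative):

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class RunScheduler
{
    // Enqueue a message that only becomes visible (and therefore triggers the
    // queue-triggered function) after the given delay.
    public static async Task ScheduleNextRunAsync(string connectionString)
    {
        var queue = new QueueClient(connectionString, "scheduled-runs",
            new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 });
        await queue.CreateIfNotExistsAsync();
        await queue.SendMessageAsync("run", visibilityTimeout: TimeSpan.FromMinutes(5));
    }
}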
In order to get alerts on failure you have two options:
A timer trigger that scans the execution logs (via SFTP) for failures.
Wrap the whole function in a try/catch block, and in the catch block write a few lines to send yourself an email with the error details (a sketch follows this list).
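A rough sketch of the second option (the SMTP host, addresses, and the DoWork call are placeholders, not from the original answer):

using System;
using System.Net.Mail;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NotifyingJob
{
    [FunctionName("NotifyingJob")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        try
        {
            // DoWork() stands in for the real body of the function.
            DoWork();
        }
        catch (Exception ex)
        {
            log.LogError(ex, "Function failed");
            using var client = new SmtpClient("smtp.example.com");
            client.Send(new MailMessage("alerts@example.com", "me@example.com",
                "Function failed", ex.ToString()));
            throw; // keep the execution marked as failed in the logs
        }
    }

    private static void DoWork() { /* ... */ }
}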
Hope this helps.
No code:
Go to your Azure account.
From the menu select Monitor.
Then select Add New Rule.
Then select your condition and action, and add the alert details.
I have a node.js service running in Azure as a worker role. By default the process is restarted every time there is a topology change, e.g. when the instance count is increased via the Azure portal. How can I prevent this restart?
MSDN documentation pointed to handling Azure's "Changing" event. Azure Node SDK's support for cancelling was added here and here.
The code to use the API would be something like:
azure.RoleEnvironment.on(ServiceRuntimeConstants.CHANGING, function (changes) {
    changes.cancel();
});
From the logs I know the handler is called, but restarts still take place afterwards. Am I using the API incorrectly, or is this the wrong approach?
When using the Role Changing event in .NET, cancelling the event is actually asking the system to restart. The docs for the Role Changing event on MSDN say this:
By using the Cancel property, you can ensure that the instance proceeds through an orderly shutdown sequence and is taken offline before the configuration change is applied. During the shutdown process, Windows Azure raises the Stopping event, and then runs any code in the OnStop method.
The idea of the Role Changing event is that if you can modify the configuration at runtime without requiring a restart, you do so. By cancelling the changing event you are in essence saying, "I can't change this at runtime, so restart gracefully and pick up the new changes then."
I've not tried this with Node, but try not cancelling it.
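For reference, the equivalent .NET pattern looks roughly like this (only cancel, and thereby request a restart, for changes you genuinely cannot apply at runtime; the setting name is illustrative):

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class RoleChangeHandler
{
    public static void Register()
    {
        RoleEnvironment.Changing += (sender, e) =>
        {
            // Only request a restart (Cancel = true) if a change arrives that we
            // cannot apply at runtime; otherwise leave Cancel = false and keep running.
            var needsRestart = e.Changes
                .OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(c => c.ConfigurationSettingName == "SomeSettingWeCannotApplyLive");
            e.Cancel = needsRestart;
        };
    }
}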