How come my Azure timer function app stops firing

I have an Azure timer function app that fires every five minutes.
I have found this very hit and miss after deploying. It either starts firing or it doesn't. If it doesn't fire, then no matter what I do I cannot seem to get it to fire.
I have tried restarting and refreshing the function in the portal.
It is running on an S1 App Service plan (which also has three other function apps on it).
My timer CRON expression is * */5 * * *
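For comparison, here is a minimal sketch (the function name is illustrative, not from the question) of a timer-triggered function using the six-field NCRONTAB schedule that means "every five minutes"; note that a five-field expression like the one above is read by NCRONTAB as {minute} {hour} {day} {month} {day-of-week}, which is not an every-five-minutes schedule:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EveryFiveMinutes   // hypothetical name, for illustration only
{
    [FunctionName("EveryFiveMinutes")]
    public static void Run(
        // Six-field NCRONTAB: {second} {minute} {hour} {day} {month} {day-of-week}.
        // "0 */5 * * * *" fires at second 0 of every fifth minute.
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation("Timer fired at {Now}", DateTime.UtcNow);
    }
}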

I found a GitHub issue describing the same problem:
There is an issue right now if you deploy timer based functions that
are disabled via app setting then change the app setting to enable
them. To work around until the fix is made you should click the
'refresh' button in the Azure portal after changing app settings used
for disabling functions.
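For illustration, the scenario the issue describes looks roughly like the sketch below: a timer trigger that is switched off and on through an app setting, which is the case where the portal 'refresh' workaround is needed. The function name, schedule, and setting name are made up for the sketch:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyCleanup   // hypothetical function, for illustration only
{
    [FunctionName("NightlyCleanup")]
    // The trigger is disabled while the app setting "NightlyCleanup.Disabled"
    // is "true" or "1"; flipping that setting back on is the step the issue
    // says currently requires a portal refresh.
    [Disable("NightlyCleanup.Disabled")]
    public static void Run(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer,   // 02:00 every day
        ILogger log)
    {
        log.LogInformation("Cleanup ran at {Now}", DateTime.UtcNow);
    }
}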

I have found that Azure timer-triggered functions will fail to fire unless you use a B1 or higher App Service plan. They should work in the Free or Shared plans (the docs say so), but mine keep failing. Hopefully Microsoft will address this at some point.

I ended up redeploying all the components in the resource group again, and then it started to fire. It is even firing on the basic Consumption plan now, so I'm not sure what happened.

Related

How can I fix the deployment error 'starter.cmd "D:\home\site\d ..."' was aborted

I am getting this error while deploying my Node.js code to Azure App Service through CD linked to my Azure Repos repository:
"Command 'starter.cmd "D:\home\site\d ...' was aborted due to no
output nor CPU activity for 60 seconds. You can increase the
SCM_COMMAND_IDLE_TIMEOUT app setting (or WEBJOBS_IDLE_TIMEOUT if this
is a WebJob) if needed.\r\nstarter.cmd
"D:\home\site\deployments\tools\deploy.cmd""
Any ideas? Thanks :)
By default, when your build process launches some command, it's allowed to run for up to 60 seconds without producing any output. If that is not long enough, you can make it longer, e.g. to make it 10 minutes:
SCM_COMMAND_IDLE_TIMEOUT=600
Note that on Azure, there is a general idle request timeout that will cause clients to get disconnected after 230 seconds. However, the command will still continue running server-side after that.
You can do this by going to the Azure portal, selecting your web app, opening the Configuration blade under Settings, and adding a new application setting SCM_COMMAND_IDLE_TIMEOUT set to, say, 600.
I agree with the solution shared by Sneha:
my first step would be increasing SCM_COMMAND_IDLE_TIMEOUT to 600 in the app settings of the web app.
Alternatively, if you are expecting a CONTINUOUS WebJob, that isn't something that happens by default (even though the WebJob appears to run by the rules until the timeout). Continuous mode has to be selected in the Visual Studio publish properties, as detailed here.
Once continuous is selected and the WebJob is published, this error goes away.
If you are using VNet integration, you need to remove WEBSITE_VNET_ROUTE_ALL=1 from the application settings; the deployment will then succeed. Without this setting, the web app routes only RFC1918 traffic into the VNet.
This document has more information about integration with a virtual network.

Azure Functions Core Tools - hidden triggers or caching?

I'm developing a simple Azure Function app with Visual Studio 2019. I'm using the Azure Functions Core Tools, which are started every time I start my project.
The functions are timer triggered and activity triggered. Every time I start my project again, the Azure Functions Core Tools start too, and now the weird thing happens:
It looks like the functions are not only called from the current runtime as expected, but also from old triggers still running in the background. If I stop in one of the functions, it sometimes gets "old" data from earlier runs of my project. I'm also getting warnings about unknown functions that I renamed in my project dozens of runs ago.
I cleared the /bin/ path of my project, but old functions seem to be alive in something like a hidden cache or a hidden runtime of the Azure Functions Core Tools.
Is there a master stop or a master cleanup I can run before every new start of my project in the Azure Functions Core Tools?
And can it be that this behavior also happens in the real Azure environment? I see timer-triggered functions there running in faster cycles than they should, maybe triggered by still-running timers from previously published instances.
If you're using the Durable Functions framework (based on your previous question), you need to delete the storage artifacts; otherwise it will pick up and execute previously uncompleted executions.
There are a few ways to do that:
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[TimerTrigger("0 0 12 * * *")]TimerInfo myTimer)
{
return client.PurgeInstanceHistoryAsync(
DateTime.MinValue,
DateTime.UtcNow.AddDays(-30),
new List<OrchestrationStatus>
{
OrchestrationStatus.Completed
});
}
You can also use the Core Tools CLI for it:
func durable delete-task-hub --task-hub-name UserTest
As another option, you can manually delete the artifacts using Microsoft Azure Storage Explorer, connecting it to the local storage emulator.

Deploying an Azure Function from VS Code - Successful but not visible in the Portal

I created a function and I am trying to deploy it from VS Code by clicking Deploy to Function App.... The deployment runs successfully based on the output log (Deployment successful), but when I go to the portal, the function is not listed under Functions.
What shall I do and what is the problem here?
When I debug in VS Code, I get this: No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
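For reference, the host only indexes triggers declared on public classes with public methods, so a project that builds but exposes nothing public can produce exactly this warning while still deploying "successfully" with an empty Functions list. A minimal sketch of what the host expects (class, function name, and queue binding are illustrative only, not from the question):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrders          // the class must be public
{
    [FunctionName("ProcessOrders")]
    public static void Run(                // the method must be public
        // A binding such as this queue trigger also needs its extension
        // installed/registered, which is the other cause the warning mentions.
        [QueueTrigger("orders")] string message,
        ILogger log)
    {
        log.LogInformation("Dequeued: {Message}", message);
    }
}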
Unfortunately I would not know why those steps don't work for uploading. The deployment finishes, and every single time it becomes visible in my portal. Maybe there is a slight difference: the App Service itself is pre-created via Terraform; just the uploading of the code I do via VS Code.
As far as deletion goes:
Open the resource group, look up the App Service in the list, and select the checkbox in front of it. Then use Delete in the top nav bar of that pane.
Trying to delete it any other way will indeed give you the "Not found" error.
I've had the same 'issue'; in my case it turned out to be a bad entry in requirements.txt.
I had an incorrect line with 'io', and while it was present, despite the deployment appearing to complete successfully in VS Code, the function was not updated if it had been previously deployed (or not deployed at all if it hadn't), resulting in the same empty Functions list.
Having other requirements such as 'numpy' or 'scipy' worked just fine.
It's an old thread but maybe it'll be helpful to whoever gets here in the future.
Even now, some changes I make in VS Code seem to take time to become visible on the portal. I had a similar issue with resources, i.e. creating a resource from VS Code wouldn't make it immediately visible in the Azure portal. You can always go to Functions on the portal and click Refresh. Also try going to Advanced Tools, then Kudu, and checking whether your function can be found there.
One word of advice: if you publish your functions from VS Code, then work on that resource only from VS Code. You will find it reiterated all over Azure Functions docs that:
Publishing to an existing function app overwrites the content of that
app in Azure.

Azure Functions V2: have to stop functions every time before publishing because the DLL is busy

So this is new in V2: when I publish with Visual Studio (probably with VSTS publishing as well), it says the DLL is busy. It didn't used to do that in V1.
I suppose it's fine to stop the functions (or probably I could do slot deployment as well, although all mine are triggered and scheduled, so I don't really need slots).
So on the "stop" it will still do the 30 seconds of graceful shutdown that the functions do before shutting down / switching (I heard this on a podcast when I asked).
If I redeploy after the stop, I suppose it's OK this way. Things will wait on my triggers from the Azure queue and the schedule.
The only thing is that it's sort of a pain to have to press start and stop rather than just publish. I'm not sure if it's supposed to be doing this or not. It doesn't seem like publish will ever work unless the app is stopped, so why not have it auto-stop the function?
It looks like you are hitting ERROR_FILE_IN_USE.
You can configure the app offline rule in the publishing profile (in Solution Explorer > Properties > PublishProfiles > *.pubxml). Set EnableMSDeployAppOffline to true as below.
<PropertyGroup>
...
<EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
...
</PropertyGroup>
This setting takes the app offline so the file lock is released, and your app will start automatically after the deploy.
With the caveat that ZIP deployment is now preferred, the solution to this is to add an app setting MSDEPLOY_RENAME_LOCKED_FILES with value 1.

Deploying to SharePoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being in the "error" state in the web control; a redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, then I get no problems, but putting a delay inside a single program does not have a similar effect.
If I run the deploy twice for packages which did not deploy successfully, it works fine, as long as I do not try to do it programmatically, in which case it makes no difference.
It is different files each time, and sometimes none at all.
I do use stsadm -execadmsvcjobs between add and deploy, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any ideas on why this happens, or how to solve it? When I get to real implementations it causes problems.
The problem lies in the fact that SharePoint will perform app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Administration app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution: hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to make a WebRequest to at least the Central Administration site (or just do an SPSite site = new SPSite("centraladminurl")) in your deployment app or in PowerShell, as in the sketch below. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
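A rough sketch of that wait step, assuming a plain HTTP probe against Central Administration; the URL handling, credentials, timeout, and retry counts are placeholders rather than values from the original answer:

using System;
using System.Net;
using System.Threading;

static class DeploymentHelpers
{
    // Block until Central Administration responds again, so the next
    // add/deploy step is not issued while its app pool is still recycling.
    public static void WaitForCentralAdmin(string centralAdminUrl)
    {
        for (int attempt = 0; attempt < 30; attempt++)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(centralAdminUrl);
                request.UseDefaultCredentials = true;     // farm/admin credentials
                request.Timeout = 30000;                  // 30 s per probe
                using (request.GetResponse())
                {
                    return;                               // Central Admin answered
                }
            }
            catch (WebException)
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));   // still recycling; retry
            }
        }
        throw new TimeoutException("Central Administration did not come back up.");
    }
}

Calling something like this after every add, deploy, and retract step gives the timer job a running Central Administration app pool to work against before the next package is processed.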
