I'm developing a simple Azure Function with Visual Studio 2019. I'm using the Azure Functions Core Tools, which are started every time I start my project.
The functions are timer triggered and activity triggered. Every time I start my project again, the Azure Functions Core Tools start too - and now the weird thing happens:
It looks like the functions are not only called once from the current runtime, as expected - but also from old triggers still running in the background. If I stop at a breakpoint in one of the functions, it sometimes gets "old" data from earlier runs of my project. I'm also getting warnings about unknown functions that I renamed in my project a dozen runs ago.
I cleared the /bin/ path of my project - but old functions seem to be alive in something like a hidden cache or a hidden runtime of the Azure Functions Core Tools.
Is there a master stop or a master cleanup I can run before every new start of my project in the Azure Functions Core Tools?
And can it be that this behavior also happens in the real Azure environment? I see time-triggered functions there running in faster cycles than they should - maybe triggered by still-running timers from previously published instances.
If you're using the Durable Functions framework (based on your previous question), you need to delete the storage artifacts; otherwise, it will execute previously non-completed executions.
There are a few ways to do that:
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
    [DurableClient] IDurableOrchestrationClient client,
    [TimerTrigger("0 0 12 * * *")] TimerInfo myTimer)
{
    // Purge the history of completed orchestrations older than 30 days.
    return client.PurgeInstanceHistoryAsync(
        DateTime.MinValue,
        DateTime.UtcNow.AddDays(-30),
        new List<OrchestrationStatus>
        {
            OrchestrationStatus.Completed
        });
}
You can also use the Core Tools CLI for it:
func durable delete-task-hub --task-hub-name UserTest
As another option, you can manually delete the artifacts using Microsoft Azure Storage Explorer, connecting it to the local storage emulator:
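If you'd rather script the cleanup, here is a sketch with the Azure CLI pointed at the local emulator. The table names below follow the default Durable Functions task hub layout and are assumptions; verify what your hub actually created first.

```shell
# List what the emulator currently holds
az storage table list --connection-string "UseDevelopmentStorage=true" -o table

# Task hub artifacts are typically named after the hub, e.g. for "TestHubName":
az storage table delete --name TestHubNameInstances --connection-string "UseDevelopmentStorage=true"
az storage table delete --name TestHubNameHistory --connection-string "UseDevelopmentStorage=true"
```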
Currently, I am facing two issues in the Azure function app. I have provided the details below:
1. Global variable content is being shared across executions:
I have used a ConcurrentDictionary as a global variable, declared private and static. This variable is used in the Queue trigger.
private static readonly ConcurrentDictionary<string, string> s_mapping = new ConcurrentDictionary<string, string>();
Intermittently, I see that the above variable is shared across different Queue trigger executions. How can I rectify this so that the variables are not shared across different run instances?
2. Old code running after publish through Visual Studio:
I publish the code using Visual Studio; intermittently I see that the old code is running. In the configuration, WEBSITE_RUN_FROM_PACKAGE is set to 1, and I also deploy it as a zip file from VS. I tried restarting the Function App, but that doesn't seem to work.
Really appreciate the help here or any guidance on what can be done for these issues.
Each function app runs in its own process and all functions run in that process. Static variables are shared across all functions in that application just as if you'd written an ASP.NET or console app. For example, you can create an HttpClient in a static class and all functions can access that same client instance (we encourage this, in fact). It's always been this way, so nothing changed there.
Source: https://github.com/Azure/Azure-Functions/issues/1202
1) Don't use static variables for state that should be per-execution, or split the functions into different Azure Function apps.
2) Try removing the files from wwwroot before publishing new code. This should not happen, but it's possible under high workloads (code being executed while you're publishing new code). I believe the best you can do is to set up a proper cleaning step before pushing changes.
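The sharing described in point 1 is easy to reproduce outside Azure. This is a minimal Python sketch (not the Functions runtime itself, just the same process model) showing why module-level "static" state leaks between invocations while local state does not:

```python
from concurrent.futures import ThreadPoolExecutor

# Module-level state: shared by every invocation in this process,
# much like a private static field in a function app.
shared_mapping = {}

def handler_with_shared_state(message_id, payload):
    # Writes survive this call and are visible to later invocations.
    shared_mapping[message_id] = payload
    return dict(shared_mapping)

def handler_with_local_state(message_id, payload):
    # Local state is created per invocation and discarded afterwards.
    local_mapping = {message_id: payload}
    return local_mapping

with ThreadPoolExecutor(max_workers=2) as pool:
    first = pool.submit(handler_with_shared_state, "msg-1", "A").result()
    second = pool.submit(handler_with_shared_state, "msg-2", "B").result()
    third = pool.submit(handler_with_local_state, "msg-3", "C").result()
```

Here `second` still contains the "msg-1" entry written by the previous invocation, while `third` only sees its own data. The fix for the question above is the same idea: keep per-execution data in locals (or key it by a per-message identifier), and reserve statics for genuinely shared resources like clients.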
I have an Azure timer function app that fires every five minutes
I have found this very hit and miss after deploying. It either starts firing or it doesn't. If it doesn't fire, then no matter what I do I cannot seem to get it to fire.
I have tried restarting and refreshing the function in the portal
It is running on a S1 App service plan (that also has three other func apps on it)
My timer cron expression is * */5 * * *
I found a GitHub issue relating to the same problem.
There is an issue right now if you deploy timer based functions that are disabled via app setting then change the app setting to enable them. To work around until the fix is made you should click the 'refresh' button in the Azure portal after changing app settings used for disabling functions.
I have found that Azure timer trigger functions will fail to fire unless you use the B1 or higher App Service plan. They should work in the free or shared plans (the docs say so), but mine keep failing. Hopefully Microsoft will address this at some point.
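Also worth double-checking the schedule itself: Azure timer triggers use six-field NCRONTAB expressions (with a leading seconds field), so an every-five-minutes schedule is normally written with six fields, for example in function.json:

```
"schedule": "0 */5 * * * *"
```

A five-field expression like the one in the question may not mean "every five minutes" to the Functions host.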
I ended up redeploying all the components in the resource group again, and then it started to fire. It is even firing on the basic consumption plan now, so I am not sure what happened.
I'm running an Azure Function in Azure; the function gets triggered by a file being uploaded to a blob storage container. The function detects the new blob (file) but then outputs the following message: Did not find any initialized language workers.
Setup:
Azure function using Python 3.6.8
Running on linux machine
Built and deployed using Azure DevOps (for CI/CD capability)
Blob Trigger Function
I have run the code locally using the same blob storage container and the same configuration values, and the local instance of the Azure Function works as expected.
The function's core purpose is to read in the .xml file uploaded to the blob storage container and parse and transform the data in the XML to be stored as JSON in Cosmos DB.
I expect the process to complete like my local instance, with my documents in Cosmos DB, but it looks like the function doesn't actually get to process anything due to the following error:
Did not find any initialized language workers
Troy Witthoeft's answer was almost certainly the right one at the time the question was asked, but this error message is very general. I've had this error recently on runtime 3.0.14287.0. I saw the error on many attempted invocations over about 1 hour, but before and after that everything worked fine with no intervention.
I worked with an Azure support engineer who gave some pointers that could be generally useful:
Python versions: if you have function runtime version ~3 set under the Configuration blade, then the platform may choose any of python versions 3.6, 3.7, or 3.8 to run your code. So you should test your code against all three of these versions. Or, as per that link's suggestion, create the function app using the --runtime-version switch to specify a specific python version.
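For example, pinning the worker version at creation time might look like this with the Azure CLI (the app, resource group, storage account names, and the location are placeholders):

```shell
az functionapp create \
  --name <app-name> \
  --resource-group <resource-group> \
  --storage-account <storage-account> \
  --consumption-plan-location westeurope \
  --os-type Linux \
  --runtime python \
  --runtime-version 3.8 \
  --functions-version 3
```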
Consumption plans: this error may be related to a consumption-priced app having idled off and taking a little longer to warm back up again. This depends, of course, on the usage pattern of the app. (I infer, though the engineer didn't say this, that if the Azure datacenter my app is in happens to be quite busy when my app wants to restart, it might just have to wait for some resources to become available.) You could address this either by paying for an always-on function app, or by rigging up some kind of heartbeat process to stop the app idling for too long. (Easiest with an HTTP trigger: probably just ping it?)
The engineer was able to see a lower-level error message generated by the Azure platform that wasn't available to me in Application Insights: ARM authentication token validation failed. This was raised in Microsoft.Azure.WebJobs.Script.WebHost.Security.Authentication.ArmAuthenticationHandler.HandleAuthenticate() at /src/azure-functions-host/src/WebJobs.Script.WebHost/Security/Authentication/Arm/ArmAuthenticationHandler.cs. There was a long stack trace, with the innermost exception being: System.Security.Cryptography.CryptographicException : Padding is invalid and cannot be removed. Neither of us was able to make complete sense of this, and I'm not clear whether the responsibility for this error lies within the HandleAuthenticate() call or outside it (an invalid input token from... where?).
The last of these points may be some obscure bug within the Azure Functions Host codebase, or some other platform problem, or totally misleading and unrelated.
Same error but different technology, environment, and root cause.
Technology: .NET 5, target system: Windows. In my case, I was using dependency injection to add a few services. I was getting one parameter from the environment variables inside the .ConfigureServices() section, but when I deployed I forgot to add the variable to the application settings in Azure; because of that, I was getting this weird error.
This is due to the SDK version. I would suggest deploying a fresh function app in Azure and deploying your code there. Two things to check:
Make sure your local function app SDK version matches the Azure function app's.
Check the Python version on both sides.
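A quick way to compare the two sides (the Azure CLI query is an assumption based on the standard linuxFxVersion site property; app and resource group names are placeholders):

```shell
# Local side
func --version
python --version

# Azure side: shows e.g. "Python|3.8"
az functionapp config show \
  --name <app-name> --resource-group <resource-group> \
  --query linuxFxVersion
```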
This error is most likely GitHub issue #4384. This bug was identified, and a fix was released in mid-June 2020. Apps running on version 3.0.14063 or greater should be fine. The list of versions is here.
You can use Azure Application Insights to check your version: query the logs with a Kusto (KQL) query. In the exceptions table, the SDK version column has your version.
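In the Logs blade, a query along these lines surfaces it (sdkVersion is the standard Application Insights column name; adjust if your schema differs):

```
exceptions
| project timestamp, sdkVersion, outerMessage
| order by timestamp desc
| take 10
```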
If you are on the dedicated App Service plan, you may be able to "pull" the latest version from Microsoft by deleting and redeploying your app. If you are on the consumption plan, then you may need to wait for this bugfix to roll out to all servers.
It took me a while to find the cause as well, but it was related to me explicitly installing a version of protobuf that conflicted with the one used by Azure Functions. To be fair, there was a warning about that in the docs. How I found it: I went to <your app name>.scm.azurewebsites.net/api/logstream and looked for any errors I could find.
I am using Azure Functions with the blob trigger and runtime 2.x. After the function executes, the runtime adds a blob receipt into the location below (azure-webjob-hosts>blobreceipt>)
Is there any way to make the runtime use the same host ID in the path (in this case it's 8a76f42c1c01-2137340777)? Because if I run the function locally or publish a new version of the code, it will use a different host identifier (like macbook-id).
The main problem is that when I run func azure functionapp publish, it will requeue all the existing blobs with the new version of the code, even when the new version has the same function name, because it will use a different host/function identifier to check for receipts.
I know that in runtime v1 you could add an id into host.json. I don't see such an option in runtime v2.
Edit:
Another thing: just this morning I found out that my function ran over all the blobs again. When I looked into blobreceipts, I saw that the host ID had changed and there is a new bunch of receipts for the new host ID. The host ID changed even though I didn't do any deployments. How is that possible? I have also reported the bug on Core Tools about the host ID getting changed on publish. https://github.com/Azure/azure-functions-core-tools/issues/1012
The app name is altamente-prod-svc-shopify-background, but the host ID is d7031c9b43fc-2137340777. I have removed all receipts to do more tests for now, but you can see in the logs that the function was initiated again this morning by itself over all my 7 blobs. There is just one function for now (and also a WarmUp function which appeared automatically). I created the Linux function app via the Azure portal with no functions and then just published with Core Tools from my Mac.
There is a workaround now in the Azure Functions bug report that you linked to. Basically, set AzureFunctionsWebHost:hostid in the App settings to some explicit value. I'm using a self-hosted functions runtime and Kubernetes doesn't allow colons in environment variables, so I used the work-around of setting AzureFunctionsWebHost__hostid instead, which also works.
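For example, in a Kubernetes deployment the workaround might look like this (the value itself is arbitrary; it just needs to stay stable across deployments):

```yaml
# Container spec fragment: pin the Functions host ID explicitly.
env:
  - name: AzureFunctionsWebHost__hostid   # "__" instead of ":" for env vars
    value: my-stable-host-id
```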
So this is new in V2: when I publish with Visual Studio (probably with VSTS publishing as well), it says the DLL is busy; it didn't used to do that in V1.
I suppose it's fine to stop the functions (or probably I could do slot deployment as well, although all mine are triggered and scheduled, so I don't really need slots).
So on the "stop" it will still do the 30 seconds of graceful shutdown that the functions do before stopping / switching (heard this on a podcast when I asked).
If I redeploy after the stop, I suppose it's OK this way. Things will wait on my triggers from the Azure queue and schedule.
The only thing is that it's sort of a pain to have to press start and stop rather than just publish. I'm not sure if it's supposed to be doing this or not. It doesn't seem publish will ever work unless the app is stopped, so why not have it auto-stop the function?
Looks like you've hit ERROR_FILE_IN_USE.
You can configure the appOffline rule in the publishing profile (in Solution Explorer > Properties > PublishProfiles > *.pubxml). Set EnableMSDeployAppOffline to true as below:
<PropertyGroup>
  ...
  <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
  ...
</PropertyGroup>
This setting takes the app offline so the file lock is released, and your app will start automatically after the deploy.
With the caveat that ZIP deployment is now preferred, the solution to this is to add an app setting MSDEPLOY_RENAME_LOCKED_FILES with value 1.
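If you prefer to set it from the command line, a sketch with the Azure CLI (app and resource group names are placeholders):

```shell
az functionapp config appsettings set \
  --name <app-name> --resource-group <resource-group> \
  --settings MSDEPLOY_RENAME_LOCKED_FILES=1
```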