Summary: I have 2 different Azure Function Apps (Node.js) sharing a single file storage account; however, if I go into the Kudu Invocation Logs for either of them, I see entries from both Apps.
Here is my setup:
1 File Storage (shared by both Function Apps)
Service Bus 1 (sb-prod), with a single queue (somequeue)
Service Bus 2 (sb-staging), with a single queue (somequeue)
Function App 1 (func-prod), with a single function (somefunc)
Function App 2 (func-staging), with a single function (somefunc)
Both func-prod and func-staging are set up for continuous deployment from the same Bitbucket repo, but different branches
When a message is received in sb-prod it triggers somefunc in func-prod
When a message is received in sb-staging it triggers somefunc in func-staging
Note that the queue name and function name are the same in both prod and staging. That all seems to work fine. However, if I go into Kudu and look at the Invocation Logs for debugging, it shows function executions across both Function Apps (prod and staging appear in the logs for both). It is not respecting the folder structure on the file storage, which should limit the view to the logs from the appropriate App.
As far as I can tell this is only a log-viewing issue; the functions aren't being run twice, and messages aren't being sent to the wrong function app. Any ideas on how to fix this? Or is this a bug, and would I need to add a second storage account so that Kudu doesn't get confused? Is there any risk with this setup that messages from the staging service bus end up in the prod app, or vice versa?
By 'Kudu', I assume you mean the WebJobs Dashboard (not related to Kudu). The behavior you are seeing is quirky, but is in fact by design. See https://github.com/Azure/azure-webjobs-sdk/issues/1541 for more info.
Workarounds:
The best option is to use App Insights instead of the WebJobs Dashboard
If you must use the WebJobs Dashboard, use distinct storage accounts (a sketch follows this list)
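For the second workaround, pointing each app at its own storage account is just an app-settings change. A minimal Azure CLI sketch, assuming the names from the question (the resource group and connection strings are placeholders):

```bash
# Point the staging app at its own storage account so its dashboard
# logs stay separate from prod. <...> values are placeholders.
az functionapp config appsettings set \
  --name func-staging \
  --resource-group <resource-group> \
  --settings "AzureWebJobsStorage=<staging-storage-connection-string>" \
             "AzureWebJobsDashboard=<staging-storage-connection-string>"
```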
Related
I created an Azure Function App and tested it locally, where, using the console, I was able to determine that my application works and does what it is supposed to do.
Now I have deployed it to the Azure cloud and started the service, but I don't seem to have any indication of whether it is running: no logs showing what state it is in, nothing.
How do I view the console application log for my application running in the Azure cloud?
I have an Azure Function App in the Portal which contains a .NET 6 HTTP trigger function.
It has run successfully 2 times.
You can observe the function app's state in the Log Stream: during execution you will see traces, and when idle there is no new trace, but your function host is still shown as running.
You can also observe the metric rates in the Function App Overview blade (how many requests came in and when, over the last 30 minutes, 1 hour, etc.), and you can see more metrics in the Live Metrics blade of the Application Insights resource associated with that Function App.
You could also check for performance issues in the Function App using diagnostics.
Refer to the Azure Function App Diagnostics Overview doc provided by Microsoft to learn about detected issues, latency-related metrics, and reports.
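If you want to see activity in the Log Stream while the function runs, write traces through the injected ILogger. A minimal in-process C# sketch (the function name and authorization level are illustrative):

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpTrigger1
{
    [FunctionName("HttpTrigger1")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Each LogInformation call appears as a trace in the Log Stream
        // and in Application Insights.
        log.LogInformation("HTTP trigger invoked at {time}", DateTime.UtcNow);
        return new OkObjectResult("OK");
    }
}
```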
Can I deploy one Azure timer-triggered function from one repo to multiple App Services?
Example:
Currently I have a repo with one Azure function in it (named Function1; it runs every few minutes).
I have 5 customers and a database for each customer, so I have 5 connection strings. Each customer requires me to host the function in an isolated environment, independent of the other customers.
The function "Function1" does the same logic for each of my customers. It just accesses a different database for each using the different connection string.
Therefore, I created 5 App Services: Function1-Customer1, Function1-Customer2, ... to satisfy the "independent environment requirement".
Each App Service has the unique db connection string assigned in the App Settings.
I tried to deploy "Function1" to all 5 of these App Services. However, when I then check the Log Stream of any of the App Services, it seems that only one instance of the function is running, depending on which App Service was deployed last.
So, for example, if Function1-Customer1 was deployed last and I go to Function1-Customer2 or Function1-Customer3 to see the Log Streams, both output the connection string of Function1-Customer1. If Function1-Customer2 was deployed last, then I would see its connection string in all the other App Services.
Is it possible to deploy the Function1 to serve all these 5 App Services? Or do I need a different architecture here?
The functions coordinate by obtaining leases in the underlying blob storage. If two function apps end up fighting over the same lease, they will block each other even though they are supposed to do different things. You can explore this by looking at the blobs in the underlying storage account and checking their "lease" status.
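As a hedged way to do that inspection: the WebJobs SDK keeps its lock blobs in the azure-webjobs-hosts container by default, so something like this Azure CLI query should surface their lease status (the connection string is a placeholder):

```bash
# List lock blobs and their lease status in the shared storage account.
az storage blob list \
  --container-name azure-webjobs-hosts \
  --prefix locks \
  --connection-string "<storage-connection-string>" \
  --query "[].{name:name, leaseStatus:properties.lease.status}" \
  --output table
```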
Based on our discussion in the comments, I would recommend using a dedicated storage account for each function app. I would not recommend AzureFunctionsWebHost__hostid or similar solutions, since they add more complexity.
For each trigger, Azure Functions manages its own queue in Azure Queue Storage. You can use a single function app and trigger 5 different tasks, one per customer, or you can create a separate Azure storage account for each function app.
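If you keep the five separate App Services, the function body itself can stay identical and read the per-customer connection string from whatever App Settings it is running under. A minimal sketch, where "CustomerDbConnection" is a hypothetical setting name that each App Service defines with its own value:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        // Resolved from the App Settings of whichever App Service this
        // deployment is running in, so each customer gets their own database.
        string conn = Environment.GetEnvironmentVariable("CustomerDbConnection");
        log.LogInformation("Running against the customer database configured for this app.");
        // ... open a SqlConnection(conn) and do the per-customer work ...
    }
}
```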
I am using Azure Functions V2 with a Service Bus trigger using 1.0.23 of the C# Functions SDK. I'm using the following approach to get secrets from KeyVault and use them within the settings of the triggers: How to map Azure Functions secrets from Key Vault automatically
The function, especially when it has done nothing for a while, doesn't fire when there are messages on the subscription. If I then go to the portal and execute it manually (yes, that particular execution is fired with a null message), it kicks the function into life, and it picks up the other messages on the queue and processes them correctly.
This obviously isn't ideal for our automated tests. Has anybody seen this, or does anyone know of anything that will help?
Also, the Function App is running on a consumption plan.
App Service Plan
If you're using an App Service plan then it's simple: just make use of Always On.
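A hedged Azure CLI one-liner for that (the app and resource-group names are placeholders):

```bash
# Enable Always On so the function host is never unloaded.
az functionapp config set \
  --name <function-app> \
  --resource-group <resource-group> \
  --always-on true
```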
Consumption Plan
If you're using the Consumption plan, the issue could be that your triggers did not sync properly with the Azure infrastructure (central listener). This could have happened due to the way you deployed/edited your trigger-related settings, as explained in issue #210 below.
When you access the function directly from the portal, it might force your function app to come alive, but as you can see, that's only a workaround. Something similar is mentioned here
Take a look at these issues:
Service Bus Topic Trigger goes to sleep - Consumption Plan
They also mention that it wakes up only on accessing it via the portal or calling a HTTP triggered function in the same app, which is similar to the behavior you are seeing.
Issue #210
Issue #681
There are 3 suggested ways to resolve it, mentioned as part of Issue #210 above:
In order to synchronize triggers when these deployment options are
used, open the Azure Portal and click the Refresh button, or make an
API call to the sync triggers endpoint:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/FunctionsWebDeploy.json#L90
Powershell sample:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/PowerShell/HelperFunctions.ps1#L360-L365
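As a hedged alternative to the linked samples, the same sync can be invoked through the ARM syncfunctiontriggers action with the Azure CLI (the subscription, resource group, and app names are placeholders):

```bash
# Force the Azure Functions infrastructure to re-read the trigger metadata.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app>/syncfunctiontriggers?api-version=2016-08-01"
```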
I've had a similar issue. The Service Bus connection was injected using a ServiceBus value in the ConnectionStrings section of the Function configuration. This is enough while the Function is in a hot state, but after transitioning to a cold state, the AzureWebJobsServiceBus value is used to connect to the Service Bus. So in my case, setting AzureWebJobsServiceBus to the Service Bus connection string in the Function configuration fixed this.
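For reference, a minimal sketch of that setting in local.settings.json form (the namespace and keys are placeholders; in the portal it is a single application setting named AzureWebJobsServiceBus):

```json
{
  "Values": {
    "AzureWebJobsServiceBus": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
  }
}
```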
I have a function app with three Timer Trigger functions in it. I want to use the staging/production functionality provided by the Slots (preview), so I set up the VSTS deployment for two separate branches. The primary function app polls master and the slot polls a branch called staging.
The problem is that when I start the function app, the main functions schedule and run, but the Slot functions don't seem to get scheduled to run at all. Things I've tried:
I set the host.json file for each with a separate 'id' field to avoid a conflict on the locks that determine whether or not they can run (see the sketch after this list). Looking in the storage account, I can see a folder for each app (the main and the slot) and a folder for each function, which I think means they shouldn't be using the same locks.
Use a separate storage account for the Slot app
Stop the main function app while keeping the Slot app running
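For reference, the first item above amounts to giving each app a distinct host ID; a minimal sketch of the v1 host.json (the id value is a placeholder):

```json
{
  "id": "<unique-host-id-for-this-app>"
}
```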
Can anyone tell me what might be wrong with my setup or if there's a known bug with Slots (preview) preventing this from working?
I have an app (.exe) that picks up a file and imports it into a database. I have to move this setup into Azure. I am familiar with Azure SQL and Azure File Storage. What I am not familiar with is how to execute an app within Azure.
My app reads rows out of my Azure database to determine where the file is (in Azure File Storage) and then dumps the data into a specified table. I'm unsure if this scenario is appropriate for Azure Scheduler or if I need an App Service to set up a WebJob.
Is there any possibility I can put my app directly in Azure File Storage and point a task at that location to execute it? (That might also make it easier to resolve the locations of the files to be imported.)
Thanks.
This is a good scenario for Azure Functions, if you want to just run some code on a schedule in Azure.
Functions are like Web Jobs (they share the same SDK, in fact), so you can trigger on a schedule or from a storage queue, etc., but you don't need an App Service to run your code in. There are some great intro videos in the Azure Functions Documentation, and here is a link to a comparison of the hosting options between Web Jobs, Functions, Flow and Logic Apps.
You can edit the function directly in the portal (paste/type your C# or Node.js code straight in), or use source control to manage it.
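As a rough sketch of what the ported .exe could look like as a timer-triggered C# function (the schedule, setting name, and import steps are all assumptions about your app):

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ImportFunction
{
    [FunctionName("ImportFunction")]
    public static void Run([TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger log)
    {
        // Hypothetical app setting holding the Azure SQL connection string.
        var conn = Environment.GetEnvironmentVariable("SqlConnection");
        using (var db = new SqlConnection(conn))
        {
            db.Open();
            // 1. Query the rows that say where each file lives in Azure File Storage.
            // 2. Download each file and insert its contents into the target table.
            log.LogInformation("Import run completed at {time}", DateTime.UtcNow);
        }
    }
}
```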
If you really want to keep your app as an .exe and run it like that, then you will need to use the Azure Scheduler to do this instead, which is a basic job runner.
Decisions, decisions...!
Looking at https://azure.microsoft.com/en-gb/documentation/articles/scheduler-intro/ it seems that the only actions that are supported are:
HTTP, HTTPS,
a storage queue,
a service bus queue,
a service bus topic
so running a self-contained .exe or script doesn't look to be possible.
Do you agree?