I am using Azure Functions V2 with a Service Bus trigger using 1.0.23 of the C# Functions SDK. I'm using the following approach to get secrets from KeyVault and use them within the settings of the triggers: How to map Azure Functions secrets from Key Vault automatically
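For reference, the trigger wiring looks roughly like this (a simplified sketch; the %TopicName%, %SubscriptionName% and ServiceBusConnection names are placeholders resolved from app settings, which in turn are populated from Key Vault by the approach above):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    // Simplified sketch: the %...% tokens are resolved from app settings at startup,
    // and those settings are mapped from Key Vault secrets.
    [FunctionName("OrderProcessor")]
    public static void Run(
        [ServiceBusTrigger("%TopicName%", "%SubscriptionName%", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing message: {message}");
    }
}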
The function, especially when it has done nothing for a while, doesn't fire when there are messages on the subscription. If I then go to the portal and execute it manually (yes, that particular execution fires with a null message), it kicks it into life and it picks up the other messages on the queue and processes them correctly.
This obviously isn't ideal for our automated tests. Has anybody seen this, or does anybody know of anything that will help?
Also, the Function App is running on a consumption plan.
App Service Plan
If you're using an App Service plan then it's simple: just make use of Always On.
Consumption Plan
If you're using the Consumption plan, the issue could be that your triggers did not sync properly with the Azure Infrastructure (Central Listener). It could have happened due to the way you deployed/edited your trigger-related settings, as explained in issue #210 below.
When you access the function directly from the Portal, it might be forcing your function app to come alive, but as you can see that's only a workaround. Something similar is mentioned here.
Take a look at these issues:
Service Bus Topic Trigger goes to sleep - Consumption Plan
They also mention that it wakes up only on accessing it via the portal or calling a HTTP triggered function in the same app, which is similar to the behavior you are seeing.
Issue #210
Issue #681
There are 3 suggested ways to resolve it, mentioned as part of Issue #210 above
In order to synchronize triggers when these deployment options are used, open the Azure Portal and click the Refresh button, or make an API call to the sync triggers endpoint:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/ARMTemplates/FunctionsWebDeploy.json#L90
PowerShell sample:
https://github.com/davidebbo/AzureWebsitesSamples/blob/master/PowerShell/HelperFunctions.ps1#L360-L365
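If you would rather make that call from code than from the linked template/script, a rough C# sketch could look like the following. The subscription, resource group, app name and ARM bearer token are placeholders you would supply, and the api-version shown is an assumption; adjust it to match the samples above.

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class SyncTriggers
{
    static async Task Main()
    {
        // Placeholders: fill in your own values and a valid ARM access token.
        var subscriptionId = "<subscription-id>";
        var resourceGroup = "<resource-group>";
        var functionApp = "<function-app-name>";
        var armToken = "<bearer-token>";

        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  $"/resourceGroups/{resourceGroup}/providers/Microsoft.Web/sites/{functionApp}" +
                  "/syncfunctiontriggers?api-version=2016-08-01";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", armToken);

            // POST with an empty body asks the platform to re-sync the triggers.
            var response = await client.PostAsync(url, new StringContent(string.Empty));
            response.EnsureSuccessStatusCode();
        }
    }
}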
I've had a similar issue. The Service Bus connection was injected using the ServiceBus value in the ConnectionStrings section of the Function configuration. This is enough while the Function is in a hot state, but after transitioning to a cold state the AzureWebJobsServiceBus value is used to connect to Service Bus. So in my case, setting AzureWebJobsServiceBus to the Service Bus connection string in the Function configuration fixed this.
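To illustrate (a minimal sketch, assuming the default binding behaviour; the queue name is illustrative): when the trigger does not specify a Connection property, the runtime falls back to the AzureWebJobsServiceBus setting, so that setting has to contain a working connection string.

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessMessage
{
    // No Connection property on the trigger, so the connection is resolved from
    // the AzureWebJobsServiceBus app setting (not ConnectionStrings:ServiceBus).
    [FunctionName("ProcessMessage")]
    public static void Run(
        [ServiceBusTrigger("myqueue")] string message,
        ILogger log)
    {
        log.LogInformation($"Received: {message}");
    }
}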
Related
I have a background worker that listens to a service bus (Azure Service bus) for messages.
Each message stands for an async task that the service should work on, but in case no event reaches the bus, I also want to trigger the service automatically each day.
The service bus is currently triggered by user events that are generated in different APIs.
This works fine, but who should trigger my service with a certain schedule?
I could of course write a second service that sends a message to the bus each week, but it feels kind of overkill to have a service running only for this task.
I am wondering if there is a better solution for how I could do this? Even an Azure Function seems like overkill to me...
How would you address this issue?
Azure Functions are ideal because you can create both a timer-triggered and a Service Bus-triggered function in the same Function App.
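A rough sketch of what that could look like in C# (the queue name, connection setting name and schedule below are illustrative):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Triggers
{
    // Fires whenever a message arrives on the queue.
    [FunctionName("OnQueueMessage")]
    public static void OnQueueMessage(
        [ServiceBusTrigger("somequeue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"Queue message: {message}");
    }

    // Fires once a day at midnight (CRON expression), covering the "no events" case.
    [FunctionName("DailyRun")]
    public static void DailyRun(
        [TimerTrigger("0 0 0 * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation("Daily scheduled run");
    }
}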
If you feel that is excessive, I suggest the other option of using Azure WebJobs, which run in App Service.
You can have timer-triggered WebJobs and use the WebJobs SDK to trigger them whenever there is a message in the Azure Service Bus.
Refer to Scheduled WebJobs and Service Bus triggered WebJobs for more information.
I have a function set up with ServiceBusTrigger which runs when it is deployed and then keeps running while I'm testing and sending messages to a topic. However, if I wait an hour or so and then send more messages they are not processed until I either restart the function app or disable and reenable the actual function.
How can I change this so that the function is always "on"?
Sounds like you are using a dedicated App Service Plan. Make sure you have "Always On" enabled. You need to have at least a "Basic" plan. If you don't want to pay for a basic plan, I suggest you use a consumption plan.
https://github.com/Azure/Azure-Functions/wiki/Enable-Always-On-when-running-on-dedicated-App-Service-Plan
How do I turn on "always-on" for an Azure Function?
I have a (C#-based) Function App on a consumption plan that only contains queue triggers. When I deploy it (via Azure DevOps) and have something written to the queue, the trigger does not fire unless I go to the Azure Console and visit the Function App. It also works to add an HTTP trigger to the App and call that. After that, all other triggers work.
The same phenomenon is observed with Timer triggers.
My hypothesis is that these triggers only work when the runtime is active but not directly after deployment when no runtime was created. Is that true? If so, what is the suggested way around this?
My only workaround idea is to add an HTTP trigger and fire regular keepalive pings to that trigger. But that sounds wrong.
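For what it's worth, the keepalive function I have in mind would be nothing more than this (a sketch; the route and auth level are arbitrary):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class KeepAlive
{
    // Something external would ping this on a schedule just to keep the host warm.
    [FunctionName("KeepAlive")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
    {
        return new OkResult();
    }
}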
Summary: I have 2 different Azure Function Apps (Node.js) sharing a single file storage account; however, if I go into the Kudu Invocation Logs for either of them I see the entries from both Apps.
Here is my setup:
1 File Storage (shared by both Function Apps)
Service Bus 1 (sb-prod), with a single queue (somequeue)
Service Bus 2 (sb-staging), with a single queue (somequeue)
Function App 1 (func-prod), with a single function (somefunc)
Function App 2 (func-staging), with a single function (somefunc)
Both func-prod and func-staging are set up for continuous deployment from the same Bitbucket repo, but different branches
When a message is received in sb-prod it triggers somefunc in func-prod
When a message is received in sb-staging it triggers somefunc in func-staging
Note that the queue name and function name are the same in both prod and staging. That all seems to work fine.
However, if I go into Kudu and look at the Invocation Logs for debugging, it shows the execution of functions across both Function Apps (prod and staging shown in the logs for both). It is not respecting the folder structure on the file storage to only show the logs from the appropriate App. As far as I can tell, this is only a log viewing issue; the functions aren't being run twice and messages aren't being sent to the wrong Function App.
Any ideas on how to fix this? Or is this a bug, and would I need to add a second storage account so that Kudu doesn't get confused? Is there any risk with this setup that messages from the staging service bus end up in the prod app, or vice versa?
By 'Kudu', I assume you mean the WebJobs Dashboard (not related to Kudu). The behavior you are seeing is quirky, but is in fact by design. See https://github.com/Azure/azure-webjobs-sdk/issues/1541 for more info.
Workarounds:
The best is to use App Insights instead of the WebJobs Dashboard
If you must use the WebJobs Dashboard, use distinct storage accounts
I'm simply trying to work out how best to retrieve messages as quickly as possible from an Azure Service Bus Queue.
I was shocked that there wasn't some way to properly subscribe to the queue for notifications and that I'm going to have to poll (unless I'm wrong, in which case the documentation is terrible).
I got long polling working, but checking a single message every 60 seconds looks like it'll cost around £900 per month (again, unless I've misunderstood that). And if I add a redundant/second service to poll it'll double.
So I'm wondering what the best/most cost efficient way of doing it is.
Essentially I just want to take a message from the queue, perform an API lookup on some internally held data (perhaps using hybrid services?) and then perhaps post a message back to a different queue with some additional information.
I looked at worker roles(?) -- is that something that could do it?
I should mention that I've been looking at doing this with node.js.
Check out these videos from Scott Hanselman and Mark Simms on Azure Queues.
It's C# but you get the idea.
https://channel9.msdn.com/Search?term=azure%20queues%20simms#ch9Search
Touches on:
Storage Queues vs. Service Bus Queues
Grabbing messages in bulk vs. one by one (chunky vs. chatty)
Dealing with poison messages (bad actors)
Misc implementation details
Much more stuff I can't remember now
As for your compute, you can use a VM, a Worker Role (Cloud Services), App Service WebJobs, or Azure Functions.
The WebJobs SDK and Azure Functions both have a way to subscribe to Queue events (notify on message).
(Listed from IaaS to PaaS to FaaS, i.e. Azure Functions, if such a thing exists.)
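For the WebJobs route specifically, a minimal sketch (assuming the WebJobs SDK 2.x packages Microsoft.Azure.WebJobs and Microsoft.Azure.WebJobs.ServiceBus, plus the usual AzureWebJobsStorage and AzureWebJobsServiceBus settings; the queue name is illustrative) looks roughly like this in C#:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.ServiceBus;

public class Program
{
    public static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseServiceBus();   // enables ServiceBusTrigger support

        var host = new JobHost(config);
        host.RunAndBlock();       // blocks and listens for messages
    }
}

public class Functions
{
    // Invoked by the SDK whenever a message lands on "somequeue".
    public static void ProcessQueueMessage(
        [ServiceBusTrigger("somequeue")] string message,
        TextWriter log)
    {
        log.WriteLine($"Got message: {message}");
    }
}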
Azure Functions already has sample code provided as templates to do all that with Node. Just make a new Function and follow the wizard.
If you need to touch data on-premises, you either need to look at integrating with a VNET that has site-to-site connectivity back to your premises, or Hybrid Connections (App Service only!). Azure Functions can't do that yet, but every other compute option is a go.
https://azure.microsoft.com/en-us/documentation/articles/web-sites-hybrid-connection-get-started/
(That tutorial is Windows only but you can pull data from any OS. The Hybrid Connection Manager has to live on a Windows box, but then it acts as a reverse proxy to any host on your network).
To deal with an Azure Service Bus queue easily, the best option seems to be an Azure WebJob.
There is a ServiceBusTrigger that allows you to get messages from an Azure Service Bus queue.
For Node.js integration, you should have a look at Azure Functions. They are built on top of the WebJobs SDK and have Node.js integration:
Azure Functions NodeJS developer reference
Azure Functions Service Bus triggers and bindings for queues and topics
In the second article, there is an example of how to get messages from a queue using Azure Functions and Node.js:
module.exports = function(context, myQueueItem) {
    context.log('Node.js ServiceBus queue trigger function processed message', myQueueItem);
    context.done();
};