Disabled Azure Function Still Pulls Messages off of Azure Storage Queue

I have a basic QueueTrigger Azure Function. When I disable the function in the Azure portal, it still pulls messages off the storage queue: when I look at the queue in the Azure Queue Storage Explorer it is empty, and if I add a message it is immediately pulled off.
Here is the code:
[FunctionName("ProcessMessage")]
public static void Run([QueueTrigger("queue-name", Connection = "queue-connection")] Models.Message message, TraceWriter log)
{
log.Info($"C# Queue trigger function processed: {message}");
}
I noticed that stopping the whole function app stops the queue processing, but I was hoping to disable queue processing temporarily without stopping the whole function app. How does one do that?
Thanks!

Disabling a V1 function that was created in Visual Studio does not work from the Azure portal. You should use the Disable attribute instead:
https://learn.microsoft.com/en-us/azure/azure-functions/disable-function#functions-1x---c-class-libraries
(see the important section on that page)
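For reference, here is a minimal sketch of what that looks like for a Functions 1.x C# class library (the app setting name ProcessMessage_Disabled is an arbitrary example, not something from the question):

[FunctionName("ProcessMessage")]
[Disable("ProcessMessage_Disabled")] // disabled while this app setting is "true" or "1"
public static void Run([QueueTrigger("queue-name", Connection = "queue-connection")] Models.Message message, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {message}");
}

With a parameterless [Disable], the function is switched off unconditionally; with a setting name, flipping the app setting in the portal pauses queue processing without stopping the whole function app.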

Related

Azure Function with Service Bus trigger calls the namespace for no reason

I have the following code:
[FunctionName("myFunc")]
public static void Run([ServiceBusTrigger("myQueue", Connection = "ConnectionString")]string myQueueItem, ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
It is published in an Azure Function App that has a managed identity configured for the namespace. The app shows no executions (no messages have been sent to the queue/namespace), yet at the same time the namespace shows that it has received requests.
The namespace has no other queues, and nothing else connects to, accesses, or sends requests to the namespace and queue. Also, if the function app is stopped, the requests stop as well.
I'm trying to figure out why the function app would send requests to the namespace (and so many of them) when it wasn't even triggered.
At least one reason I can think of is that it could be monitoring the queue length; Azure Functions auto-scales up and down based on the number of messages in the queue, after all.
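To make that concrete, here is a rough sketch (not the runtime's actual code) of the kind of metadata poll the scale controller performs; each such call counts as a request against the namespace. The connectionString variable and queue name are assumptions:

using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient(connectionString);
// Reading runtime properties is a management request to the namespace,
// even though no message is ever received.
QueueRuntimeProperties props = await admin.GetQueueRuntimePropertiesAsync("myQueue");
Console.WriteLine($"Active messages: {props.ActiveMessageCount}");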

Azure Function is not triggered after putting a message on a queue in Service Bus

I have a queue in a Service Bus namespace. After putting a message into the queue, an Azure Logic App and an Azure Function should be triggered to process the content.
My Azure Logic App is triggered, but my Azure Function is not. My code for the Azure Function:
[FunctionName("ReadMEssageFromQueue")]
public static void Run([ServiceBusTrigger("messagequeue", Connection = "AzureWebJobsStorage")]string myQueueItem, ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
host.json:
{
    "IsEncrypted": false,
    "Values": {
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "AzureWebJobsStorage": "******" // connection string of my service bus
    }
}
Should I set something on the Service Bus queue so that the message is sent to both resources?
Azure Service Bus queue messages are picked up by only one consumer. So I think in your case the Logic App is picking up and consuming the message first, leaving nothing for the function to process. You can confirm this by temporarily disabling the Logic App and letting the function pick up the message.
Ref: azure-service-bus-queue-with-multiple-listeners
You can trigger the Azure Function from your Logic App (not sure if that fits your use case), or you can use Azure Service Bus topics, as topics support the model where multiple consumers subscribe to the same messages.
The former option might be the better approach from a cost perspective, as you'd need the Standard tier of Service Bus in order to use the topics feature, which means additional cost over your current setup.
Also, you might want to use a different name for the Service Bus connection string setting, as AzureWebJobsStorage is reserved for the Storage account connection string.
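If you do go the topics route, a minimal sketch of the function side could look like this (the topic, subscription, and connection setting names are placeholders, not values from the question):

[FunctionName("ReadMessageFromTopic")]
public static void Run(
    // Each consumer reads from its own subscription, so the Logic App and the
    // function each get their own copy of every message published to the topic.
    [ServiceBusTrigger("messagetopic", "function-subscription", Connection = "ServiceBusConnection")] string myQueueItem,
    ILogger log)
{
    log.LogInformation($"C# ServiceBus topic trigger function processed message: {myQueueItem}");
}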

Azure Functions - Queue trigger consumes message on fail

This issue only occurs when I use the Azure Portal Editor. If I upload from Visual Studio, it does not occur, but I cannot upload from Visual Studio due to this unrelated bug: Azure Functions - only use connection string in Application Settings in cloud for queue trigger.
When using the Azure Portal Editor, if I throw an exception from C# or use context.done(error) from JavaScript, Application Insights shows an error occurred, but the message is simply consumed. The message is not retried, and it does not go to a poison queue.
The same code for C# correctly retries when uploaded from Visual Studio, so I believe this is a configuration issue. I have tried modifying the host.json file for the Azure Portal Editor version to:
{
    "queues": {
        "visibilityTimeout": "00:00:15",
        "maxDequeueCount": 5
    }
}
but the message was still getting consumed instead of retried. How do I fix this so that I can get messages to retry when coding with the Azure Portal Editor?
Notes:
In JavaScript, context.bindingData.dequeueCount returns 0.
Azure Function runtime version: 1.0.11913.0 (~1).
I'm using a Consumption App Plan.
I was using the manual trigger from the Azure Portal Editor, which has different behavior from creating a message in the queue. When I put a message in the queue, the Azure Function worked as expected.
For local development: if your function is async, use Task as the return type,
public async Task Run
instead of void:
public async void Run
since exceptions thrown from an async void function can't be observed by the runtime, so failed messages may be swallowed instead of retried.
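A fuller sketch of the corrected signature (SomeWorkAsync is a hypothetical helper, not code from the question):

[FunctionName("ProcessMessage")]
public static async Task Run([QueueTrigger("my-queue")] string message, TraceWriter log)
{
    // Returning Task lets exceptions propagate to the runtime, so the dequeue
    // count increments and retry/poison-queue handling works as expected.
    await SomeWorkAsync(message); // hypothetical async operation
    log.Info($"Processed: {message}");
}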

What happens when using the same Storage account for multiple Azure WebJobs (dev/live)?

In my small setup, I just use the same Storage account for both the AzureWebJobsDashboard and AzureWebJobsStorage settings.
But what happens when we use the same connection string for local debugging and for the published job at the same time? Are they treated in an isolated manner, or is there a conflict issue?
I looked into the blobs of the published job and found directories such as azure-webjobs-dashboard/functions/instances, azure-webjobs-dashboard/functions/recent/by-job-run/{jobname}, and azure-webjobs-hosts/output-logs; they have no discriminator among jobs, while some other directories have a GUID with the job name.
Note that my job runs continuously.
Or do they have any conflict issue?
No, there is no conflict issue. Based on my experience, though, it is not recommended to debug locally while the published job is running in Azure with the same connection string. Take an Azure Storage queue, for example: we can't control whether a given message gets processed by the local instance or the one in Azure. If you want to debug locally, please try stopping the continuous WebJob from the Azure portal first.
If we want to know which instance a WebJob execution ran on, we can log the instance info in the code using the environment variable WEBSITE_INSTANCE_ID. The following is a code sample:
public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
{
    string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
    string newMsg = $"WEBSITE_INSTANCE_ID:{instance}, timestamp:{DateTime.Now}, Message:{message}";
    log.WriteLine(newMsg);
    Console.WriteLine(newMsg);
}
For more info, please refer to how to use Azure Queue storage with the WebJobs SDK. The following is snipped from the document:
If your web app runs on multiple instances, a continuous WebJob runs on each machine, and each machine will wait for triggers and attempt to run functions. The WebJobs SDK queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent
Update:
Regarding timer triggers, we can find more explanation of the timer trigger in WebJobs in the GitHub documentation. So if you want to debug locally, please try stopping the WebJob from the Azure portal first.
only a single instance of a particular timer function will be running across all instances (you don't want multiple instances to process the same timer event).
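For illustration, a minimal sketch of such a timer function (the CRON expression is an arbitrary example, and TimerTrigger assumes the Microsoft.Azure.WebJobs.Extensions package with config.UseTimers() on the host):

public static void CleanupTimer([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TextWriter log)
{
    // Only one instance across all machines holds the underlying blob lease at a
    // time, which is why the shared storage account matters for dev/live overlap.
    log.WriteLine($"Timer fired at {DateTime.Now}; IsPastDue: {timer.IsPastDue}");
}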

Azure continuous WebJob (blob) triggering only once

I have an Azure WebJob with a few blob-triggered functions. I uploaded it to Azure via the Add Job dialog on the portal and set it to "Run Continuously". The expected behavior was that any time a blob is added to or modified in the containers specified by the blob triggers, the corresponding function would be called. This, however, does not happen.
The only way to trigger the functions (after having uploaded blobs) is to stop the WebJob and restart it. Every time I restart the job, the functions seem to be triggered, and triggered only once; subsequent blob updates don't trigger them again.
On the portal the WebJob shows as 'Running', yet no functions get triggered after the initial one.
The main function for this web job looks like this :
static void Main()
{
    var host = new JobHost();
    host.RunAndBlock();
}
What could be the issue?
The trigger functions are standard blob-triggered functions and work the first time, hence I am not yet sharing that code.
UPDATE
The function signatures look like this:
public static void UpdateData([BlobTrigger("inputcontainer/{env}-update-{name}")] Stream input, string name, string env, TextWriter logger)
public static void DeleteData([BlobTrigger("inputcontainer/{env}-delete-{name}")] Stream input, string name, string env, TextWriter logger)
Because of how blob triggers are implemented (the SDK scans storage logs and containers rather than receiving push notifications), it can take up to 10 minutes for the function to be invoked.
If the function is not triggered even after 10 minutes, please share with us the function signature and the names of the blobs that you are uploading.
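If you want to experiment while waiting, one knob worth trying is the host's queue polling interval; the blob listener uses an internal queue for new-blob notifications, so shortening the back-off can reduce detection latency. This is a sketch assuming WebJobs SDK 1.x/2.x, where JobHostConfiguration exposes this setting:

static void Main()
{
    var config = new JobHostConfiguration();
    // Cap the exponential back-off used when polling the internal queues that
    // back blob-trigger notifications (by default it grows to about a minute).
    config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(10);
    var host = new JobHost(config);
    host.RunAndBlock();
}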
