Store a copy of queue message raw data somewhere - Azure

I'm debugging an Azure Queue Trigger function and I'd like to find a way to inspect the raw message that was enqueued. The tricky part is that the Queue Trigger function is still running, and I have no access to the caller (client) that inserts the message.
So the regular approaches to reading a queue message, such as Storage Explorer, don't work for me, because the Queue Trigger function processes the message very quickly.
I was thinking of duplicating the enqueued message and storing it somewhere, but I didn't find a practical way to do it.

I agree with @codebrane that you can use logs to check which message was enqueued, as shown below.
run.csx:
using System;
using Microsoft.Extensions.Logging;

public static void Run(string myQueueItem, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}
function.json:
{
    "bindings": [
        {
            "name": "myQueueItem",
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "myqueue-items",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
In the logs you can then see the message content; this way you can check which message was enqueued.
You can also send these logs to a Log Analytics workspace and check them there (Reference).
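If you want to keep a durable copy of each raw message rather than relying on logs, one option is to add a blob output binding to the same function and write the payload to storage before any processing. A minimal sketch, assuming a container named queue-message-archive and a binding named archiveBlob (both placeholders):

run.csx:
using System;
using Microsoft.Extensions.Logging;

public static void Run(string myQueueItem, out string archiveBlob, ILogger log)
{
    // Store a raw copy of the message in blob storage before any processing.
    archiveBlob = myQueueItem;
    log.LogInformation($"Archived and processed: {myQueueItem}");
}

Extra binding in function.json:
{
    "name": "archiveBlob",
    "type": "blob",
    "direction": "out",
    "path": "queue-message-archive/{rand-guid}.txt",
    "connection": "AzureWebJobsStorage"
}

The {rand-guid} binding expression gives each archived message a unique blob name, so earlier copies are not overwritten.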

Related

Sequence processing with Azure Function & Service Bus

I have an issue with the Azure Function Service Bus trigger.
The issue is that the Azure Function does not wait for one message to finish before processing a new message. It processes messages in parallel and does not wait 5 seconds before getting the next message, but I need it to process them sequentially (as in the image below).
How can I do that?
[FunctionName("HttpStartSingle")]
public static void Run(
    [ServiceBusTrigger("MyServiceBusQueue", Connection = "Connection")] string myQueueItem,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
{
    Console.WriteLine($"MessageId={myQueueItem}");
    Thread.Sleep(5000);
}
I resolved my problem by using this configuration in my host.json:
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                "maxConcurrentCalls": 1
            }
        }
    }
}
There are two approaches to accomplish this:
(1) Use a Durable Function with function chaining (a sketch follows this list):
For background jobs you often need to ensure that only one instance of a particular orchestrator runs at a time. This can be done in Durable Functions by assigning a specific instance ID to an orchestrator when creating it.
(2) Partition the data based on the messages you are writing to the queue; the partitioning will preserve the order of the messages automatically, so you do not need to handle it manually in the Azure Function.
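For approach (1), here is a minimal sketch of function chaining with Durable Functions 1.x (to match the DurableOrchestrationClient in the question); the function names and the singleton instance ID are made up for illustration:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SequentialProcessing
{
    // Started by the trigger with a fixed instance ID, e.g.
    // await starter.StartNewAsync("ProcessSequentially", "singleton-instance", myQueueItem);
    // so that only one orchestration instance runs at a time.
    [FunctionName("ProcessSequentially")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var message = context.GetInput<string>();
        // Awaited activity calls run one after another (function chaining).
        await context.CallActivityAsync("HandleMessage", message);
    }

    [FunctionName("HandleMessage")]
    public static void HandleMessage([ActivityTrigger] string message, ILogger log)
    {
        log.LogInformation($"Processed: {message}");
    }
}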
In general, ordered messaging is not something I'd strive to implement, since the order can, and at some point will, be distorted. That said, in some scenarios it's required. For that, you should either use a Durable Function to orchestrate your messages or use Service Bus message sessions.
Azure Functions has recently added support for ordered message delivery (accent on the delivery part, as processing can still fail). It's almost the same as a normal Function, with the slight change that you need to instruct the SDK to utilize sessions.
public async Task Run(
    [ServiceBusTrigger("queue",
        Connection = "ServiceBusConnectionString",
        IsSessionsEnabled = true)] Message message, // Enable sessions
    ILogger log)
{
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {Encoding.UTF8.GetString(message.Body)}");
    await _cosmosDbClient.Save(...);
}
Here's a post with more details.
Warning: using sessions requires messages to be sent with a session ID, which will potentially require a change on the sending side.
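For reference, a hedged sketch of what the sending side might look like with the Microsoft.Azure.ServiceBus client (the queue name, connection string, and session ID are placeholders):

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static async Task SendWithSessionAsync(string connectionString, string payload)
{
    var client = new QueueClient(connectionString, "queue");
    var message = new Message(Encoding.UTF8.GetBytes(payload))
    {
        // Messages that share a SessionId are delivered in order.
        SessionId = "my-session"
    };
    await client.SendAsync(message);
    await client.CloseAsync();
}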

Error while reading from a ServiceBus topic/subscription using an Azure Function

I'm currently developing Azure Functions (I'm new at it), but I'm getting the below error while trying to read from a topic/subscription. I have no idea what's causing this. Any help would be appreciated.
[20/12/2018 14:22:22] Loaded custom extension: ServiceBusExtensionConfig from 'referenced by: Method='Function.ContentCacheUpdate.ReadNotificationQueue.Run', Parameter='mySbMsg'.'
[20/12/2018 14:22:22] Generating 1 job function(s)
[20/12/2018 14:22:23] Found the following functions:
[20/12/2018 14:22:23] Function.ContentCacheUpdate.ReadNotificationQueue.Run
[20/12/2018 14:22:23]
[20/12/2018 14:22:23] Host initialized (1208ms)
Listening on http://localhost:7071/
Hit CTRL-C to exit...
[20/12/2018 14:22:23] Host started (1682ms)
[20/12/2018 14:22:23] Job host started
[20/12/2018 14:22:23] Host lock lease acquired by instance ID '000000000000000000000000EB6A5850'.
My function looks like this:
private const string TopicName = "testtopic";

[FunctionName("Function2")]
public static void Run([ServiceBusTrigger(TopicName, "SubscriptionName", Connection = "MyBindingConnection")] string mySbMsg, ILogger log)
{
    log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
and my local.settings.json file is
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "TopicName": "testtopic",
        "SubscriptionName": "testsubscription",
        "MyBindingConnection": "Endpoint=sb://test-.xxxxxxxxxxxxxxxxxxxx="
    }
}
Thanks
It looks like the Azure Functions Core Tools used by VS are outdated. To fix this:
First, go to VS menu Tools > Extensions and Updates, find Azure Functions and Web Jobs Tools, and update it if it's not the latest (15.10.2046.0 right now). Close all VS instances and wait for the update to finish (if there is one).
Then clean out the old tools and templates and let VS download the new ones:
Remove the %localappdata%\AzureFunctionsTools and %userprofile%\.templateengine folders.
Reopen VS and create a new Function project; at the creation dialog, wait while you see Making sure all templates are up to date....
After a while the tip changes; click Refresh to work with the latest templates instantly.
After creating a new v2 ServiceBus topic trigger, change your code as below. Connection looks for its value in app settings (local.settings.json) by default, while other properties need to be wrapped in percent signs. Check the details in the doc.
[FunctionName("MyServiceBusTrigger")]
public static void Run([ServiceBusTrigger("%TopicName%", "%SubscriptionName%", Connection = "MyBindingConnection")] string mySbMsg, ILogger log)
{
    log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
local.settings.json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "TopicName": "testtopic",
        "SubscriptionName": "testsubscription",
        "MyBindingConnection": "Endpoint=sb://test-.xxxxxxxxxxxxxxxxxxxx="
    }
}
I did it this way as well. Can you cross-check whether there are any messages in the subscription?

Azure function service bus trigger: How to stop batches of data from event stream and only allow one message from the queue at a time?

This is my first time using Azure functions and service bus.
I'm trying to build a function app in Visual Studio, and I am able to connect to the queue topic (which I don't control). The problem is that every time I hit a breakpoint, tens of messages are processed at once in VS, which makes local testing very difficult (not to mention the problems arising from database pools).
How do I ensure that only one message gets processed at a time until I hit complete?
public static void Run([ServiceBusTrigger("xxx", "yyy", AccessRights.Manage)] BrokeredMessage msg, TraceWriter log)
{
    // do something here for one message at a time.
}
Set maxConcurrentCalls to 1 in host.json. See also the host.json reference for Azure Functions:
maxConcurrentCalls (default: 16): The maximum number of concurrent calls to the callback that the message pump should initiate. By default, the Functions runtime processes multiple messages concurrently. To direct the runtime to process only a single queue or topic message at a time, set maxConcurrentCalls to 1.
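For v1 function apps, the setting sits at the top level of host.json; a minimal sketch:

{
    "serviceBus": {
        "maxConcurrentCalls": 1
    }
}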
For function apps 2.0 you need to update the host.json like this:
{
    "version": "2.0",
    ...
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                "maxConcurrentCalls": 1
            }
        }
    }
}
Since the host.json file is usually published to Azure along with the function itself, it is preferable not to modify it for debugging and development purposes. Any change to host.json can be made to local.settings.json instead.
Here is how to set maxConcurrentCalls to 1:
{
    "IsEncrypted": false,
    "Values": {
        ...
        "AzureFunctionsJobHost:Extensions:ServiceBus:MessageHandlerOptions:MaxConcurrentCalls": 1
    }
}
This override functionality is described here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#override-hostjson-values

How to write to DLQ?

I am currently trying to handle DeadLetterQueue messages.
How can I write a message to a DLQ?
I tried the following:
{
    "type": "serviceBus",
    "name": "outputSbMsg",
    "queueName": "dlq-test",
    "connection": "bus",
    "accessRights_": "Manage",
    "direction": "out"
}
But when I write to mySbMsg, I get the following error:
Cannot directly create a client on a sub-queue. Create a client on the main queue and use that to create receivers on the appropriate sub-queue.
How can I write to a DLQ (preferably using node)?
AFAIK, you can't write an arbitrary message directly to a Service Bus DLQ.
This is not related to Azure Functions. Normally, a C# Service Bus client would:
Dequeue a message from a queue.
If message processing fails, either throw an exception, call BrokeredMessage.Abandon(), or call BrokeredMessage.DeadLetter().
Each of those may move the message to the DLQ (potentially after several retries).
Now, this doesn't change much in the context of C# Azure Functions:
You create an in binding which gets a message from a queue.
If processing fails, you throw an exception, or call BrokeredMessage.Abandon() or BrokeredMessage.DeadLetter() explicitly, if needed.
The message is moved to the DLQ (potentially after several retries).
Now, in Node you don't have the BrokeredMessage class, but you can still throw an error. If you want the message to move to the DLQ after the first error, set the Max Delivery Count property of your Service Bus queue to 1.
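For comparison, a hedged C# sketch of the explicit dead-lettering path described above, using the older WindowsAzure.ServiceBus client (the queue name and connection string are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

public static class DeadLetterDemo
{
    public static void ReceiveAndDeadLetter(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");
        BrokeredMessage message = client.Receive();
        try
        {
            // ... process the message ...
            message.Complete();
        }
        catch (Exception ex)
        {
            // Moves the message to myqueue/$DeadLetterQueue immediately.
            message.DeadLetter("ProcessingFailed", ex.Message);
        }
    }
}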

Azure webjob not appearing to respect MaxDequeueCount property

I've got an Azure webjob with several queue-triggered functions. The SDK documentation at https://learn.microsoft.com/en-us/azure/app-service-web/websites-dotnet-webjobs-sdk-storage-queues-how-to#config defines the MaxDequeueCount property as:
The maximum number of retries before a queue message is sent to a
poison queue (default is 5).
but I'm not seeing this behavior. In my webjob I've got:
JobHostConfiguration config = new JobHostConfiguration();
config.Queues.MaxDequeueCount = 1;
JobHost host = new JobHost(config);
host.RunAndBlock();
and then I've got a queue-triggered function in which I throw an exception:
public void ProcessQueueMessage([QueueTrigger("azurewejobtestingqueue")] string item, TextWriter logger)
{
    if (item == "exception")
    {
        throw new Exception();
    }
}
Looking at the WebJobs dashboard I see that the SDK makes 5 attempts (5 being the default, as stated above), and after the 5th attempt the message is moved to the poison queue. I expect to see 1 attempt (or no retries?), not 5.
UPDATE: I enabled detailed logging for the web app and opted to save those logs to an Azure blob container. I found some logs relevant to my problem in the azure-jobs-host-archive container. Here's an example showing an item with a dequeue count of 96:
{
  "Type": "FunctionCompleted",
  "EndTime": "2017-02-22T00:07:40.8133081+00:00",
  "Failure": {
    "ExceptionType": "Microsoft.Azure.WebJobs.Host.FunctionInvocationException",
    "ExceptionDetails": "Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: ItemProcessor.ProcessQueueMessage ---> MyApp.Exceptions.MySpecialAppExceptionType: Exception of type 'MyApp.Exceptions.MySpecialAppExceptionType' was thrown.
  },
  "ParameterLogs": {},
  "FunctionInstanceId": "1ffac7b0-1290-4343-8ee1-2af0d39ae2c9",
  "Function": {
    "Id": "MyApp.Processors.ItemProcessor.ProcessQueueMessage",
    "FullName": "MyApp.Processors.ItemProcessor.ProcessQueueMessage",
    "ShortName": "ItemProcessor.ProcessQueueMessage",
    "Parameters": [
      {
        "Type": "QueueTrigger",
        "AccountName": "MyStorageAccount",
        "QueueName": "stuff-processor",
        "Name": "sourceFeedItemQueueItem"
      },
      {
        "Type": "BindingData",
        "Name": "dequeueCount"
      },
      {
        "Type": "ParameterDescriptor",
        "Name": "logger"
      }
    ]
  },
  "Arguments": {
    "sourceFeedItemQueueItem": "{\"SourceFeedUpdateID\":437530,\"PodcastFeedID\":\"2d48D2sf2\"}",
    "dequeueCount": "96",
    "logger": null
  },
  "Reason": "AutomaticTrigger",
  "ReasonDetails": "New queue message detected on 'stuff-processor'.",
  "StartTime": "2017-02-22T00:07:40.6017341+00:00",
  "OutputBlob": {
    "ContainerName": "azure-webjobs-hosts",
    "BlobName": "output-logs/1ffd3c7b012c043438ed12af0d39ae2c9.txt"
  },
  "ParameterLogBlob": {
    "ContainerName": "azure-webjobs-hosts",
    "BlobName": "output-logs/1cf2c1b012sa0d3438ee12daf0d39ae2c9.params.txt"
  },
  "LogLevel": "Info",
  "HostInstanceId": "d1825bdb-d92a-4657-81a4-36253e01ea5e",
  "HostDisplayName": "ItemProcessor",
  "SharedQueueName": "azure-webjobs-host-490daea03c70316f8aa2509438afe8ef",
  "InstanceQueueName": "azure-webjobs-host-d18252sdbd92a4657d1a436253e01ea5e",
  "Heartbeat": {
    "SharedContainerName": "azure-webjobs-hosts",
    "SharedDirectoryName": "heartbeats/490baea03cfdfd0416f8aa25aqr438afe8ef",
    "InstanceBlobName": "zd1825bdbdsdgga465781a436q53e01ea5e",
    "ExpirationInSeconds": 45
  },
  "WebJobRunIdentifier": {
    "WebSiteName": "myappengine",
    "JobType": "Continuous",
    "JobName": "ItemProcessor",
    "RunId": ""
  }
}
What I'm further looking for, though, are logs which would show me the detail for a particular queue item where processing succeeds (and hence the item is removed from the queue) or fails due to an exception and the item is placed in the poison queue. So far I haven't found any logs showing that detail. The log files referenced in the output above do not contain data of this sort.
UPDATE 2: I looked at the state of my poison queue and it seems like it could be a smoking gun, but I'm too dense to put 2 and 2 together. Looking at the screenshot of the queue below, you can see the message with the ID (left column) 431210 in there many times. The fact that it appears multiple times says to me that the message in the original queue is failing improperly.
As mentioned by Rob W, this issue exists when using WindowsAzure.Storage > 7.1.2. The issue has apparently been fixed in issue #1141, but the fix has not yet made it into a release.
Contributor asifferman has shared a code snippet in a comment on issue #985 that appears to resolve the problem (it worked perfectly for me).
In case of link rot, and to meet SO rules, here's the post along with the code snippet:
For those (like me) who cannot wait for the next release to get the
WebJobs SDK to work with the latest releases of Azure Storage, and
based on the explanations of @brettsam, you can simply write a custom
QueueProcessorFactory to create a new CloudQueueMessage in
CopyMessageToPoisonQueueAsync.
namespace ConsoleApplication1
{
    using Microsoft.Azure.WebJobs.Host.Queues;
    using Microsoft.WindowsAzure.Storage.Queue;
    using System.Threading;
    using System.Threading.Tasks;

    public class CustomQueueProcessorFactory : IQueueProcessorFactory
    {
        public QueueProcessor Create(QueueProcessorFactoryContext context)
        {
            return new CustomQueueProcessor(context);
        }

        private class CustomQueueProcessor : QueueProcessor
        {
            public CustomQueueProcessor(QueueProcessorFactoryContext context)
                : base(context)
            {
            }

            protected override Task CopyMessageToPoisonQueueAsync(CloudQueueMessage message, CloudQueue poisonQueue, CancellationToken cancellationToken)
            {
                var newMessage = new CloudQueueMessage(message.Id, message.PopReceipt);
                newMessage.SetMessageContent(message.AsBytes);
                return base.CopyMessageToPoisonQueueAsync(newMessage, poisonQueue, cancellationToken);
            }
        }
    }
}
Then in your Main, you just have to set the custom queue processor
factory in the job host configuration:
var config = new JobHostConfiguration();
config.Queues.QueueProcessorFactory = new CustomQueueProcessorFactory();
I could get it working with WindowsAzure.Storage 8.1.1 and
Microsoft.Azure.WebJobs 2.0.0. Hope that helps!
If you are still seeking an answer, we tried some of the answers listed without success. It turned out to be a version issue between the Storage SDK (WindowsAzure.Storage) and the WebJobs SDK (Microsoft.Azure.WebJobs). To fix it, we ended up having to downgrade our version of the Storage SDK to 7.2.1 (we had recently upgraded to 8.1.1). Based on the article below, the engineers are now aware of the problems and will hopefully have it fixed soon:
https://github.com/Azure/azure-webjobs-sdk/issues/1045
The MaxDequeueCount property works correctly for me when I configure it, so it is very odd that it is not working for you. When I set config.Queues.MaxDequeueCount = 2; I get the expected result (please refer to the screenshot).
We could also use dequeueCount to control the number of retries. The following demo code allows no retries:
public void ProcessQueueMessage([QueueTrigger("queue")] string item, int dequeueCount, TextWriter logger)
{
    if (dequeueCount == 1)
    {
        if (item == "exception")
        {
            throw new Exception();
        }
        logger.WriteLine($"NewMsge: {item}");
        Console.WriteLine($"NewMsge: {item}");
    }
}
I suspect it's because you're not actually running the binaries that you think you are in Azure. This one threw me for a loop as well.
When you're running triggered WebJobs on Azure, publishing a new version of the WebJob doesn't cause the old triggered WebJob to be immediately unloaded and the new one started. If you look at your WebJob logs, I suspect you will not see a restart when you republish.
This is because Kudu by default copies all of your WebJob files to a temp directory and executes them. From the Kudu WebJob docs:
The WebJob is copied to a temporary directory under %TEMP%\jobs\{job type}\{job name}\{random name} and will run from there. This option prevents the original WebJob binaries from being locked, which might cause issues redeploying the WebJob, for example updating an .exe file that is currently running.
The only success I've had in making sure that a newly published triggered WebJob is actually running is to do the following:
1. Log into the Kudu console at https://yourappname.scm.azurewebsites.net, using the same credentials you use for the Azure Portal.
2. Once logged in, click on the Process Explorer menu option at the top. Find your WebJob process that's currently running, and kill it.
3. FTP into your Web App. Browse to the directory containing your WebJob code, and delete it. It should be under /app_data/jobs/triggered/[your webjob name].
4. Hop over to the portal, browse to the Web App management blade that hosts the WebJob, click on the WebJobs menu option, and confirm that the old WebJob is no longer there.
5. Publish the new WebJob from Visual Studio.
That should guarantee that you're running the code you published. Hope this helps.
I am seeing the same thing, with messages going way past the max dequeue count. I will post more details in a bit, but I am also seeing what appears to be a very large number of messages end up in the poison queue. So I suspect that it adds the message to the poison queue after 5 attempts but keeps retrying, which ends up putting lots of copies (hundreds) in the poison queue.
For anyone using the Azure WebJobs v3.x SDK:
In v3.x, host.json does not work for WebJobs.
Instead, version 3.x uses the standard ASP.NET Core APIs, so you need to configure it using the ConfigureWebJobs method:
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();
        b.AddAzureStorage(a =>
        {
            a.BatchSize = 8;
            a.NewBatchThreshold = 4;
            a.MaxDequeueCount = 4;
            a.MaxPollingInterval = TimeSpan.FromSeconds(15);
        });
    });
    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
Docs: https://learn.microsoft.com/pt-pt/azure/app-service/webjobs-sdk-how-to#queue-storage-trigger-configuration
