Triggering WebJob Method Based on Message Property - azure

I have an Azure WebJobs project which handles a number of time-consuming tasks triggered by website actions. It works fine.
But the mapping from message to method call uses magic strings:
public class SomeClass
{
    public async Task ProcessMessage(
        [QueueTrigger("%" + nameof(ContainerQueueConstants.FilteredVoterFiles) + "%")] AgencyOutreachMessage msg,
        TextWriter azureLogWriter
    )
    {
        PhaseNames.SetNames("Exporting Data", "Job Completed");
        await ExecuteFromMessage(msg, azureLogWriter, Launch);
    }
}
public class ContainerQueueConstants
{
    public const string ImportFile = "import-file";
    public const string VoterTraits = "voter-traits";
    public const string Voter = "voter";
    public const string FilteredVoterFiles = "filtered-voter-files";
}
I'd like to get away from using hard-coded strings for queue names. Ideally, I'd like to be able to route a message to a particular method based on the value of a property contained in the message.
But I'm not sure if that's even possible, at least in the 1.1.x version of the WebJobs SDK.
Suggestions or advice appreciated.

I suggest using N CloudQueue instances to monitor N different storage queues. Since you're doing this in a WebJob, you will probably run it as a continuous WebJob and perform the polling for each queue yourself. You will also have to take responsibility for removing successfully processed messages.
The QueueTriggerAttribute has built-in support for dead-lettering. I do not believe there is automatic dead-lettering support if you do not use the QueueTriggerAttribute.
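Here is a minimal sketch of that polling loop, assuming the classic WindowsAzure.Storage SDK (the CloudQueue API available alongside WebJobs SDK 1.x); the queue names come from ContainerQueueConstants, while the DispatchAsync helper is illustrative, not from the original post:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueuePoller
{
    public static async Task PollQueuesAsync(string connectionString, CancellationToken token)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient client = account.CreateCloudQueueClient();
        string[] queueNames = { "import-file", "voter-traits", "voter", "filtered-voter-files" };

        while (!token.IsCancellationRequested)
        {
            foreach (string name in queueNames)
            {
                CloudQueue queue = client.GetQueueReference(name);
                CloudQueueMessage msg = await queue.GetMessageAsync();
                if (msg == null) continue;           // nothing waiting on this queue

                await DispatchAsync(name, msg);      // your own routing logic goes here
                await queue.DeleteMessageAsync(msg); // you own deletion of processed messages
            }
            await Task.Delay(TimeSpan.FromSeconds(5), token); // back off between sweeps
        }
    }

    // Hypothetical router: inspect the queue name or a message property and
    // call the matching handler method.
    private static Task DispatchAsync(string queueName, CloudQueueMessage msg)
    {
        return Task.CompletedTask;
    }
}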

Related

azure webhook to stream splunk data

I want to schedule a Splunk report to an Azure webhook and persist it into Cosmos DB (after some processing). This tutorial gave me some insight on how to process and persist data into Cosmos DB via Azure Functions (in Java). To solve the next part of the puzzle, I'm reaching out for some advice on how to go about:
How do I set up and host a webhook on Azure?
Should I set up an HttpTrigger inside the EventHubOutput function and deploy it into the function app? Or should I use the webhook from Azure Event Grid? (Not clear on how to do this.) I'm NOT looking to stream any heavy volumes of data and want to keep the consumption cost low. So, which route should I take here? Any pointers to tutorials would help.
How do I handle webhook data processing with @EventHubOutput (referring to the Java example in the tutorial)? What setup and configuration do I need here? Any working examples would help.
I ended up using just the @HttpTrigger and binding the output using @CosmosDBOutput to persist the data. Something like this; I'd like to know if there are any better approaches.
public class Function {
    @FunctionName("PostData")
    public HttpResponseMessage run(
        @HttpTrigger(
            name = "req",
            methods = {HttpMethod.GET, HttpMethod.POST},
            authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        @CosmosDBOutput(
            name = "databaseOutput",
            databaseName = "SplunkDataSource",
            collectionName = "loginData",
            connectionStringSetting = "CosmosDBConnectionString")
        OutputBinding<String> document,
        final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse the payload; use orElse(null) so an empty body doesn't throw.
        String data = request.getBody().orElse(null);
        if (data == null) {
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body(
                "Please pass a name on the query string or in the request body").build();
        } else {
            // Write the data to the Cosmos document.
            document.setValue(data);
            context.getLogger().info("Persisting payload to db: " + data);
            return request.createResponseBuilder(HttpStatus.OK).body(data).build();
        }
    }
}

Batch insert to Table Storage via Azure function

I have the following Azure Storage queue-triggered Azure Function, which is bound to an Azure Table for output.
[FunctionName("TestFunction")]
public static async Task<IActionResult> Run(
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem,
[Table("TableXyzObject"), StorageAccount("connection")] IAsyncCollector<TableXyzObject> tableXyzObjectRecords)
{
var tableAbcObject = new TableXyzObject();
try
{
tableAbcObject.PartitionKey = DateTime.UtcNow.ToString("MMddyyyy");
tableAbcObject.RowKey = Guid.NewGuid();
tableAbcObject.RandomString = myQueueItem;
await tableXyzObjectRecords.AddAsync(tableAbcObject);
}
catch (Exception ex)
{
}
return new OkObjectResult(tableAbcObject);
}
public class TableXyzObject : TableEntity
{
public string RandomString { get; set; }
}
}
}
I am looking for a way to read 15 messages from a poison queue, which is different from myqueue (the queue trigger on the above Azure Function), and batch insert them into a dynamic table (tableXyz, tableAbc, etc.) based on a few conditions in the queue message. Since we have different poison queues, we want to pick up messages from multiple poison queues (the name of the poison queue will be provided in the myqueue message). This avoids spinning up a new Azure Function every time we have a new poison queue.
Following is the approach I have in mind:
--> I might have to get 15 queue messages using a new QueueClient and its ReceiveMessages(15) method from the Azure.Storage.Queues package
--> And do a batch insert using the TableBatchOperation class (cannot use the output binding)
Is there any better approach than this?
Unfortunately, storage queues don't have a great solution for this. If you want it to be dynamic, then implementing your own queue clients and table outputs is probably your best option. The one thing I would suggest changing is to use a timer trigger instead of a queue trigger. If you are putting a message on your trigger queue every time you add something to the poison queue, it would work as is; but if not, a timer trigger ensures that poisoned messages are handled in a timely fashion.
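A hedged sketch of that manual approach, assuming the Azure.Storage.Queues and Microsoft.Azure.Cosmos.Table packages and reusing the TableXyzObject entity from the question; the poison-queue and table names are parameters you would resolve from the trigger message:

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;
using Microsoft.Azure.Cosmos.Table;

public static async Task DrainPoisonQueueAsync(string connectionString, string poisonQueueName, string tableName)
{
    var queueClient = new QueueClient(connectionString, poisonQueueName);
    QueueMessage[] messages = (await queueClient.ReceiveMessagesAsync(maxMessages: 15)).Value;

    // Entities in a single batch must share a PartitionKey, and a batch holds at most 100 operations.
    var batch = new TableBatchOperation();
    string partitionKey = DateTime.UtcNow.ToString("MMddyyyy");
    foreach (QueueMessage message in messages)
    {
        batch.Insert(new TableXyzObject
        {
            PartitionKey = partitionKey,
            RowKey = Guid.NewGuid().ToString(),
            RandomString = message.MessageText
        });
    }

    if (batch.Count > 0)
    {
        CloudTable table = CloudStorageAccount.Parse(connectionString)
            .CreateCloudTableClient()
            .GetTableReference(tableName);
        await table.ExecuteBatchAsync(batch);
    }

    // Only delete the poison messages once the batch insert has succeeded.
    foreach (QueueMessage message in messages)
    {
        await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }
}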
Original Answer (incorrectly relating to Service Bus queues)
Bryan is correct that creating a new queue client inside your function isn't the best way to go about this. Fortunately, the Service Bus extension does allow batching. Unfortunately the docs haven't quite caught up yet.
Just make your trigger receive an array:
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem[]
You can set your max batch size in the host.json:
"extensions": {
"serviceBus": {
"batchOptions": {
"maxMessageCount": 15
}
}
}
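For reference, a Service Bus version of that batched trigger (which is what this answer was actually describing) might look like the sketch below; the queue and connection setting names are illustrative:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BatchFunction
{
    [FunctionName("BatchFunction")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string[] messages,
        ILogger log)
    {
        // The extension delivers up to maxMessageCount messages per invocation.
        foreach (string message in messages)
        {
            log.LogInformation("Processing {Message}", message);
        }
    }
}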

Azure Functions dynamic input / output binding based on Service Bus trigger message content

I'm trying to create an Azure Functions 2.0 function with multiple bindings. The function is triggered by an Azure Service Bus queue message, and I would like to read a blob based on the message content. I've already tried the code below:
public static class Functions
{
    [FunctionName(nameof(MyFunctionName))]
    public static async Task MyFunctionName(
        [ServiceBusTrigger(Consts.QueueName, Connection = Consts.ServiceBusConnection)] string message,
        [Blob("container/{message}-xyz.txt", FileAccess.Read, Connection = "StorageConnName")] string blobContent
    )
    {
        // processing the blob content
    }
}
but I'm getting the following error:
Microsoft.Azure.WebJobs.Host: Error indexing method 'MyFunctionName'. Microsoft.Azure.WebJobs.Host: Unable to resolve binding parameter 'message'. Binding expressions must map to either a value provided by the trigger or a property of the value the trigger is bound to, or must be a system binding expression (e.g. sys.randguid, sys.utcnow, etc.).
I saw somewhere that dynamic bindings can be used, but perhaps it's not possible to create an input binding based on another input binding. Any ideas?
I'm actually surprised that didn't work. There are lots of quirks with bindings. Please give this a shot:
public static class Functions
{
    [FunctionName(nameof(MyFunctionName))]
    public static async Task MyFunctionName(
        [ServiceBusTrigger(Consts.QueueName, Connection = Consts.ServiceBusConnection)] MyConcreteMessage message,
        [Blob("container/{message}-xyz.txt", FileAccess.Read, Connection = "StorageConnName")] string blobContent
    )
    {
        // processing the blob content
    }
}
Create a DTO:
public class MyConcreteMessage
{
    public string message { get; set; }
}
Ensure that the message you send on the Service Bus looks something like this:
{
"message": "MyAwesomeFile"
}
It should now be able to resolve your binding path container/{message}-xyz.txt by recognizing the {message} token.

How to send Azure SignalR messages to clients on all instances of a multi-instance Azure Application

We are evaluating how to send messages to connected clients via SignalR. Our application is published in Azure and has multiple instances. We can successfully pass messages to clients connected to the same instance, but not to clients on other instances.
We initially looked at Service Bus, but we (perhaps mistakenly) understood that Azure SignalR should essentially be a service bus that handles all of the backend plumbing for us.
We set up SignalR in Startup.cs as follows:
public void ConfigureServices(IServiceCollection services)
{
    var signalRConnString = Configuration.GetConnectionString("AxiomSignalRPrimaryEndPoint");
    services.AddSignalR()
        .AddAzureSignalR(signalRConnString)
        .AddJsonProtocol(options =>
        {
            options.PayloadSerializerSettings.ContractResolver = new DefaultContractResolver();
        });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAzureSignalR(routes =>
    {
        routes.MapHub<CallRegistrationHub>("/callRegistrationHub");
        routes.MapHub<CaseHeaderHub>("/caseHeaderHub");
        routes.MapHub<EmployeesHub>("/employeesHub");
    });
}
Issue
We have to store some objects that should probably be on the service bus, not stored in an individual instance; however, I am unsure how to tell the hub that the objects should be on the bus and not internal to that specific instance of the hub, as below:
public class EmployeesHub : Hub
{
    private static volatile List<Tuple<string, string, string, string, int>> UpdateList = new List<Tuple<string, string, string, string, int>>();
    private static volatile List<Tuple<string, int>> ConnectedClients = new List<Tuple<string, int>>();
}
We have functions that need to send messages to all connected clients that are looking at the current record, regardless of which instance they reside in:
public async Task LockField(string fieldName, string value, string userName, int IdRec)
{
    var clients = ConnectedClients.Where(x => x.Item1 != Context.ConnectionId && x.Item2 == IdRec).Select(x => x.Item1).Distinct().ToList();
    clients.ForEach(async x =>
    {
        await Clients.Client(x).SendAsync("LockField", fieldName, value, userName, true);
    });
    if (!UpdateList.Any(x => x.Item1 == Context.ConnectionId && x.Item3 == fieldName && x.Item5 == IdRec))
    {
        UpdateList.Add(new Tuple<string, string, string, string, int>(Context.ConnectionId, userName, fieldName, value, IdRec));
    }
}
This does not work across instances (which makes sense, because each instance has its own objects). However, we were hoping that by using Azure SignalR instead of plain SignalR (the Azure SignalR connection string points at the Azure service endpoint), it would handle the service-bus functionality for us. We are not sure what steps to take to get this functioning correctly.
Thanks.
The reason for this issue is that I was preemptively trying to limit message traffic: I was only sending messages to clients that were looking at the same record. However, because my objects were instance-specific, it would only grab the connection IDs from the current instance's object.
Further testing (using ARR affinity) proves that on a Clients.All() call, all clients, including those on different instances, receive the message.
So our Azure SignalR setup appears to be correct.
Current POC Solution (currently testing):
- When a client registers, we broadcast to all connected clients: "What field do you have locked for this Id?"
- If a client is on a different Id, it ignores the message.
- If a client has no fields locked, it ignores the message.
- If a client has a field locked, it responds to the message with the required info.
- Azure SignalR then rebroadcasts the data required to perform a lock.
This increases the message count, but not significantly, and it resolves the issue of multiple instances holding different connected client IDs; a rough sketch of this handshake follows.
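A minimal sketch of that handshake as hub methods (the method and event names here are illustrative, not from our actual code):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class EmployeesHub : Hub
{
    // A newly registered client asks everyone else what they have locked for this record.
    public async Task RequestLockState(int idRec)
    {
        await Clients.Others.SendAsync("WhoHasLocks", idRec, Context.ConnectionId);
    }

    // A client holding a lock replies, and the hub relays the lock details to the requester.
    public async Task ReportLock(string requesterConnectionId, string fieldName, string value, string userName)
    {
        await Clients.Client(requesterConnectionId).SendAsync("LockField", fieldName, value, userName, true);
    }
}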
Just a thought, but have you tried using SignalR Groups? https://learn.microsoft.com/en-us/aspnet/core/signalr/groups?view=aspnetcore-2.2#groups-in-signalr
You could try creating a group for each combination of IdRec and fieldName and then just broadcast messages to the group. This is the gist of how I think your LockField function might look:
public async Task LockField(string fieldName, string value, string userName, int IdRec)
{
    string groupName = GetGroupName(IdRec, fieldName);
    await Clients.Group(groupName).SendAsync("LockField", fieldName, value, userName, true);
    await this.Groups.AddToGroupAsync(Context.ConnectionId, groupName);
}
You could implement the GetGroupName method however you please, so long as it produces unique strings. A simple solution might be something like
public string GetGroupName(int IdRec, string fieldName)
{
    return $"{IdRec} - {fieldName}";
}
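One possible refinement, if you go this route, is to add connections to the group when a client starts viewing a record instead of inside LockField; a hypothetical hub method:

// Illustrative only: register the caller for a record/field combination up front.
public async Task StartViewingField(int IdRec, string fieldName)
{
    await Groups.AddToGroupAsync(Context.ConnectionId, GetGroupName(IdRec, fieldName));
}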

Queue messages that are moved to Poison Queue still show as queue count, but stay hidden

I am testing the poison-message handling of the WebJob that I am building.
Everything seems to be working as expected except one strange thing:
When a message is moved to the “-poison” queue, its ghost seems to remain hidden (invisible) in the main job queue. That means if I have 6 poison messages moved to the “-poison” queue, Storage Explorer shows “Showing 0 of 6 messages in queue”. I cannot see the 6 hidden messages in Storage Explorer.
I tried deleting the job queue and recreating it, but the strange issue still happens after I run my tests. Storage Explorer shows “Showing 0 of 6 messages in queue”.
What is happening behind the scenes?
Update 1
I did some investigation, and I think the WebJobs SDK does not delete the poison message.
I went through the WebJobs SDK source code, and I think this line of code is not being executed for some reason:
https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L119
Here is my Function that can help reproduce the issue:
public class Functions
{
    public static void ProcessQueueMessage([QueueTrigger("%QueueName%")] string message, TextWriter log)
    {
        if (message.Contains("Break"))
        {
            throw new Exception($"Error while processing message {message}");
        }
        log.WriteLine($"Processed message {message}");
    }
}
Update 2
Here is the WebJob SDK I am using:
As far as I know, Azure Storage SDK 8.x does not work well with Azure WebJobs SDK 2.0 (related issue).
If you use Storage SDK 8.x, the poison messages stay undeleted but invisible.
The workaround is to use the lower Azure Storage SDK version 7.2.1.
It will work well.
This issue should be solved in a future SDK version.
I have the same problem.
The problem is that the message copied to the poison queue is passed by reference, without a visibility time, at https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L145, so when the SDK tries to delete the message from the original queue, the service returns 404 Not Found. It is a problem in azure-webjobs-sdk, and the fix is to make this change:
await AddMessageAndCreateIfNotExistsAsync(poisonQueue, new CloudQueueMessage(message.AsString), cancellationToken);
in https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L145
We await a new version with this fix.
Custom solution
To solve this, create your own custom QueueProcessor, and in its CopyMessageToPoisonQueueAsync method create a new CloudQueueMessage from the original to pass to the poison queue; see the example below.
var config = new JobHostConfiguration();
config.Queues.QueueProcessorFactory = new CustomQueueProcessorFactory();

public class CustomQueueProcessorFactory : IQueueProcessorFactory
{
    public QueueProcessor Create(QueueProcessorFactoryContext context)
    {
        // demonstrates how the Queue.ServiceClient options can be configured
        context.Queue.ServiceClient.DefaultRequestOptions.ServerTimeout = TimeSpan.FromSeconds(30);

        // demonstrates how queue options can be customized
        context.Queue.EncodeMessage = true;

        // return the custom queue processor
        return new CustomQueueProcessor(context);
    }
}
/// <summary>
/// Custom QueueProcessor demonstrating some of the virtuals that can be overridden
/// to customize queue processing.
/// </summary>
private class CustomQueueProcessor : QueueProcessor
{
    private QueueProcessorFactoryContext _context;

    public CustomQueueProcessor(QueueProcessorFactoryContext context)
        : base(context)
    {
        _context = context;
    }

    public override async Task CompleteProcessingMessageAsync(CloudQueueMessage message, FunctionResult result, CancellationToken cancellationToken)
    {
        await base.CompleteProcessingMessageAsync(message, result, cancellationToken);
    }

    protected override async Task CopyMessageToPoisonQueueAsync(CloudQueueMessage message, CloudQueue poisonQueue, CancellationToken cancellationToken)
    {
        // Create a fresh message from the original's content so the copy is not
        // passed by reference with the original's visibility settings.
        var msg = new CloudQueueMessage(message.AsString);
        await base.CopyMessageToPoisonQueueAsync(msg, poisonQueue, cancellationToken);
    }

    protected override void OnMessageAddedToPoisonQueue(PoisonMessageEventArgs e)
    {
        base.OnMessageAddedToPoisonQueue(e);
    }
}
For anyone out there still having this issue: this should be fixed since 2.1.0-beta1-10851. The downside is that there is currently no stable release of 2.1.0 yet.
