How to specify EventHub Consumer Group in a WebJob? - azure

I am using the WebJobs SDK bindings for Event Hubs as described here:
https://github.com/Azure/azure-webjobs-sdk/wiki/EventHub-support
While the WebJob is running, trying to run the Azure Service Bus Explorer against the same hub results in this exception:
Exception: A receiver with a higher epoch '14' already exists. A new receiver with epoch 0 cannot be created.
Make sure you are creating receiver with increasing epoch value to ensure connectivity, or ensure all old epoch receivers are closed or disconnected.
From what I understand, this is caused by the two listeners (the WebJob and Service Bus Explorer) using the same Consumer Group.
So my question is: how can I specify a different Consumer Group in my WebJob?
My current code looks like this:
Program.cs:
var config = new JobHostConfiguration()
{
    NameResolver = new NameResolver()
};

string eventHubConnectionString = ConfigurationManager.ConnectionStrings["EventHub"].ConnectionString;
string eventHubName = ConfigurationManager.AppSettings["EventHubName"];
string eventProcessorHostStorageConnectionString = ConfigurationManager.ConnectionStrings["EventProcessorHostStorage"].ConnectionString;

var eventHubConfig = new EventHubConfiguration();
eventHubConfig.AddReceiver(eventHubName, eventHubConnectionString, eventProcessorHostStorageConnectionString);
config.UseEventHub(eventHubConfig);

var host = new JobHost(config);
host.RunAndBlock();
Functions.cs:
public class Functions
{
    public static void Trigger([EventHubTrigger("%EventHubName%")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
[Edit - Bonus Question]
I don't fully grasp how Consumer Groups and the 'epoch' work. Is one Consumer Group limited to one receiver?

The EventHubTrigger has an optional ConsumerGroup property (source). So, based on that, modify the trigger like this:
public class Functions
{
    public static void Trigger([EventHubTrigger("%EventHubName%", ConsumerGroup = "MyConsumerGroup")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}

Related

Not able to get Azure function to trigger on Event Hub

I've been trying to get started with Azure Event Hubs and Azure Functions.
While following the documentation exactly, I'm unable to get the Event Hub trigger to work.
I have the following setup:
The function I call from an HTTP function (to generate events):
@EventHubOutput(name = "message-new", eventHubName = "KCETest1", connection = "KCETest1_all_EVENTHUB", dataType = "string")
public String sendOrder(ExecutionContext context) {
    return "foobar";
}
Snapshot of my Event Hub showing that events are arriving:
[Screenshot: Event Hub receiving messages]
However, I cannot get anything to trigger an Azure Function.
I tried all of the below just to get at least something written to the logs
(as you can see, I've been playing around with all the variables, thinking one of them might be the root cause):
public class EventHubTriggerJava1 {
    /**
     * This function will be invoked when an event is received from Event Hub.
     */
    @FunctionName("EventHubTriggerJava1")
    public void run(
        @EventHubTrigger(name = "msg", eventHubName = "KCETest1", connection = "KCETest1_all_EVENTHUB", consumerGroup = "$Default", cardinality = Cardinality.ONE, dataType = "string") String message,
        final ExecutionContext context
    ) {
        context.getLogger().info("Java Event Hub trigger 1 function executed.");
        context.getLogger().info("Length:" + message);
    }

    @FunctionName("EventHubTriggerJava2")
    public void run(
        @EventHubTrigger(name = "msg2", eventHubName = "testhub1", connection = "KCETest1_all_EVENTHUB", consumerGroup = "$Default", cardinality = Cardinality.ONE) EventData message,
        final ExecutionContext context
    ) {
        context.getLogger().info("Java Event Hub trigger 2 function executed.");
        context.getLogger().info("Length:" + message.toString());
    }

    @FunctionName("EventHubTriggerJava3")
    public void run(
        @EventHubTrigger(name = "msg3", eventHubName = "testhub1", connection = "KCETest1_all_EVENTHUB", consumerGroup = "$Default") List<String> message,
        final ExecutionContext context
    ) {
        context.getLogger().info("Java Event Hub trigger 3 function executed.");
        context.getLogger().info("Length:" + message.get(0).toString());
    }
}
I'm running out of ideas. Any suggestions from your side?
Thanks a lot!

How to trigger an Azure Function from a Service Bus message and output a message to Service Bus within a single Azure Function

I need to trigger an Azure Function from a Service Bus message; it will do some logic and write a message back to Service Bus, which will potentially trigger another Azure Function, and so on.
I don't fully understand how to do this properly in a standard way.
Based on the document Azure Service Bus trigger for Azure Functions, we can do the first part: trigger an Azure Function from a Service Bus message.
Code:
#FunctionName("sbtopicprocessor")
public void run(
#ServiceBusTopicTrigger(
name = "message",
topicName = "mytopicname",
subscriptionName = "mysubscription",
connection = "ServiceBusConnection"
) String message,
final ExecutionContext context
) {
context.getLogger().info(message);
}
Based on the document Azure Service Bus output binding for Azure Functions, we can do the second part: output a message to Service Bus.
Code:
#FunctionName("sbtopicsend")
public HttpResponseMessage run(
#HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
#ServiceBusTopicOutput(name = "message", topicName = "mytopicname", subscriptionName = "mysubscription", connection = "ServiceBusConnection") OutputBinding<String> message,
final ExecutionContext context) {
String name = request.getBody().orElse("Azure Functions");
message.setValue(name);
return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
But I need both the input and output functionality within one function. Should I call the second function from the first one via HTTP, which seems a little awkward to me, or should I use the Service Bus SDK inside the first function?
Thanks for any help.
I don't work with Java but you can combine the Trigger and the Output in one function.
#FunctionName("sbtopicprocessor")
public void run(
#ServiceBusTopicTrigger(
name = "message",
topicName = "mytopicname",
subscriptionName = "mysubscription",
connection = "ServiceBusConnection"
) String messageRequest,
#ServiceBusTopicOutput(name = "message", topicName = "mytopicname", subscriptionName = "mysubscription", connection = "ServiceBusConnection") OutputBinding<String> message, final ExecutionContext context
) {
message.setValue(messageRequest.name);
}
You can combine any type of Trigger with any type of Output in one function.
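For example, here is a minimal sketch of a function that combines a Service Bus queue trigger with an Event Hub output; the queue name, hub name, and connection setting names are placeholders for illustration, not taken from the question:
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.OutputBinding;
import com.microsoft.azure.functions.annotation.EventHubOutput;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.ServiceBusQueueTrigger;

public class QueueToEventHubFunction {
    @FunctionName("queueToEventHub")
    public void run(
        @ServiceBusQueueTrigger(name = "msg", queueName = "myqueue", connection = "ServiceBusConnection") String msg,
        @EventHubOutput(name = "event", eventHubName = "myhub", connection = "EventHubConnection") OutputBinding<String> event,
        final ExecutionContext context
    ) {
        // forward the Service Bus message payload to the Event Hub unchanged
        event.setValue(msg);
    }
}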

Abstracting Spring Cloud Stream Producer and Consumer code

I have a service that produces and consumes messages on different Spring Cloud Stream channels (bound to EventHub/Kafka topics). There are several such services, all set up similarly.
The configuration looks like this:
public interface MessageStreams {
    String WORKSPACE = "workspace";
    String UPLOADNOTIFICATION = "uploadnotification";
    String BLOBNOTIFICATION = "blobnotification";
    String INGESTIONSTATUS = "ingestionstatusproducer";

    @Input(WORKSPACE)
    SubscribableChannel workspaceChannel();

    @Output(UPLOADNOTIFICATION)
    MessageChannel uploadNotificationChannel();

    @Input(BLOBNOTIFICATION)
    SubscribableChannel blobNotificationChannel();

    @Output(INGESTIONSTATUS)
    MessageChannel ingestionStatusChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
The producer/publisher code looks like this:
@Service
@Slf4j
public class IngestionStatusEventPublisher {
    private final MessageStreams messageStreams;

    public IngestionStatusEventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public void sendIngestionStatusEvent() {
        log.info("Sending ingestion status event");
        System.out.println("Sending ingestion status event");
        MessageChannel messageChannel = messageStreams.ingestionStatusChannel();
        boolean messageSent = messageChannel.send(MessageBuilder
                .withPayload(IngestionStatusMessage.builder()
                        .correlationId("some-correlation-id")
                        .status("done")
                        .source("some-source")
                        .eventTime(OffsetDateTime.now())
                        .build())
                .setHeader("tenant-id", "some-tenant")
                .build());
        log.info("Ingestion status event sent successfully {}", messageSent);
    }
}
Similarly, I have multiple other publishers which publish to different Event Hubs/topics. Notice that a tenant-id header is set on each published message. This is specific to my multi-tenant application, to track the tenant context. Also notice that I am looking up the channel to publish to while sending the message.
My consumer code looks like this:
@Component
@Slf4j
public class IngestionStatusEventHandler {
    private AtomicInteger eventCount = new AtomicInteger();

    @StreamListener(TestMessageStreams.INGESTIONSTATUS)
    public void handleEvent(@Payload IngestionStatusMessage message, @Header(name = "tenant-id") String tenantId) throws Exception {
        log.info("New ingestion status event received: {} in Consumer: {}", message, Thread.currentThread().getName());
        // set the tenant context as thread local from the header.
    }
}
Again, I have several such consumers, and in each consumer a tenant context is set based on the incoming tenant-id header sent by the publisher.
My questions are:
How do I get rid of the boilerplate code of setting the tenant-id header in the publisher and setting the tenant context in the consumer, by abstracting it into a library that could be included in all the different services I have?
Also, is there a way of dynamically identifying the channel based on the type of the message being published, for example IngestionStatusMessage.class in the given scenario?
To set the tenant-id header in common code and avoid copy/pasting it into every microservice, you can use a ChannelInterceptor and make it global with @GlobalChannelInterceptor and its patterns option; see the sketch after the links below.
See more info in Spring Integration: https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/core.html#channel-interceptors
https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/overview.html#configuration-enable-integration
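As a rough illustration, a shared configuration along these lines could live in a common library. The class name, the channel pattern, and the thread-local tenant holder are hypothetical and would come from your own code:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class TenantHeaderConfiguration {

    // Hypothetical thread-local tenant holder; in the real library this would be
    // wherever the tenant context is established for the current request.
    public static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    // Applied to all channels matching the pattern ("*" here for illustration;
    // it can be narrowed to just the output channels), so every service that
    // includes this configuration gets the tenant-id header set automatically.
    @Bean
    @GlobalChannelInterceptor(patterns = "*")
    public ChannelInterceptor tenantHeaderInterceptor() {
        return new ChannelInterceptor() {
            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                if (message.getHeaders().containsKey("tenant-id")) {
                    return message;
                }
                return MessageBuilder.fromMessage(message)
                        .setHeader("tenant-id", CURRENT_TENANT.get())
                        .build();
            }
        };
    }
}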
You can't make the channel selection by payload type, because the payload type is really determined from the @StreamListener method signature.
You can try to have a general @Router with a Message<?> expectation and then return a particular channel name to route to, according to that request message's context; a sketch follows the link below.
See https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/message-routing.html#messaging-routing-chapter
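A minimal sketch of such a router; the "outboundRouting" input channel name is hypothetical, while IngestionStatusMessage and MessageStreams.INGESTIONSTATUS are taken from the code above:
import org.springframework.integration.annotation.Router;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class MessageTypeRouter {

    // Publishers send to the (hypothetical) "outboundRouting" channel; the router
    // returns the name of the real output channel based on the payload type.
    @Router(inputChannel = "outboundRouting")
    public String route(Message<?> message) {
        Object payload = message.getPayload();
        if (payload instanceof IngestionStatusMessage) {
            return MessageStreams.INGESTIONSTATUS;
        }
        throw new IllegalArgumentException(
                "No output channel mapped for payload type " + payload.getClass());
    }
}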

Best practices for poison message handling with an Azure Service Bus topic

Dealing with poison messages (ones that throw an exception while being consumed) from Azure Service Bus can lead to retry loops until the number of retries reaches the maxDeliveryCount setting of the topic subscription.
Does the SequenceNumber that Azure Service Bus adds to a message keep increasing on each failed attempt until it reaches maxDeliveryCount?
Is setting maxDeliveryCount = 1 the best practice for dealing with poison messages, so that the consumer never attempts to process a message a second time once it has failed?
Best practices depend on your application and your retry approach.
Most of the time I have seen messages fail for one of these reasons:
A dependent service is not available (Redis or SQL connection issue)
A faulty message (the message is missing a mandatory parameter or some value is incorrect)
A processing code issue (a bug in the message processing code)
For the 1st and 3rd scenarios, I created a C# WebJob to reprocess dead-letter messages.
Below is my code:
internal class Program
{
    private static string connectionString = ConfigurationSettings.AppSettings["GroupAssetConnection"];
    private static string topicName = ConfigurationSettings.AppSettings["GroupAssetTopic"];
    private static string subscriptionName = ConfigurationSettings.AppSettings["GroupAssetSubscription"];
    private static string databaseEndPoint = ConfigurationSettings.AppSettings["DatabaseEndPoint"];
    private static string databaseKey = ConfigurationSettings.AppSettings["DatabaseKey"];
    private static string deadLetterQueuePath = "/$DeadLetterQueue";

    private static void Main(string[] args)
    {
        // groupAssetSyncService, log and documentClient are initialized elsewhere in the original program.
        try
        {
            ReadDLQMessages(groupAssetSyncService, log);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            throw;
        }
        finally
        {
            documentClient.Dispose();
        }
        Console.WriteLine("All messages read successfully from the dead-letter queue");
        Console.ReadLine();
    }

    public static void ReadDLQMessages(IGroupAssetSyncService groupSyncService, ILog log)
    {
        int counter = 1;
        SubscriptionClient subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName + deadLetterQueuePath);
        while (true)
        {
            BrokeredMessage message = subscriptionClient.Receive(TimeSpan.FromMilliseconds(500));
            if (message != null)
            {
                string messageBody = new StreamReader(message.GetBody<Stream>(), Encoding.UTF8).ReadToEnd();
                groupSyncService.UpdateDataAsync(messageBody).GetAwaiter().GetResult();
                Console.WriteLine($"{counter} message received");
                counter++;
                message.Complete();
            }
            else
            {
                break;
            }
        }
        subscriptionClient.Close();
    }
}
For the 2nd scenario, we manually verify dead-letter messages (via a custom UI or Service Bus Explorer); sometimes we correct the message data, and sometimes we purge the messages and clear the queue.
I wouldn't recommend maxDeliveryCount = 1. If a transient network/connection issue occurs, the built-in retry will reprocess the message and clear it from the queue. When I was working on a finance application I kept maxDeliveryCount = 5, while in my IoT application it is maxDeliveryCount = 3.
If you are reading messages in a batch, the complete batch will be reprocessed if an error occurs on any one message.
Regarding SequenceNumber: the sequence number can be trusted as a unique identifier, since it is assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where broker ownership transitions between nodes.

What is the alternative to the Service Bus MessagingFactory in .NET Core?

While converting my project from .NET Framework 4.7 to .NET Core 2.1, I'm facing an issue with the Service Bus MessagingFactory: I don't see any MessagingFactory class in the new NuGet package Microsoft.Azure.ServiceBus for .NET Core.
My .NET Framework 4.7 code:
private static readonly string messagingConnectionString = Environment.GetEnvironmentVariable("ServiceBusConnection");

private static Lazy<MessagingFactory> lazyMessagingFactory = new Lazy<MessagingFactory>(() =>
{
    return MessagingFactory.CreateFromConnectionString(messagingConnectionString);
});

public static MessagingFactory MessagingFactory
{
    get
    {
        return lazyMessagingFactory.Value;
    }
}

// lazyEventhubMessagingFactory is defined elsewhere in the original code, analogous to lazyMessagingFactory.
public static MessagingFactory EventHubMessageFactory
{
    get
    {
        return lazyEventhubMessagingFactory.Value;
    }
}

public async Task SendMessageToQueueAsync(string queueName, string message)
{
    QueueClient queueClient = MessagingFactory.CreateQueueClient(queueName);
    BrokeredMessage brokeredMessage = new BrokeredMessage(new MemoryStream(Encoding.UTF8.GetBytes(message)), true);
    await queueClient.SendAsync(brokeredMessage);
}
This was a best practice for a high-performance application. I also have many queues under a single Service Bus namespace, and I push messages based on configuration: I don't want to create a QueueClient object on every request, and I don't want to maintain a connection string for every queue.
What is the alternative to MessagingFactory in .NET Core?
There are major changes when you are migrating .NET Framework code to .NET Core; see the Guide for migrating to Azure.Messaging.ServiceBus from Microsoft.Azure.ServiceBus.
Example below:
static async Task Main(string[] args)
{
    string connectionString = "<connection_string>";
    string queueName = "<queue_name>";
    // since ServiceBusClient implements IAsyncDisposable we create it with "await using"
    await using var client = new ServiceBusClient(connectionString);
    // create the sender
    ServiceBusSender sender = client.CreateSender(queueName);
    for (int i = 0; i < 10; i++)
    {
        // create a message that we can send. UTF-8 encoding is used when providing a string.
        ServiceBusMessage message = new ServiceBusMessage($"Hello world {i}!");
        // send the message
        await sender.SendMessageAsync(message);
    }
    await sender.DisposeAsync();
}
https://github.com/MicrosoftDocs/azure-docs/issues/46830
https://github.com/Azure/azure-service-bus-dotnet/issues/556
While MessagingFactory is gone, the idea of pooling and sharing connections is still there. When you create your clients, passing an existing connection will reuse it; passing a connection string will cause each client to establish its own new connection.
So you can manually create a ServiceBusConnection, or reuse the connection of an existing client, and pass the connection object in the constructors of the clients you create. Take care not to close a connection accidentally, e.g. by closing the client that created it.
