Azure Service Bus performance issue with MassTransit

So I've been playing around with MassTransit and Azure Service Bus Premium; here's a sample of one of my consumers. The hypothetical initial load for one publisher would be about 1,000 messages a second. However, whenever I configure a consumer, it seems to average out at about 20-40 messages per loop.
cfg.ReceiveEndpoint("ReceivePoint", e =>
{
    e.PrefetchCount = 500;
    e.MaxConcurrentCalls = 20;
    e.Batch<IBlahContract>(b =>
    {
        b.MessageLimit = 500;
        b.TimeLimit = TimeSpan.FromSeconds(1);
        b.Consumer(() => new BatchBlahConsumer(
            provider.GetRequiredService<IRepository>(),
            provider.GetRequiredService<ILogger<BatchBlahConsumer>>()));
    });
});
I did try a throughput test, which managed a thousand-plus messages a second. Does anyone have any tips on how to achieve optimal performance? And might it make more sense to consider a managed RabbitMQ instance, since this needs to scale? It just feels like Azure Service Bus isn't really suited to such high throughput.
Edit: A slight addition to this: I suspect it's related to a requirement to keep prefetch at about 20, and that consumer concurrency is what really defines performance. So basically it needs consumer-level configuration based on estimated requirements, which makes me lean more towards using RabbitMQ.

Your batch message limit is 500, which is honestly way too high. With MaxConcurrentCalls set to 20, you'll always hit the timeout instead of the batch size limit, because the Azure client library will only ever deliver 20 messages at once while the batch limit is far higher (500 vs 20). You need the concurrency high enough that a batch can actually fill, or you'll always be completing batches on timeout alone.
Lower the batch size and increase MaxConcurrentCalls so that they are equal, or at least so that the batch size is no greater than the concurrent calls limit, so that batches can be completed upon message receipt instead of waiting to time out.
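A minimal sketch of a configuration along those lines, reusing the hypothetical consumer and contract from the question (the values of 100 are illustrative, not a benchmark):
cfg.ReceiveEndpoint("ReceivePoint", e =>
{
    e.PrefetchCount = 500;
    e.MaxConcurrentCalls = 100;          // raised so a full batch can be in flight at once
    e.Batch<IBlahContract>(b =>
    {
        b.MessageLimit = 100;            // no greater than MaxConcurrentCalls
        b.TimeLimit = TimeSpan.FromSeconds(1);
        b.Consumer(() => new BatchBlahConsumer(
            provider.GetRequiredService<IRepository>(),
            provider.GetRequiredService<ILogger<BatchBlahConsumer>>()));
    });
});
This way a 100-message batch can complete as soon as 100 messages are in flight, rather than every batch waiting out the one-second time limit.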

Related

Bull Queue Performance and Scalability: Queue.add(), Queue.getJob(jobId), Job.remove()

My use case is to create dynamically delayed jobs. (I am using Bull queue, which can be used to create delayed jobs.)
Based on some event, I add some more delay to the delayed interval (further delaying the job).
Since I could not find any function to update the delayed interval for a job, I came up with the following steps:
onEvent(jobId):
    // queue is of type Bull.Queue
    // job is of type Bull.Job
    job = queue.getJob(jobId)
    data = job.data
    delay = job.toJSON().delay
    job.remove()
    queue.add("jobName", {value: 1}, {jobId: jobId, delay: delay + someValue})
This pretty much solves my problem.
But I am worried about the scale at which these operations will happen.
I am expecting nearly 50K events per minute, or even more in the near future.
My queue size is expected to grow based on unique job IDs.
I am expecting more than:
1 million entries daily
around 4-5 million entries weekly
10-12 million entries monthly.
Also, after 60-70 days the delayed interval for jobs will be reached, and older jobs will be removed one by one.
I can run multiple processors to handle these delayed jobs, which is not an issue.
My queue size will stabilise after 60-70 days, at which point the queue will hold around 10 million jobs.
I can vertically scale my Redis as required.
But I want to understand the time complexity of the queries below:
queue.getJob(jobId) // Get Job By Id
job.remove() // remove job from queue
queue.add(name, data, opts) // add a delayed job to this queue
If any of these operations is O(N), or if the queue can only hold some maximum number of jobs that is less than 10 million,
then I might have to discard this design and come up with something entirely different.
I need advice from experienced folks who can guide me on how to solve this problem.
Any kind of help is appreciated.
Taking reference from the source code:
queue.getJob(jobId)
This should be O(1), since it's mostly a hash-based lookup using HMGET. You're only requesting one job, and according to the official Redis docs the time complexity of HMGET is O(N) where N is the number of fields requested; since Bull stores only a small number of fields at the hash key, this is effectively O(1).
job.remove()
Considering that a considerable number of your jobs will be delayed and only a fraction of them moved to the waiting or active queues, this should be O(log N) amortized, as it mostly uses ZREM for these operations.
queue.add(name, data, opts)
For job addition to the delayed queue, Bull uses ZADD, so this is again O(log N).
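To make the complexity claims concrete, here is a minimal sketch of the underlying Redis operations using StackExchange.Redis in C# (the key and field names are illustrative, not Bull's actual key layout):
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("localhost");
var db = redis.GetDatabase();

// ZADD: O(log N) insertion into a sorted set, scored by the job's due time
await db.SortedSetAddAsync("queue:delayed", "job:42", score: 1_700_000_000);

// ZREM: O(log N) removal from the sorted set
await db.SortedSetRemoveAsync("queue:delayed", "job:42");

// HMGET: O(number of fields requested), effectively O(1) for a handful of fields
RedisValue[] fields = await db.HashGetAsync("job:42", new RedisValue[] { "data", "delay" });
So neither the lookup, the removal, nor the addition degrades linearly as the queue approaches ~10 million jobs.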

Azure Function with Event Hub trigger receives weird number of events

I have an Event Hub and an Azure Function connected to it. With small amounts of data all works well, but when I tested it with 10,000 events I got very peculiar results.
For test purposes I send the numbers 0 to 9999 into the Event Hub and log the data in Application Insights and in Service Bus. In the first test, Azure showed that the hub received exactly 10,000 events, but Service Bus and AI got all messages between 0 and 4500 and only every second message after 4500 (so about 30% were lost). In the second test, I got all messages from 0 to 9999, but every second message between 3200 and 3500 was duplicated. I would like to get every message exactly once; what did I do wrong?
public async Task Run([EventHubTrigger("%EventHubName%", Connection = "AzureEventHubConnectionString")] EventData[] events, ILogger log)
{
    int id = _random.Next(1, 100000);
    _context.Log.TraceInfo("Started. Count: " + events.Length + ". " + id); // AI log

    foreach (var message in events)
    {
        // log with ASB
        var mess = new Message();
        mess.Body = message.EventBody.ToArray();
        await queueClient.SendAsync(mess);
    }

    _context.Log.TraceInfo("Completed. " + id); // AI log
}
By using EventData[] events, you are reading events from the hub in batch mode; that's why you see a batch of events processed at once, then the next batch processed a moment later.
Instead of EventData[], simply use EventData.
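A minimal sketch of the single-event form of the trigger, reusing the attribute and connection names from the question (the body is an assumption based on the original function):
public async Task Run([EventHubTrigger("%EventHubName%", Connection = "AzureEventHubConnectionString")] EventData eventData, ILogger log)
{
    // Each invocation now receives exactly one event
    var mess = new Message();
    mess.Body = eventData.EventBody.ToArray();
    await queueClient.SendAsync(mess);
}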
When you send events to the hub, make sure all of them are sent with the same partition key if you want to try batch processing; otherwise they can be split across several partitions, depending on TUs (throughput units), PUs (processing units) and CUs (capacity units).
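For reference, a hedged sketch of publishing a batch with a fixed partition key using the Azure.Messaging.EventHubs producer (the connection string, hub name, and key value are placeholders):
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

await using var producer = new EventHubProducerClient(connectionString, eventHubName);

// Events sharing a partition key are guaranteed to land in the same partition
var batch = await producer.CreateBatchAsync(new CreateBatchOptions { PartitionKey = "heartbeat-test" });
for (int i = 0; i < 100; i++)
    batch.TryAdd(new EventData(BinaryData.FromString(i.ToString())));

await producer.SendAsync(batch);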
Throughput limits for the Basic, Standard, and Premium tiers include, for example: Egress: up to 2 MB per second or 4,096 events per second. Refer to this document.
There are a couple of things likely happening, though I can only speculate with the limited context that we have. Knowing more about the testing methodology, tier of your Event Hubs namespace, and the number of partitions in your Event Hub would help.
The first thing to be aware of is that the timing between when an event is published and when it is available in a partition to be read is non-deterministic. When a publish operation completes, the Event Hubs broker has acknowledged receipt of the events and taken responsibility for ensuring they are persisted to multiple replicas and made available in a specific partition. However, it is not a guarantee that the event can immediately be read.
Depending on how you sent the events, the broker may also need to route events from a gateway by performing a round-robin or applying a hash algorithm. If you're looking to optimize the time from publish to availability, taking ownership of partition distribution and publishing directly to a partition can help, as can ensuring that you're publishing with the right degree of concurrency for your host environment and scenario.
With respect to duplication, it's important to be aware that Event Hubs offers an "at least once" guarantee; your consuming application should expect some duplicates and needs to be able to handle them in the way that is appropriate for your application scenario.
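As one illustration (not part of the original answer), here is a minimal sketch of duplicate detection keyed on each partition's sequence number. Note this only works within a single processor instance; a scaled-out Functions app would need a shared store instead:
using System.Collections.Concurrent;

static class DuplicateFilter
{
    // Highest sequence number processed so far, per partition
    private static readonly ConcurrentDictionary<string, long> LastSeen = new();

    public static bool IsDuplicate(string partitionId, long sequenceNumber)
    {
        var last = LastSeen.GetOrAdd(partitionId, -1L);
        if (sequenceNumber <= last)
            return true;                        // already processed: drop it

        LastSeen[partitionId] = sequenceNumber; // record progress
        return false;
    }
}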
Azure Functions uses a set of event processors in its infrastructure to read events. The processors collaborate with one another to share work and distribute the responsibility for partitions between them. Because collaboration takes place using storage as an intermediary to synchronize, there is an overlap of partition ownership when instances are scaled up or scaled down, during which time the potential for duplication is increased.
Functions decides to scale based on the number of events it sees waiting in partitions to be read. If, during your test, the publishing rate increases rapidly and Functions sees the event backlog grow to the point that it feels the need to scale out by multiple instances, you'll see more duplication than you otherwise would for a period of 10-30 seconds until partition ownership normalizes. To mitigate this, gradually ramping up publishing speed over a 1-2 minute period can help smooth out the scaling and reduce (but not eliminate) duplication.

Would SQS batch size max limit result in slower processing through Lambdas?

I'm aware that AWS now allows SQS to be an event source mapping for Lambdas. I'm glad this is possible, as I no longer have to poll the queue every few seconds through a cron job. However, it appears that the maximum possible value for batchSize is limited to 10. From my understanding, the batchSize is the number of messages a single Lambda invocation will receive from the queue.
This sounds like it could be an issue for me because, in my case, I may have a few hundred thousand messages at a time in the queue. Those messages don't need any heavy processing; they just need to be parsed and saved to the database as records. It's pretty simple.
If the batchSize is limited to only 10 messages per retrieval, I foresee a few issues that I may have:
It may actually take a long time to finish processing the messages on the queue.
Processing only 10 messages per retrieval is not just slow; given how simple the processing is, it also sounds a little wasteful, because I'm pretty sure a single Lambda invocation could process at least a few thousand messages.
Having only 10 messages per retrieval may also mean more write operations to my database, because each of these messages needs to be inserted as a record in the database.
Are my concerns valid in this case? If so, is there anything else I can do with SQS and Lambdas to overcome those concerns?
Your assumption about a limit of 10 is correct.
Lambda will spin up more instances to run in parallel, if there are more messages available. See Scaling and Processing. This means that if there are 1000 messages available, Lambda might spin up 100 concurrent executions to quickly process all the messages.
Once a Lambda function has processed the 10 messages of a batch, it continues with the next batch. As Lambda bills in 100 ms intervals, the wasted time is minimal.
As for the database writes, you could pre-process the messages before inserting them into the queue.
In that case you need to let your Lambda function fetch messages from the queue and process them itself, rather than having Lambda triggered via SQS. You could, for example, have a CloudWatch Events rule trigger the Lambda on a schedule, depending on your use case.
Please note that SQS has a limit of at most 10 messages per receive call, but you can write the code to make it much more efficient.
One package that is very efficient at this is squiss-ts.
In this approach you let your Lambda function run for up to 15 minutes (the maximum) and process as many messages as possible. Idempotency is key when designing these kinds of applications, so if a message wasn't processed in this run, it will be processed in the next one.
The downside of this approach is that you need to scale your Lambdas manually depending on how many messages you are anticipating.
You're right that a larger batch size seems appropriate for your use case.
As of late 2020, if you specify a batch window in seconds, you can then specify a batch size of up to 10,000 messages.
So with this new option you can now configure your lambda to wait and receive much larger batches per invocation.
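A hedged sketch of updating an existing SQS event source mapping along those lines, using the AWS SDK for .NET (the mapping UUID is a placeholder; batch sizes above 10 require a batching window):
using Amazon.Lambda;
using Amazon.Lambda.Model;

var lambda = new AmazonLambdaClient();

// Raise the batch size and add a batching window so larger batches can accumulate
await lambda.UpdateEventSourceMappingAsync(new UpdateEventSourceMappingRequest
{
    UUID = "00000000-0000-0000-0000-000000000000", // placeholder: your mapping's UUID
    BatchSize = 10000,
    MaximumBatchingWindowInSeconds = 30
});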

Stream Analytics too slow (time slippage between two streams)?

Here's my Stream Analytics topology:
EventHubSource => Job A (HoppingWindow every second) => EventHubA
EventHubSource => Job B (HoppingWindow every second) => EventHubB
Each job has a different consumer group on EventHubSource.
Each job is embarrassingly parallel and consumes only 14% of its SU resources.
When testing Job A and Job B individually, the difference between windowEnd and the original event time is just a few milliseconds (~300), which is OK (latency from my producer + Event Hub + Stream Analytics processing time).
But when I join both streams in a new Job C like this:
EventHubA
\
=> Job C (Join Datediff = 0 and timestamp by windowEnd)
/
EventHubB
This produces some output, but the problems come here:
The real events are multiple minutes apart, even though they were pushed at the same time by Jobs A and B (same windowEnd).
When I inspect the data coming out of Event Hubs A and B, the difference between windowEnd and the real event timestamp ranges between 39 and 44 minutes, for all of them. But when testing as mentioned above, it was only ~300 ms.
The worst part is that when I run it in prod, it emits only a few dozen events and then stops, even though the input count is still in the thousands.
I've been working on this for weeks, and every time I'm dealing with some cryptic behavior from ASA. My topology is quite simple, and I'm only using simple hopping windows with a 1 s hop; this shouldn't take weeks of tweaking and trial and error without even understanding what's happening.
For people who have used both ASA and AWS Kinesis Data Analytics: did you find Kinesis Analytics simpler to work with? What annoys me in ASA is the unpredictable behavior and issues without error messages (I activated Log Analytics and no error was there...)
Sorry to hear you encountered some issues with ASA. I see you have 1-second hopping windows, but what is the total size of the windows and what is your approximate throughput?
Regarding the delay: looking at your question, I think your ASA job may not have enough CPU resources, and the event processing is therefore delayed. Unfortunately this is not visible in the current SU% metric, but we plan to show metrics for both CPU and memory in the future.
To confirm this is the root cause, you can check the number of backlogged events in the job diagram. If a lot of events are backlogged, you may need to increase the number of SUs for this job.
You also mentioned the job stops after a dozen outputs; do you see an error message in the logs?

Send a million events to Azure Event Hub every 5 seconds

I have a simple requirement: 1 million devices need to send a simple heartbeat to the Event Hub every 5 seconds, which works out to 200,000 events per second. Since 1 throughput unit supports only 1,000 events/sec, do I really need to purchase 200 throughput units to implement a simple heartbeat mechanism?
I'm really wondering about the claim of Event Hubs supporting millions of events per second; how is that possible if a throughput unit only supports 1,000? I would need a HUGE number of throughput units, and that's going to burn every last dollar. Unless I'm really missing something.
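For what it's worth, a back-of-envelope check of the arithmetic in the question (a sketch; the 1,000 events/sec-per-TU ingress figure is the one quoted above):
// Heartbeat throughput math from the question
const int Devices = 1_000_000;
const int HeartbeatIntervalSeconds = 5;
const int EventsPerSecondPerTu = 1_000;

int eventsPerSecond = Devices / HeartbeatIntervalSeconds;  // 200,000 events/s
int requiredTus = eventsPerSecond / EventsPerSecondPerTu;  // 200 TUs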