Throughput Units and Partition Count - Azure

I have a question regarding partition count in relation to TUs. We have the configuration below and 3 TUs for the namespace. Will the number of partitions for each event hub have an impact, and should we just set the partition count to 32 for better performance? FYI, we are using the Standard tier and kept the partition count higher for the first hub because it receives more messages. We also use the batch method to send messages to the event hub.

There is a potential issue with having 3 TUs. If the namespace has 3 TUs, then the maximum ingress in a minute is 1 MB * 60 * 3 = 180 MB/minute, but in the table you posted the total (109 + 58 + 39 = 206 MB) is larger than 180 MB.
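As a quick sanity check of that math (a minimal sketch in Python; the 1 MB/s-per-TU ingress quota is the documented Standard tier limit, and the per-hub figures come from the posted table):

    tus = 3
    max_ingress_mb_per_min = 1 * 60 * tus    # 1 MB/s per TU -> 180 MB/min
    observed_mb_per_min = 109 + 58 + 39      # per-hub totals from the posted table
    # 206 > 180, so at 3 TUs the namespace would be throttled
    print(observed_mb_per_min > max_ingress_mb_per_min)   # True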
For TU and partition count, you should take a look at "How many partitions do I need?" and "Partitions". You can follow the guidance below from those articles:
We recommend that you balance 1:1 throughput units and partitions to achieve optimal scale. A single partition has a guaranteed ingress and egress of up to one throughput unit. While you may be able to achieve higher throughput on a partition, performance is not guaranteed. This is why we strongly recommend that the number of partitions in an event hub be greater than or equal to the number of throughput units.

Plan for a maximum of 1 MB/sec per partition. In other words, think of each partition as an individual stream that can process at most 1 MB/sec of traffic. That said, your current configuration looks alright to me. However, you can still consider increasing partition counts depending on your traffic growth trajectory.
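Since you mentioned sending in batches, here is a minimal sketch of batched sending with the azure-eventhub Python SDK; the connection string and hub name are placeholders. create_batch() enforces the hub's size limit, so an event that no longer fits is sent in the next batch:

    from azure.eventhub import EventData, EventHubProducerClient

    producer = EventHubProducerClient.from_connection_string(
        conn_str="<namespace-connection-string>",   # placeholder
        eventhub_name="<hub-name>",                 # placeholder
    )

    with producer:
        batch = producer.create_batch()
        for i in range(1000):
            event = EventData(f"message {i}")
            try:
                batch.add(event)
            except ValueError:                # batch full: would exceed the size limit
                producer.send_batch(batch)    # flush and start a new batch
                batch = producer.create_batch()
                batch.add(event)
        producer.send_batch(batch)            # send the final partial batch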

Related

Azure table storage: Partitioning strategy and performance targets

I am trying to come up with a partitioning strategy for Azure Table storage.
I use Scalability and performance targets for Table storage as the primary source for my performance estimates. The document provides two numbers:
Maximum request rate per storage account: 20,000 transactions per second (assuming a 1-KiB entity size)
Target throughput for a single table partition (1-KiB entities): up to 2,000 entities per second
Does that mean I won't get much throughput benefit from having more than 10-ish partitions, since the account TPS is capped at 10x the single-partition TPS limit?
Actually, I am also wondering if they use "transactions" and "entities" interchangeably in the doc. Does the first number apply to batch transactions, or for batches should I divide 20,000 by the number of entities in a batch?
It is true that having more than 10 partitions will not give you throughput benefits, but only if all 10 partitions are running at max throughput, i.e., 2,000 TPS each. If any partition is not running at max throughput, then you will be underutilizing your table storage. This is why it is recommended HERE that
For optimal load balancing of traffic, you should use more partitions so that Azure Table storage can distribute the partitions to more partition servers.
We just worry about the partitioning/partition key, and Azure handles the load balancing up to the max throughput per partition/storage account.
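As a rough illustration of that division of labor, a minimal sketch with the azure-data-tables Python SDK (the connection string, table name, and entity shape are all placeholder choices):

    from azure.data.tables import TableClient

    # Capacity math from the doc: the account caps at ~20,000 TPS and a single
    # partition at ~2,000 TPS, so you need at least 10 concurrently busy
    # partition keys to saturate the account -- and more for smoother balancing.

    table = TableClient.from_connection_string(
        conn_str="<storage-connection-string>",   # placeholder
        table_name="telemetry",                   # placeholder
    )

    # A high-cardinality PartitionKey (one per device here) gives the service
    # many partitions to spread across partition servers.
    table.create_entity({
        "PartitionKey": "device-42",
        "RowKey": "2024-01-01T00:00:00Z",   # unique within the partition
        "temperature": 21.5,
    })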

Why does varying blob size give different performance?

My Cassandra table looks like this:
CREATE TABLE cs_readwrite.cs_rw_test (
    part_id bigint,
    s_id bigint,
    begin_ts bigint,
    end_ts bigint,
    blob_data blob,
    PRIMARY KEY (part_id, s_id, begin_ts, end_ts)
) WITH CLUSTERING ORDER BY (s_id ASC, begin_ts DESC, end_ts DESC);
When I insert 1 million rows per client, with an 8 KB blob per row, and test the speed of insertions from different client hosts, the speed is almost constant at ~100 Mbps. But with the same table definition, from the same client hosts, if I insert rows with 16 bytes of blob data, my speed numbers are dramatically lower, ~4 to 5 Mbps. Why is there such a speed difference? I am only measuring write speeds for now. My main concern is not speed (though some input would help): when I add more clients, the speed stays almost constant for the bigger blob size, but for the 16-byte blob the speed increases only by 10-20% per added client before it becomes constant.
I have also looked at the bin/nodetool tablehistograms output and adjusted the number of partitions in my test data so that no partition is > 100 MB.
Any insights or links to documentation would be helpful. Thanks!
I think you are measuring the throughput in the wrong way. Throughput should be measured in transactions per second, not in data written per second.
Although the amount of data written can play a role in determining the write throughput of a system, it usually depends on many other factors:
Compaction strategy: STCS is write-optimized whereas LCS is read-optimized.
Connection speed and latency between the client and the cluster, and between machines in the cluster.
CPU usage of the node which is processing data, sending data to other replicas, and waiting for their acknowledgment.
Most writes are immediately written to memory (the memtable) instead of directly to disk, which makes the impact of the amount of data being written on final write throughput almost negligible, whereas fixed costs like network delay and the CPU needed to coordinate the processing of data across nodes have a bigger impact.
The way you should see it is that with an 8 KB payload you get X transactions per second and with 16 bytes you get Y transactions per second. Y will always be higher than X, but it will not be linearly proportional to the size difference.
You can find how writes are handled in Cassandra explained in detail here.
There's management overhead in Cassandra per row/partition; the more data (in bytes) you have in each row, the less that overhead impacts throughput in bytes/sec. The reverse is true if you look at rows per second as the throughput metric: the larger the payloads, the worse your rows/sec throughput gets.
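To make the two metrics concrete, here is a minimal benchmarking sketch using the cassandra-driver Python package against the table from the question (the contact point, row count, and partition spread are placeholders, and a serious benchmark would issue writes concurrently):

    import os
    import time

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])          # placeholder contact point
    session = cluster.connect("cs_readwrite")
    insert = session.prepare(
        "INSERT INTO cs_rw_test (part_id, s_id, begin_ts, end_ts, blob_data) "
        "VALUES (?, ?, ?, ?, ?)")

    def bench(blob_size: int, rows: int = 10_000) -> None:
        payload = os.urandom(blob_size)
        start = time.perf_counter()
        for i in range(rows):
            session.execute(insert, (i % 16, i, i, i + 1, payload))
        elapsed = time.perf_counter() - start
        print(f"{blob_size:>5} B blobs: {rows / elapsed:,.0f} rows/s, "
              f"{rows * blob_size / elapsed / 1e6:.2f} MB/s of payload")

    bench(16)        # expect high rows/s but tiny MB/s
    bench(8 * 1024)  # expect lower rows/s but far higher MB/s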

Tradeoffs involved in the partition count of an event hub used with Azure Functions

I realize this may be a duplicate of Why not always configure for max number of event hub partitions?. However, the product has evolved and the defaults have changed, and the original issues may no longer be a factor.
Consider the scenario where the primary consumers of an event hub will be eventHubTrigger Azure Functions. The trigger function is backed by an EventProcessorHost which will automatically scale up or down to consume from all available partitions as needed without any administration effort.
As I understand it, the monetary cost of the Azure Function is based on execution duration and count of invocations, which would be driven only by the count of events consumed and not affected by the degree of parallelism due to the partition count.
In this case, would there be any higher costs or complexity from creating a hub with the max of 32 partitions, compared to one with a small number like 4?
Your thought process makes sense: I found myself creating 32 partitions by default in this exact scenario and have had no issues with that so far.
You pay for provisioned ingress/egress (throughput units); partition count doesn't add cost.
The only requirement is that your partition key has enough unique values to load partitions more or less evenly.
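For reference, a minimal sketch of that consumer shape in the Azure Functions Python v2 programming model (the hub name and connection app-setting name are placeholders); nothing in the function changes whether the hub has 4 or 32 partitions:

    import azure.functions as func

    app = func.FunctionApp()

    @app.event_hub_message_trigger(
        arg_name="event",
        event_hub_name="myhub",            # placeholder
        connection="EventHubConnection",   # app setting holding the connection string
    )
    def handle(event: func.EventHubEvent) -> None:
        # Billed per invocation and duration, i.e. by events consumed,
        # regardless of how many partitions feed the trigger.
        body = event.get_body().decode("utf-8")
        print(body)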

High-scale message processing in Event Hubs

As I understand it, Event Hubs can process/ingest millions of messages per second, and to tune ingestion we can use throughput units.
More throughput units = more ingestion power.
But on the receiving/consuming side, you can create up to 32 receivers (since we can create 32 partitions and one partition can be consumed by one receiver).
Based on the above, if a single message takes 100 milliseconds to process, one consumer can process 10 messages per second, and 32 consumers can process 32 * 10 = 320 messages per second.
How can I make my receivers consume more messages (for example, 5-10k per second)?
1) Either I have to process messages asynchronously inside ProcessEventsAsync, but in that case I would not be able to maintain ordering.
2) Or I have to ask Microsoft to allow me to create more partitions.
Please advise.
TL;DR: You will need to ask Microsoft to increase the number of partitions you are allowed, and remember that there is currently no way to increase the partition count on an existing Event Hub.
You are correct that your unit of consumption parallelism is the partition. If your consumers can only do 10/second in order, or even 100/second in order, then you will need more partitions to consume millions of events. While 100 ms/event certainly seems slow to me, and I think you should look for optimizations there (i.e., farm out work you don't need to wait for, commit less often, etc.), you will reach the point of needing more partitions at scale.
Some things to keep in mind: 32 partitions gives you only 32 MB/s of ingress and 64 MB/s of egress. Both of these figures matter, since that egress throughput is shared by all the consumer groups you use. So if you have 4 consumer groups reading the data (16 MB/s each), you'll need twice as many partitions (or at least throughput units) as your ingress alone would require (because otherwise you would fall behind).
Regarding your comment about multitenancy: will you have one 'database consumer' group that handles all your tenants, all of whose data flows through the same hub? If so, that sounds like a sensible use. What would not be so sensible is having one consumer group per tenant, each consuming the entire stream.
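One concrete way to "commit less often" is to receive in batches and checkpoint once per batch; a minimal sketch with the azure-eventhub Python SDK (connection values are placeholders, and a checkpoint store would need to be configured for offsets to persist):

    from azure.eventhub import EventHubConsumerClient

    def handle(event) -> None:
        pass   # placeholder for your per-event work

    def on_event_batch(partition_context, events):
        for event in events:                    # order within a partition is preserved
            handle(event)
        partition_context.update_checkpoint()   # one checkpoint per batch, not per event

    client = EventHubConsumerClient.from_connection_string(
        conn_str="<namespace-connection-string>",   # placeholder
        consumer_group="$Default",
        eventhub_name="<hub-name>",                 # placeholder
    )

    with client:
        client.receive_batch(on_event_batch=on_event_batch, max_batch_size=300)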

Why does Hazelcast have a default partition count of 271, and what are the parameters for choosing one?

I just went through the Hazelcast documentation.
It says that data is partitioned across all the nodes.
And the number of partitions created in a cluster is 271 by default!
What parameters govern the selection of the right partition count, and why is the default 271?
271 is a prime number. Given any key, Hazelcast will hash the key and mod it by the partition count. In this context, prime numbers are believed to generate a more pseudo-random distribution. Actually, from the user's perspective, it is not that important that it be prime.
Then you may ask: why 271 and not some other prime number?
Simply because 271 is a good number that distributes almost evenly when you have fewer than 100 nodes. When you have more than 100 nodes, you need to increase it to keep the distribution even.
Another reason to increase the partition count is when you have a large amount of data. Say you have 300 GB of data to store in the data grid. Then each partition will hold over 1 GB (300 GB / 271 ≈ 1.1 GB) and migration will take too long. Note that during migration, all updates to that partition are blocked. For the sake of latency, you want a small amount of data per partition, so increase the count to a number where you are comfortable with the latency of moving partitions.
Note that partitions will migrate only when you add a new node.
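To see why a couple hundred partitions spread well over a modest node count, here is a small simulation of the hash-and-mod scheme described above (plain Python hashing stands in for Hazelcast's actual partitioning hash, and the key set is synthetic):

    from collections import Counter

    PARTITIONS = 271
    NODES = 10

    def partition_of(key: str) -> int:
        return hash(key) % PARTITIONS      # stand-in for Hazelcast's hash

    def node_of(partition: int) -> int:
        return partition % NODES           # stand-in for the partition table

    # Count how many of 100k synthetic keys land on each node.
    load = Counter(node_of(partition_of(f"key-{i}")) for i in range(100_000))
    print(sorted(load.items()))            # per-node counts come out nearly even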
