Azure Event Hub / Event Processor Host: Partitioning not working as predicted

We are working on a project that implements and uses Azure Event Hubs.
We use the Event Processor Host to process the data from the event hub. We have 32 partitions distributed across 3 nodes and are wondering how the Event Processor Host distributes and balances the partitions across the receivers / nodes, especially when a partition key is used.
We currently have 4 different customers (blue, orange, purple and light blue) who send us different volumes of data. As you can see, the blue customer on the left sends approx. 132k strings of data, while the light blue customer on the right sends only 28.
Our theory was that, given a partition key based on the customer (the color identification), a customer's data would be placed on only one node.
Instead, we can see that the data is somehow evenly distributed across the 3 nodes, as seen below:
[Screenshots: per-node message counts for Node 1, Node 2 and Node 3]
Is there something we've misunderstood about how the partition key works? From what we've read in the documentation, a "round-robin" approach is used when we don't specify partition keys, but even with a partition key the data somehow gets distributed evenly.
Are we somehow stressing the nodes, with the blue customer having a huge amount of data and another customer having almost nothing? Or what is going on?
To visualize our theory we've drawn the following:
[Drawing: the expected layout, with each customer's data confined to a single node]
So are we stressing the top node with the blue customer, so that in the end it has to move a partition to the middle node?

A partition key is intended to be used when you want to be sure that a set of events is routed to the same partition, but you don't want to assign an explicit partition. In short, use of a partition key is an explicit request to control routing and prevents the service from balancing across partitions.
When you specify a partition key, it is used to produce a hash value that the Event Hubs service uses to assign the partition to which the event will be routed. Every event that uses the same partition key will be published to the same partition.
To allow the service to round-robin when publishing, you cannot specify a partition key or an explicit partition identifier.
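To make that concrete, here's a minimal sketch using the current Azure.Messaging.EventHubs producer (the question uses the older Event Processor Host SDK, but the routing rules are the same; the connection string, hub name, and payloads below are placeholders):

    using System;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    // Placeholders: substitute your own namespace connection string and hub name.
    var connectionString = "<event-hubs-connection-string>";

    await using var producer = new EventHubProducerClient(connectionString, "telemetry-hub");

    // Every event sent with PartitionKey "customer-blue" hashes to the same
    // (service-chosen) partition.
    await producer.SendAsync(
        new[] { new EventData(BinaryData.FromString("reading-1")) },
        new SendEventOptions { PartitionKey = "customer-blue" });

    // Omitting both PartitionKey and PartitionId lets the service round-robin.
    await producer.SendAsync(new[] { new EventData(BinaryData.FromString("reading-2")) });

Note that the key only pins events to a partition, not a partition to a node; the processor host remains free to move partition ownership between nodes.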

Jesse already explained what a partition key is good for, so I won't repeat that.
If you want customer-to-consumer-node affinity, you should consider dedicating an independent event hub to each customer, so that you can tell your system something like:
node-1 processes data from customerA only, by consuming events from eventhub-1
node-2 processes data from customerB only, by consuming events from eventhub-2
and so on...
Making use of a partition key doesn't really address your business logic here.
One more thing: if you are planning to run this with a larger number of customers in the future, then you also need to consider scaling out your design to create affinity between customer and Event Hubs namespace as well.
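As a rough sketch of that layout, assuming the current EventProcessorClient rather than the legacy Event Processor Host (all connection strings and names below are placeholders), node-1 would run something like:

    using System.Threading.Tasks;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Consumer;
    using Azure.Storage.Blobs;

    // Placeholders for this node's configuration.
    var storageConnectionString = "<blob-storage-connection-string>";
    var eventHubsConnectionString = "<event-hubs-connection-string>";

    var checkpoints = new BlobContainerClient(storageConnectionString, "checkpoints-node1");

    // node-1 is wired to customer A's dedicated hub and nothing else.
    var processor = new EventProcessorClient(
        checkpoints,
        EventHubConsumerClient.DefaultConsumerGroupName,
        eventHubsConnectionString,
        "eventhub-customer-a");

    processor.ProcessEventAsync += args =>
    {
        // Only customer A's data ever arrives on this node.
        return Task.CompletedTask;
    };
    processor.ProcessErrorAsync += args => Task.CompletedTask;

    await processor.StartProcessingAsync();

node-2 would be identical except for its own checkpoint container and "eventhub-customer-b".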

Related

Equivalent to consumer group in Cassandra

I'm trying to create a sort of consumer group, as exists in Kafka, but for Cassandra. The goal is to have a request paginated, with each page handled by one instance of an app.
Is there any notion like a consumer group in Cassandra?
The TL;DR is that no, the consumer-group notion doesn't exist in the clients in Cassandra. The burden of deciding which client processes what is entirely on the app developer.
You can use Cassandra's tokens to do selective paging.
Assuming 2 clients (an easy example; see the sketch after the list):
Client 1 pages from -2^63 to 0
Client 2 pages from 1 to 2^63 - 1
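A rough illustration of client 1's share using the DataStax C# driver (the contact point, keyspace, table, and column names are invented; the token range assumes the default Murmur3 partitioner):

    using Cassandra;

    var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
    var session = cluster.Connect("my_keyspace");

    // Client 1's share: the lower half of the Murmur3 token ring.
    var rows = session.Execute(new SimpleStatement(
        "SELECT * FROM my_table WHERE token(pk) >= -9223372036854775808 AND token(pk) <= 0"));

    foreach (var row in rows)
    {
        // Process only this client's slice of the data.
    }

    // Client 2 would run the same query with token(pk) >= 1 and <= 9223372036854775807.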
The above idea assumes you want to page through all the data in something similar to a batch process which wouldn't be a good fit for Cassandra.
If you're after the latest N results, where the 1st half is sent to client 1 and the second to client 2, you can use a logical bucket in your partitioning key.
If you're looking to scale the processing of a large number of Cassandra rows, you might consider a scalable execution platform like Flink or Storm. You'd be able to parallelize both the reading of the rows and the processing of the rows, although a parallelizable source (or spout in Storm) is not something you can get out of the box.

Custom controlled partitioning

I recently posted a question and received a full answer. But I am encountering another problem.
The scenario is the same as in my recent question.
How can I configure a member to own a partition key?
E.g. the DataCenterOnRussia partition key must always be owned by member1, and the DataCenterOnGermany partition key must always be owned by member2.
So member2 could request data from DataCenterOnRussia using a PartitionAwareKey.
The intent of the PartitionAwareKey is to allow for data affinity: orders for a customer should be stored in the same partition as the customer record, for example, since they are frequently accessed together.
The PartitionAwareKey allows grouping items together, but not a way to specify the placement of those items on a specific cluster member. (I guess if there were such a thing, it would likely be called MemberAwareKey).
A cluster in Hazelcast isn't a fixed-size entity; it is dynamically scalable, so members might be added or removed, and it is fault-tolerant, so a member could be lost without loss of the data that happened to be on that member. In order to support those features, the cluster must have the freedom to move partitions around to different machines as the cluster topology changes.
Hazelcast recommends that all members of a cluster be similarly configured (equivalent memory configuration, most particularly) because of the idea that cluster members are interchangeable, at least as far as data storage goes. (The MemberSelector facility does provide a provision for handling systems that have different processing capability, e.g., number of processor cores; but nothing similar exists to allow placement of specific data entries or partitions on a designated member.)
If your use case requires specific placement on machines, it's an indication that those machines probably should not be part of the same cluster.

How to decide a good partition key for Azure Cosmos DB

I'm new to Azure Cosmos DB, but I want to have a vivid understanding of:
What is the partition key?
My understanding is shallow for now -> items with the same partition key will go to the same partition for storage, which could enable better load balancing as the system grows bigger.
How to decide on a good partition key?
Could somebody please provide an example?
Thanks a lot!
You have to choose your partition key based on your workload. Workloads can be classified into two types:
Read heavy
Write heavy
Read-heavy workloads are those where the data is read more often than it is written, like a product catalog, where the insert/update frequency of the catalog is low and the number of people browsing the products is high.
Write-heavy workloads are the ones where the data is written more often than it is read. A common scenario is IoT devices sending readings from multiple sensors. You will be writing lots of data to Cosmos DB because you may get data every second.
For a read-heavy workload, choose a partition key that is used in your filter queries. In the product example that would be the product id, which is mostly used to fetch the data when the user wants to read the product information and browse its reviews.
For a write-heavy workload, choose a partition key that is more unique. For example, in the IoT scenario, use a partition key such as deviceid_signaldatetime, concatenating the id of the device that sends the signal with the DateTime of the signal, which gives more uniqueness.
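A minimal sketch of the write-heavy case with the Microsoft.Azure.Cosmos SDK (the endpoint, key, and all names below are placeholders):

    using System;
    using Microsoft.Azure.Cosmos;

    var client = new CosmosClient("<account-endpoint>", "<account-key>");
    Database db = await client.CreateDatabaseIfNotExistsAsync("iot");

    // Partition key path chosen for write-heavy ingestion.
    Container container = await db.CreateContainerIfNotExistsAsync(
        new ContainerProperties("signals", "/deviceSignalKey"));

    await container.CreateItemAsync(new
    {
        id = Guid.NewGuid().ToString(),
        // deviceid_signaldatetime, as suggested above
        deviceSignalKey = $"device42_{DateTime.UtcNow:yyyyMMddHHmmssfff}",
        temperature = 21.5
    });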
1. What is the partition key?
In Azure Cosmos DB, there are two kinds of partitions: physical partitions and logical partitions.
A. A physical partition is a fixed amount of reserved SSD-backed storage combined with a variable amount of compute resources.
B. A logical partition is a partition within a physical partition that stores all the data associated with a single partition key value.
I think the partition key you mentioned is the logical partition key. The partition key acts as a logical partition for your data and provides Azure Cosmos DB with a natural boundary for distributing data across physical partitions. For more details, you could refer to How does partitioning work.
2. How to decide a good partition key? Could somebody please provide an example?
You need to pick a property that has a wide range of values and even access patterns. An ideal partition key is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure your solution is scalable.
For example, say your data has fields named id and color, and you use color as a filter more frequently. You should pick color, not id, as the partition key, which is more efficient for your query performance, because every item has a different id but may share the same color. Color has a wide range of values, and if you add a new color, the partition key remains scalable.
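As a hedged sketch of why that helps (Microsoft.Azure.Cosmos SDK; the account, database, container, and property names are invented), a query that filters on the partition key can be scoped to a single logical partition instead of fanning out across all of them:

    using Microsoft.Azure.Cosmos;

    var client = new CosmosClient("<account-endpoint>", "<account-key>");
    Container container = client.GetContainer("mydb", "items");

    var query = new QueryDefinition("SELECT * FROM c WHERE c.color = @color")
        .WithParameter("@color", "red");

    // Supplying the partition key scopes the query to one logical partition.
    using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
        query,
        requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey("red") });

    while (iterator.HasMoreResults)
    {
        foreach (var item in await iterator.ReadNextAsync())
        {
            // Process matching items.
        }
    }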
For more details, please read Partition and scale in Azure Cosmos DB.
Hope it helps you.

NiFi GetAzureEventHub is multiplying the data by the number of nodes

I have some flows that get data from an Azure event hub; I'm using the GetAzureEventHub processor. The data I'm getting is being multiplied by the number of nodes I have in the cluster; I have 4 nodes. If I tell the processor to run on the primary node only, the data is not replicated 4 times.
I found that the event hub accepts up to 5 readers per consumer group (I read this in this article); each reader has its own separate offset and they consume the same data. So, in conclusion, I'm reading the same data 4 times.
I have 2 questions:
How can I coordinate these 4 nodes so they go through the same reader?
If this is not possible, how can I tell NiFi that only one of the nodes should read?
Thanks. If you need any clarification, just ask.
GetAzureEventHub currently does not perform any coordination across nodes, so you would have to run it on the primary node only to avoid duplication.
The processor would require refactoring to perform coordination across the nodes of the cluster and assign unique partitions to each node, and handle failures (i.e. if a node consuming partition 1 goes down, another node has to take over partition 1).
If the Azure client provided this coordination somehow (similar to the Kafka client) then it would require less work on the NiFi side, but I'm not familiar enough with Azure to know if it provides anything like this.

Azure Table Storage Partition Key

Two somewhat related questions.
1) Is there any way to get an ID of the server a table entity lives on?
2) Will using a GUID give me the best partition key distribution possible? If not, what will?
We have been struggling for weeks with table storage performance. In short, it's really bad, but early on we realized that using a random-ish partition key will distribute the entities across many servers, which is exactly what we want, as we are trying to achieve 8000 reads per second. Apparently our partition key wasn't random enough, so for testing purposes I decided to just use a GUID. First impression is that it is way faster.
Really bad GET performance is < 1000 per second. The partition key is Guid.NewGuid() and the row key is the constant "UserInfo". The GET is executed using a TableOperation with pk and rk, nothing else, as follows:

    TableOperation retrieveOperation = TableOperation.Retrieve(pk, rk);
    return cloudTable.ExecuteAsync(retrieveOperation);

We always use indexed reads and never table scans. Also, the VM size is medium or large, never anything smaller. Parallel no, async yes.
As other users have pointed out, Azure Tables are strictly controlled by the runtime, so you cannot control or check which specific storage nodes are handling your requests. Furthermore, any given partition is served by a single server; that is, entities belonging to the same partition cannot be split between several storage nodes (see the excerpt below):
In Windows Azure table, the PartitionKey property is used as the partition key. All entities with same PartitionKey value are clustered together and they are served from a single server node. This allows the user to control entity locality by setting the PartitionKey values, and perform Entity Group Transactions over entities in that same partition.
You mention that you are targeting 8000 requests per second? If that is the case, you might be hitting a threshold that requires very good table/partition key design. Please see the article "Windows Azure Storage Abstractions and their Scalability Targets".
The following extract is applicable to your situation:
This will provide the following scalability targets for a single storage account created after June 7th 2012.
Capacity – Up to 200 TBs
Transactions – Up to 20,000 entities/messages/blobs per second
As other users pointed out, if your PartitionKey numbering follows an incremental pattern, the Azure runtime will recognize this and group some of your partitions within the same storage node.
Furthermore, if I understood your question correctly, you are currently assigning partition keys via GUIDs? If that is the case, every PartitionKey in your table will be unique, so every partition key will contain no more than 1 entity. As per the articles above, the way Azure Table scales out is by grouping entities by their partition keys inside independent storage nodes. If your partition keys are unique, and thus contain no more than one entity, Azure Table will scale out only one entity at a time! Now, we know Azure is not that dumb, and it groups partition keys when it detects a pattern in the way they are created. So if you are hitting this trigger and Azure is grouping your partition keys, your scalability is limited to the smartness of this grouping algorithm.
As per the scalability targets above for 2012, each partition key should be able to give you 2,000 transactions per second. Theoretically, you should need no more than 4 partition keys in this case (assuming the workload is distributed equally between the four).
I would suggest you design your partition keys to group entities in such a way that no more than 2,000 entities per second per partition are reached, and drop GUIDs as partition keys. This will allow you to better support features such as Entity Group Transactions, reduce the complexity of your table design, and get the performance you are looking for.
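A sketch of that idea, assuming 4 buckets keyed off a user id (note that string.GetHashCode is not stable across processes, so a stable hash is used to map entities to buckets):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static string GetPartitionKey(string userId, int bucketCount = 4)
    {
        // A stable hash keeps a user in the same bucket across process restarts
        // (string.GetHashCode offers no such guarantee).
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(userId));
            int bucket = hash[0] % bucketCount;
            return "bucket-" + bucket.ToString("D2");
        }
    }

    // Every call with the same userId yields the same bucket, so related
    // entities stay together while load spreads across the 4 partitions.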
Answering #1: There is no concept of a server that a particular table entity lives on. There are no particular servers to choose from, as Table Storage is a massive-scale multi-tenant storage system. So... there's no way to retrieve a server ID for a given table entity.
Answering #2: Choose a partition key that makes sense for your application. Just remember that it's partition + row to access a given entity. If you do that, you'll have a fast, direct read. If you attempt to do a table scan or partition scan, your performance will certainly take a hit.
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx for more info on key selection (note the numbers are 3 years old, but the guidance is still good).
Also, this talk can be of some use as far as best practices go: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/WAD-B406#fbid=lCN9J5QiTDF.
In general a given partition can support up to 2,000 tps, so spreading data across partitions will help achieve greater numbers. Something to consider is that atomic batch transactions only apply to entities that share the same partition key. Additionally, for smaller requests you may consider disabling Nagle, as small requests may be getting held up at the client layer (a sketch follows).
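A sketch of the classic .NET client tuning for many small storage requests (set these once at process startup, before any storage client is created):

    using System.Net;

    // Send small requests immediately instead of buffering them (Nagle off).
    ServicePointManager.UseNagleAlgorithm = false;
    // Skip the Expect: 100-continue round trip on every request.
    ServicePointManager.Expect100Continue = false;
    // Raise the default outbound connection limit (2) for high throughput.
    ServicePointManager.DefaultConnectionLimit = 100;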
From the client end, I would recommend using the latest client lib (2.1) and async methods, as you have literally thousands of requests per second (the talk has a few slides on client best practices).
Lastly, the next release of storage will support JSON and JSON-no-metadata, which will dramatically reduce the size of the response body for the same objects, and consequently the CPU cycles needed to parse them. If you use the latest client libs, your application will be able to leverage these behaviors with little to no code change.
