Partition key for Azure Cosmos DB collection

I am a bit new to Azure Cosmos DB and trying to understand the concepts.
I want help deciding the best possible partition key for a DocumentDB collection. Please refer to the image below, which shows possible partitions using different partition keys.
As mentioned in the blog post here,
An ideal partition key is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure your solution is scalable.
From the above, I think that in my case UserId could be used as the partition key.
Can someone please suggest which key is the best candidate for the partition key?

In the 10 things to know about DocumentDB Partitioned Collections post and the official Microsoft documentation you can find lots of very good advice about choosing a partition key, so I'm not going to repeat it all here.
The choice of partition key depends on the data stored in the database and on the filters your frequent queries use.
It is often advised to partition on something like userid, which is a good choice if you have one. Suppose your business logic runs many queries for a given userid and each of them looks up no more than a few hundred records; in that case the data can be retrieved quickly from a single partition without the overhead of collating data across partitions.
However, if you have millions of records for a single user, then partitioning on userid is perhaps the worst option: pulling large volumes of data out of one partition will soon cost more than the overhead of collating across partitions. In that case you want to distribute user data as evenly as possible over all partitions, which may mean choosing a different property as the partition key.
So, if the data volume is very large, I suggest you run some simple tests based on your business logic and choose the partition key that gives the best performance, as sketched below. After all, the partition key cannot be changed once the collection is created.
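For instance, here is a minimal sketch of such a test using the Python azure-cosmos SDK; the account endpoint, key, and container/property names are placeholders, and it assumes you are evaluating a candidate key such as /userId:

```python
# A rough sketch only: endpoint, key, and names below are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
database = client.create_database_if_not_exists(id="testdb")

# Candidate container partitioned on the property under test.
container = database.create_container_if_not_exists(
    id="orders_by_user",
    partition_key=PartitionKey(path="/userId"),
    offer_throughput=400,
)

# A query that filters on the candidate key is served from a single partition.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.userId = @uid",
    parameters=[{"name": "@uid", "value": "user-123"}],
    partition_key="user-123",
))

# The request charge of the last operation shows how expensive this access pattern is.
print("RU charge:", container.client_connection.last_response_headers["x-ms-request-charge"])
```

Running the same representative queries against containers with different candidate keys and comparing the RU charges gives a rough idea of which key suits your workload.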
Hope it helps you.

It depends, but here are few things to consider:
The blog post you mentioned says:
Additionally, the storage size for documents belonging to the same partition key is limited to 10GB. An ideal partition key is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure your solution is scalable.
Also, I really recommend checking this post and video: https://learn.microsoft.com/en-us/azure/cosmos-db/partition-data
The choice of the partition key is an important decision that you have to make at design time. You must pick a property name that has a wide range of values and has even access patterns.
So make sure to choose a partition key that has many values and meets those requirements.
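If it helps, here is a small, self-contained sketch of how you might sanity-check a candidate against those two requirements (many distinct values, even access patterns) over a sample of your documents; the sample data and field names are made up for illustration:

```python
from collections import Counter

# Made-up sample documents standing in for your real data.
sample_docs = [
    {"id": "1", "userId": "u1", "country": "US"},
    {"id": "2", "userId": "u2", "country": "US"},
    {"id": "3", "userId": "u3", "country": "DE"},
    {"id": "4", "userId": "u1", "country": "US"},
]

def describe_candidate(docs, field):
    counts = Counter(doc[field] for doc in docs)
    distinct = len(counts)                             # cardinality of the candidate key
    largest_share = max(counts.values()) / len(docs)   # skew toward one logical partition
    return distinct, largest_share

for field in ("userId", "country"):
    distinct, largest_share = describe_candidate(sample_docs, field)
    print(f"{field}: {distinct} distinct values, "
          f"largest partition holds {largest_share:.0%} of the documents")
```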

Related

Are secondary indices always a bad idea in Cassandra even if I specify them in conjunction with the partitioning key in all my queries?

I know that secondary indices in Cassandra are generally a bad idea because the index is stored locally in each node, i.e. not distributed across the cluster, which may result in a query scanning a huge number of nodes. However, I don't understand why they are still a bad idea if I always specify the partition key in my queries and only use the secondary index as a final filter. I've read that they don't scale with large amounts of data even if I specify the partition key. Is this true? And if so, why?
In general secondary indexes are a bad idea, not only because of the distributed aspect, but also because of the index size and the number of distinct values: if the indexed field has very high or very low cardinality, you will spend time scanning many rows or many index entries.
You can also run into other issues when dealing with tombstones.
To answer your question: secondary indexes in Cassandra don't scale that well, but if you also provide the partition key, and through it tell Cassandra which node has the data, they perform much better!
You can find more details in section F here:
https://www.datastax.com/blog/2016/04/cassandra-native-secondary-index-deep-dive
I hope this helps!
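To make the difference concrete, here is a minimal sketch using the DataStax Python driver; the keyspace, table, index name, and contact point are made up for illustration, and it assumes a local Cassandra node:

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users_by_group (
        group_id int, user_id int, email text,
        PRIMARY KEY (group_id, user_id)
    )
""")
session.execute("CREATE INDEX IF NOT EXISTS users_email_idx ON demo.users_by_group (email)")

# Restricted by the partition key: only the node(s) owning group_id = 1
# are asked to consult their local index.
rows = session.execute(
    "SELECT * FROM demo.users_by_group WHERE group_id = %s AND email = %s",
    (1, "alice@example.com"),
)

# Not restricted by the partition key: every node has to consult its own
# local index, which is the pattern that scales poorly.
rows = session.execute(
    "SELECT * FROM demo.users_by_group WHERE email = %s",
    ("alice@example.com",),
)
```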
These guys have a nice write-up on the performance impacts of secondary indexes: 
https://pantheon.io/blog/cassandra-scale-problem-secondary-indexes
The main impact (from the post) is that secondary indexes are local to each node, so to satisfy a query by an indexed value, each node has to query its own records to build the final result set (as opposed to a primary key query, where it is known exactly which node needs to be queried). So there's an impact not just on writes, but on read performance as well.
Imagine Cassandra on a ring of five machines, with a primary index of user IDs and a secondary index of user emails. If you were to query for a user by their ID (their primary indexed key), any machine in the ring would know which machine has the record for that user. One query, one read from disk. However, to query a user by their email (their secondary indexed value), each machine has to query its own records of users. One query, five reads from disk. By either scaling the number of users system-wide, or scaling the number of machines in the ring, the noise-to-signal ratio increases and the overall efficiency of reading drops, in some cases to the point of timing out.
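As a toy illustration of that read amplification (not taken from the post itself), assuming one local index lookup per node for the secondary-index case:

```python
# Toy numbers only (not from the linked post): disk reads needed per lookup.
for nodes in (5, 20, 100):
    primary_key_reads = 1          # the owning node is known up front
    secondary_index_reads = nodes  # every node checks its own local index
    print(f"{nodes} nodes: primary key -> {primary_key_reads} read, "
          f"secondary index -> {secondary_index_reads} reads")
```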
Please refer to the link below for a good explanation of secondary indexes.
https://dzone.com/articles/cassandra-scale-problem

Azure CosmosDb create partition only

I probably already know the answer, but I would love some feedback.
I have an Azure Cosmos DB collection without a partition key (empty), and I want to create one because the RU consumption is too high, hoping the performance improves.
My would-be partition key is Date (20181005).
My question is: if I don't send the Date as part of the queries (most of the time we request the object by ID), will the partition key help performance?
I believe that it will, since it will physically organize documents better, but I would love some feedback.
Thanks
The document id is only unique within its own logical partition. You can have multiple documents with the exact same id property as long as they are in different logical partitions.
If you partition your collection you have to deal with two (of many) realities:
The logical partition size cannot exceed 10GB
In order to have efficient queries and reads you have to provide the partition key value alongside your operations.
You can still run any query as a cross-partition query, but this is something that should be avoided if possible. If you find yourself needing cross-partition queries frequently, then there is a problem with your partitioning strategy.
The bottom line is that your query performance will be far worse if no partition key value is provided during the query.
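To illustrate the difference, here is a rough sketch with the Python azure-cosmos SDK; the account details, names, and values are placeholders, and it assumes the container is partitioned on /Date as proposed in the question:

```python
# Sketch only: endpoint, key, database, container, and values are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
container = client.get_database_client("mydb").get_container_client("mycontainer")

# Efficient: id plus partition key value -> a point read against one partition.
item = container.read_item(item="order-42", partition_key="20181005")

# Works, but fans out to every partition because no partition key is supplied.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.id = @id",
    parameters=[{"name": "@id", "value": "order-42"}],
    enable_cross_partition_query=True,
))
```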

How to decide a good partition key for Azure Cosmos DB

I'm new to Azure Cosmos DB, but I want to have a vivid understanding of:
What is the partition key?
My understanding is shallow for now -> items with the same partition key will go to the same partition for storage, which allows better load balancing as the system grows bigger.
How to decide on a good partition key?
Could somebody please provide an example?
Thanks a lot!
You have to choose your partition key based on your workload. Workloads can be classified into two types:
Read Heavy
Write Heavy
Read-heavy workloads are those where the data is read more than it is written, like a product catalog, where the insert/update frequency of the catalog is low and the number of people browsing products is high.
Write-heavy workloads are those where the data is written more than it is read. A common scenario is IoT devices sending readings from multiple sensors; you will be writing lots of data to Cosmos DB because you may get data every second.
For a read-heavy workload, choose as partition key the property that is used as the filter in your queries. In the product example that would be the product id, which will mostly be used to fetch the data when the user wants to read the product information and browse its reviews.
For a write-heavy workload, choose as partition key a property that is highly unique. For example, in the IoT scenario, use a partition key such as deviceid_signaldatetime, concatenating the id of the device that sends the signal with the DateTime of the signal, which gives more uniqueness.
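As a sketch of building that kind of synthetic key before writing the document (the field names and the /partitionKey path are assumptions for illustration, not a prescribed schema):

```python
# Sketch only: field names and the /partitionKey path are illustrative assumptions.
from datetime import datetime, timezone

def make_signal_document(device_id: str, reading: float) -> dict:
    signal_time = datetime.now(timezone.utc)
    return {
        "id": f"{device_id}-{signal_time.timestamp()}",
        "deviceId": device_id,
        "signalTime": signal_time.isoformat(),
        "reading": reading,
        # deviceid_signaldatetime: spreads writes from one device over many
        # logical partitions instead of creating a single hot partition.
        "partitionKey": f"{device_id}_{signal_time.strftime('%Y%m%d%H%M%S')}",
    }

doc = make_signal_document("thermostat-007", 21.5)
# container.create_item(body=doc)  # with an azure-cosmos container partitioned on /partitionKey
```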
1. What is the partition key?
In Azure Cosmos DB there are two kinds of partitions: physical partitions and logical partitions.
A. A physical partition is a fixed amount of reserved SSD-backed storage combined with a variable amount of compute resources.
B. A logical partition is a partition within a physical partition that stores all the data associated with a single partition key value.
I think the partition key you mention is the logical partition key. The partition key acts as a logical partition for your data and provides Azure Cosmos DB with a natural boundary for distributing data across physical partitions. For more details, you could refer to How does partitioning work.
2. How to decide on a good partition key? Could somebody please provide an example?
You need to pick a property that has a wide range of values and even access patterns. An ideal partition key is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure your solution is scalable.
For example, suppose your data has fields named id and color, and you filter on color more frequently. Then you should pick color, not id, as the partition key, which is more efficient for your query performance: every item has a different id but many items share a color, so color matches your common filter while still having a wide range of values, and when you add a new color the key simply produces a new logical partition, so it stays scalable.
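A minimal sketch of that example with the Python azure-cosmos SDK, where the account details and container name are placeholders:

```python
# Sketch only: endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
database = client.create_database_if_not_exists(id="shop")
container = database.create_container_if_not_exists(
    id="items_by_color",
    partition_key=PartitionKey(path="/color"),
)

container.upsert_item({"id": "1", "color": "red", "name": "mug"})
container.upsert_item({"id": "2", "color": "blue", "name": "plate"})

# The frequent filter matches the partition key, so the query is routed
# to the single logical partition holding the "red" items.
red_items = list(container.query_items(
    query="SELECT * FROM c WHERE c.color = @color",
    parameters=[{"name": "@color", "value": "red"}],
    partition_key="red",
))
```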
For more details, please read Partition and scale in Azure Cosmos DB.
Hope it helps you.

partitionkey and rowkey in azure table storage

I understand the benefit of a partition key in Azure Table Storage. However, given my relational database background, I am a bit confused about how to retrieve an entity from Azure Table Storage given just the rowkey. As far as I know, this is impossible, which means I have to store the partition key/rowkey pair somewhere just to get the entity given the rowkey. Should I just introduce a 'sharding' table with one arbitrary partition key, which allows me to look up the partition key given the rowkey?
It is possible but will result in a table scan as described in this section of MSDN.
If you don't need multiple partitions then it is absolutely fine to use a single partition (e.g. using a constant PartitionKey), as long as your data isn't going to be enormous in size and doesn't need the scalability of multiple partitions.
Another possible approach is to use your current RowKey as the PartitionKey, which would give you a highly scalable solution but would result in bad performance if you need to query ranges of rows.
The linked MSDN page talks about the pros and cons of both so I think with your knowledge about your specific problem domain you should be able to find a balanced solution.
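For what it's worth, here is a rough sketch of the "RowKey as PartitionKey" approach using the azure-data-tables Python SDK; the connection string, table name, and entity values are placeholders:

```python
# Sketch only: connection string, table name, and values are placeholders.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<your-connection-string>")
table = service.create_table_if_not_exists("customers")

table.create_entity({
    "PartitionKey": "cust-42",   # same value as RowKey
    "RowKey": "cust-42",
    "Name": "Contoso",
})

# Point lookup: both keys known, a single-partition read.
entity = table.get_entity(partition_key="cust-42", row_key="cust-42")

# RowKey only: the service has to scan partitions to find matches.
matches = list(table.query_entities("RowKey eq 'cust-42'"))
```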

What is the disadvantage to unique partition keys?

My data set will only ever be directly queried (meaning I am looking up a specific item by some identifier) or will be queried in full (meaning return every item in the table). Given that, is there any reason to not use a unique partition key?
From what I have read (e.g.: https://azure.microsoft.com/en-us/documentation/articles/storage-table-design-guide/#choosing-an-appropriate-partitionkey) the advantage of a non-unique partition key is being able to do transactional updates. I don't need transactional updates in this data set so is there any reason to partition by anything other than some unique thing (e.g., GUID)?
Assuming I go with a unique partition key per item, this means that each partition will have one row in it. Should I repeat the partition key in the row key or should I just have an empty string for a row key? Is a null row key allowed?
Zhaoxing's answer is essentially correct but I want to expand on it so you can understand a bit more why.
A table partition is defined as the table name plus the partition key. A single server can have many partitions, but a partition can only ever be on one server.
This fundamental design means that access to entities stored in a single partition cannot be load-balanced; it is also what allows partitions to support atomic batch transactions. For this reason, the scalability target for an individual table partition is lower than for the table service as a whole. Spreading entities across many partitions allows Azure Storage to scale your load much better.
Point queries are optimal, which is great because it sounds like that's what you will be doing a lot of. If the partition key has no logical meaning (i.e., you won't want all the entities in a particular partition), you're best off splitting out to many partition keys. Listing all entities in a table will always be slower because it's a scan. Azure Storage will return continuation tokens if the query hits a timeout, 1000 entities, or a server boundary (as discussed above). Many of the storage client libraries have convenience methods which help you handle this by automatically following these tokens as you iterate through the list, as in the sketch below.
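For example, with the azure-data-tables Python SDK (the table name and connection string are placeholders), the pager returned by list_entities follows those continuation tokens for you as you iterate:

```python
# Sketch only: connection string and table name are placeholders.
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<your-connection-string>",
                                            table_name="customers")

# Iterating the pager transparently issues follow-up requests whenever the
# service returns a continuation token (timeout, 1000 entities, or a
# partition/server boundary).
for entity in table.list_entities():
    print(entity["PartitionKey"], entity["RowKey"])
```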
TL;DR: With the information you've given I'd recommend a unique partition key per item. Null row keys are not allowed, but however else you'd like to construct the row key is fine.
Reading:
Azure Storage Table Design Guide
Azure Storage Performance Check List
If you don't need EntityGroupTransaction to update entities in batches, unique partition keys are a good option for you.
The table service's auto-scale feature may not work perfectly, I think. When some of the data in a partition is 'hot', the table service will move it to another cluster to improve performance. But since you have a unique partition key per entity, probably none of your entities will be identified as 'hot', whereas if you grouped them into partitions, some partitions would become 'hot' and be moved. The problem below may also apply if you are using a static partition key.
Besides, the table service may return only part of your query results when:
There are more than 1000 entities in the result.
A partition boundary is crossed.
From your question, you also need full queries (returning all entities). If you are using unique partition keys, this means each entity is its own partition, so your query may return only one entity along with a continuation token, and you need to fire another query with that continuation token to retrieve the next entity. I don't think this is what you want.
So my suggestion is to select a reasonable partition key in any case, even if it looks useless for your business, because it helps the table service optimize your data.
