Is there a way to calculate how many RUs I would need if a DocumentDB database is expected to have roughly 800 writes a second and 1,500 reads a second?
Each read is a simple retrieve based on the index, and each item will have about 15 small data fields (a few bools, short strings, and short doubles).
Each write will be an update of most of the data values for the record.
The documentation states that 1 RU equals a 1 KB GET. Each GET in this instance should be less than 1 KB, so I suspect the reads would be about 1,500 RU/s, but I have no idea how to calculate the writes; any help would be greatly appreciated.
There's a simple-to-use capacity planning tool available online. You upload a sample JSON document, specify how many reads and writes per second you expect, and it estimates your required RU/s throughput.
As David so eloquently pointed out, this should only be used as a starting point to give you a ballpark of what your minimum RU cost might be. If your primary read pattern were simply retrieving documents directly by their id, it might be relatively accurate. In reality, RU cost is calculated based on the complexity of your queries, so once you have your baseline it's important to do a proper analysis of your query patterns and get a feel for their RU cost.
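One practical way to do that analysis is to read the request charge the service returns with every response. Here is a minimal sketch using the current .NET SDK (Microsoft.Azure.Cosmos); the account, container, and query are placeholder assumptions, not anything from the question:

    using System;
    using Microsoft.Azure.Cosmos;

    // Placeholder endpoint, key, and names -- substitute your own.
    var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");
    Container container = client.GetContainer("mydb", "mycollection");

    // Run a representative query and sum the RU charge reported per result page.
    double totalCharge = 0;
    FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
        "SELECT * FROM c WHERE c.customerId = 'abc'");
    while (iterator.HasMoreResults)
    {
        FeedResponse<dynamic> page = await iterator.ReadNextAsync();
        totalCharge += page.RequestCharge;   // RU consumed by this page
    }
    Console.WriteLine($"Query cost: {totalCharge} RU");

Running your real query shapes against representative documents this way gives you a much better baseline than the planner alone.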
Luckily, the ease and speed with which you can scale Cosmos in response to load is, in my opinion, one of its most compelling features. In my experience, adding or removing RU throughput is done within a matter of seconds, so you can definitely add a layer of intelligent database tuning within your application to optimize your cost and usage.
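For instance, with the same .NET SDK the provisioned throughput can be read and changed in a couple of calls (again a sketch; names and RU values are placeholders):

    using System;
    using Microsoft.Azure.Cosmos;

    // Placeholder account and container names.
    var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");
    Container container = client.GetContainer("mydb", "mycollection");

    // Read the currently provisioned RU/s, then raise it ahead of a burst of load.
    int? current = await container.ReadThroughputAsync();
    Console.WriteLine($"Current throughput: {current} RU/s");

    await container.ReplaceThroughputAsync(4000);   // scale up before the burst
    // ... run the heavy workload ...
    await container.ReplaceThroughputAsync(1000);   // scale back down afterwards

Your application (or a scheduled job) can make these calls around known traffic spikes so you only pay for the extra throughput while you need it.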
Related
I have a question regarding Cosmos DB and how to deal with costs and RUs. Let's say I have a JSON document which is around 100 KB. When I want to update a property and use an upsert, Cosmos will do a replace, which is essentially a delete and create, and that will result in relatively high RU consumption, right?
My first question is: can I reduce the number of RUs consumed per update by splitting the document into smaller parts, say 10 x 10 KB documents? Then a smaller document has to be replaced, which needs less CPU, etc.
So that would be the case for upserts. But now there is a game changer for Cosmos DB called partial update.
How would it be in that case? Would smaller documents lead to a decrease in RU consumption? In the background Cosmos still has to parse the document and insert the new property, so does a bigger document mean more to parse and therefore more RU consumption?
My last question is: will splitting into more documents lead to higher RU consumption because I have to make 10 requests instead of one?
I'm going to preface my answer with the comment that anything performance-related is something that users need to test themselves, because the benefits or trade-offs can vary widely. There is, however, some general guidance around this.
In scenarios where you have very large documents with frequent updates to a small number of properties, it is often better to shred that document into one document that has the frequently updated properties and another that has the static properties. Smaller documents cost fewer RUs to update and also reduce the load on the client and the network payload.
Partial update provides zero benefit over Update or Upsert in terms of RU/s, regardless of whether you shred the document or not. The service still needs to patch and merge the entire document. It only reduces CPU consumption and network payload because less data travels over the wire.
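If you want to measure this yourself, partial update is exposed in the .NET SDK as PatchItemAsync, and the response carries the RU charge that you can compare against a full upsert of the same document. A minimal sketch (the ids, partition key, and property path are made up):

    using System;
    using System.Collections.Generic;
    using Microsoft.Azure.Cosmos;

    // Placeholder account and container; the ids and path below are hypothetical.
    var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");
    Container container = client.GetContainer("mydb", "mycollection");

    // Partial update: send only the changed property instead of the whole document.
    ItemResponse<dynamic> patchResponse = await container.PatchItemAsync<dynamic>(
        id: "order-123",
        partitionKey: new PartitionKey("customer-1"),
        patchOperations: new List<PatchOperation>
        {
            PatchOperation.Replace("/status", "shipped")
        });

    // Compare this against the RequestCharge of a full UpsertItemAsync of the same
    // document; per the note above, expect similar RU but a smaller network payload.
    Console.WriteLine($"Patch cost: {patchResponse.RequestCharge} RU");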
I'm a beginner with Azure. I'm using log monitoring to view the logs for a Cosmos DB resource, and I can see one log entry with a Replace operation that is consuming a lot of average RUs.
Generally, operation names should be CREATE/DELETE/UPDATE/READ, so I could not understand why a REPLACE operation shows up here. And why is the REPLACE operation consuming so many RUs?
What can I try next?
Updates in Cosmos DB are full replacement operations rather than in-place updates; as such, they consume more RU/s than inserts. Also, the larger the document, the more throughput the update requires.
Strategies to optimize throughput consumption on update operations typically center around splitting the document in two: properties that don't change go into one document, which is typically larger, and properties that change frequently go into another, smaller document. This allows updates to be made against the smaller document, which reduces the RU/s consumed by the operation.
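As a rough illustration only (the document shapes and names below are hypothetical, not from the question), a "shredded" pair might look like this, with only the small, frequently changing document being rewritten on each update:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    // Static properties live in one (larger) document...
    public class ProductDetails
    {
        public string id { get; set; }
        public string productId { get; set; }   // partition key
        public string name { get; set; }
        public string description { get; set; } // big payload that never changes
    }

    // ...frequently changing properties live in a second, much smaller one.
    public class ProductInventory
    {
        public string id { get; set; }
        public string productId { get; set; }   // same partition key
        public int quantityInStock { get; set; }
    }

    public static class InventoryUpdater
    {
        // Only the small document is replaced, so the per-update RU charge stays low.
        public static async Task<double> UpdateStockAsync(
            Container container, string productId, int newQuantity)
        {
            var inventory = new ProductInventory
            {
                id = $"{productId}-inventory",
                productId = productId,
                quantityInStock = newQuantity
            };
            ItemResponse<ProductInventory> response = await container.UpsertItemAsync(
                inventory, new PartitionKey(productId));
            return response.RequestCharge;   // RU for updating just the small document
        }
    }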
All that said, 12 RU/s is not an inordinate amount for a replace operation. I don't think you will get much, if any, throughput reduction by doing this, but you can certainly try.
We are planning on writing 10,000 JSON documents to Azure Cosmos DB (MongoDB API). Do the throughput units matter? If so, can we increase them for the batch load and set them back to a low number afterwards?
Yes, you can do that. The lowest the RUs can go is 400. Scale up before you're about to do your insert and then turn it down again. As always, that part can be automated if you know when the documents are going to be inserted.
Check out the DocumentClient documentation and more specifically ReplaceOfferAsync.
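The usual pattern with that (older) SDK is to read the collection's current Offer, wrap it in an OfferV2 with the new throughput, and replace it. A sketch with placeholder account and collection names:

    using System;
    using System.Linq;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Placeholder endpoint, key, and ids.
    var client = new DocumentClient(new Uri("https://<account>.documents.azure.com:443/"), "<key>");
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycollection");
    DocumentCollection collection =
        (await client.ReadDocumentCollectionAsync(collectionUri)).Resource;

    // Find the Offer (throughput setting) attached to the collection.
    Offer offer = client.CreateOfferQuery()
        .Where(o => o.ResourceLink == collection.SelfLink)
        .AsEnumerable()
        .Single();

    // Scale up to 10,000 RU/s for the batch load; when the load is done,
    // repeat with a lower value (400 is the minimum) to scale back down.
    await client.ReplaceOfferAsync(new OfferV2(offer, 10000));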
You can scale the RU/sec allocation up or down at any time. You'll want to look at your insertion cost (RU cost is returned in a header) for a typical document, to get an idea of how many documents you might be able to write, per second, before getting throttled.
Also keep in mind: if you scale your RUs out beyond what an underlying physical partition can provide, Cosmos DB will scale out your collection with additional physical partitions. This means you might not be able to scale your RUs back down to the bare minimum later (though you will still be able to scale down to some extent).
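To put a number on the insertion cost mentioned above, you can write one representative document, read the charge off the response, and divide your provisioned RU/s by it. A sketch with the older DocumentClient SDK (the names and sample document are made up):

    using System;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Placeholder endpoint, key, and ids.
    var client = new DocumentClient(new Uri("https://<account>.documents.azure.com:443/"), "<key>");
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "mycollection");

    // Write one representative document and read the RU charge from the response.
    var sampleDoc = new { id = Guid.NewGuid().ToString(), name = "sample", value = 42 };
    ResourceResponse<Document> response = await client.CreateDocumentAsync(collectionUri, sampleDoc);

    double ruPerInsert = response.RequestCharge;   // varies with document size and indexing
    double provisionedRu = 10000;                  // whatever you scaled the collection to
    Console.WriteLine($"Roughly {provisionedRu / ruPerInsert:F0} inserts/sec before throttling (429s)");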
We migrated our mobile app (still being developed) from Parse to Azure. Everything is running, but the price of DocumentDB is so high that we can't continue with Azure without fixing that. We're probably doing something wrong.
1) The price seems to bottleneck on the DocumentDB requests.
While running a process to load the data (about 0.5 million documents), memory and CPU were OK, but the DocumentDB request limit was a bottleneck, and the price charged was very high.
2) Even after the end of this data migration (a few days of processing), Azure continues to charge us every day.
We can't understand what is going on here. The usage graphs are flat, but the price is still climbing, as you can see in the images.
Any ideas?
Thanks!
From your screenshots, you have 15 collections under the Parse database. With Parse, aside from the system classes, each of your user-defined classes gets stored in its own collection. Given that each (non-partitioned) collection has a starting run rate of ~$24/month (for an S1 collection), you can see where the baseline cost for 15 collections comes from (around $360/month).
You're paying for reserved storage and RU capacity. Regardless of RU utilization, you pay whatever the cost is for that capacity (e.g. S2 runs around $50/month per collection, even if you don't execute a single query). This is similar to spinning up a VM of a certain CPU capacity and then running nothing on it.
The default throughput setting for the Parse collections is 1,000 RU/s, which costs $60 per collection per month (at the rate of $6 per 100 RU/s). Once you finish the Parse migration, the throughput can be lowered if you believe the workload has decreased, which will reduce the charge.
To learn how to do this, take a look at https://azure.microsoft.com/en-us/documentation/articles/documentdb-performance-levels/ (Changing the throughput of a Collection).
The key thing to note is that DocumentDB delivers predictable performance by reserving resources to satisfy your application's throughput needs. Because application load and access patterns change over time, DocumentDB allows you to easily increase or decrease the amount of reserved throughput available to your application.
Azure is a "pay-for-what-you-use" model, especially around resources like DocumentDB and SQL Database where you pay for the level of performance required along with required storage space. So if your requirements are that all queries/transactions have sub-second response times, you may pay more to get that performance guarantee (ignoring optimizations, etc.)
One thing I would seriously look into is the DocumentDB cost estimation tool; this lets you estimate throughput costs for different transaction types, based on sample JSON documents you provide.
So in this example, I have an 8 KB JSON document, where I expect to store 500K of them (to get an approximate storage cost), and I specify that I need throughput to create 100 documents/sec, read 10/sec, and update 100/sec (I used the same document as an example of what the update will look like).
NOTE: this needs to be done PER DOCUMENT TYPE -- if you're storing documents that don't necessarily conform to a given "schema" or structure in the same collection, you'll need to repeat this process for EVERY type of document.
Based on this information, I can use those values as inputs into the pricing calculator. This tells me that I can estimate about $450/mo for DocumentDB services alone (if this was my anticipated usage pattern).
There are additional ways you can optimize Request Units (RUs -- the metric used to measure the cost of a given request/transaction, and what you're billed for): optimizing indexing strategies, optimizing queries, etc. Review the documentation on Request Units for more details.
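As one concrete example of the indexing side: if there are properties you never filter or sort on, excluding their paths from the indexing policy reduces the RU cost of every write. A sketch with the current .NET SDK (the container, partition key, and excluded path are hypothetical):

    using Microsoft.Azure.Cosmos;

    // Placeholder account, database, and container names.
    var client = new CosmosClient("https://<account>.documents.azure.com:443/", "<key>");
    Database database = (await client.CreateDatabaseIfNotExistsAsync("mydb")).Database;

    // Index everything except a path that is never queried on; less indexing
    // work per write means fewer RUs per insert/update.
    var containerProperties = new ContainerProperties(id: "events", partitionKeyPath: "/deviceId");
    containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
    containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/rawPayload/*" });

    await database.CreateContainerIfNotExistsAsync(containerProperties, throughput: 1000);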
I use Azure Table storage as a time series database. The database is constantly extended with more rows (approximately 20 rows per second for each partition). Every day I create new partitions for that day's data so that all partitions have a similar size and never get too big.
Until now everything worked flawlessly: when I wanted to retrieve data from a specific partition, it never took more than 2.5 seconds for 1,000 values, and on average it took about 1 second.
When I tried to query all the data of a partition, though, things got really slow; towards the middle of the procedure each query would take 30-40 seconds for 1,000 values.
So I cancelled the procedure just to restart it over a smaller range, but now all queries take too long; from the beginning, every query needs 15-30 seconds. Could it be that the data got rearranged in an inefficient way and that's why I am seeing this dramatic decrease in performance? If so, is there a way to handle such a rearrangement?
I would definitely recommend going over the links Jason pointed to above. You have not given much detail about how you generate your partition keys, but from the sound of it you are falling into several anti-patterns, including the append (or prepend) anti-pattern and too many entities in a single partition. I would recommend reducing your partition size and also putting either a hash or a random prefix on your partition keys so they are not in lexicographical order.
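A minimal sketch of the hash-prefix idea (the bucket count and key layout are arbitrary choices, not anything prescribed): hashing the natural key into a small bucket prefix breaks the strict date ordering of the raw keys while staying deterministic, so readers can recompute the full partition key from the natural key alone.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class PartitionKeyHelper
    {
        private const int BucketCount = 16;   // arbitrary; tune for your load

        // e.g. WithHashPrefix("20240115-sensor42") -> "07_20240115-sensor42"
        public static string WithHashPrefix(string naturalKey)
        {
            using var md5 = MD5.Create();
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(naturalKey));
            int bucket = hash[0] % BucketCount;

            // The deterministic prefix means consecutive days no longer land in
            // adjacent key ranges, so writes are spread across partition servers.
            return $"{bucket:D2}_{naturalKey}";
        }
    }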
Azure Storage follows a range-partitioning scheme in the background, so even if the partition keys you pick are unique, if they are sequential they will fall into the same range and potentially be served by a single partition server, which hampers the ability of the Azure Storage service to load balance and scale out your storage requests.
The other aspect you should think about is how you read the entities back. The best option is a point query with both partition key and row key; the worst is a full table scan with neither PK nor RK. In between sits the partition scan, which in your case will also perform pretty badly because of your partition size.
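For reference, the two ends of that spectrum look like this with the current Azure.Data.Tables SDK (the table name and keys are made up):

    using System;
    using Azure.Data.Tables;

    // Placeholder connection string, table name, and keys.
    var table = new TableClient("<connection-string>", "timeseries");

    // 1. Point query: addresses exactly one entity by PK + RK -- the fast path.
    TableEntity single = await table.GetEntityAsync<TableEntity>(
        partitionKey: "07_20240115-sensor42", rowKey: "000123");

    // 2. Partition scan: bounded to one partition, but walks every entity in it,
    //    which gets slow when the partition holds millions of rows.
    await foreach (TableEntity entity in table.QueryAsync<TableEntity>(
        filter: "PartitionKey eq '07_20240115-sensor42'"))
    {
        Console.WriteLine(entity.RowKey);
    }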
One of the challenges with time series data is that you can end up writing all your data to a single partition, which prevents Table storage from allocating additional resources to help you scale. Similarly, for read operations you are constrained by potentially having all your data in a single partition, which means you are limited to 2,000 entities/second; whereas if you spread your data across multiple partitions, you can parallelize the query and yield far greater scale.
Do you have Storage Analytics enabled? I would be interested to know if you are getting throttled at all or what other potential issues might be going on. Take a look at the Storage Monitoring, Diagnosing and Troubleshooting guide for more information.
If you still can't find the information you want, please email AzTableFeedback@microsoft.com and we would be happy to follow up with you.
The Azure Storage Table Design Guide talks about general scalability guidance as well as patterns / anti-patterns (see the append only anti-pattern for a good overview) which is worth looking at.