I'm trying to bulk load about 25 million rows from an Azure SQL table into three different tables in Azure Table Storage. I'm currently managing to process about 50-100 rows / second, which means that at current speeds, it'll take me about 70-140 hours to finish the load. That's a long time, and it seems like it ought to be possible to speed that up.
Here's what I'm doing:
Kick off 10 separate tasks
For each task, read the next 10,000 unprocessed records from the SQL DB
For each of the three destination ATS tables, group the 10,000 records by that table's partition key
In parallel (up to 10 simultaneously), for each partition key, segment the partition into (max) 100-row segments
In parallel (up to 10 simultaneously), for each segment, create a new TableBatchOperation.
For each row in the segment, execute a batch.InsertOrReplace() statement (because some of the data has already been loaded, and I don't know which)
Execute the batch asynchronously
Rinse and repeat (with lots of flow control, error checking, etc.); a simplified sketch of this batching loop is shown below
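For reference, here is a stripped-down sketch of that inner batching loop, assuming the classic Microsoft.WindowsAzure.Storage SDK (the entity type is a placeholder for whatever holds a row):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public class EventEntity : TableEntity { /* row properties go here */ }

static class BulkLoader
{
    public static async Task WriteRowsAsync(CloudTable table, IEnumerable<EventEntity> rows)
    {
        // A batch may only touch one partition, so group by partition key first.
        foreach (var partition in rows.GroupBy(r => r.PartitionKey))
        {
            var batch = new TableBatchOperation();
            foreach (var row in partition)
            {
                batch.InsertOrReplace(row);          // some rows may already exist
                if (batch.Count == 100)              // 100-entity batch limit
                {
                    await table.ExecuteBatchAsync(batch);
                    batch = new TableBatchOperation();
                }
            }
            if (batch.Count > 0)
                await table.ExecuteBatchAsync(batch);
        }
    }
}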
Some notes:
I've tried this several different ways, with lots of different parameters for the various numbers up above, and I'm still not getting it down to less than 10-20 ms / event.
It doesn't seem to be CPU bound, as the VM doing the load is averaging about 10-20% CPU.
It doesn't seem to be SQL-bound, as the SQL select statement is the fastest part of the operation by at least two orders of magnitude.
It's presumably not network-bound, as the VM executing the batch is in the same data center (US West).
I'm getting reasonable partition density, i.e., each 10K set of records is getting broken up into a couple hundred partitions for each table.
With perfect partition density, I'd have up to 3000 tasks running simultaneously (10 master tasks * 3 tables * 10 partitions * 10 segments). But they're executing asynchronously, and they're nearly all I/O bound (by ATS), so I don't think we're hitting any threading limits on the VM executing the process.
The only other obvious idea I can come up with is one that I tried earlier on, namely, to do an order by partition key in the SQL select statements, so that we can get perfect partition density for the batch inserts. For various reasons that has proven to be difficult, as the table's indexes aren't quite set up for that. And while I would expect some speedup on the ATS side using that approach, given that I'm already grouping the 10K records by their partition keys, I wouldn't expect to get that much additional performance improvement.
Any other suggestions for speeding this up? Or is this about as fast as anybody else has been able to get?
Still open to other suggestions, but I found this page here quite helpful:
http://blogs.msmvps.com/nunogodinho/2013/11/20/windows-azure-storage-performance-best-practices/
Specifically, these:
ServicePointManager.Expect100Continue = false;
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.DefaultConnectionLimit = 100;
With those, I was able to drop the average processing time from ~10-20 ms / event down to ~2 ms.
Much better.
But as I said, still open to other suggestions. I've read about other folks getting upwards of 20,000 operations per second on ATS, and I'm still stuck around 500.
What about your partition keys? If they are incremental numbers, then Azure will optimize them onto one storage node. So you should use completely different partition keys, e.g. "A1", "B2", etc. instead of "1", "2", etc.
In that situation all of your partitions will be handled by different storage nodes, and performance will be multiplied.
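A minimal sketch of that idea in C# (the bucketing scheme is purely illustrative; a deterministic hash is used so the same natural key always maps to the same prefix):

using System;
using System.Security.Cryptography;
using System.Text;

static class KeyHelper
{
    // Illustrative: prepend a deterministic hash bucket so keys like "1", "2", "3"
    // don't all sort into one contiguous range on a single partition server.
    public static string MakePartitionKey(string naturalKey)
    {
        using (var md5 = MD5.Create())
        {
            int bucket = md5.ComputeHash(Encoding.UTF8.GetBytes(naturalKey))[0] % 10;
            return bucket + "_" + naturalKey;   // e.g. "7_12345" instead of "12345"
        }
    }
}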
I have a table in a Cassandra DB that is populated. It probably has around 10,000 records. When I try to execute SELECT COUNT(*), my query times out. Surprisingly, it times out even when I restrict the query with the partition key. The table has a column that is filled with a lot of text. I can't understand how that would be a problem, but I thought I'd mention it. Any suggestions?
Doing a COUNT() of the rows in a partition shouldn't time out unless it contains thousands and thousands of rows. More importantly, the query is most likely timing out when the partition contains thousands of tombstones.
You haven't provided a lot of information in your question and ideally you should have included:
the table schema
a sample query
In any case if you are storing queue-like datasets and deleting rows after they've been processed (because it's a queue) then you are generating lots of tombstones within the partition. Once you've reached the maximum tombstone_failure_threshold (default is 100K tombstones) then Cassandra will stop reading any more rows.
Unfortunately, it's hard to say what's happening in your case without the necessary details. Cheers!
A SELECT COUNT(*) needs to scan the entire table and can potentially take an extremely long time - much longer than the typical timeout. Ideally, such a query would be paged - periodically returning empty pages until the final page contains the count - to avoid timing out. But currently in Cassandra - and also in Scylla - this isn't done.
As Erick noted in his reply, everything becomes worse if you also have a lot of tombstones: You said you only have 10,000 rows, but it's easy to imagine a use case where the data changes frequently, and you actually have for each row 100 deleted rows - so Cassandra needs to scan through 1 million rows (most of them already dead), not 10,000.
Another issue to consider is that when your cluster is very large, a scan usually contacts nodes sequentially, and each node many times (depending on the number of vnodes), so the scan time on a very large cluster will be long even if there are just a few actual rows in the database. By the way, unlike a regular scan, an aggregation like COUNT(*) can actually be done internally in parallel. Scylla recently implemented this and it speeds up counts (and other aggregations), but if I understand correctly, this feature is not in Cassandra.
Finally, you said that "Surprisingly, it times out even when I restrict the query with the partition key." The question is how you restricted the query with a partition key. If you restricted the partition key itself to a range, it will still be slow, because Cassandra still needs to scan all the partitions and compare their keys to the range. What you should have done is restrict the token of the partition key, e.g., something like
where token(p) >= -9223372036854775808 and token(p) < ....
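To make that concrete, here is a rough client-side sketch using the DataStax C# driver (the keyspace, table name my_table and partition key column p are placeholders) that splits the full Murmur3 token range into slices and sums per-slice counts instead of issuing one giant COUNT(*):

using System;
using System.Linq;
using System.Numerics;
using Cassandra;   // DataStax C# driver (assumed)

var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect("my_keyspace");                 // placeholder keyspace

BigInteger min = BigInteger.Parse("-9223372036854775808");    // Murmur3 minimum token
BigInteger max = BigInteger.Parse("9223372036854775807");     // Murmur3 maximum token
int slices = 16;
BigInteger step = (max - min) / slices;

long total = 0;
for (int i = 0; i < slices; i++)
{
    BigInteger lo = min + step * i;
    bool last = (i == slices - 1);
    BigInteger hi = last ? max : lo + step;
    string upperOp = last ? "<=" : "<";
    // Each slice is a small, token-routed scan instead of one huge full scan.
    var row = session.Execute(
        $"SELECT count(*) FROM my_table WHERE token(p) >= {lo} AND token(p) {upperOp} {hi}").First();
    total += row.GetValue<long>("count");
}
Console.WriteLine($"Total rows: {total}");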
I have a single structured row as input, with a write rate of 10K per second. Each row has 20 columns. Several queries need to be answered over these inputs. Because most of the queries need a different WHERE, GROUP BY or ORDER BY, the final data model ended up like this:
primary key for table of query1 : ((column1,column2),column3,column4)
primary key for table of query2 : ((column3,column4),column2,column1)
and so on
I am aware of the limit on the number of tables in a Cassandra data model (200 triggers a warning and 500 would fail).
Because for every input row I have to do an insert into every table, the writes per second multiply quickly:
writes per second = 10K (input) * number of tables (queries) * replication factor
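For example (numbers purely illustrative): with 10 query tables and a replication factor of 3, that is 10K * 10 * 3 = 300K row writes per second hitting the cluster.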
The main question: am I on the right path? Is it normal to have a table for every query even when the input rate is already so high?
Shouldn't I use something like Spark or Hadoop instead of relying on the bare data model? Or even HBase instead of Cassandra?
It could be that Elassandra would resolve your problem.
The query system is quite different from CQL, but the duplication for indexing would automatically be managed by Elassandra on the backend. All the columns of one table will be indexed so the Elasticsearch part of Elassandra can be used with the REST API to query anything you'd like.
In one of my tests, I pushed a huge amount of data (8 GB) to an Elassandra database non-stop, and it never timed out. The search engine also remained responsive pretty much the whole time. More or less what you are talking about. The docs say that it takes 5 to 10 seconds for newly added data to become available in the Elassandra indexes. I guess it will depend somewhat on your installation, but I think that's more than enough speed for most applications.
The use of Elassandra may sound a bit hairy at first, but once in place, it's incredible how fast you can find results. It includes powerful WHERE support, for sure. The GROUP BY is a bit difficult to put in place. The ORDER BY is simple enough; however, when (re-)ordering you lose some speed... something to keep in mind. In my tests, though, even the ORDER BY equivalents were very fast.
I use Azure Table storage as a time series database. The database is constantly extended with more rows (approximately 20 rows per second for each partition). Every day I create new partitions for the day's data so that all partitions have a similar size and never get too big.
Until now everything worked flawlessly: when I wanted to retrieve data from a specific partition, it never took more than 2.5 seconds for 1,000 values, and on average it took about 1 second.
When I tried to query all the data of a partition, though, things got really slow; towards the middle of the procedure, each query would take 30-40 seconds per 1,000 values.
So I cancelled the procedure, just to restart it for a smaller range. But now all queries take too long. From the beginning, all queries need 15-30 seconds. Could that mean the data got rearranged in an inefficient way, and that's why I am seeing this dramatic decrease in performance? If so, is there a way to handle such a rearrangement?
I would definitely recommend going over the links Jason pointed to above. You have not given much detail about how you generate your partition keys, but from the sounds of it you are falling into several anti-patterns, including the append-only (or prepend-only) pattern and too many entities in a single partition. I would recommend reducing your partition size and also putting either a hash or a random prefix on your partition keys so they are not in lexicographical order.
Azure storage follows a range partitioning scheme in the background, so even if the partition keys you picked are unique, if they are sequential they will fall into the same range and potentially be served by a single partition server, which hampers the Azure storage service's ability to load balance and scale out your storage requests.
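As a rough illustration of that advice (the names and bucketing scheme are assumptions, not a prescription), a day-scoped partition key with a small deterministic hash prefix might be built like this:

using System;
using System.Security.Cryptography;
using System.Text;

static class TimeSeriesKeys
{
    // Illustrative: daily bucket plus a small hash prefix so consecutive days
    // don't land on one contiguous, lexicographically ordered key range.
    public static string MakePartitionKey(string seriesId, DateTime timestampUtc)
    {
        using (var md5 = MD5.Create())
        {
            int bucket = md5.ComputeHash(Encoding.UTF8.GetBytes(seriesId))[0] % 16;
            return $"{bucket:X}_{seriesId}_{timestampUtc:yyyyMMdd}";   // e.g. "A_sensor42_20240115"
        }
    }
}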
The other aspect you should think about is how you read the entities back. The best case is a point query with both partition key and row key; the worst is a full table scan with neither. In the middle is a partition scan, which in your case will also perform poorly because of your partition size.
One of the challenges with time series data is that you can end up writing all your data to a single partition which prevents Table Storage from allocating additional resources to help you scale. Similarly for read operations you are constrained by potentially having all your data in a single partition which means you are limited to 2000 entities / second - whereas if you spread your data across multiple partitions you can parallelize the query and yield far greater scale.
Do you have Storage Analytics enabled? I would be interested to know if you are getting throttled at all or what other potential issues might be going on. Take a look at the Storage Monitoring, Diagnosing and Troubleshooting guide for more information.
If you still can't find the information you want, please email AzTableFeedback@microsoft.com and we would be happy to follow up with you.
The Azure Storage Table Design Guide talks about general scalability guidance as well as patterns / anti-patterns (see the append only anti-pattern for a good overview) which is worth looking at.
I have a system that needs to process ~150 jobs per day.
Additionally, I need to query past jobs efficiently, usually by time-range, but sometimes by other properties like job owner or resource used.
I know that running queries across table partitions can slow down my application, but what if I just put every row into one partition? If I use datetime.ticks as my rowkey and my query ranges are always small, will this scale well?
I tried putting data into separate partitions by time, but it seems like my queries get slower as more partitions are included in the query.
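For context, a minimal sketch of the kind of query I mean, assuming the classic WindowsAzure.Storage SDK and a single "jobs" partition with zero-padded ticks as the RowKey:

using System;
using Microsoft.WindowsAzure.Storage.Table;

static class JobQueries
{
    // Illustrative: RowKey = zero-padded ticks, so a small time range becomes
    // a narrow RowKey range scan inside the single "jobs" partition.
    public static TableQuery<DynamicTableEntity> BuildRangeQuery(DateTime fromUtc, DateTime toUtc)
    {
        string lower = fromUtc.Ticks.ToString("D19");
        string upper = toUtc.Ticks.ToString("D19");

        string filter = TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "jobs"),
            TableOperators.And,
            TableQuery.CombineFilters(
                TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, lower),
                TableOperators.And,
                TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, upper)));

        return new TableQuery<DynamicTableEntity>().Where(filter);
    }
}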
The partition is a scale unit. Each partition can receive up to 2,000 transactions per second before you start receiving throttling errors. As such, as long as you don't foresee exceeding that volume, you should be fine keeping a single partition.
However, as the size of the partition grows, so will query times. So you may want to factor that in as well.
Is there a way around 500 entities / second / partition with ATS (Azure Table Storage)? I'm OK with dirty reads; if an insert is not immediately available for read, that's fine.
Looking to move some large tables from SQL to ATS.
Scale: Because of these tables, the database size is bumping against the 150 GB limit of SQL Azure.
Insert speed: The tables are an inverted index built for query speed. The insert order is not sorted by the table's clustered index, which causes rapid SQL table fragmentation. ATS most likely has an insert advantage over SQL.
Cost: ATS has a lower monthly cost. But ATS has a higher load cost, since there are millions of rows and the load cannot be batched because it does not arrive in partition order.
Query speed: A search is almost never on just one partitionKey. A search will have a SQL component and zero or more ATS components. The ATS query is always by partitionKey and returns rowKeys. A raw search on partitionKey is fast; the problem is the time to return the entities (rows). A given partitionKey has on average 1,000 rowKeys, which is 2 seconds at 500 entities / second / partition. But some partitionKeys have over 100,000 rowKeys, which equates to over 3 minutes. In SQL, returning 10,000 rows at a time, no query takes over 10 seconds, because with the power of joins I don't have to bring down 100,000 rows just to have them considered in the WHERE clause.
Is there a way around this entity retrieval speed with ATS? For scale and insert speed, I would like to move to ATS.
Windows Azure Storage Abstractions and their Scalability Targets
How to get most out of Windows Azure Tables
Designing a Scalable Partitioning Strategy for Windows Azure Table Storage
Turn entity tracking off for query results that are not going to be modified:
context.MergeOption = MergeOption.NoTracking;
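That property lives on the legacy TableServiceContext (DataServices-style) client; a minimal sketch, assuming you already have a storage connection string:

// Sketch only, legacy DataServices-style table client.
// Namespaces: Microsoft.WindowsAzure.Storage, Microsoft.WindowsAzure.Storage.Table.DataServices,
// System.Data.Services.Client (for MergeOption).
var storageAccount = CloudStorageAccount.Parse(connectionString);   // connectionString assumed
var context = storageAccount.CreateCloudTableClient().GetTableServiceContext();
context.MergeOption = MergeOption.NoTracking;   // query results won't be change-tracked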
One potential workaround is to stripe the data across multiple partitions and/or tables, perform queries across all the (sub)partitions in parallel and merge the results.
For example, for striping across partitions, prepending the partition key with a single digit can multiply the scalability of the partition 10 times.
So a partition key, say ABCDEFGH, could be sub partitioned 0ABCDEFGH to 9ABCDEFGH.
Writes are made to a partition, with the prefix digit generated either randomly or in round robin fashion.
Reads would query across all 10 partitions in parallel and merge the results.
For striping across tables, one of N tables can be written to randomly or in round robin fashion and queried similarly in parallel.
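A rough sketch of both halves, assuming the classic WindowsAzure.Storage SDK (the 10-way prefix and entity type are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

static class StripedTable
{
    private static readonly Random Rng = new Random();

    // Write side: pick a prefix digit at random (round robin works too),
    // turning "ABCDEFGH" into "0ABCDEFGH" .. "9ABCDEFGH".
    public static string StripedPartitionKey(string logicalKey)
    {
        return Rng.Next(0, 10) + logicalKey;
    }

    // Read side: query all 10 sub-partitions in parallel and merge the results.
    public static async Task<List<DynamicTableEntity>> ReadAllStripesAsync(CloudTable table, string logicalKey)
    {
        var stripes = Enumerable.Range(0, 10).Select(async i =>
        {
            var filter = TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, i + logicalKey);
            var query = new TableQuery<DynamicTableEntity>().Where(filter);

            var results = new List<DynamicTableEntity>();
            TableContinuationToken token = null;
            do
            {
                var segment = await table.ExecuteQuerySegmentedAsync(query, token);
                results.AddRange(segment.Results);
                token = segment.ContinuationToken;
            } while (token != null);
            return results;
        });

        return (await Task.WhenAll(stripes)).SelectMany(r => r).ToList();
    }
}

The read side pays for the fan-out with 10 queries' worth of work, but they run concurrently, so the wall-clock cost is roughly that of the slowest stripe.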
Edit: I had originally stated that the limit was 500 transaction/partition/sec. That was incorrect. The limit is actually 500 entities/partition/sec, as stated in the original question.
This also applies to the query speeds you've calculated. If you query an ATS PartitionKey and it returns 1,000 entities, that will likely take only a little longer than returning a single entity, perhaps a few hundred milliseconds. On the other hand, if the query returns more than 1,000 entities it will be much slower, as each set of 1,000 rows requires an essentially independent transaction and must be done serially.
It's not completely clear to me what you're doing, but it sounds like a lot of querying. Keep in mind that querying ATS on non-key columns tends to be very slow. If you're doing a lot of that, you might be better served by using SQL Azure Federations and fan-out queries instead.