We're going to be using Cassandra to store large volumes of data. The data is inserted and read but never updated or deleted. From what I've understood, DELETE ops lead to tombstones and UPDATE ops leave shadowed (obsolete) copies of the data behind.
To design around this, I intend to use monthly tables and to TRUNCATE and then DROP tables after n (approx. 4) months. Under the assumption that the data is evenly distributed and there's enough disk to store it, are there any other caveats to this approach?
On a side note, is there a technical term for this schema design? I'm happy to share more information if the question needs more details.
I have seen some projects that did exactly that to avoid deletions: they kept monthly tables and removed them after N months simply by dropping them (you don't need to TRUNCATE before DROP!). The drawback is that every application had to know about the table-naming scheme.
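For illustration, a minimal sketch of that pattern (table and column names here are made up):

    -- One table per month; the application picks the table by date.
    CREATE TABLE sensor_data_2024_01 (
        sensor_id text,
        ts        timestamp,
        value     blob,
        PRIMARY KEY (sensor_id, ts)
    );

    -- After N months the whole month is removed in one cheap operation,
    -- with no per-row deletes and no tombstones.
    DROP TABLE sensor_data_2023_09;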
But in your case Time Window Compaction Strategy (TWCS) in combination with TTLs may work better, because it drops whole SSTables when all the data in them has expired. You can look into this blog post for an explanation of how it works and where it should be used.
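As a rough sketch of that alternative (the names, the ~4-month retention, and the window size are assumptions to adapt to your requirements):

    CREATE TABLE sensor_data (
        sensor_id text,
        ts        timestamp,
        value     blob,
        PRIMARY KEY (sensor_id, ts)
    ) WITH default_time_to_live = 10368000   -- ~4 months, applied to every write
      AND compaction = {'class': 'TimeWindowCompactionStrategy',
                        'compaction_window_unit': 'DAYS',
                        'compaction_window_size': '7'};

Once every row in a given time window has passed its TTL, the corresponding SSTables are simply deleted instead of being compacted row by row.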
I need to capture time-series sensor data in Cassandra. The best practices for handling time-series data in DynamoDB are as follows:
Create one table per time period, provisioned with write capacity less than 1,000 write capacity units (WCUs).
Before the end of each time period, prebuild the table for the next period.
As soon as a table is no longer being written to, reduce its provisioned write capacity. Also reduce the provisioned read capacity of earlier tables as they age, and archive or delete the ones whose contents will rarely or never be needed.
Now I am wondering how I can implement the same concept in Cassandra! Is there any way to manually configure write/read capacity in Cassandra as well?
This really depends on your requirements, which you need to discuss with your development team, etc.
There are several ways to handle time-series data in Cassandra:
Have one table for everything. As Chris mentioned, just include the time component in the partition key, e.g. a day, and store data per sensor/day. If the data won't be updated, and you know in advance how long it will be kept so that you can set a TTL on it, then you can use TimeWindowCompactionStrategy. The advantage of this approach is that you have only one table and don't need to maintain multiple tables, which makes development and maintenance easier (see the sketch after this list).
The same approach as you described: create a separate table per period of time, e.g. a month, and write data into it. In this case you can effectively drop the whole table when the data "expires". With this approach you can update data if necessary, and you don't need to set a TTL on the data. But it requires more work from the development and ops teams, as you need to reach multiple tables. Also, take into account that there are limits on the number of tables in a cluster: it's recommended not to have more than ~200 tables, as every table requires memory for its metadata, etc. Although some things, like the bloom filter, can be tuned to occupy less memory for tables that are rarely read.
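To illustrate the first approach, a minimal sketch of a single table with the day in the partition key (all names, the TTL, and the window size are assumptions):

    CREATE TABLE readings (
        sensor_id text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((sensor_id, day), ts)   -- one partition per sensor per day
    ) WITH default_time_to_live = 2592000    -- e.g. keep 30 days
      AND compaction = {'class': 'TimeWindowCompactionStrategy',
                        'compaction_window_unit': 'DAYS',
                        'compaction_window_size': '1'};

    -- Reads always target a single sensor/day partition:
    SELECT ts, value FROM readings
    WHERE sensor_id = 'sensor-42' AND day = '2024-01-15';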
For Cassandra, just make a single table but include some time period in the partition key (so the partitions do not grow indefinitely and get too large). There is no table maintenance, and read/write capacity really depends more on the workload, schema, cluster size, etc., but shouldn't need to be worried about beyond sizing the cluster.
My table is a time-series one. The queries are going to process the latest entries and TTL-expire them after successful processing. If they are not successfully processed, the TTL will not be set.
The only query I plan to run on this is to select all entries for a given entry_type. They will be processed and records corresponding to processed entries will be expired.
This way every time I run this query I will get all records in the table that are not processed and processing will be done. Is this a reasonable approach?
Would using a ListenableFuture with my own executor add any value here, considering that the thread doing the select is just processing?
I am concerned about the TTL and tombstones. But if I use a clustering key of timeuuid type, is this OK?
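For context, here is a minimal sketch of the kind of table I have in mind (names are placeholders):

    CREATE TABLE entries (
        entry_type text,
        id         timeuuid,   -- timeuuid clustering key orders entries by time
        payload    text,
        PRIMARY KEY (entry_type, id)
    );

    -- The only planned query: fetch everything still unprocessed for a type.
    SELECT id, payload FROM entries WHERE entry_type = 'some_type';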
You are right, one important thing getting in your way will be tombstones. By default you will keep them around for 10 days. Depending on your access pattern this might cause significant problems. You can lower this by setting gc_grace_seconds directly on the table; it is a per-table option, so it needs to be set on every table that requires it.
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/tabProp.html
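For example, assuming a table named entries in keyspace my_keyspace (both placeholders), and a 2-day window that you can actually repair within:

    -- Lower tombstone retention from the 10-day default to 2 days:
    ALTER TABLE my_keyspace.entries WITH gc_grace_seconds = 172800;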
It is very important to make sure you run a repair on the whole cluster once within this period. So if you lower this setting to, let's say, 2 days, then within those two days you have to complete one full repair of the cluster. This is very important because otherwise deleted (processed) data can reappear. I have seen this happen multiple times, and it is never pleasant, especially if you are using Cassandra as a queue, and it seems to me that you might be doing that in your solution. I'll try to give some tips at the end of the answer.
I'm slightly worried about you setting the TTL dynamically depending on the result. What would be the point of TTL-ing the data that was processed successfully and keeping forever the data that wasn't? I guess some sort of audit or something similar. Again, this is a queue pattern; try to avoid it if possible. Also, one thing to keep in mind is that you will almost always insert the data once in the beginning and then once again with the TTL should your processing be OK.
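As a sketch of that double-write pattern, reusing the hypothetical entries table from the question (the UUID literal in the second statement stands in for the id that was read back):

    -- Initial write with no TTL, so the row stays until processed:
    INSERT INTO entries (entry_type, id, payload)
    VALUES ('some_type', now(), 'data...');

    -- After successful processing, write the row again with a TTL so it expires:
    INSERT INTO entries (entry_type, id, payload)
    VALUES ('some_type', 50554d6e-29bb-11e5-b345-feff819cdc9f, 'data...')
    USING TTL 86400;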
Also, getting all entries might be a bit tricky. For a very moderate load of 10-100 req/s this might be reasonable, but if you have thousands of requests per second, fetching everything every time might not be a good idea, at least not if you put them all into a single partition.
Separating the workload is also a good idea, so yes, using a ListenableFuture seems totally legit.
Setting the clustering key to a timeuuid is the usual approach with time-series data, and I totally agree with you on this one.
In reality, as I mentioned earlier, you have to take into account that you will be keeping 10 days' worth of data (unless you tweak that setting) no matter what you do; it doesn't matter if you TTL it. It's still going to be there, and every time Cassandra scans the partition it will have to read the TTL-ed columns. In short, this is just pain. I would seriously consider using something like Kafka if I were you, because what you are describing simply looks to me like a queue.
If you still want to stick with Cassandra, then please consider using buckets (adding date info to the partition key, i.e. having a composite partition key). Depending on the load you are expecting, you will have to bucket by month, week, day, hour, or even minute. In some cases you might even want to add artificial columns to reduce load on the cluster. But then again, this might be out of the scope of this question.
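A rough sketch of what day-bucketing could look like (the table name, columns, and the daily granularity are all assumptions):

    CREATE TABLE entries_by_day (
        entry_type text,
        day        date,
        id         timeuuid,
        payload    text,
        PRIMARY KEY ((entry_type, day), id)   -- composite partition key: type + day bucket
    );

    -- Each read hits one bounded bucket instead of one ever-growing partition:
    SELECT id, payload FROM entries_by_day
    WHERE entry_type = 'some_type' AND day = '2016-05-20';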
Be very careful when using Cassandra as a queue; it's a known antipattern. You can do it, but there are a lot of variables and it depends heavily on the load. I once consulted for a team that went down the path of Cassandra as a queue. Since using Cassandra was basically a must there, I recommended bucketing the data by day (I did some calculations that showed this was an acceptable time unit), and I also had a look at this solution: https://github.com/paradoxical-io/cassieq. There is a lot of good stuff in that repo on using Cassandra as a queue, data models, etc. Basically, that team had zombie rows, slow reads because of the tombstones, and so on.
Also, the way you described it, you might end up with "hot rows": since you would have just one wide partition where all your data goes, some nodes in the cluster might not be well utilised. This can be avoided with artificial columns.
When using Cassandra as a queue it's very easy to mess a lot of things up (but it is possible for moderate workloads).
I use Azure Table storage as a time-series database. The database is constantly extended with more rows (approximately 20 rows per second for each partition). Every day I create new partitions for the day's data so that all partitions have a similar size and never get too big.
Until now everything worked flawlessly: when I wanted to retrieve data from a specific partition it never took more than 2.5 secs for 1000 values, and on average it took 1 sec.
When I tried to query all the data of a partition, though, things got really slow: towards the middle of the procedure each query would take 30-40 secs for 1000 values.
So I cancelled the procedure just to restart it for a smaller range. But now all queries take too long; from the beginning, every query needs 15-30 secs. Could this mean the data got rearranged in an inefficient way, and that's why I am seeing this dramatic decrease in performance? If so, is there a way to handle such a rearrangement?
I would definitely recommend going over the links Jason pointed to above. You have not given much detail about how you generate your partition keys, but from the sound of it you are falling into several anti-patterns, including the append (or prepend) pattern and too many entities in a single partition. I would recommend reducing your partition size and also putting either a hash or a random prefix on your partition keys so they are not in lexicographical order.
Azure storage follows a range-partitioning scheme in the background, so even if the partition keys you picked are unique, if they are sequential they will fall into the same range and potentially be served by a single partition server, which hampers the Azure storage service's ability to load-balance and scale out your storage requests.
The other aspect you should think about is how you read the entities back. The best option is a point query with partition key and row key; the worst is a full table scan with no PK or RK; in between is the partition scan, which in your case will also perform quite badly due to your partition size.
One of the challenges with time series data is that you can end up writing all your data to a single partition which prevents Table Storage from allocating additional resources to help you scale. Similarly for read operations you are constrained by potentially having all your data in a single partition which means you are limited to 2000 entities / second - whereas if you spread your data across multiple partitions you can parallelize the query and yield far greater scale.
Do you have Storage Analytics enabled? I would be interested to know if you are getting throttled at all or what other potential issues might be going on. Take a look at the Storage Monitoring, Diagnosing and Troubleshooting guide for more information.
If you still can't find the information you want, please email AzTableFeedback@microsoft.com and we would be happy to follow up with you.
The Azure Storage Table Design Guide talks about general scalability guidance as well as patterns / anti-patterns (see the append only anti-pattern for a good overview) which is worth looking at.
I am using Cassandra 2.0.
My write load is somewhat similar to the queueing antipattern mentioned here: datastax
I am looking at pushing 30-40 GB of data into Cassandra every 24 hours and expiring that data within 24 hours. My current approach is to set a TTL on everything that I insert.
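Concretely, the writes look roughly like this (table and column names are placeholders):

    -- Every write carries a 24-hour TTL:
    INSERT INTO metadata (item_id, data_ref)
    VALUES ('item-123', 'data-456')
    USING TTL 86400;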
I am experimenting with how I partition my data as seen here: cassandra wide vs skinny rows
I have two column families. The first family contains metadata and the second contains data. There are N metadata rows for every data row, and a metadata row may be rewritten M times throughout the day to point to a new data row.
I suspect that the metadata churn is causing problems with reads in that finding the right metadata may require scanning all M items.
I suspect that the data churn is leading to excessive work compacting and garbage collecting.
It seems like creating a keyspace for each day and dropping the old keyspace after 24 hours would remove the need to do compaction entirely.
Aside from having to handle issues with what keyspace the user reads from on requests that overlap keyspaces, are there any other major flaws with this plan?
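A sketch of what that daily rotation might look like (the keyspace names, dates, and replication settings are made up):

    -- Created ahead of each day:
    CREATE KEYSPACE data_20150602
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

    -- Dropped once the day's data has expired:
    DROP KEYSPACE data_20150531;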
In my practice, using partitioning is a much better idea than using TTL:
It reduces CPU pressure.
It partitions your data in the Oracle manner, so searches are faster.
You can change your mind and keep the old data; with TTL that is difficult (I see one option: migrate the data before deletion).
If your rows are wide, you can make them narrower.
Given a simple CQL table which stores an ID and a Blob, is there any problem or performance impact of storing potentially billions of rows?
I know with earlier versions of Cassandra wide rows were de rigueur, but CQL seems to encourage us to move away from that. I don't have any particular requirement to ensure the data is clustered together or able to filter in any order. I'm wondering whether very many rows in a CQL table could be problematic in any way.
I'm considering binning my data, that is, creating a partition key which is hash % n of the ID and would limit the data to n 'bins' (millions of them?). Before I add that overhead I'd like to validate whether it's actually worthwhile.
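For reference, the binned layout I'm considering would look something like this (names are placeholders; the bin is computed client-side):

    CREATE TABLE blobs (
        bin  int,    -- computed as hash(id) % n before writing
        id   text,
        data blob,
        PRIMARY KEY (bin, id)
    );

    -- A lookup computes the bin from the id and hits exactly one partition:
    SELECT data FROM blobs WHERE bin = 17 AND id = 'abc-123';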
First, I don't think this is correct:
I know with earlier versions of Cassandra wide rows were de rigueur, but CQL seems to encourage us to move away from that.
Wide rows are supported and work well. There's a post from Jonathan Ellis, "Does CQL support dynamic columns / wide rows?":
A common misunderstanding is that CQL does not support dynamic columns or wide rows. On the contrary, CQL was designed to support everything you can do with the Thrift model, but make it easier and more accessible.
For the part about the "performance impact of storing potentially billions of rows" I think the important part to keep in mind is the size of these rows.
According to Aaron Morton in this mail thread:
When rows get above a few 10's of MB things can slow down; when they get above 50 MB they can be a pain; when they get above 100 MB it's a warning sign. And when they get above 1 GB, well, you don't want to know what happens then.
and later:
Larger rows take longer to go through compaction, tend to cause more JVM GC, and have issues during repair. See the in_memory_compaction_limit_in_mb comments in the yaml file. During repair we detect differences in ranges of rows and stream them between the nodes. If you have wide rows and a single column is out of sync, we will create a new copy of that row on the node, which must then be compacted. I've seen the load on nodes with very wide rows go down by 150 GB just by reducing the compaction settings. IMHO, all things being equal, rows in the few 10's of MB work better.
In a chat with Aaron Morton (The Last Pickle) he indicated that billions of rows per table is not necessarily problematic.
Leaving this answer for reference, but not selecting it as "talked to a guy who knows a lot more than me" isn't particularly scientific.