I have a question about optimal Cassandra database design: is it more efficient to have a single table with a large number of skinny rows, or to have a keyspace with many, many tables?
The context:
I am trying to store data from multiple sensors. One approach would be to have a single table that stores data from all sensors. The other approach would be to have one table per sensor. Which one is better?
Please advise.
I'd go with fewer tables for a number of reasons:
As Andy Tolbert mentioned in his reply, each table introduces some overhead, which builds up to a large amount when you have tens or hundreds of thousands of tables. Think of it as increasing your overhead/value ratio.
If you are dealing with a large number of tables, chances are you'll be creating some of them dynamically while the application is running. If that is the case, you may see errors in Cassandra, as it can fail to propagate the schemas of some new tables across the cluster when it's under pressure. I've seen this in C* 2.0, but I'm not sure whether it's still an issue in the latest versions.
Most of the benefits of a multi-table schema can be gained from putting extra thought into single-table data modelling. Having said that, there are cases when segregating data into discrete tables really is the most appropriate solution. One example of this is in certain multi-tenancy systems where data for different tenants needs to be kept physically separate and backed up in isolation, for regulatory reasons.
It is much better and more idiomatic to have one table for all sensors. There is some overhead introduced with each table (JMX MBeans for metrics, files on disk, etc.), so you don't want to have too many.
When you say 'a large number of skinny rows', I don't anticipate that being a problem; you can have a huge number of unique keys/partitions (some crazy large number).
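To make the single-table idea concrete, here is a minimal CQL sketch (table and column names are hypothetical, not from the question): each sensor gets its own partition, and readings are clustered by time within it, so rows stay skinny while the number of partitions can grow freely.

    -- One partition per sensor, readings ordered by time within the partition.
    CREATE TABLE sensor_data (
        sensor_id  text,
        reading_ts timestamp,
        value      double,
        PRIMARY KEY ((sensor_id), reading_ts)
    ) WITH CLUSTERING ORDER BY (reading_ts DESC);

    -- Reading one sensor's latest data touches a single partition.
    SELECT reading_ts, value
    FROM sensor_data
    WHERE sensor_id = 'sensor-42'
    LIMIT 100;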
Related
I've been trying to think about what the ideal table structure would be for the fastest Spark queries.
I'll try to provide a use case: let's say you're gathering stats for every car in the world and you want to calculate various metrics with basic math (i.e. add, subtract, multiply, divide).
Would it be better to structure the data in a tall table with minimal fields, like: day, metric, type, value?
Or would it be better to build a wide table that stores the metrics independently, with more fields like: day, emission_value, tire_pressure_value, speed_value, weight_value, heat_value, radio_value, etc.?
Is it right to say that tall tables are better for Spark? I assume it would be less memory intensive with a taller table.
As mentioned in the comments, this is a subjective question not exactly related to Spark, but I'll try to answer nonetheless.
I assume it would be less memory intensive with a taller table.
Not really; the amount of storage required should be about the same in either case for the use case you have mentioned, so let's get this out of the way. Taller tables have more rows and fewer columns, and wide tables the opposite, so at the cell level it should be roughly the same. I'm considering uncompressed data, independent of storage format.
Now let's talk about the mentioned use case. Simply put, it's aggregations, which may be fed downstream or used for reporting. Generally, keeping this in mind, wider tables/views are better, simply because fewer rows per day means less I/O and less shuffle.
Having said that, look through the cons of wide tables as well:
- Schema evolution problems due to the fixed schema
- More suited to batch processing
Taller tables will be more streaming-friendly and easier to extend with additional metrics, and if they're used with a source that supports predicate pushdown, they can result in quick partial scans.
In short, it very much depends on your operations.
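For illustration, here are hypothetical CQL sketches of the two shapes being compared (all names are made up for this example); the trade-offs described above fall out of them directly.

    -- Tall/narrow: one row per (car, day, metric); adding a metric is just a new row.
    CREATE TABLE car_metrics_tall (
        car_id text,
        day    text,
        metric text,
        value  double,
        PRIMARY KEY ((car_id, day), metric)
    );

    -- Wide: one row per (car, day) with a column per metric; fewer rows to scan,
    -- but adding a metric means altering the schema.
    CREATE TABLE car_metrics_wide (
        car_id        text,
        day           text,
        emission      double,
        tire_pressure double,
        speed         double,
        weight        double,
        PRIMARY KEY ((car_id, day))
    );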
We're investigating options to store and read a lot of immutable data (events) and I'd like some feedback on whether Cassandra would be a good fit.
Requirements:
We need to store about 10 events per second (but the rate will increase). Each event is small, about 1 KB.
A really important requirement is that we need to be able to replay all events in order. For us it would be fine to read all data in insertion order (like a table scan) so an explicit sort might not be necessary.
Querying the data in any other way is not a prime concern, and since Cassandra is a schema-based DB, I don't suppose that's possible anyway when the events come in many different forms? Would Cassandra be a good fit for this? If so, is there anything one should be aware of?
I had the exact same requirements for a "project" (rather, a tool) a year ago, and I used Cassandra and didn't regret it. In general it fits very well. You can fit quite a lot of data in a Cassandra cluster, the performance is impressive (although you might need some tweaking), and the natural ordering is a nice thing to have.
Rather than expanding on the benefits of using it, I'll concentrate on possible pitfalls you might not have considered before starting.
You have to think about your schema. The data is naturally ordered within one row by the clustering key; in your case it will be the timestamp. However, you cannot order data between different rows. Rows might come back ordered after a query, but that is not guaranteed in any way, so don't count on it. There was some way to write such a query before 2.1, I believe (using ORDER BY, disabling paging, and allowing filtering), but it performed badly and I don't think it is even possible now. So you should order data between rows on the querying side.
This might be an issue if you have multiple variable types (such as temperature and pressure) that have to be replayed at the same time and you put them in different rows. You have to get those rows with different variable types and then re-sort them on the querying side. Another way to do it is to put all variable types in one row, but then filtering for only a subset becomes an issue to solve.
Row length is limited to 2 billion elements, and although that seems a lot, it really is not unreachable with time-series data. You don't want to get anywhere near those two billion; keep it in the hundreds of millions at most. If you introduce some parameter on which to split the rows (an increasing index, or rounding by day/month/year), you will have to implement that in your query logic as well.
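A hypothetical sketch of that row-splitting (bucketing) idea, using a simple day bucket as the split parameter; the table and column names are assumptions, not from the question.

    -- Events for one day share a partition and are ordered by time within it.
    CREATE TABLE events_by_day (
        day      text,       -- e.g. '2015-06-01', the bucket you split rows on
        event_ts timeuuid,
        payload  blob,
        PRIMARY KEY ((day), event_ts)
    );

    -- Replaying one day's events in insertion order is a single-partition scan;
    -- replaying a longer range means iterating over day buckets in the application.
    SELECT event_ts, payload FROM events_by_day WHERE day = '2015-06-01';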
Experiment with your queries on a dummy example first. You cannot arbitrarily use <, > or = in queries; there are specific rules in CQL about filtering and the WHERE clause.
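For example, with the hypothetical events_by_day sketch above, these are roughly the rules you run into:

    -- Allowed: equality on the partition key, a range on the clustering column.
    SELECT event_ts, payload FROM events_by_day
    WHERE day = '2015-06-01'
      AND event_ts > minTimeuuid('2015-06-01 12:00:00+0000');

    -- Rejected: a range on the partition key (only = or IN, or token(), is allowed),
    -- and filtering on a non-key column is refused unless you add ALLOW FILTERING.
    -- SELECT * FROM events_by_day WHERE day > '2015-06-01';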
Don't expect too much from the collections within the columns; their length is limited to ~65,000 elements.
Don't fall into the misconception that batched statements are faster (this one is a classic :) ).
All in all, these things might seem important, but they are really not too much of a hassle once you get to know Cassandra a bit. I'm underlining them just to give you a heads-up. If something doesn't seem logical at first, fall back to understanding why it is like that, and to the general theory about data distribution and the ring topology.
Based on the requirements you expressed, Cassandra could be a good fit, as it's a write-optimized data store. Time series are quite a common pattern, and you can define a clustering order, for example on the timestamp of the events, in order to retrieve all the events in time order. I found this article on DataStax Academy very useful when I wanted to learn about time series.
A variable data structure is not a problem: you can store the data in a blob and then parse it in your application (e.g. store it as JSON and read it into your model), or you could even store the data in a map, although collections in Cassandra have some caveats that are good to be aware of. Here you can find docs about collections in Cassandra 2.0/2.1.
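A minimal sketch combining those two points, with hypothetical names: a clustering order on the event timestamp, and the variable part of each event stored as a blob (e.g. serialized JSON) next to a couple of fixed columns.

    CREATE TABLE events (
        stream_id  text,
        event_ts   timeuuid,
        event_type text,
        body       blob,      -- serialized JSON, parsed by the application
        PRIMARY KEY ((stream_id), event_ts)
    ) WITH CLUSTERING ORDER BY (event_ts ASC);

    -- Replay one stream in time order.
    SELECT event_ts, event_type, body FROM events WHERE stream_id = 'orders';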
Cassandra is quite different from a SQL database, and although CQL has some similarities, there are fundamental differences in usage patterns. It's very important to know how Cassandra works and how to model your data in order to pursue efficiency; a great article from DataStax explains the basics of data modelling.
In a nutshell: Cassandra may be a good fit for you, but before using it take some time to understand its internals as it could be a bad beast if you use it poorly.
I have a 40 column RDBMS table which I am porting to Cassandra.
Using the estimator at http://docs.datastax.com/en/cassandra/2.1/cassandra/planning/architecturePlanningUserData_t.html
I created an Excel sheet with the column names, data types, size of each column, etc.
The Cassandra specific overhead for each RDBMS row is a whopping 1KB when the actual data is only 192 bytes.
Since the overheads are proportional to the number of columns, I thought it would be much better if I just created a UDT for the fields that are not part of the primary key. That way, I would incur the column overhead only once.
Also, I don't intend to run queries on inner fields of the UDT. Even if I did want that, Cassandra has very limited querying features that work on non-PK fields.
Is this a good strategy to adopt? Are there any pitfalls? Are all these overheads easily eliminated by compression or some other internal operation?
On the surface, this isn't a bad idea at all. You are essentially abstracting your data by another level, but in a way that is still manageable and meets your needs. It's actually good thinking.
I have a 40 column RDBMS table
This part slightly worries me. Essentially, you'd be creating a UDT with 40 properties. Not a huge deal in and of itself. Cassandra should handle that just fine.
But while you may not be querying on the inner fields of the UDT, you need to ask yourself how often you plan to update them. Cassandra stores UDTs as "frozen" types in a single column. This is important to understand for two reasons:
You cannot read a single property of a UDT without reading all properties of the UDT.
Likewise, you cannot update a single property in a UDT without rewriting all of them, either.
So you should keep that in mind while designing your application. As long as you won't be writing frequent updates to individual properties of the UDT, this should be a good solution for you.
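A trimmed-down, hypothetical version of the idea (a few fields standing in for the ~40 non-key columns), showing both the frozen UDT and the fact that an update rewrites the whole value:

    CREATE TYPE row_details (
        name    text,
        address text,
        phone   text,
        status  text
    );

    CREATE TABLE customer (
        id      uuid PRIMARY KEY,
        details frozen<row_details>
    );

    -- Because the UDT is frozen, changing one property means writing the whole
    -- value again; you cannot SET details.phone = ... on its own.
    UPDATE customer
    SET details = { name: 'Alice', address: '1 Main St', phone: '555-0100', status: 'active' }
    WHERE id = 0d17693a-6b9e-4f52-9f2f-1dd0a51e6b4a;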
We are trying to build a data warehouse for our transaction system.
- We make 5,000-6,000 transactions per day, and this can go above 20,000.
- Each transaction produces a file of size > 4 MB.
We want a system that can apply updates to the existing data, offers consistency and availability, and has good read performance. Infrastructure is not an issue.
HBase, Cassandra, or something else? Your help and guidance is highly appreciated.
Many thanks!
Most of the newer NoSQL platforms can do what you need in terms of performance. Both HBase and Cassandra scale horizontally (as do Aerospike and others), so performance can be guaranteed as long as the data model respects the product's recommended patterns for data distribution.
I would not choose the technology based on performance alone.
What I would do is:
1. Make a list of the different features offered by a bunch of products, and then consider the one that, out of the box, best fits my needs.
2. Make a list of the operations I need to perform on the data, and check that I am not going "against" some specific product.
While point 1 is easily done, point 2 needs a deep product analysis. For instance, you say you need to update existing data. Let's imagine you choose Cassandra and you very frequently update a column on which you have put a secondary index (which, under the hood, creates a lookup table) for searching purposes. Any time you update this column, a deletion and an insertion are performed on the lookup table. You can read in this article that performing many deletes in Cassandra is considered an anti-pattern and can lead to problematic situations. This is just an example based on Cassandra, because it's the NoSQL product I know best, and not to tell you to avoid Cassandra.
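To make that Cassandra example concrete, here is a hypothetical sketch of the situation described (names invented for illustration):

    CREATE TABLE transactions (
        id     uuid PRIMARY KEY,
        status text,
        body   blob
    );
    CREATE INDEX ON transactions (status);

    -- Each status change updates the row, and in the hidden index table the old
    -- entry is deleted (leaving a tombstone) and a new one is inserted; doing
    -- this very frequently piles up tombstones.
    UPDATE transactions SET status = 'processed'
    WHERE id = 7f1f2c2a-0d4e-4b1a-9c3e-5e6f7a8b9c0d;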
Currently our system uses PostgreSQL, but we seem to have pushed the limits of its capabilities. Some of our tables need to handle over 100 read/write operations per second, so it is probably time to scale horizontally across multiple machines.
I have a lot of experience using GAE's Big Table, which had rich options for querying. For example, queries were possible against list data fields. Cassandra is supposed to be based on Big Table, but if I understand correctly, with Cassandra we will actually have to custom-code a layer on top of it that uses and maintains index tables.
It would be great if there were an open-source database available for which we did not have to build our own custom logic for maintaining index tables, zig-zag merge joins, etc.
Is Cassandra a good candidate here? Or are there ones that might be considered better?
Unless the operations are huge joins or return hundreds of thousands of rows, any database you choose will be able to sustain 100 ops/s. Cassandra will have no problems serving thousands if not tens of thousands of reads and writes per node.
Without knowing more about your particular use case, it's impossible to give you meaningful advice. Cassandra is a great database, but whether it's right for you I don't know. I'd suggest looking through the cassandra tag here on Stack Overflow, looking at what people ask about, checking whether it looks at all like what you're trying to do, and whether the answers say it's possible with Cassandra (I know I've answered quite a few questions where the answer was that Cassandra wasn't the best choice for that particular case).
Cassandra and GAE Big Table have big similarities, but also big differences. One thing that trips up new Cassandra users is that there really isn't any way of doing things like "add this thing only if that other thing isn't already there" or "add an item and remove all but the last N items".