Cassandra wide row versus regular row

I need some help with data modelling for Cassandra.
Here is the problem description:
I have 3 servers processing user requests: NodeA, NodeB and NodeC. I have 1000 different developers (potentially 10,000) and must maintain a $ balance for each of them per processing node.
I can see 2 ways of modeling this:
1) CF with developerid+balanceid as the row key. The column names will be NodeA, NodeB and NodeC.
create table developer_balances (
    developerBalanceid int PRIMARY KEY,
    nodeA varchar,
    nodeB varchar,
    nodeC varchar
);
2) CF with wide rows with node ids as keys. The column name will be developerid+balanceid. This seems similar to time-series data being stored in Cassandra.
create table node_balances (
    nodeid varchar,
    developerBalanceid int,
    balance varchar,
    PRIMARY KEY (nodeid, developerBalanceid)  -- one wide partition per node; developerBalanceid plays the role of the dynamic column name
);
Operations:
a) Writes: Every 5 seconds, every node updates the $ balance for every developer. More specifically, at every time t+5, NodeA writes 1000 balance values, NodeB writes 1000 balance values, and NodeC does too.
b) Reads: Reads also occur every 5 seconds to read a specific developerBalance.
It appears 2) is the best way to model this.
I do have some concerns about how wide rows will work with the query I want to do.
In the worst case, what is the number of IOPS that a wide-row read will incur?
Should I be looking at other optimizations, such as compression, for the writes?
I understand that I can run some tests and examine performance. But I would like to hear other experiences too.

The essential rule when modeling with Cassandra is "model from your queries". The key requirement in your question is:
read a specific developerBalance.
If you query by developerBalance, then developerBalance must be the beginning of your primary key. Your solution 1 looks better to me.
With solution 2 you won't be able to write
select * from my_table where developerBalanceid=?
... without scanning the whole cluster
Make sure you understand what Cassandra queries cannot do, and the difference between partition keys and clustering keys.
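For illustration, a minimal sketch of solution 1 reworked so the primary key starts with developerBalanceid, keeping one clustering row per node (table and column names are my own):

CREATE TABLE developer_balance_by_node (
    developerBalanceid int,
    nodeid varchar,
    balance varchar,
    PRIMARY KEY (developerBalanceid, nodeid)
);

-- A single-partition read; no cluster-wide scan:
SELECT * FROM developer_balance_by_node WHERE developerBalanceid = ?;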

Related

Datamodel for Scylla/Cassandra for table partition key is not known beforehand -> static field?

I am using ScyllaDB, but I think this also applies to Cassandra, since ScyllaDB is compatible with Cassandra.
I have the following table (I have about 5 tables of this kind):
create table batch_job_conversation (
    conversation_id uuid,
    primary key (conversation_id)
);
This is used by a batch job to make sure some fields are kept in sync. In the application, a lot of concurrent writes/reads can happen. Once in a while, I will correct the values with a batch job.
A lot of writes can happen to the same row, so rows get overwritten often. The batch job currently picks up rows with this query:
select * from batch_job_conversation
Then the batch job reads the data at that point and makes sure things are in sync. I think this query is bad because it stresses all the partitions and the coordinator node, since it needs to visit ALL partitions.
My question is whether it is better for this kind of table to have a fixed field. Something like this:
create table batch_job_conversation (
    always_zero int,
    conversation_id uuid,
    primary key ((always_zero), conversation_id)
);
And then the query would be:
select * from batch_job_conversation where always_zero = 0
For each batch job I can use a different partition key. The number of rows in these tables will be roughly the same (a few thousand at most), and the same rows will probably be overwritten many times.
Is it better to have a fixed value? Is there another way to handle this? I don't have a logical partition key I can use.
The second model would create a LARGE partition and you don't want that, trust me ;-)
(you would be doing a partition scan on top of a large partition, which is worse than the original full scan)
(and another piece of advice: keep your partitions small and have a lot of them, then all your CPUs will be used roughly equally)
The first approach is OK, and is called a FULL SCAN, BUT
you need to manage it properly.
There are several ways to do that; we blogged about it at https://www.scylladb.com/2017/02/13/efficient-full-table-scans-with-scylla-1-6/
and it basically boils down to divide and conquer, as sketched below.
Also note that Spark implements full scans too.
hth
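As a rough illustration of that divide-and-conquer idea, each worker can scan its own slice of the token ring (the range boundaries below are made up for the example):

SELECT * FROM batch_job_conversation
WHERE token(conversation_id) >= -9223372036854775808
  AND token(conversation_id) < -4611686018427387904;

-- ...and so on, one token range per worker, until the full ring is covered.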

Data modeling in Cassandra for node-based use cases

I have a CQL table which has 2 columns:
create table minute_data (
    minuteTimeStamp bigint,  -- only the minute part of the epoch time; seconds are ignored
    data text,               -- some data
    PRIMARY KEY (minuteTimeStamp)  -- implied by the per-minute access pattern described below
);
I have a 5-node Cassandra cluster and I want to distribute each minute's data uniformly across all 5 nodes. So if one minute's data is ~10k records, each node should hold ~2k of them.
I also want to consume each minute's data in parallel, meaning 5 different readers, one reading on each node.
I came up with one solution: keep one more column in the table, like this:
create table minute_data_sharded (
    minuteTimeStamp bigint,
    shardIdx int,
    data text,
    PRIMARY KEY ((minuteTimeStamp, shardIdx))
);
While writing the data, I will assign shardIdx in a circular round-robin fashion. But since Cassandra uses vnodes, it is possible that (min0,0) goes to node0 and (min0,1) also goes to node0, because that token might belong to node0 as well. This way I can create hotspots, and it will also hamper reads: of the 5 parallel readers that wanted to read one per node, more than one might land on the same node.
How can we design our partition key so that data is uniformly distributed, without writing a custom partitioner?
There's no need to make the data distribution more complex by sharding.
The default Murmur3Partitioner will distribute your data evenly across nodes as you approach hundreds of thousands of partitions.
If your use case really does hotspot on one particular piece of data, that's more an inherent problem with your use case/access pattern, but it's rare in practice unless you have a super-node issue, for example in a social-graph use case where Taylor Swift or Barack Obama has millions more followers than everyone else. Cheers!
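If you do experiment with sharding anyway, one way to see where partitions actually land is to inspect their tokens (this assumes the sharded table sketched in the question):

SELECT minuteTimeStamp, shardIdx, token(minuteTimeStamp, shardIdx)
FROM minute_data_sharded LIMIT 20;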

Cassandra querying multiple partitions on a single node

We have less than 50GB of data for a table and we are trying to come up with a reasonable design for our Cassandra database. With so little data we are thinking of having all data on each node (2 node cluster with replication factor of 2 to start with).
We want to use Cassandra for easy replication: safeguarding against failures and keeping copies of the data in different parts of the world, and Cassandra is brilliant for that.
Moreover, the best model we have come up with so far implies that a single query (consistency level 1-2) would involve getting data from multiple partitions (avg = 2, 90th percentile = 20). Most queries would ask for data from <= 2 partitions, but some might go up to 5k.
So my question is whether this is really a problem. Is Cassandra slow at retrieving data from multiple partitions if we ensure that all the partitions are on a single node?
EDIT:
I misread the question; my apologies to other folks coming here later. Look at the code for TokenAwarePolicy as a basis for determining replica owners; once you have that, you can combine your query with an IN query to get multiple partitions from a single node. Still, be mindful of the total query size.
Original for reference:
Don't get data from multiple partitions in a single query; the detailed reasons are below.
The TL;DR: you're better off querying asynchronously from multiple different partitions than requiring the coordinator to do that work.
If a query fails, you have to retry more (which is particularly ugly when you have a very large partition or two in that query).
You're waiting on the slowest part of the query before any response comes back, when you could be returning parts of the answer as they come in (or even include a progress meter based on the parts completed).
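To make the contrast concrete, a sketch with placeholder table and key names:

-- One coordinator-bound multi-partition query:
SELECT * FROM my_table WHERE pkey IN ('a', 'b', 'c');

-- versus independent single-partition queries the client fires in parallel:
SELECT * FROM my_table WHERE pkey = 'a';
SELECT * FROM my_table WHERE pkey = 'b';
SELECT * FROM my_table WHERE pkey = 'c';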
I did some testing on my machine, and the results contradict what Ryan Svihla proposed in another answer.
TL;DR: storing the same data in multiple partitions and retrieving it via the IN operator is much slower than storing the data in a single partition and retrieving it in one go. PLEASE NOTE that all of the action happens on a single Cassandra node (for a distributed Cassandra cluster the conclusion should be more than obvious).
Case A
Insert X rows into a single partition of the table defined below. Retrieve all of them via SELECT specifying the partition key in WHERE.
Case B
Insert X rows each into a separate partition of the table defined below. Retrieve all of them via SELECT specifying multiple partition keys using WHERE pKey IN (...).
Table definition:
CREATE TABLE test_table (
    pKey text,            -- partition key
    cColumn int,          -- clustering key
    sParam timestamp STATIC,
    param text,           -- each value was 500 B in the tests
    PRIMARY KEY (pKey, cColumn)
);
Results (r is the ratio B/A):
Using the Phantom driver:
  X = 100:    A = 10 ms,   B = 150 ms     (r = 15)
  X = 1000:   A = 20 ms,   B = 1400 ms    (r = 70)
  X = 10000:  A = 100 ms,  B = 14000 ms   (r = 140)
Using DevCenter (it has a limit of 1000 rows retrieved in one go):
  X = 100:    A = 20 ms,   B = 900 ms     (r = 45)
  X = 1000:   A = 30 ms,   B = 1300 ms    (r = 43)
Technical details:
Phantom driver v 2.13.0
Cassandra 3.0.9
Windows 10
DevCenter 1.6

Cassandra data model for application logs (billions of operations!)

Say I want to collect logs from a huge application cluster which produces 1000-5000 records per second. In the future this number might reach 100,000 records per second, aggregated from a datacenter of 10,000 machines.
CREATE TABLE operation_log (
-- Seconds will be used as row keys, thus each row will
-- contain 1000-5000 log messages.
time_s bigint,
time_ms int, -- Microseconds (to sort data within one row).
uuid uuid, -- Monotonic UUID (NOT a time-based UUIDv1)
host text,
username text,
accountno bigint,
remoteaddr inet,
op_type text,
-- For future filters — renaming a column must be faster
-- than adding a column?
reserved1 text,
reserved2 text,
reserved3 text,
reserved4 text,
reserved5 text,
-- 16*n bytes of UUIDs of connected messages, usually 0,
-- sometimes up to 100.
submessages blob,
request text,
PRIMARY KEY ((time_s), time_ms, uuid)) -- Partition on time_s
-- Because queries will be "from current time into the past"
WITH CLUSTERING ORDER BY (time_ms DESC);
CREATE INDEX oplog_remoteaddr ON operation_log (remoteaddr);
...
(secondary indices on host, username, accountno, op_type);
...
CREATE TABLE uuid_lookup (
uuid uuid,
time_s bigint,
time_ms int,
PRIMARY KEY (uuid));
I want to use the OrderedPartitioner, which will spread data over the cluster by time_s (seconds). The design must also scale to dozens of concurrent data writers as more application log aggregators are added to the application cluster (uniqueness and consistency are guaranteed by the uuid part of the PK).
Analysts will have to look at this data by performing these sorts of queries:
range query over time_s, filtering on any of the data fields (SELECT * FROM operation_log WHERE time_s < $time1 AND time_s > $time2 AND $filters),
pagination query from the results of the previous one (SELECT * FROM operation_log WHERE time_s < $time1 AND time_s > $time2 AND token(uuid) < token($uuid) AND $filters),
count messages filtered by any data fields within a time range (SELECT COUNT(*) FROM operation_log WHERE time_s < $time1 AND time_s > $time2 AND $filters),
group all data by any of the data fields within some range (will be performed by application code),
request dozens or hundreds of log messages by their uuid (hundreds of SELECT * FROM uuid_lookup WHERE uuid IN (00000005-3ecd-0c92-fae3-1f48, ...)).
My questions are:
Is this a sane data model?
Is using OrderedPartitioner the way to go here?
Does provisioning a few columns for potential filters make sense? Or is adding a column every once in a while cheap enough to run on a Cassandra cluster with some reserved headroom?
Is there anything that prevents it from scaling to 100,000 inserted rows per second from hundreds of aggregators, and storing a petabyte or two of queryable data, provided that the number of concurrent queriers never exceeds 10?
This data model is close to a sane model, with several important modifications/caveats:
Do not use ByteOrderedPartitioner, especially not with time as the key. Doing this will result in severe hotspots on your cluster, as you'll do most of your reads and all your writes to only part of the data range (and therefore a small subset of your cluster). Use Murmur3Partitioner.
To enable your range queries, you'll need a sentinel key: a key you can know in advance. For log data, this is probably a time bucket plus some other known value that's not time-based (so your writes are evenly distributed); see the sketch after this list.
Your indices might be ok, but it's hard to tell without knowing your data. Make sure your values are low in cardinality, or the index won't scale well.
Make sure any potential filter columns adhere to the low-cardinality rule. Better yet, if you don't need real-time queries, use Spark to do your analysis. Creating new columns as needed is not a big deal, since Cassandra stores them sparsely; and if you use Spark, you can store these values in a map.
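One hedged sketch of what that sentinel-key advice could look like (time_bucket and writer_id are hypothetical names, and only a few of the original columns are shown):

CREATE TABLE operation_log_bucketed (
    time_bucket bigint,   -- e.g. time_s / 3600, an hour-level bucket
    writer_id int,        -- a known, non-time-based value, e.g. an aggregator id
    time_s bigint,
    time_ms int,
    uuid uuid,
    request text,
    PRIMARY KEY ((time_bucket, writer_id), time_s, time_ms, uuid)
) WITH CLUSTERING ORDER BY (time_s DESC, time_ms DESC, uuid DESC);

A time-range query then touches a bounded, predictable set of partitions (one per bucket/writer pair in the window) instead of a single ever-growing ordered range.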
If you follow these guidelines, you can scale as big as you want. If not, you will have very poor performance and will likely get performance equivalent to a single node.

Cassandra or HBase?

I have a requirement where I want to store the following:
Mac Address // PKEY
TimeStamp // PKEY
LocationID
ownerName
Signal Strength
The insertion logic is as follows:
Store the above statistics for each active device (MacAddress) once every hour at each location (LocationID)
The entries are created at end of each hour, so the primary key will always be MAC+TimeStamp
There are no updates, only insertions
The queries which can be performed are as follows:
Give me all the entries for last 'N' hours Where MacAddress = "...."
Give me all the entries for last 'N' hours Where LocationID IN (locID1, locID2, ..);
Needless to say, there are billions of entries, and I want to use either HBase or Cassandra. I have tried to explore both, and it seems that Cassandra may not be the correct choice.
The reason is that if I have the following in Cassandra:
RowKey: MacAddress:TimeStamp
  + LocationID
  + OwnerName
  + SignalStrength
both queries would scan the whole database, right? Even if I add an index on LocationID, that only helps the second query to some extent, because there is no index on the timestamp. (I believe searching on the timestamp alone is not fast, as the MacAddress:TimeStamp composite key does not allow searching on the timestamp only, so a full scan would happen. Is that correct?)
I'm stuck here big time, and any insights on whether we should opt for HBase or Cassandra would really help.
The right way to model this with Cassandra is to use a table partitioned by MAC address, ordered by timestamp, and indexed on location id; see the sketch below. See also the Cassandra data model documentation, especially the section on clustering (predefined sorting). None of your queries will require a full table scan.
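A minimal sketch of that layout in CQL (the table and column names are my own):

CREATE TABLE device_stats (
    mac_address text,
    ts timestamp,
    location_id int,
    owner_name text,
    signal_strength int,
    PRIMARY KEY (mac_address, ts)
) WITH CLUSTERING ORDER BY (ts DESC);

CREATE INDEX device_stats_location ON device_stats (location_id);

-- "all entries for the last N hours for a MAC" is a single-partition slice:
SELECT * FROM device_stats
WHERE mac_address = '00:11:22:33:44:55' AND ts > '2013-01-01 00:00:00';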
You have to remember that NoSql instances like Cassandra allow horizontal scaling and make it a lot easier to shard the data. By developing a shard strategy (identifying shard key, etc) you could dramatically reduce the size of the data on a single instance and make queries (even when trying to query massive data sets) doable.
Either one would work for this query:
Give me all the entries for last 'N' hours Where MacAddress = "...."
In Cassandra you would want to use an ordered partitioner so you can do easy scans; that way you would not have to scan the entire table. (I'm a little rusty on Cassandra.)
In HBase the data is always ordered by rowkey, so the scan becomes easy: you just set a start and a stop rowkey. Conceptually it would be:
// using org.apache.hadoop.hbase.client.Scan and org.apache.hadoop.hbase.util.Bytes
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes(mac + ":" + timestamp));
scan.setStopRow(Bytes.toBytes(mac + ":" + endtimestamp));
And then it would only scan over the rows for the given MAC address for the given time period, only a small subset of the data.
This query is much harder:
Give me all the entries for last 'N' hours Where LocationID IN
(locID1, locID2, ..);
Cassandra does have secondary indexes, so it seems like it would be "easy", but I don't know how much data it would scan through; I haven't looked at Cassandra since it added secondary indexes.
In HBase you'd have to scan the entire table or create a second table. I would recommend creating a second table whose rowkey is location:timestamp, duplicating the data. Then you'd use that table to look up the data by location, using a scan with start and stop keys set.
