I am starting with an initial idea of rewriting a mammoth Spark-Kafka-HBase application as Spark-Kafka-Cassandra (on Kubernetes).
I have the following two data models: one supports inserts all the time, and the other supports upserts.
Approach 1:
create table test.inv_positions(
    location_id int,
    item bigint,
    time_id timestamp,
    sales_floor_qty int,
    backroom_qty int,
    in_backroom boolean,
    transit_qty int,
    primary key ((location_id), item, time_id)
) with clustering order by (item asc, time_id DESC);
This table keeps inserting new rows because time_id is part of the clustering columns. I am thinking of reading the latest record (time_id is DESC) by fetching 1 row, and somehow deleting the old records by either setting a TTL on the key columns or deleting them overnight.
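For reference, a minimal sketch of that "latest record" read under approach 1 (the key values are hypothetical):

-- time_id is the last clustering column and is ordered DESC,
-- so LIMIT 1 returns the newest record for this location and item
SELECT * FROM test.inv_positions
WHERE location_id = 1 AND item = 1001
LIMIT 1;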
Concerns: TTL or deleting the old records creates tombstones.
Approach 2:
create table test.inv_positions(
    location_id int,
    item bigint,
    time_id timestamp,
    sales_floor_qty int,
    backroom_qty int,
    in_backroom boolean,
    transit_qty int,
    primary key ((location_id), item)
) with clustering order by (item asc);
With this table, if a new record comes in for the same location and item, it is upserted. It is easy to read, and there is no need to worry about purging old records.
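For illustration, a minimal sketch of the upsert behaviour under approach 2 (the values are hypothetical):

-- Both writes target the same primary key (location_id = 1, item = 1001),
-- so the second one overwrites the first and only one row remains.
INSERT INTO test.inv_positions (location_id, item, time_id, sales_floor_qty)
VALUES (1, 1001, '2020-01-01 10:00:00', 5);
INSERT INTO test.inv_positions (location_id, item, time_id, sales_floor_qty)
VALUES (1, 1001, '2020-01-01 11:00:00', 7);

SELECT * FROM test.inv_positions WHERE location_id = 1 AND item = 1001;  -- returns the 11:00 row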
Concerns: I have another application on Cassandra that updates different columns at different times, and we still have read issues. That said, upserts also create tombstones, but how much worse is that compared to approach 1? Or is there any other, better way to model this?
The first approach seems good. TTL and delete both create tombstones. You can look at the compaction strategy for TTL-based deletes: TWCS is better for TTL-based deletes, whereas you can use STCS for plain deletes. Also, configure gc_grace_seconds accordingly so tombstones are cleared smoothly, because heavy tombstone buildup leads to read latency.
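For example, a minimal sketch of how approach 1 could be configured with a table-level TTL, TWCS, and a tuned gc_grace_seconds (the numbers are only placeholders and need tuning for your workload):

create table test.inv_positions(
    location_id int,
    item bigint,
    time_id timestamp,
    sales_floor_qty int,
    backroom_qty int,
    in_backroom boolean,
    transit_qty int,
    primary key ((location_id), item, time_id)
) with clustering order by (item asc, time_id DESC)
  and default_time_to_live = 604800   -- expire rows after 7 days
  and gc_grace_seconds = 86400        -- keep tombstones 1 day; make sure repairs run within this window
  and compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'};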
Related
I have a table that uses TWCS and includes a counter column:
create table sensors_by_time (
group text, // sensor group
date date, // bucketing
id text, // sensor id
count counter, // detected count
primary key ((group, date), id))
WITH CLUSTERING ORDER BY (id DESC)
AND compaction = {
'compaction_window_size': '24',
'compaction_window_unit': 'HOURS',
'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy'}
After a week I have 7 SSTables (1 for each day). I need the data for 7 days, so I thought of using TTL and gc_grace_seconds, but Cassandra does not support TTL on a table with a counter column.
My other option is to use some job to delete data older than 7 days, but I understand that is not good for performance because of TWCS: http://www.redshots.com/cassandra-twcs-must-have-ttls/
How should I delete old data from such a table?
I know I'm resurrecting an old question, but I ran into a similar problem, and wrote a tool to help solve it. On each node, you'll have to:
stop the cassandra process
delete the SSTables that contain the old records
start the process again
The difficult part is knowing which SSTables contain date ranges you're no longer interested in. Cassandra comes with a tool, sstablemetadata, that displays SSTable metadata, including the min/max timestamps.
sstablemetadata is slow, and the output is difficult to process. Instead try ls-sstm, which outputs nicely formatted tabular data about each SSTable within a Cassandra table directory: https://github.com/lokkju/cassandra-tools/blob/main/ls-sstm.sh
With Cassandra it is possible to specify the clustering order on a table with a particular column.
CREATE TABLE myTable (
user_id INT,
message TEXT,
modified DATE,
PRIMARY KEY ((user_id), modified)
)
WITH CLUSTERING ORDER BY (modified DESC);
Note: In this example, there is one message per user_id (intended)
Given this table, my understanding is that query performance will be better in cases where recent data is queried.
However, if one were to make updates to the "modified" column, does it add extra overhead on the server to "re-order", and is that overhead significant compared to the query performance gained?
In other words, given this table, would it perform better if the "CLUSTERING ORDER BY (modified DESC)" were dropped?
UPDATE: Updated the invalid CQL by adding modified to the primary key; however, the original questions still stand.
In order to make modified a clustering column, it needs to be defined in the primary key.
CREATE TABLE myTable (
user_id INT,
message TEXT,
modified DATE,
PRIMARY KEY ((user_id), modified)
)
WITH CLUSTERING ORDER BY (modified DESC);
This way, your data will be partitioned by the hashed value of user_id, and within each partition sorted by modified. You don't need to drop the "WITH CLUSTERING ORDER BY (modified DESC)".
Moving the comment to an answer, in reply to the updated question:
if one were to make updates to the "modified" column does it add
extra overhead on the server to "re-order" and is that overhead vs
query performance significant?
If modified is defined as part of the clustering key, you won't be able to update that record, but you will be able to add as many records as needed, each time with a different modified date.
Cassandra is an append-only database engine: any update to a record adds a new record with a different timestamp, and a select will consider the records with the latest timestamp. This means that there is no "re-order" operation.
Whether to drop or keep the clustering order should be decided based on how the information will be retrieved; if you are only going to use the latest records of that user_id, it makes sense to keep the clustering order as you defined it.
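To make this concrete, here is a minimal sketch (with hypothetical values) of what "updating" modified actually looks like:

-- Each write with a different modified value adds a new row inside the user_id partition;
-- nothing gets re-ordered in place.
INSERT INTO myTable (user_id, message, modified) VALUES (42, 'hello', '2020-01-01');
INSERT INTO myTable (user_id, message, modified) VALUES (42, 'hello again', '2020-01-02');

-- With CLUSTERING ORDER BY (modified DESC), the latest row is a cheap read:
SELECT * FROM myTable WHERE user_id = 42 LIMIT 1;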
In your data model, user_id is a row key/shard key/partition key, which is important for data locality, and the clustering column (modified) specifies the order in which the data is arranged inside the partition. The combination of these two keys makes up the primary key.
Even in the RDBMS world, updating a PK is avoided for the sake of data integrity.
In Cassandra, however, there are no constraints/relations between column families/tables.
Assigning the exact same values to the PK fields (user_id, modified) will result in updating the existing record; otherwise a new set of fields will be added.
Reference:
https://www.datastax.com/dev/blog/we-shall-have-order
I'm using Cassandra 2.1 and have a model that roughly looks as follows:
CREATE TABLE events (
client_id bigint,
bucket int,
timestamp timeuuid,
...
ticket_id bigint,
PRIMARY KEY ((client_id, bucket), timestamp)
);
CREATE INDEX events_ticket ON events(ticket_id);
As you can see, I've created a secondary index on ticket_id. This index works OK. events contains around 100 million rows, while only 5 million of those rows have a ticket_id, spread over around 50,000 distinct tickets. So a ticket has, on average, 100 events.
Querying the secondary index works without supplying the partition key, which is convenient in our situation, as the bucket column is sometimes hard to determine beforehand (i.e. you would need to know the date of the events; bucket is currently the date).
cqlsh> select * from events where ticket_id = 123;
client_id | bucket | timestamp | ... | ticket_id
-----------+--------+-----------+-----+-----------
(0 rows)
How do I solve the problem when all events of a ticket should be moved to another ticket? I.e. the following query won't work:
cqlsh> UPDATE events SET ticket_id = 321 WHERE ticket_id = 123;
InvalidRequest: code=2200 [Invalid query] message="Non PRIMARY KEY ticket_id found in where clause"
Does this imply secondary indexes cannot be used in UPDATE queries?
What model should I use to support these changes?
First of all, UPDATE and INSERT operations are treated the same in Cassandra. They are colloquially known as "UPSERTs."
Does this imply secondary indexes cannot be used in UPDATE queries?
Correct. You cannot perform an UPSERT in Cassandra without specifying the complete PRIMARY KEY. Even UPSERTs with a partial PRIMARY KEY will not work. And (as you have discovered) UPSERTing by an indexed value does not work, either.
How do I solve the problem when all events of a ticket should be moved to another ticket?
Unfortunately, the only way to accomplish this is to query the keys of each row in events (with a particular ticket_id) and UPSERT ticket_id by those keys. The nice thing is that you don't have to DELETE them first, because ticket_id is not part of the PRIMARY KEY.
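A minimal sketch of that two-step approach, using hypothetical key values:

-- 1) Use the secondary index to collect the full primary keys of the affected rows:
SELECT client_id, bucket, timestamp FROM events WHERE ticket_id = 123;

-- 2) UPSERT the new ticket_id by each returned primary key:
UPDATE events SET ticket_id = 321
WHERE client_id = 2112 AND bucket = 20150422
  AND timestamp = 4a7e2730-e929-11e4-88c8-21b264d4c94d;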
How do I solve the problem when all events of a ticket should be moved to another ticket?
I think your best plan here would be to forego a secondary index altogether, and create a query table to work alongside your events table:
CREATE TABLE eventsbyticketid (
client_id bigint,
bucket int,
timestamp timeuuid,
...
ticket_id bigint,
PRIMARY KEY ((ticket_id), timestamp)
) WITH CLUSTERING ORDER BY (timestamp DESC);
This would allow you to query by ticket_id quickly (to obtain your client_id, bucket, and timestamp). That would give you the information you need to UPSERT the new ticket_id on your events table.
You could also then perform a DELETE by ticket_id (on the eventsbyticketid table). Cassandra does allow a DELETE operation with a partial PRIMARY KEY, as long as you have the full partition key (ticket_id). So removing old ticket_ids from the query table would be easy. And to ensure write atomicity, you could batch the UPSERTs together:
BEGIN BATCH
UPDATE events SET ticket_id = 321 WHERE client_id=2112 AND bucket=20150422 AND timestamp=4a7e2730-e929-11e4-88c8-21b264d4c94d;
UPDATE eventsbyticketid SET client_id=2112, bucket=20150422 WHERE ticket_id=321 AND timestamp=4a7e2730-e929-11e4-88c8-21b264d4c94d;
APPLY BATCH;
Which is actually the same as performing:
BEGIN BATCH
INSERT INTO events (client_id, bucket, timestamp, ticket_id) VALUES (2112, 20150422, 4a7e2730-e929-11e4-88c8-21b264d4c94d, 321);
INSERT INTO eventsbyticketid (client_id, bucket, timestamp, ticket_id) VALUES (2112, 20150422, 4a7e2730-e929-11e4-88c8-21b264d4c94d, 321);
APPLY BATCH;
Side note: timestamp is actually a (reserved word) data type in Cassandra. This makes it a pretty lousy name for a timeuuid column.
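And for removing an old ticket_id from the query table described above, the partition-level DELETE mentioned earlier is a one-liner (same hypothetical values as before):

-- ticket_id is the complete partition key of eventsbyticketid, so no clustering columns are needed
DELETE FROM eventsbyticketid WHERE ticket_id = 123;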
You can use the secondary index to query the events for the old ticket, and then use the primary key from those retrieved events to update the events.
I'm not sure why you need to do this manually; it seems like something Cassandra should be able to do under the hood.
Here I am again, asking a similar question after getting a really great explanation in
How do secondary indexes work in Cassandra?
CREATE TABLE update_audit (
scopeid bigint,
formid bigint,
time timestamp,
operation int,
record_id bigint,
ipaddress text,
user_id bigint,
value text,
PRIMARY KEY ((scopeid), formid, time)
) WITH CLUSTERING ORDER BY (formid ASC, time DESC)
FYI,
The operation column's possible values are 1, 2, and 3: low cardinality.
record_id is high cardinality; every entry can be unique.
user_id is the best candidate for Index according to How do secondary indexes work in Cassandra? and The sweet spot for cassandra secondary indexing.
Search should work based on
time with limit 100.
operation and time with limit 100.
user_id and time with limit 100.
record_id and time with limit 100.
Problems
Total records: more than 10,000M.
Which one is best:
- creating an index over operation, user_id, and record_id, and applying limit 100?
1) Will the hidden column family for the index on operation return only 100 results?
2) Will more seeks slow down the fetch operation?
OR create a new column family with a definition like:
CREATE TABLE audit_operation_idx (
scopeid bigint,
formid bigint,
operation int,
time timeuuid,
PRIMARY KEY ((scopeid), formid, operation, time)
) WITH CLUSTERING ORDER BY (formid ASC, operation ASC, time DESC)
This requires two SELECT queries for a single read operation.
So, if I create a new column family each for operation, user_id, and record_id,
I have to make a batch query to insert into these four column families.
3) Will TCP problems arise while executing the batch query, because writes will be huge?
4) What else should I cover to avoid unnecessary problems?
There are three options.
Create a new table and use bulk inserts. If the size of the insert query becomes huge, you'll have to configure the related parameters. Don't worry about writes in Cassandra.
Create a materialized view with the required columns of the WHERE clause (see the sketch after this list).
Create a secondary index if cardinality is low. (Not recommended.)
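For the second option, a rough sketch of what a materialized view for the "user_id and time" search could look like (the view name and values are hypothetical; materialized views require Cassandra 3.0+):

CREATE MATERIALIZED VIEW update_audit_by_user AS
  SELECT * FROM update_audit
  WHERE scopeid IS NOT NULL AND formid IS NOT NULL
    AND time IS NOT NULL AND user_id IS NOT NULL
  PRIMARY KEY ((scopeid, user_id), time, formid)
  WITH CLUSTERING ORDER BY (time DESC, formid ASC);

-- The "user_id and time with limit 100" search then becomes:
SELECT * FROM update_audit_by_user WHERE scopeid = 1 AND user_id = 42 LIMIT 100;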
I need to store the latest updates that need to be pushed to users' newsfeed pages in a Cassandra table for later retrieval, and my table's schema is as follows:
CREATE TABLE newsfeed (
    user_name text,
    post_id bigint,
    post_type text,
    favorited boolean,
    shared boolean,
    own boolean,
    date timestamp,
    PRIMARY KEY (user_name, date, post_id, post_type)
);
The first three columns (user_name, post_id, and post_type) in combination would build the actual primary key of the table; however, since I wanted to ORDER the SELECT queries on this table based on the date of the rows, I placed the date column into the primary key fields as the second entry (did I have to do this?).
When I want to delete a row by giving only "user_name, post_id, and post_type" as follow:
DELETE FROM newsfeed WHERE user_name='pooria' and post_id=36 and post_type='p';
I will get the following error:
Bad Request: Missing PRIMARY KEY part date since post_id is set
I need the date column to be part of the primary key since I want to use it in my ORDER BY clauses, but on the other hand I have to delete some rows without knowing their date values!
So how are such problems tackled in Cassandra? Should I fix my data model and use a different schema for this job?
DataStax's Chief Evangelist Patrick McFadden posted an article demonstrating a few time series modeling patterns. Definitely makes for a good read, and should be of some help to you: Getting Started with Time Series Data Modeling.
I think your table is just fine. Although, with the way that composite primary keys work in Cassandra, you cannot skip primary key components in a query. So if you do end up needing to query data by user_name, post_id, and/or post_type differently (without date), you should create a table specifically for that query (which does not include date in the primary key).
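For illustration, a minimal sketch of such a companion table, keyed for lookups by user_name, post_id, and post_type (the table name is hypothetical):

-- Same data as newsfeed, but without date in the primary key,
-- so rows can be read (or deleted) by user_name, post_id and post_type alone.
CREATE TABLE newsfeed_by_post (
    user_name text,
    post_id bigint,
    post_type text,
    favorited boolean,
    shared boolean,
    own boolean,
    date timestamp,
    PRIMARY KEY ((user_name), post_id, post_type)
);

DELETE FROM newsfeed_by_post WHERE user_name = 'pooria' AND post_id = 36 AND post_type = 'p';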
I will however say that in-general, creating a table which will process regular delete operations is not a good idea. In fact, I'm pretty sure that has been classified as a Cassandra "anti-pattern." Data really isn't deleted from Cassandra; it is tombstoned. Tombstones are reconciled at compaction time (assuming that the tombstone threshold time has been met), and having too many of them has been known to cause performance issues.
If you read the article I linked above, go down to the section named "Time Series Pattern 3." You will notice that the INSERT statements are run with the USING TTL clause. This gives the data a time-to-live in seconds, after which it will "quietly disappear." For instance, if you wanted to keep your data around for 24 hours (86400 seconds) you could do something like this:
INSERT INTO newsfeed (...) VALUES (...) USING TTL 86400
Using the TTL feature is a preferable alternative to regular cleansing by DELETE.
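For the newsfeed table above, such an insert could look like this (the values are hypothetical; 86400 seconds = 24 hours):

INSERT INTO newsfeed (user_name, date, post_id, post_type, favorited, shared, own)
VALUES ('pooria', '2015-04-22 14:53:00', 36, 'p', false, true, true)
USING TTL 86400;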