Way to handle autoincrement ID with counter on Cassandra?

This is not a question about using an autoincrement integer instead of UUIDs as the primary key on Cassandra. I want to reproduce a PostgreSQL-style autoincrement effect on Cassandra, and it does not necessarily need to be scalable. I'm using a UUID as the primary key for entries in a table, but I need to generate a bitly-like shortid for those entries. So I'm building an application that grabs an index for a specific entry, generates a shortid based on that index, and then sets the shortid on the entry.
So I'm trying to do something like this on Cassandra:
CREATE TABLE photo (
    id uuid,
    shortid text,
    title text,
    PRIMARY KEY (id)
);
CREATE TABLE shortid (
    shortid text,
    family text,
    longid uuid,
    idx bigint,           -- "index" is a reserved word in CQL
    created_at timestamp,
    PRIMARY KEY ((shortid, family))
) WITH COMPACT STORAGE;
CREATE TABLE shortid_reverse (
    longid uuid,
    family text,
    shortid text,
    PRIMARY KEY ((longid, family))
) WITH COMPACT STORAGE;
-- counter tables cannot mix counter and non-counter columns,
-- so the last allocated uuid has to be tracked elsewhere
CREATE TABLE shortid_last_index (
    family text,
    last_index counter,
    PRIMARY KEY (family)
);
The application that manages the shortids will, on startup, read the last index for that family, and from then on increment the value in the application itself; since the application runs on Node.js, a single process can serialize those increments.
Application.js
var index = lastIndexFromCassandra++ //5
, hashids = new Hashids("this is my salt")
, shortid = hashids.encrypt(index); //dDae3KDDj4Q
After the application increments the index and generates the shortid, it persists everything to Cassandra:
UPDATE shortid_last_index SET last_index = last_index + 1 WHERE family = 'photo';
INSERT INTO shortid (shortid, family, longid, idx, created_at) VALUES ('dDae3KDDj4Q', 'photo', fabac1f0-7f88-11e3-baa7-0800200c9a66, 5, toTimestamp(now()));
INSERT INTO shortid_reverse (longid, family, shortid) VALUES (fabac1f0-7f88-11e3-baa7-0800200c9a66, 'photo', 'dDae3KDDj4Q');
UPDATE photo SET shortid = 'dDae3KDDj4Q' WHERE id = fabac1f0-7f88-11e3-baa7-0800200c9a66;
So, is there really no better way to do this in Cassandra without writing an application just for that? Couldn't I do something PostgreSQL-style on Cassandra:
UPDATE shortid_last_index SET last_index = last_index+1, last_long_id = ? WHERE family = 'photo' RETURNING last_index;
If the statement above worked, it would presumably lock the row; but doesn't incrementing and reading the index in the application itself, and then updating the counter in Cassandra, effectively serialize on that row too? And how well would the application scale?

If you need short incremental id generation, take a look at Snowflake or one of the other countless clones/inspirations.
What you are attempting to do is a bad idea on multiple counts: Cassandra counters cannot be read and incremented atomically, so two application instances can hand out the same index; counter updates are not idempotent, so a timed-out retry can double-increment; and a single counter row becomes a hotspot.
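If ids really must be allocated from Cassandra itself, lightweight transactions (the IF clause) are the closest safe primitive, at a latency cost. A minimal sketch, assuming a plain bigint column in a hypothetical shortid_allocator table (IF conditions are not supported on counter columns):
-- seed the allocator once per family
CREATE TABLE shortid_allocator (
    family text PRIMARY KEY,
    last_index bigint
);
INSERT INTO shortid_allocator (family, last_index) VALUES ('photo', 0) IF NOT EXISTS;

-- read the current value, then compare-and-set; if [applied] comes back
-- false, another client won the race, so re-read and retry
SELECT last_index FROM shortid_allocator WHERE family = 'photo';
UPDATE shortid_allocator SET last_index = 6 WHERE family = 'photo' IF last_index = 5;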

Related

Yugabyte YCQL: check if a set contains a value?

Is there any way to query on a SET type (or MAP/LIST) to find whether it contains a value or not?
Something like this:
CREATE TABLE test.table_name(
    id text,
    ckk SET<INT>,
    PRIMARY KEY((id))
);
SELECT * FROM table_name WHERE id = '1' AND ckk CONTAINS 4;
Is there any way to reach this query with YCQL api?
And can we use a SET type in a SECONDARY INDEX?
Is there any way to reach this query with YCQL api?
YCQL does not support the CONTAINS keyword yet (feel free to open an issue for this on the YugabyteDB GitHub).
One workaround can be to use MAP<INT, BOOLEAN> instead of SET<INT> and the [] operator.
For instance:
CREATE TABLE test.table_name(
    id text,
    ckk MAP<int, boolean>,
    PRIMARY KEY((id))
);
SELECT * FROM table_name WHERE id = 'foo' AND ckk[4] = true;
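Membership can then be toggled per element; a sketch, assuming YCQL's map-subscript assignment works as in CQL:
-- "add" 4 to the pseudo-set, then "remove" it by flipping the flag
UPDATE test.table_name SET ckk[4] = true WHERE id = 'foo';
UPDATE test.table_name SET ckk[4] = false WHERE id = 'foo';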
And can we use a SET type in a SECONDARY INDEX?
Generally, collection types cannot be part of the primary key or an index key.
However, "frozen" collections (i.e. collections serialized into a single value internally) can be part of either a primary key or an index key.
For instance:
CREATE TABLE table2(
    id TEXT,
    ckk FROZEN<SET<INT>>,
    PRIMARY KEY((id))
) WITH transactions = {'enabled' : true};
CREATE INDEX table2_idx on table2(ckk);
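One caveat (assuming this behaves like indexes on frozen collections generally): the index covers the collection as a single value, so equality compares entire sets:
-- matches only rows whose ckk is exactly the set {1, 2, 4}
SELECT * FROM table2 WHERE ckk = {1, 2, 4};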
Another option is to use a compound primary key, defining ckk as a clustering column:
cqlsh> CREATE TABLE ybdemo.tt(id TEXT, ckk INT, PRIMARY KEY ((id), ckk)) WITH CLUSTERING ORDER BY (ckk DESC);
cqlsh> SELECT * FROM ybdemo.tt WHERE id='foo' AND ckk=4;

Cassandra - how to update a record with a compound key

I'm in the process of learning Cassandra and using it on a small pilot project at work. I've got one table that is filtered by 3 fields:
CREATE TABLE webhook (
    event_id text,
    entity_type text,
    entity_operation text,
    callback_url text,
    create_timestamp timestamp,
    webhook_id text,
    last_mod_timestamp timestamp,
    app_key text,
    status_flag int,
    PRIMARY KEY ((event_id, entity_type, entity_operation))
);
Then I can pull records like so, which is exactly the query I need for this:
select * from webhook
where event_id = '11E7DEB1B162E780AD3894B2C0AB197A'
and entity_type = 'user'
and entity_operation = 'insert';
However, I have an update query to set the record inactive (soft delete), which would be most convenient by partition key in the same table. Of course, this isn't possible:
update webhook
set status_flag = 0
where webhook_id = '11e8765068f50730ac964b31be21d64e'
An example of why I'd want to do this, is a simple DELETE from an API endpoint:
http://myapi.com/webhooks/11e8765068f50730ac964b31be21d64e
Naturally, if I update based on the composite key, I'd potentially inactivate more records than I intend to.
Seems like my only choice, doing it the "Cassandra Way", is to use two tables; the one I already have and one to track status_flag by webhook_id, so I can update based on that id. I'd then have to select by webhook_id in the first table and disable it there as well? Otherwise, I'd have to force users to pass all the compound key values in the URL of the API's DELETE request.
Simple things you take for granted in relational data, seem to get complex very quickly in Cassandraland. Is this the case or am I making it more complicated than it really is?
You can add webhook_id to your primary key as a clustering column. So your table definition becomes something like this:
CREATE TABLE webhook (
    event_id text,
    entity_type text,
    entity_operation text,
    callback_url text,
    create_timestamp timestamp,
    webhook_id text,
    last_mod_timestamp timestamp,
    app_key text,
    status_flag int,
    PRIMARY KEY ((event_id, entity_type, entity_operation), webhook_id)
);
Now let's say you insert two records:
INSERT INTO dev_cybs_rtd_search.webhook(event_id,entity_type,entity_operation,status_flag,webhook_id) VALUES('11E7DEB1B162E780AD3894B2C0AB197A','user','insert',1,'web_id');
INSERT INTO dev_cybs_rtd_search.webhook(event_id,entity_type,entity_operation,status_flag,webhook_id) VALUES('12313131312313','user','insert',1,'web_id_1');
And you can update like the following:
update webhook
set status_flag = 0
where webhook_id = 'web_id' AND event_id = '11E7DEB1B162E780AD3894B2C0AB197A' AND entity_type = 'user'
AND entity_operation = 'insert';
It will only update one record. However, you have to send all the columns defined in your primary key.
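If the API really can only supply the webhook_id, as in the DELETE endpoint above, the two-table approach the question sketches would look something like this; a sketch, where the lookup table name webhook_by_id is hypothetical:
CREATE TABLE webhook_by_id (
    webhook_id text PRIMARY KEY,
    event_id text,
    entity_type text,
    entity_operation text
);
-- DELETE handler: resolve the compound key first...
SELECT event_id, entity_type, entity_operation FROM webhook_by_id WHERE webhook_id = 'web_id';
-- ...then soft-delete in the main table using the full primary key
UPDATE webhook SET status_flag = 0
WHERE event_id = '11E7DEB1B162E780AD3894B2C0AB197A' AND entity_type = 'user'
AND entity_operation = 'insert' AND webhook_id = 'web_id';
Remember to insert into webhook_by_id whenever you write a webhook row, so the two tables stay in sync.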

nested map in cassandra data modelling

I have the following requirement for my dataset, and need to understand what datatype I should use and how to save my data accordingly:
CREATE TABLE events (
    id text,
    evntoverlap map<text, map<timestamp,int>>,
    PRIMARY KEY (id)
);
evntoverlap = {
    'Dig1': {'2017-10-09 04:10:05': 0},
    'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0},
    'Dig3': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0, '2017-10-09 04:11:05': 0}
}
This gives an error :-
Error from server: code=2200 [Invalid query] message="Non-frozen collections are not allowed inside collections: map<text, map<timestamp, int>>"
How should I store this type of data in a single column? Please suggest a datatype and an insert command for the same.
Thanks,
This is a limitation of Cassandra: you can't nest a collection (or UDT) inside a collection without making it frozen. So you need to "freeze" one of the collections, either the nested one:
CREATE TABLE events (
    id text,
    evntoverlap map<text, frozen<map<timestamp,int>>>,
    PRIMARY KEY (id)
);
or top-level:
CREATE TABLE events (
    id text,
    evntoverlap frozen<map<text, map<timestamp,int>>>,
    PRIMARY KEY (id)
);
See documentation for more details.
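For instance, a write against the first variant could look like this (a sketch; id 'evt1' is a made-up value). Because the inner map is frozen, each inner map is written as one value, though individual keys of the outer, non-frozen map can still be replaced:
INSERT INTO events (id, evntoverlap) VALUES (
    'evt1',
    {'Dig1': {'2017-10-09 04:10:05': 0},
     'Dig2': {'2017-10-09 04:11:05': 0, '2017-10-09 04:15:05': 0}}
);

-- replacing one outer key with a whole new (frozen) inner map
UPDATE events SET evntoverlap['Dig1'] = {'2017-10-09 04:10:05': 1} WHERE id = 'evt1';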
CQL collections are limited to 64KB, and if you put maps inside maps you might push that limit. Especially with frozen maps, you end up deserializing the entire map, modifying it, and re-inserting it. You might be better off with a
CREATE TABLE events (
    id text,
    evnt_key text,
    value map<timestamp, int>,
    PRIMARY KEY ((id), evnt_key)
);
Or even a
CREATE TABLE events (
    id text,
    evnt_key text,
    evnt_time timestamp,
    value int,
    PRIMARY KEY ((id), evnt_key, evnt_time)
);
This would be more efficient and safer, while giving additional benefits like being able to order the evnt_time values in ascending or descending order.
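Under that last layout each occurrence becomes its own row; for example (a sketch against the second alternative, with made-up values):
INSERT INTO events (id, evnt_key, evnt_time, value)
VALUES ('evt1', 'Dig2', '2017-10-09 04:11:05', 0);

-- all occurrences of one key, ordered by time via the clustering key
SELECT evnt_time, value FROM events WHERE id = 'evt1' AND evnt_key = 'Dig2';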

Is using all fields as the partition key of a table a drawback in Cassandra?

My aim is to get the msgAddDate based on the query below:
select max(msgAddDate)
from sampletable
where reportid = 1 and objectType = 'loan' and msgProcessed = 1;
Design 1:
Here reportid, objectType, and msgProcessed may not be unique. To add uniqueness I have added msgAddDate and msgProcessedDate (an additional unique value).
I use this design because I don't perform range queries.
CREATE TABLE sampletable (
    reportid INT,
    objectType TEXT,
    msgAddDate TIMESTAMP,
    msgProcessed INT,
    msgProcessedDate TIMESTAMP,
    PRIMARY KEY ((reportid, msgProcessed, objectType, msgAddDate, msgProcessedDate))
);
Design 2:
CREATE TABLE sampletable (
    reportid INT,
    objectType TEXT,
    msgAddDate TIMESTAMP,
    msgProcessed INT,
    msgProcessedDate TIMESTAMP,
    PRIMARY KEY ((reportid, msgProcessed, objectType), msgAddDate, msgProcessedDate)
);
Please advise which one to use, and what the pros and cons of the two are in terms of performance.
Design 2 is the one you want.
In Design 1, the whole primary key is the partition key, which means you need to provide all the attributes (reportid, msgProcessed, objectType, msgAddDate, msgProcessedDate) to query your data with a SELECT statement (which wouldn't be useful, as you would not retrieve any attributes beyond the ones you already provided in the WHERE clause).
In Design 2, your partition key is (reportid, msgProcessed, objectType), which are the three attributes you want to query by. Great. msgAddDate is the first clustering column, so it will be automatically sorted for you; you don't even need to run a max. All you need to do is use LIMIT 1:
SELECT msgAddDate FROM sampletable WHERE reportid = 1 and objectType = 'loan' and msgProcessed = 1 LIMIT 1;
Of course, make sure to define a DESC sort order on msgAddDate (by default it is ascending).
Hope it helps!
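Concretely, Design 2 with the descending order declared up front might look like this (a sketch):
CREATE TABLE sampletable (
    reportid INT,
    objectType TEXT,
    msgAddDate TIMESTAMP,
    msgProcessed INT,
    msgProcessedDate TIMESTAMP,
    PRIMARY KEY ((reportid, msgProcessed, objectType), msgAddDate, msgProcessedDate)
) WITH CLUSTERING ORDER BY (msgAddDate DESC, msgProcessedDate DESC);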

Updating denormalized data in Cassandra

I'm trying to build a news feed system using Cassandra. I was thinking of using a fan-out approach wherein, if a user creates a new post, I'll write a new record into each of his friends' feeds. The table structure looks like:
CREATE TABLE users (
    user_name TEXT,
    first_name TEXT,
    last_name TEXT,
    profile_pic TEXT,
    PRIMARY KEY (user_name)
);
CREATE TABLE user_feed (
    user_name TEXT,
    posted_time TIMESTAMP,
    post_id UUID,
    posted_by TEXT, // poster's user_name
    posted_by_profile_pic TEXT,
    post_content TEXT,
    PRIMARY KEY ((user_name), posted_time)
) WITH CLUSTERING ORDER BY (posted_time DESC);
Now, I can get the feed for a particular user in a single query, all fine. But what if a user who has posted updates his profile pic? How do I go about updating the data in the user_feed table?
You can use batch statements to make the updates atomic. Note that user_feed's primary key includes posted_time as a clustering column, so the UPDATE has to specify it, meaning you first need to know which feed rows carry the stale picture. The batch then pairs the users update with the affected user_feed rows:
BEGIN BATCH
UPDATE users SET profile_pic = ? WHERE user_name = ?;
UPDATE user_feed SET posted_by_profile_pic = ? WHERE user_name = ? AND posted_time = ?;
APPLY BATCH;
Take a look at the CQL BATCH documentation.
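In practice the stale rows live under each follower's partition, so their posted_time values have to be located first. A sketch with made-up names ('alice' follows 'bob'; ALLOW FILTERING is tolerable here because the scan stays inside one partition):
-- find the feed entries in alice's partition that came from bob
SELECT posted_time FROM user_feed
WHERE user_name = 'alice' AND posted_by = 'bob' ALLOW FILTERING;

-- then update bob's profile and alice's affected feed rows together
BEGIN BATCH
UPDATE users SET profile_pic = 'bob_v2.jpg' WHERE user_name = 'bob';
UPDATE user_feed SET posted_by_profile_pic = 'bob_v2.jpg'
WHERE user_name = 'alice' AND posted_time = '2018-01-01 10:00:00';
APPLY BATCH;
Repeat the user_feed update for each follower and each matching posted_time, which is the real cost of the fan-out design.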
