I have a table which maintains data for products.
I update this table after making an API call every few minutes.
Say I have this table:
create table products(
id int,
name text,
created timestamp,
updated timestamp,
primary key(id)
);
So I make an API call at time 00:00:00 and I get a list of products:
[
{id:1, name:'product1'},
{id:2, name:'product2'}
]
I insert it into my Cassandra table:
id | name     | created           | updated
1  | Product1 | 1/1/2018 00:00:00 | 1/1/2018 00:00:00
2  | Product2 | 1/1/2018 00:00:00 | 1/1/2018 00:00:00
Now in the second API call, 10 minutes later, the user has deleted Product1 but created Product3, so I get this API output:
[
{id:2, name:'product2'},
{id:3, name:'product3'}
]
I will upsert this into my Cassandra table (since I don't want to check whether a record exists before deciding between insert and update), but I want to preserve the created time of Product2 in my upsert query so that I know when the product was created and what its lifetime was.
Is it possible to have the created timestamp set via now() without specifying it in every query, while I keep updating the updated value?
My table should look like this:
id | name     | created           | updated
1  | Product1 | 1/1/2018 00:00:00 | 1/1/2018 00:00:00
2  | Product2 | 1/1/2018 00:00:00 | 1/1/2018 00:00:10
3  | Product3 | 1/1/2018 00:00:10 | 1/1/2018 00:00:10
TL;DR: Yes.
You'll have to know which columns to write during the upsert; that will be the only logic pushed to the application side, but the data model can work exactly as you expect.
CREATE TABLE KS.products (
    id int PRIMARY KEY,
    created timestamp,
    name text,
    updated timestamp
);
INSERT INTO products (id, name, created, updated)
VALUES (1, 'TTzV2', toTimeStamp(now()), toTimeStamp(now()));
INSERT INTO products (id, name, created, updated)
VALUES (2, 'smLL301', toTimeStamp(now()), toTimeStamp(now()));
SELECT * FROM KS.products;
id | created | name | updated
----+---------------------------------+---------+---------------------------------
1 | 2018-09-28 17:22:38.502000+0000 | TTzV2 | 2018-09-28 17:22:38.502000+0000
2 | 2018-09-28 17:22:39.180000+0000 | smLL301 | 2018-09-28 17:22:39.180000+0000
Now, once a product already exists, your API will need to issue a different statement for it, but with the following CQL you can get what you expected:
INSERT INTO products (id, updated) VALUES (2, toTimeStamp(now()));
INSERT INTO products (id, name, created, updated)
VALUES (3, 'Gt3X', toTimeStamp(now()), toTimeStamp(now()));
SELECT * FROM KS.products;
id | created | name | updated
----+---------------------------------+---------+---------------------------------
1 | 2018-09-28 17:22:38.502000+0000 | TTzV2 | 2018-09-28 17:22:38.502000+0000
2 | 2018-09-28 17:22:39.180000+0000 | smLL301 | 2018-09-28 17:25:38.208000+0000
3 | 2018-09-28 17:25:38.774000+0000 | Gt3X | 2018-09-28 17:25:38.774000+0000
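As a side note, an UPDATE would work just as well for the already-exists path, since updates are upserts in Cassandra too; a minimal sketch, reusing the rows above:
UPDATE products SET name = 'smLL301', updated = toTimeStamp(now()) WHERE id = 2;
Either way, the one piece of logic that stays in your application is deciding whether to include created in the statement, since Cassandra itself won't distinguish a first write from an update.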
I want to partition by three columns in my query:
- user id
- cancelation month year
- retention month year
I used row_number with partition by as follows:
row_number() over (
    partition by user_id,
                 cast(date_format(cancelation_date, 'yyyyMM') as integer),
                 cast(date_format(retention_date, 'yyyyMM') as integer)
    order by cast(date_format(cancelation_date, 'yyyyMM') as integer) asc,
             cast(date_format(retention_date, 'yyyyMM') as integer) asc
) as row_count
Example of the output I got:
| user_id | cancelation_date | cancelation_month_year | retention_date | retention_month_year | row_count |
| ------- | ---------------- | ---------------------- | -------------- | -------------------- | --------- |
| 566     | 28-5-2020        | 202005                 | 20-7-2020      | 202007               | 1         |
| 566     | 28-5-2020        | 202005                 | 30-7-2020      | 202007               | 2         |
Example of the output I want to get:
| user_id | cancelation_date | cancelation_month_year | retention_date | retention_month_year | row_count |
| ------- | ---------------- | ---------------------- | -------------- | -------------------- | --------- |
| 566     | 28-5-2020        | 202005                 | 20-7-2020      | 202007               | 1         |
| 566     | 28-5-2020        | 202005                 | 30-7-2020      | 202007               | 1         |
Note that a user may have more than one cancelation month; for example, if he has canceled in August, I want row_count = 2 for all dates in August, and so on.
It's not obvious to me why partition by is partitioning by retention date instead of by retention month year.
I get the impression that row_number is not what you want; rather, you are interested in dense_rank, which would give you your expected output.
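A sketch of what that could look like, keeping the partition on user_id and moving the month keys into the ORDER BY (untested, reusing your column names):
dense_rank() over (
    partition by user_id
    order by cast(date_format(cancelation_date, 'yyyyMM') as integer),
             cast(date_format(retention_date, 'yyyyMM') as integer)
) as row_count
With dense_rank, all rows that tie on the month keys share one rank, and the next cancelation month advances the rank to 2, which matches your August example.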
I have a table with this schema:
create table last_message_by_group
(
    date date,
    created_at timestamp,
    message text,
    group_id bigint,
    message_id bigint,
    primary key (date, created_at, message_id)
)
with clustering order by (created_at desc);
and the data should be:
| date | created_at | message | group_id |
| 2021-05-11 | 7:23:54 | ddd | 1 |
| 2021-05-11 | 6:21:43 | ccc | 1 |
| 2021-05-11 | 5:35:16 | bbb | 2 |
| 2021-05-11 | 4:38:23 | aaa | 2 |
It shows messages ordered by created_at desc, partitioned by date.
But the problem is that I cannot get the last message for each group, like:
| date | created_at | message | group_id |
| 2021-05-11 | 7:23:54 | ddd | 1 |
| 2021-05-11 | 5:35:16 | bbb | 2 |
created_at is a clustering key, so it can't be updated; instead I delete and insert a new row for every new message in a group, which performs poorly.
Is there a better way to do this?
I was able to get this to work by making one change to your primary key definition. I added group_id as the first clustering key:
PRIMARY KEY (date, group_id, created_at, message_id)
After inserting the same data, this works:
> SELECT date, group_id, max(created_at), message
FROM last_message_by_group
WHERE date='2021-05-11'
GROUP BY date,group_id;
date | group_id | system.max(created_at) | message
------------+----------+---------------------------------+---------
2021-05-11 | 1 | 2021-05-11 12:23:54.000000+0000 | ddd
2021-05-11 | 2 | 2021-05-11 10:35:16.000000+0000 | bbb
(2 rows)
There's more detail on using CQL's GROUP BY clause in the official docs.
There is one problem: because you changed the clustering key, messages will now be ordered by group_id first. Any idea how to still order by created_at and get one message per group?
From the document linked above:
the GROUP BY option only accept as arguments primary key column names in the primary key order.
Unfortunately, if we were to adjust the primary key definition to put created_at before group_id, we would also have to group by created_at. That would create a "group" for each unique created_at, which negates the idea behind group_id.
In this case, you may have to decide between having the grouped results in a particular order vs. having them grouped at all. It might also be possible to group the results, but then re-order them appropriately on the application side.
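To make that trade-off concrete, here is a sketch of the rejected alternative (a hypothetical key order, not a working fix):
-- Hypothetical: PRIMARY KEY (date, created_at, group_id, message_id)
-- GROUP BY date, group_id would be rejected because it skips created_at;
-- the closest legal query groups by every unique timestamp instead:
SELECT date, created_at, group_id, message
FROM last_message_by_group
WHERE date = '2021-05-11'
GROUP BY date, created_at, group_id;
Each distinct created_at then forms its own group, which is why this ordering defeats the purpose of grouping by group_id.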
I'm learning Cassandra through its documentation; right now I'm reading about batches and static fields.
In their example at the end of the page, they somehow managed to make balance have two different values (-200, -208) even though it's a static field.
Could someone explain to me how this is possible? I've read the whole page but I did not catch on.
In Cassandra, a static field is static within a partition.
Example: let's define a table:
CREATE TABLE static_test (
pk int,
ck int,
d int,
s int static,
PRIMARY KEY (pk, ck)
);
Here pk is the partition key and ck is the clustering key.
Let's insert some data:
INSERT INTO static_test (pk , ck , d , s ) VALUES ( 1, 10, 100, 1000);
INSERT INTO static_test (pk , ck , d , s ) VALUES ( 2, 20, 200, 2000);
If we select the data:
pk | ck | s | d
----+----+------+-----
1 | 10 | 1000 | 100
2 | 20 | 2000 | 200
Here, for partition key pk = 1 the static field s has the value 1000, and for partition key pk = 2 it has the value 2000.
If we insert/update the static field s value for partition key pk = 1:
INSERT INTO static_test (pk , ck , d , s ) VALUES ( 1, 11, 101, 1001);
Then the static field s value changes for all rows of partition key pk = 1:
pk | ck | s | d
----+----+------+-----
1 | 10 | 1001 | 100
1 | 11 | 1001 | 101
2 | 20 | 2000 | 200
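Because s lives at the partition level, it can also be written using only the partition key; a minimal sketch against the same table:
-- Changes s for every row of partition pk = 1 (ck = 10 and ck = 11)
UPDATE static_test SET s = 1002 WHERE pk = 1;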
In a table that uses clustering columns, non-clustering columns can be declared static in the table definition. Static columns are only static within a given partition.
Example:
CREATE TABLE test (
partition_column text,
static_column text STATIC,
clustering_column int,
PRIMARY KEY (partition_column , clustering_column)
);
INSERT INTO test (partition_column, static_column, clustering_column) VALUES ('key1', 'A', 0);
INSERT INTO test (partition_column, clustering_column) VALUES ('key1', 1);
SELECT * FROM test;
Results:
partition_column | clustering_column | static_column
-----------------+-------------------+--------------
key1             | 0                 | A
key1             | 1                 | A
Observation:
Once declared static, the column holds a single value for the whole partition.
Now, let's insert another record:
INSERT INTO test (partition_column, static_column, clustering_column) VALUES ('key1', 'C', 2);
SELECT * FROM test;
Results:
partition_column | clustering_column | static_column
-----------------+-------------------+--------------
key1             | 0                 | C
key1             | 1                 | C
key1             | 2                 | C
Observation:
If you update the static column, or insert another record with an updated static column value, the new value is reflected across all rows ==> static column values are static (constant) across a given partition.
Restrictions (from the DataStax reference documentation below):
A table that does not define any clustering columns cannot have a static column. The table having no clustering columns has a one-row partition in which every column is inherently static.
A table defined with the COMPACT STORAGE directive cannot have a static column.
A column designated to be the partition key cannot be static.
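For example, the first restriction means a definition like this (deliberately invalid, shown only for illustration) is rejected:
-- Invalid: no clustering columns, so the static column is not allowed
CREATE TABLE static_invalid (
    pk int PRIMARY KEY,
    s int STATIC
);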
Reference: DataStax Reference
In the example on the page you've linked they don't have different values at the same point in time.
They first have the static balance field set to -208 for the whole user1 partition:
user | expense_id | balance | amount | description | paid
-------+------------+---------+--------+-------------+-------
user1 | 1 | -208 | 8 | burrito | False
user1 | 2 | -208 | 200 | hotel room | False
Then they apply a batch update statement that sets the balance value to -200:
BEGIN BATCH
UPDATE purchases SET balance=-200 WHERE user='user1' IF balance=-208;
UPDATE purchases SET paid=true WHERE user='user1' AND expense_id=1 IF paid=false;
APPLY BATCH;
This updates the balance field for the whole user1 partition to -200:
user | expense_id | balance | amount | description | paid
-------+------------+---------+--------+-------------+-------
user1 | 1 | -200 | 8 | burrito | True
user1 | 2 | -200 | 200 | hotel room | False
The point of a static field is that you can update/change its value for the whole partition at once. So if I were to execute the following statement:
UPDATE purchases SET balance=42 WHERE user='user1'
I would get the following result:
user | expense_id | balance | amount | description | paid
-------+------------+---------+--------+-------------+-------
user1 | 1 | 42 | 8 | burrito | True
user1 | 2 | 42 | 200 | hotel room | False
I'm trying to understand what exactly happens internally at the storage engine level when a row (columns) is inserted into a CQL-style table.
CREATE TABLE log_date (
    userid bigint,
    time timeuuid,
    category text,
    subcategory text,
    itemid text,
    count int,
    price int,
    PRIMARY KEY ((userid), time)                                              -- #1
    PRIMARY KEY ((userid), time, category, subcategory, itemid, count, price) -- #2
);
Suppose that I have a table like above.
In case #1, a CQL row will generate 6 (or 5?) columns in storage.
In case #2, a CQL row will generate one very composite column in storage.
I'm wondering which is the more efficient way to store logs in Cassandra.
Please focus on the two given situations; I don't need any real-time reads, just writes.
If you want to suggest other options, please refer to the following.
The reasons I chose Cassandra for storing logs are:
Linear scalability and good for heavy writing.
It has schema in CQL. I really prefer having a schema.
Seems to support Spark well enough. Datastax's cassandra-spark connector seems to have data locality awareness.
I'm trying to understand what exactly happens internally at the storage engine level when a row (columns) is inserted into a CQL-style table.
Let's say that I build tables with both of your PRIMARY KEYs, and INSERT some data:
aploetz#cqlsh:stackoverflow2> SELECT userid, time, dateof(time), category, subcategory, itemid, count, price FROM log_date1;
userid | time | dateof(time) | category | subcategory | itemid | count | price
--------+--------------------------------------+--------------------------+----------+----------------+-------------------+-------+-------
1002 | e2f67ec0-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:48:20-0500 | Books | Novels | 678-2-44398-312-9 | 1 | 798
1002 | 15d0fd20-f589-11e4-ade7-21b264d4c94d | 2015-05-08 08:49:45-0500 | Audio | Headphones | 228-5-44343-344-5 | 1 | 4799
1001 | 32671010-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:43:23-0500 | Books | Computer Books | 978-1-78398-912-6 | 1 | 2200
1001 | 74ad4f70-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:45:14-0500 | Books | Novels | 678-2-44398-312-9 | 1 | 798
1001 | a3e1f750-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:46:34-0500 | Books | Computer Books | 977-8-78998-466-4 | 1 | 599
(5 rows)
aploetz#cqlsh:stackoverflow2> SELECT userid, time, dateof(time), category, subcategory, itemid, count, price FROM log_date2;
userid | time | dateof(time) | category | subcategory | itemid | count | price
--------+--------------------------------------+--------------------------+----------+----------------+-------------------+-------+-------
1002 | e2f67ec0-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:48:20-0500 | Books | Novels | 678-2-44398-312-9 | 1 | 798
1002 | 15d0fd20-f589-11e4-ade7-21b264d4c94d | 2015-05-08 08:49:45-0500 | Audio | Headphones | 228-5-44343-344-5 | 1 | 4799
1001 | 32671010-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:43:23-0500 | Books | Computer Books | 978-1-78398-912-6 | 1 | 2200
1001 | 74ad4f70-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:45:14-0500 | Books | Novels | 678-2-44398-312-9 | 1 | 798
1001 | a3e1f750-f588-11e4-ade7-21b264d4c94d | 2015-05-08 08:46:34-0500 | Books | Computer Books | 977-8-78998-466-4 | 1 | 599
(5 rows)
Looks pretty much the same via cqlsh. So let's have a look from the cassandra-cli, and query all rows for userid 1002:
RowKey: 1002
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:, value=, timestamp=1431092900008568)
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:category, value=426f6f6b73, timestamp=1431092900008568)
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:count, value=00000001, timestamp=1431092900008568)
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:itemid, value=3637382d322d34343339382d3331322d39, timestamp=1431092900008568)
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:price, value=0000031e, timestamp=1431092900008568)
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:subcategory, value=4e6f76656c73, timestamp=1431092900008568)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:, value=, timestamp=1431092985326774)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:category, value=417564696f, timestamp=1431092985326774)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:count, value=00000001, timestamp=1431092985326774)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:itemid, value=3232382d352d34343334332d3334342d35, timestamp=1431092985326774)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:price, value=000012bf, timestamp=1431092985326774)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:subcategory, value=4865616470686f6e6573, timestamp=1431092985326774)
Simple enough, right? We see userid 1002 as the RowKey, and our clustering column of time as a column key. Following that are all of our columns for each column key (time). And I believe your first instance generates 6 columns, as I'm pretty sure that includes the placeholder for the column key, because your PRIMARY KEY could point to an empty value (as your 2nd example key does).
But what about the 2nd version for userid 1002?
RowKey: 1002
=> (name=e2f67ec0-f588-11e4-ade7-21b264d4c94d:Books:Novels:678-2-44398-312-9:1:798:, value=, timestamp=1431093011349994)
=> (name=15d0fd20-f589-11e4-ade7-21b264d4c94d:Audio:Headphones:228-5-44343-344-5:1:4799:, value=, timestamp=1431093011360402)
Two columns are returned for RowKey 1002, one for each unique combination of our column (clustering) keys, with an empty value (as mentioned above).
So what does this all mean for you? Well, a few things:
This should tell you that PRIMARY KEYs in Cassandra ensure uniqueness. So if you decide that you need to update key values like category or subcategory (2nd example), you really can't; you have to DELETE the row and recreate it. Although from a logging perspective, that's probably ok.
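For example, changing subcategory for one of the sample rows under the 2nd key would look something like this (a sketch; 'Classics' is a hypothetical new value):
BEGIN BATCH
  -- Remove the old row: every clustering column must be specified
  DELETE FROM log_date2
   WHERE userid = 1001 AND time = 74ad4f70-f588-11e4-ade7-21b264d4c94d
     AND category = 'Books' AND subcategory = 'Novels'
     AND itemid = '678-2-44398-312-9' AND count = 1 AND price = 798;
  -- Re-insert it with the new subcategory
  INSERT INTO log_date2 (userid, time, category, subcategory, itemid, count, price)
  VALUES (1001, 74ad4f70-f588-11e4-ade7-21b264d4c94d, 'Books', 'Classics', '678-2-44398-312-9', 1, 798);
APPLY BATCH;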
Cassandra stores all data for a particular partition/row key (userid) together, sorted by the column (clustering) keys. If you were concerned about querying and sorting your data, it would be important to understand that you would have to query for each specific userid for sort order to make any difference.
The biggest issue I see is that right now you are setting yourself up for unbounded column growth. Partition/row keys can support a maximum of 2 billion columns, so your 2nd example will help you out the most there. If you think some of your userids might exceed that, you could implement a "date bucket" as an additional partition key (say, if you knew that a userid would never exceed 2 billion columns in a year, or whatever).
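For example, a year bucket would look something like this (a sketch; the table name and bucket granularity are assumptions):
CREATE TABLE log_date_bucketed (
    userid bigint,
    year int,          -- bucket to bound partition growth
    time timeuuid,
    category text,
    subcategory text,
    itemid text,
    count int,
    price int,
    PRIMARY KEY ((userid, year), time)
);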
It looks to me like your 2nd option might be the better choice. But honestly for what you're doing, either of them will probably work ok.
I am very new to Cassandra, so this might sound like a newbie question.
I am running cqlsh 5.0.1 | Cassandra 2.1.4 locally.
I have a table like below:
CREATE TABLE master (
id uuid,
creation timestamp,
event_type text,
name text,
PRIMARY KEY(id,creation)
);
...and the records are:
id | creation | event_type | name
--------------------------------------+--------------------------+------------+------------------
305abd6d-34b8-4f36-96c6-9ea0c11be952 | 2015-04-15 14:01:54-0400 | create | test2
305abd6d-34b8-4f36-96c6-9ea0c11be952 | 2015-04-15 14:03:03-0400 | update | test2 update
7440c51c-6441-44fb-833b-6140fbe822eb | 2015-04-15 14:01:54-0400 | create | test3
7440c51c-6441-44fb-833b-6140fbe822eb | 2015-04-15 14:03:44-0400 | update | test3 update
7440c51c-6441-44fb-833b-6140fbe822eb | 2015-04-15 14:04:34-0400 | update | test3 2nd update
bf42a120-dec1-47d8-bde2-c0d76f1c93a5 | 2015-04-15 14:01:54-0400 | create | test1
How can I select all the records with distinct ids and the last modified timestamp?
The result should be like:
305abd6d-34b8-4f36-96c6-9ea0c11be952 | 2015-04-15 14:03:03-0400 | update | test2 update
7440c51c-6441-44fb-833b-6140fbe822eb | 2015-04-15 14:04:34-0400 | update | test3 2nd update
bf42a120-dec1-47d8-bde2-c0d76f1c93a5 | 2015-04-15 14:01:54-0400 | create | test1
Given your current structure, you won't be able to select any other columns aside from id with a DISTINCT query. You can create another query table with just id as the PK, then run a basic SELECT on that (it will always keep the last modified date):
CREATE TABLE querytable (
id uuid,
creation timestamp,
event_type text,
name text,
PRIMARY KEY(id)
);
SELECT * FROM querytable; -- should only contain unique ids and the last updated creation date
You'll have to update this table as you update the master as well.
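One way to keep the two tables in sync is to write both in a single logged batch; a sketch reusing the rows from the question:
BEGIN BATCH
  INSERT INTO master (id, creation, event_type, name)
  VALUES (305abd6d-34b8-4f36-96c6-9ea0c11be952, '2015-04-15 14:03:03-0400', 'update', 'test2 update');
  INSERT INTO querytable (id, creation, event_type, name)
  VALUES (305abd6d-34b8-4f36-96c6-9ea0c11be952, '2015-04-15 14:03:03-0400', 'update', 'test2 update');
APPLY BATCH;
The upsert semantics do the rest: each write to querytable overwrites the previous values for that id, so it always holds the latest state per id.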