Cassandra TTL for table behaviour

Suppose I inserted one column (data-1) at second 1 and another column (data-2) at second 2, and the default TTL for the table is set to 10 seconds, for example:
Question 1: Are data-1 and data-2 both going to be deleted after 10 seconds, or will data-1 be deleted after 10 seconds and data-2 after 11 seconds (as it was inserted at second 2)?
Question 2: Is it possible to set a TTL at the table level in such a way that each entry in the table expires based on the TTL in a FIFO fashion (data-1 expires at second 10 and data-2 at second 11), without specifying a TTL for each data point while inserting? (It should be possible to specify this at the table level?)
Thanks for the help :)
EDIT:
the page at https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html says
Setting a TTL for a table
The CQL table definition supports the default_time_to_live property, which applies a specific TTL to each column in the table. After the default_time_to_live TTL value has been exceeded, Cassandra tombstones the entire table. Apply this default TTL to a table in CQL using CREATE TABLE or ALTER TABLE.
they say "entire table" which confused me.

TTL at the table level is no different from TTL at the value level: it specifies the default TTL for each newly written row.
The TTL specifies after how many seconds the values must be considered outdated and thus deleted. The reference point is the INSERT/UPDATE timestamp, so if you insert/update a row at 09:53:01:
with a TTL of 10 seconds, it will expire at 09:53:11
with a TTL of 15 seconds, it will expire at 09:53:16
with a TTL of 0 seconds, it will never expire
You can override the default TTL by specifying a USING TTL X clause in your queries, where X is your new TTL value.
Please note that using TTL unwisely can cause tombstone problems. Note also that TTL usage has some quirks. Have a look at this recent answer for further details.
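For illustration, here is a minimal sketch of both the table-level default and the per-query override (the table and column names are hypothetical):
-- Default TTL set once at the table level, applied to every write:
CREATE TABLE events (id uuid PRIMARY KEY, payload text)
    WITH default_time_to_live = 10;
-- Inherits the table default of 10 seconds:
INSERT INTO events (id, payload) VALUES (uuid(), 'data-1');
-- Overrides the default with a per-query TTL of 15 seconds:
INSERT INTO events (id, payload) VALUES (uuid(), 'data-2') USING TTL 15;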

Question 1 Ans: data-1 will be deleted after 10 seconds and data-2 after 11 seconds.
Question 2 Ans: Cassandra writes every column with the table's TTL, so every column will expire at insertion time + TTL.

I read this topic and many others, but I'm still confused, because at https://docs.datastax.com/en/cql-oss/3.3/cql/cql_using/useExpire.html
they say exactly this:
If any column exceeds TTL, the entire table is tombstoned.
What do they mean? I understand that it makes no sense to tombstone all the columns in a table when only one has exceeded default_time_to_live, but that is exactly what they wrote!
UPD: I did several tests. default_time_to_live just means a default TTL at the column level. When this TTL expires, only the concrete columns whose TTL has expired are tombstoned.
They used a very strange sentence in that article.
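One way to reproduce that test yourself is to write two columns of the same row at different times and read their TTLs back; the table and column names below are hypothetical:
-- Hypothetical table with a 10-second default TTL:
CREATE TABLE ttl_test (k int PRIMARY KEY, a text, b text)
    WITH default_time_to_live = 10;
INSERT INTO ttl_test (k, a) VALUES (1, 'first');
-- ...wait a few seconds, then write only column b on the same row:
UPDATE ttl_test SET b = 'second' WHERE k = 1;
-- TTL() shows the remaining seconds per column; a expires before b,
-- and only the expired column is tombstoned, never the whole table:
SELECT TTL(a), TTL(b) FROM ttl_test WHERE k = 1;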

Related

Cassandra TTL data not working

I have old data (from the last year) in Cassandra. I then alter the table structure, adding a TTL of 30 days. Will the TTL (default_time_to_live = 2592000) delete my year-old data or not?
From documentation:
If the value is greater than zero, TTL is enabled for the entire table and an expiration timestamp is added to each column. A new TTL timestamp is calculated each time the data is updated and the row is removed after all the data expires.
So the TTL will be applied to your data only when you update it; it will not touch the old data.
This description of how data is deleted may also be helpful.
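As a sketch of why (table and column names hypothetical): the ALTER only changes the default applied to future writes, which you can confirm because TTL() returns null for rows written before the change.
ALTER TABLE sensor_data WITH default_time_to_live = 2592000;
-- A row inserted before the ALTER still reports no TTL:
SELECT TTL(reading) FROM sensor_data WHERE id = 'old-row';
-- Only rows written (or updated) afterwards pick up the 30-day default:
INSERT INTO sensor_data (id, reading) VALUES ('new-row', 'x');
SELECT TTL(reading) FROM sensor_data WHERE id = 'new-row';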

How does the data overhead of multiple columns with TTL in Cassandra work?

In the documentation for expiring data for Cassandra (here) it is mentioned that
Expiring data has an additional overhead of 8 bytes in memory and on disk (to record the TTL and expiration time) compared to standard data.
If one sets a TTL (time-to-live) at the table level, does that mean that each data entry carries an overhead of 8 more bytes in memory and on disk multiplied by the number of columns, or is it independent of the number of columns?
For example, the documentation also has an example (here) of determining the TTL for a column, even though data is inserted into more than one column and the TTL is defined for the actual data entry being inserted, not on a per-column basis.
No, not anymore at least. That documentation is outdated and only relevant pre-3.0.
Currently, if all the columns in a partition, or in a row of a partition, have the same TTL set at insertion, it is stored just once for all of them. When TTLs are stored, they are written delta-encoded from the SSTable's minTimestamp as an unsigned variable-length int, not as 8 bytes.
According to the Cassandra documentation, the create table section says:
default_time_to_live
TTL (Time To Live) in seconds, where zero is disabled. When specified, the value is set for the Time To Live (TTL) marker on each column in the table; default value: 0. When the table TTL is exceeded, the table is tombstoned.
Meaning that when you define a TTL for the table, it applies to each column (except the primary key).
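A quick way to see that per-column marker in action is to read the TTL back with the TTL() function (the table below is hypothetical):
CREATE TABLE overhead_demo (id int PRIMARY KEY, a text, b text)
    WITH default_time_to_live = 86400;
INSERT INTO overhead_demo (id, a, b) VALUES (1, 'x', 'y');
-- Both non-key columns report the same remaining TTL; TTL() cannot
-- be applied to the primary key at all:
SELECT TTL(a), TTL(b) FROM overhead_demo WHERE id = 1;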

Using default TTL columns but high number of tombstones in Cassandra

I use Cassandra 3.0.12.
And I have a Cassandra column family, or CQL table, with the following schema:
CREATE TABLE win30 (
    cust_id text,
    tid timeuuid,
    info text,
    PRIMARY KEY (cust_id, tid)
) WITH CLUSTERING ORDER BY (tid DESC)
  AND compaction = {'class': 'DateTieredCompactionStrategy', 'max_sstable_age_days': 31};
ALTER TABLE win30 WITH default_time_to_live = 2592000;
I have set the default_time_to_live property for the entire table, but when I query the table,
select * from win30 order by tid desc limit 9999
Cassandra WARN that
Read xx live rows and xxxx tombstone for query xxxxxx (see tombstone_warn_threshold).
According to this doc How is data deleted,
Cassandra allows you to set a default_time_to_live property for an
entire table. Columns and rows marked with regular TTLs are processed
as described above; but when a record exceeds the table-level TTL,
Cassandra deletes it immediately, without tombstoning or compaction.
"but when a record exceeds the table-level TTL,Cassandra deletes it immediately, without tombstoning or compaction."
Why Cassandra still WARN for tombstone since I have set a default_time_to_live?
I insert data using some CQL like, without using TTL.
insert into win30 (cust_id, tid, info ) values ('123', now(), 'sometext');
There is a similar question, but it does not use default_time_to_live.
And it seems that I could set unchecked_tombstone_compaction to true?
Another question: I select data with the same ordering as the CLUSTERING ORDER,
so why does Cassandra hit so many tombstones?
Why does Cassandra still warn about tombstones when I have set a default_time_to_live?
The way TTL works in Cassandra is that once a record has expired, it is marked as a tombstone (the same process as deletion of a record). So instead of manually running a purge job as in the RDBMS world, Cassandra lets you clean up old records based on their TTL. But it still goes through the same process as DELETE, hence the tombstones. Since your TTL value is 2592000 (30 days), anything older than 30 days in the table gets expired (marked as a tombstone, i.e. deleted).
Now, the reason for the warning is that your SELECT statement is looking for records that are alive (non-deleted), and the warning message reports how many tombstoned (expired/deleted) records were encountered in the process. So while trying to serve 9999 live records, the query hit X tombstones along the way.
Since the TTL is set at the table level, any record inserted into this table will have a default TTL of 30 days.
Here is the documentation reference, in case you want to read more.
After the number of seconds since the column's creation exceeds the TTL value, TTL data is considered expired and is no longer included in results. Expired data is marked with a tombstone on the next read on the read path, but it remains for a maximum of gc_grace_seconds.
Above reference is from this link
And it seems that I could set the unchecked_tombstone_compaction to true?
Its nothing related to the warning that you are getting. You could think about reducing gc_grace_seconds value (default 10 days) to get rid of tombstones quicker. But there is a reason for this value to be 10days.
Note that DateTieredCompactionStrategy is deprecated; once you upgrade to Apache Cassandra 3.11 or DSE 5.1.2 there is TimeWindowCompactionStrategy, which does a better job of handling tombstones.
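If you are on a version that has it, switching the table to TimeWindowCompactionStrategy (and, optionally, tightening gc_grace_seconds) would look roughly like this; the window settings below are illustrative, not a recommendation:
ALTER TABLE win30 WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1};
-- Optional: expire tombstones sooner than the 10-day default.
-- Only safe if repairs run more often than this interval:
ALTER TABLE win30 WITH gc_grace_seconds = 259200;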

Update TTL for entire row when doing CQL update statement

Assume you have a row with 4 columns, and when you created it, you set a TTL of 1 hour.
I need to occasionally update the date column of the row, and at the same time update the TTL of the entire row.
Assuming this doesn't work, what's the correct way to achieve this?
update mytable using ttl 3600
set accessed_on=?
Cassandra supports TTL per column only, which is a nice, flexible feature, but the ability to TTL a whole row has been requested many times.
Your only option is to update all columns on the row, thereby updating the TTL on all the columns.
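In practice that means re-writing every non-key column with the new TTL in a single statement. A minimal sketch, assuming a hypothetical table mytable(id, accessed_on, col_a, col_b):
-- Re-set every non-key column so the whole row gets a fresh TTL:
UPDATE mytable USING TTL 3600
    SET accessed_on = ?, col_a = ?, col_b = ?
    WHERE id = ?;
Note that this is a read-before-write pattern: you must know the current values of col_a and col_b in order to re-write them.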

TTL field for a set of columns in CQL3 - Cassandra

Consider the following Insert statement.
INSERT INTO NerdMovies (movie, director, main_actor, year)
VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
USING TTL 86400;
Does the TTL field specify the time to live for the whole set of columns for a particular primary key, or just one particular column? I ask because I would want to specify a TTL for a whole set of columns that should get deleted after the TTL expires.
OK, I figured it out myself. It sets the TTL for the whole set of columns, so all the columns for a particular primary key will be deleted once the TTL expires.
#sayed-jalil
To be more precise, it will set the TTL only for the columns that you mention in the INSERT/UPDATE statement.
So for instance, if at time t you do
INSERT INTO NerdMovies (movie, director, main_actor, year)
VALUES ('Serenity', 'Joss Whedon', 'Nathan Fillion', 2005)
USING TTL 86400;
if you then do the following at time t + 10
UPDATE NerdMovies USING TTL 86400 SET year = 2004 WHERE movie = 'Serenity';
then columns movie, director, and main_actor will have a TTL of t + 86400, and column year will have a TTL of t + 10 + 86400.
Hope that makes sense.
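You can observe the two different expiry times directly with the TTL() function (assuming movie is the partition key of NerdMovies):
-- director still counts down from the original INSERT,
-- while year was refreshed by the UPDATE:
SELECT TTL(director), TTL(year) FROM NerdMovies WHERE movie = 'Serenity';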
