Max value for TTL in Cassandra

What is the maximum value we can assign to TTL?
In the Java driver for Cassandra, TTL is set as an int. Does that mean it is limited to Integer.MAX_VALUE (2,147,483,647 seconds)?

The maximum TTL is actually 20 years. From org.apache.cassandra.db.ExpiringCell:
public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in seconds
I think this is enforced on both the CQL and Thrift query paths.
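For illustration, a minimal sketch (keyspace, table, and column names here are made up) of clamping a requested TTL to that 20-year cap before building an insert with the Java driver:
import com.datastax.driver.core.querybuilder.Insert;
import com.datastax.driver.core.querybuilder.QueryBuilder;

// Constant copied from ExpiringCell.MAX_TTL: 20 years in seconds.
int MAX_TTL = 20 * 365 * 24 * 60 * 60;
int requestedTtl = Integer.MAX_VALUE;               // whatever the caller asked for
int ttl = Math.min(requestedTtl, MAX_TTL);          // anything larger is rejected by Cassandra
Insert insert = QueryBuilder.insertInto("my_keyspace", "example_table"); // hypothetical names
insert.using(QueryBuilder.ttl(ttl));
insert.value("id", 1);
insert.value("payload", "hello");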

I don't know why you need this, but the default TTL is null in Cassandra, which means the data won't be deleted until you delete it explicitly.
One very powerful feature that Cassandra provides is the ability to
expire data that is no longer needed. This expiration is very flexible
and works at the level of individual column values. The time to live
(or TTL) is a value that Cassandra stores for each column value to
indicate how long to keep the value.
The TTL value defaults to null, meaning that data that is written will
not expire.
https://www.oreilly.com/library/view/cassandra-the-definitive/9781491933657/ch04.html
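As a quick check of that default (a sketch assuming an existing Session and a hypothetical table t with an int id and a text column v), the TTL of a column written without USING TTL comes back null:
import com.datastax.driver.core.Row;

// Hypothetical table/column names; 'session' is an already-connected Session.
session.execute("INSERT INTO t (id, v) VALUES (1, 'no expiry')");     // no USING TTL
Row row = session.execute("SELECT TTL(v) FROM t WHERE id = 1").one();
boolean neverExpires = row.isNull(0);   // true: the value has no TTL and will not expire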

The 20-year max TTL is not correct anymore. Read this notice from the Cassandra NEWS file:
PLEASE READ: MAXIMUM TTL EXPIRATION DATE NOTICE (CASSANDRA-14092)
The maximum expiration timestamp that can be represented by the
storage engine is 2038-01-19T03:14:06+00:00, which means that inserts
with TTL that expire after this date are not currently supported. By
default, INSERTS with TTL exceeding the maximum supported date are
rejected, but it's possible to choose a different expiration overflow
policy. See CASSANDRA-14092.txt for more details.
Prior to 3.0.16 (3.0.X) and 3.11.2 (3.11.x) there was no protection
against INSERTS with TTL expiring after the maximum supported date,
causing the expiration time field to overflow and the records to
expire immediately. Clusters in the 2.X and lower series are not
subject to this when assertions are enabled. Backed up SSTables can be
potentially recovered and recovery instructions can be found on the
CASSANDRA-14092.txt file.
If you use or plan to use very large TTLs (10 to 20 years), read
CASSANDRA-14092.txt for more information.
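In practical terms, the cap is on the expiration timestamp (write time + TTL), which must fit in a signed 32-bit seconds-since-epoch field. A rough sketch of the arithmetic, using the date quoted in the notice:
// 2038-01-19T03:14:06+00:00 expressed as seconds since the epoch (per the notice above).
long maxExpirationSeconds = 2147483646L;
long nowSeconds = System.currentTimeMillis() / 1000L;
long maxUsableTtlSeconds = maxExpirationSeconds - nowSeconds; // shrinks by one every second
System.out.println("Largest TTL accepted before the 2038 limit: " + maxUsableTtlSeconds + " s");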

Related

DateTieredCompactionStrategy sub-properties in Cassandra

I have some questions about DateTieredCompactionStrategy sub-properties in Cassandra.
The blog post at http://www.datastax.com/dev/blog/datetieredcompactionstrategy says:
Base_time_seconds: “This is the size of the first window, defaults to 3600 seconds (1 hour). The rest of the windows will be min_threshold (default 4) times the size of the previous window.”
With the default value of 3600, i.e. 1 hour, for base_time_seconds, does that mean the first compaction triggers at the 1st hour, the next at 4, 16, 64 hours and so on?
max_window_size_seconds: default 1 day. Does it mean my compaction runs at least once a day?
tombstone_compaction_interval: default 10 days.
If my sstable is, say, 7 days old but is full of expired data due to a TTL of 1 day and gc_grace_seconds of 1 day, does that mean my sstables are still not removed?
Does tombstone_compaction_interval take priority over TTL and gc_grace_seconds?
min_threshold: when compaction runs and the number of sstables is < min_threshold, is my compaction not run?
No - DTCS finds the sstables within one of those windows (1h, 4h, ...) and, if it thinks it needs to compact them together (iirc the first window needs more than min_threshold sstables, later ones 2 or more), it will.
No. The number of compactions depends only on the number of flushed/streamed sstables. The max window size is just there to make sure we don't get huge older windows, which hurt when bootstrapping/streaming etc. (see the window-size sketch below the answers).
No, with DTCS you should not touch the tombstone_compaction_interval - the whole idea is that once the whole sstable is expired, the entire thing will get dropped automatically without compaction.
Correct, but it is per window, so you could have 100 sstables in separate windows with DTCS.
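To make the window growth concrete, here is a small sketch based only on the blog quote above (first window = base_time_seconds, each later window = min_threshold times the previous one, capped by max_window_size_seconds with its 1-day default):
// Sketch of DTCS window sizes with the defaults quoted above.
long baseTimeSeconds = 3600, maxWindowSizeSeconds = 86400;
int minThreshold = 4;
for (long w = baseTimeSeconds; ; w *= minThreshold) {
    long window = Math.min(w, maxWindowSizeSeconds);
    System.out.println("window size: " + window + " s");   // prints 3600, 14400, 57600, then 86400
    if (window == maxWindowSizeSeconds) break;              // later windows all stay at the cap
}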
Note that DTCS is deprecated and you should really be using TWCS instead. If you use Cassandra < 3.0, you can just build the jar file and drop it in the lib directory to use it. https://github.com/jeffjirsa/twcs https://issues.apache.org/jira/browse/CASSANDRA-9666
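If it helps, a hedged sketch of what switching a table to TWCS can look like from the Java driver (the keyspace, table, and window settings here are illustrative, not a recommendation for your data):
// 'session' is an existing Session. Pick a window roughly matching how you expire data.
session.execute(
    "ALTER TABLE my_keyspace.observations WITH compaction = {"
    + " 'class': 'TimeWindowCompactionStrategy',"
    + " 'compaction_window_unit': 'HOURS',"
    + " 'compaction_window_size': '12' }");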

Why will timeuuid not have any collisions?

I was reading the Datastax CQL reference:
Collisions that would potentially overwrite data that was not intended
to be overwritten cannot occur.
Can someone explain to me why a collision will never occur? Is it impossible or "highly" unlikely?
Cassandra's timeuuid is a Version 1 UUID which is based on the time and the MAC address of the machine generating the UUID.
The time used is accurate down to 100 ns, so the chance of a collision is incredibly slim (a nanosecond is a millionth of a millisecond).
Cassandra's timeuuid is a Version 1 (Type 1) UUID, which is based on:
A timestamp consisting of a count of 100-nanosecond intervals since 15 October 1582 (the date of the Gregorian reform to the Christian calendar).
A version (which should have a value of 1).
A variant (which should have a value of 2).
A sequence number, which can be a counter or a pseudo-random number.
A "node", which will be the machine's MAC address (which should make the UUID unique across machines).
When the sequence number is a pseudo-random 14-bit value, two UUIDs generated with the same timestamp on the same node have roughly a 1 in 16,384 chance of colliding.
If you generate more than 10,000 UUIDs per millisecond, they may collide.
1 ms = 10^6 ns, so a nanosecond-level timestamp could distinguish 10^6 UUIDs per millisecond,
but the timestamp is a count of 100 ns intervals,
so there are at most 10^6 / 100 = 10,000 unique timestamps in one millisecond.
Generating more than that on a single machine (which has the same MAC address) leaves only the sequence number to tell them apart, so there is a chance of collision.
If your application generates more than 10,000 timeuuids per millisecond, use another column to build a compound key, which helps avoid collisions.
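For illustration, a small sketch using the DataStax Java driver's UUIDs helper (its behaviour is documented by the driver, not claimed in the answer above): it generates version 1 UUIDs and lets you pull the embedded timestamp back out.
import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

// timeBased() builds a version 1 (time-based) UUID from the current time plus a node/clock-sequence part.
UUID id = UUIDs.timeBased();
long whenMillis = UUIDs.unixTimestamp(id);   // the embedded 100 ns timestamp, converted to epoch millis
System.out.println(id + " carries timestamp " + whenMillis);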

Hazelcast c++ client, map and TTL

I have an entry (k1, v1) in map with ttl say 60 secs.
If I do map.set(k1, v2), the ttl is not impacted, i.e. the entry will get removed after 60 seconds.
However, if I do map.put(k1, v2), the ttl will cease to apply, i.e. the entry will not be removed after 60 seconds.
Is this understanding correct? I guess it this way, but could not find it clearly mentioned in documentations.
You are correct. There was a bug for map.put when using configured ttl time. I just submitted the PR for the fix here with the additional tests: https://github.com/hazelcast/hazelcast-cpp-client/pull/164
We mistakenly sent 0 instead of -1 for the ttl. -1 means to use the configured ttl. This was correct for set API already, the problem was only with the put API.
Thanks for reporting this.
No, both the put and set operations have the same underlying implementation, except that the set operation does not return the old value.
You can take a look at the PutOperation & SetOperation classes; both extend BasePutOperation.
Unless you are setting the ttl on every put/set operation, eviction should be based on the latest ttl value of the entry.
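For illustration only, in the Hazelcast Java client (not the C++ client the question is about) the per-operation TTL overloads make the lifetime explicit regardless of which call you use; a minimal sketch:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import java.util.concurrent.TimeUnit;

// Passing the TTL on every write means the entry lifetime does not depend on the
// default value (-1 = use the configured ttl) the client sends under the hood.
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("m");
map.put("k1", "v1", 60, TimeUnit.SECONDS);   // entry expires 60 s after this put
map.set("k1", "v2", 60, TimeUnit.SECONDS);   // set with the same explicit lifetime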

Cassandra TTL not working

I am using datastax with cassandra. I want a row to be automatically deleted after 15 minutes of its insertion. But the row still remains.
My code is below:
Insert insertStatement = QueryBuilder.insertInto(keySpace, "device_activity");
insertStatement.using(QueryBuilder.ttl(15* 60));
insertStatement.value("device", UUID.fromString(persistData.getSourceId()));
insertStatement.value("lastupdatedtime", persistData.getLastUpdatedTime());
insertStatement.value("devicename", persistData.getDeviceName());
insertStatement.value("datasourcename", persistData.getDatasourceName());
The table consist of 4 columns : device (uuid), datasourcename(text), devicename(text), lastupdatedtime (timestamp).
If I query the TTL of some field it shows me 4126 seconds which is wrong.
//Select TTL(devicename) from device_activity; // Gives me 4126 seconds
The explanation of TTL is provided at the link below:
https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html
"TTL data has a precision of one second, as calculated on the server. Therefore, a very small TTL probably does not make much sense. Moreover, the clocks on the servers should be synchronized; otherwise reduced precision could be observed because the expiration time is computed on the primary host that receives the initial insertion but is then interpreted by other hosts on the cluster."
After reading this, I was able to resolve the issue by setting the correct time on the corresponding node (machine).
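One quick way to sanity-check this kind of issue (a sketch assuming the Session used for the insert above and that device is the partition key) is to read the TTL straight back after the write and compare it with the ~900 seconds the client asked for; a large gap points at clock skew between the nodes:
import com.datastax.driver.core.Row;
import java.util.UUID;

// Read the server-computed TTL immediately after the insert above.
Row row = session.execute(
        "SELECT TTL(devicename) FROM device_activity WHERE device = ?",
        UUID.fromString(persistData.getSourceId())).one();
System.out.println("server-side TTL: " + row.getInt(0) + " s");   // expect a value close to 900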

Cassandra sstables accumulating

I've been testing out Cassandra to store observations.
All "things" belong to one or more reporting groups:
CREATE TABLE observations (
group_id int,
actual_time timestamp, /* 1 second granularity */
is_something int, /* 0/1 bool */
thing_id int,
data1 text, /* JSON encoded dict/hash */
data2 text, /* JSON encoded dict/hash */
PRIMARY KEY (group_id, actual_time, thing_id)
)
WITH compaction={'class': 'DateTieredCompactionStrategy',
'tombstone_threshold': '.01'}
AND gc_grace_seconds = 3600;
CREATE INDEX something_index ON observations (is_something);
All inserts are done with a TTL, and should expire 36 hours after
"actual_time". Something that is beyond our control is that duplicate
observations are sent to us. Some observations are sent in near real
time, others delayed by hours.
The "something_index" is an experiment to see if we can slice queries
on a boolean property without having to create separate tables, and
seems to work.
"data2" is not currently being written-- it is meant to be written by
a different process than writes "data1", but will be given the same
TTL (based on "actual_time").
Particulars:
Three nodes (EC2 m3.xlarge)
Datastax ami-ada2b6c4 (us-east-1) installed 8/26/2015
Cassandra 2.2.0
Inserts from Python program using "cql" module
(had to enable "thrift" RPC)
Running "nodetool repair -pr" on each node every three hours (staggered).
Inserting between 1 and 4 million rows per hour.
I'm seeing large numbers of data files:
$ ls *Data* | wc -l
42150
$ ls | wc -l
337201
Queries don't return expired entries,
but files older than 36 hours are not going away!
The large number of SSTables is probably caused by the frequent repairs you are running. Repair would normally be run only once a day or once a week, so I'm not sure why you are running repair every three hours. If you are worried about short-term downtime missing writes, you could set the hint window to three hours instead of running repair so frequently.
You might have a look at CASSANDRA-9644. This sounds like it is describing your situation. Also CASSANDRA-10253 might be of interest.
I'm not sure why your TTL isn't working to drop old SSTables. Are you setting the TTL on a whole row insert, or individual column updates? If you run sstable2json on a data file, I think you can see the TTL values.
Full disclosure: I have a love/hate relationship with DTCS. I manage a cluster with hundreds of terabytes of data in DTCS, and one of the things it does absolutely horribly is streaming of any kind. For that reason, I've recommended replacing it ( https://issues.apache.org/jira/browse/CASSANDRA-9666 ).
That said, it should mostly just work. However, there are parameters that come into play, such as timestamp_resolution, that can throw things off if set improperly.
Have you checked the sstable timestamps to ensure they match timestamp_resolution (default: microseconds)?
