Redis expiration, memory leak by specification? - memory-leaks

Redis provides us with the EXPIRE and TTL commands. According to the documentation, the TTL command can be used to distinguish a key that has expired or does not exist (return value -2) from a key that exists but has no expiry set (return value -1):
> SET foo 3
OK
> GET foo
"3"
> EXPIRE foo 5
(integer) 1
> TTL foo
(integer) 3
> TTL foo
(integer) 2
> TTL foo
(integer) 1
> TTL foo
(integer) 0
> TTL foo
(integer) -2
According to the EXPIRE documentation, expired keys are actually removed from the store either when they are accessed, or through a periodic random sampling of keys that have an expire set:
Specifically this is what Redis does 10 times per second:
1. Test 20 random keys from the set of keys with an associated expire.
2. Delete all the keys found expired.
3. If more than 25% of keys were expired, start again from step 1.
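Just to make that loop concrete, here is a toy Python sketch of the sampling algorithm (a simplified model, not Redis's actual C implementation; the number of keys and their expiry distribution are made up):

```python
import random
import time

# Toy model: key -> absolute expiration timestamp (seconds since the epoch).
volatile_keys = {f"key:{i}": time.time() + random.uniform(0, 2) for i in range(10_000)}

def active_expire_cycle(keys_with_expire, sample_size=20, threshold=0.25):
    """One pass of the documented loop: sample keys, delete the expired ones,
    and repeat while more than `threshold` of the sample was expired."""
    while keys_with_expire:
        keys = list(keys_with_expire)
        sample = random.sample(keys, min(sample_size, len(keys)))
        now = time.time()
        expired = [k for k in sample if keys_with_expire[k] <= now]
        for k in expired:
            del keys_with_expire[k]
        if len(expired) <= threshold * len(sample):
            break  # few expired keys in this sample; wait for the next cycle

# Redis runs this roughly 10 times per second; here we call it once for illustration.
time.sleep(1)
active_expire_cycle(volatile_keys)
print(len(volatile_keys), "keys with an expire still held")
```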
But what about the -2 (or rather, the information that allows Redis to return it instead of -1)? Is it kept forever, or is there a garbage collection policy?
Also notice that if we set and then delete a new value for the same key, the -2 survives:
> SET foo 3
OK
> ttl foo
(integer) -1
> del foo
(integer) 1
> ttl foo
(integer) -2
So, for instance, let's say we have a script which keeps setting keys with incremental names and makes them expire after 1 second. After an arbitrarily long time, are we going to exhaust the memory?
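For concreteness, something like this (a redis-py sketch of the scenario; the job: prefix, sleep interval, and print frequency are arbitrary):

```python
import itertools
import time

import redis

r = redis.Redis()  # assumes a local Redis on the default port

# Keep creating uniquely named keys that each expire after 1 second.
for i in itertools.count():              # runs until interrupted
    r.set(f"job:{i}", "payload", ex=1)   # SET job:<i> payload EX 1
    if i % 10_000 == 0:
        print(i, r.info("memory")["used_memory_human"])
    time.sleep(0.001)
```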

But what about the -2 (or rather, the information that allows Redis to return it instead of -1)? Is it kept forever, or is there a garbage collection policy?
-2 indicates that the key is not in the database; Redis keeps nothing per deleted or expired key, so there is nothing to garbage-collect. For example:
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> ttl somekey
(integer) -2

See the page on Redis as a Least-Recently-Used Cache -- you can tell Redis not to exceed a set amount of memory (the maxmemory setting) and select one of several key eviction policies (maxmemory-policy).
A subtle issue is that not all data types in Redis play nicely with this, but sets with a TTL set should work. There is also enough logging available to keep track of what happens, and why, as you tune this.
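For example, a sketch of capping memory and picking an eviction policy at runtime via redis-py (the same settings can go in redis.conf; the 100mb limit and allkeys-lru policy are just example choices):

```python
import redis

r = redis.Redis()

# Cap Redis at 100 MB and evict least-recently-used keys when the cap is reached.
# Equivalent to `maxmemory 100mb` and `maxmemory-policy allkeys-lru` in redis.conf.
r.config_set("maxmemory", "100mb")
r.config_set("maxmemory-policy", "allkeys-lru")

print(r.config_get("maxmemory"), r.config_get("maxmemory-policy"))
```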

Related

How long will it take to delete a row?

I have a 2-node Cassandra cluster with RF=2.
When a delete from x where y CQL statement is issued, is it known how long it will take all nodes to delete the row?
What I see in one of the integration tests:
A row is deleted, and the result of the deletion is tested with a select * from y where id = xxx statement. What I see is that sometimes the result is not null as expected: the deleted row is still found.
Is the correct approach to read with CL=2 to get the result I am expecting?
Make sure that the servers' clocks are in sync if you are using server-side timestamps.
Better to use client-side timestamps.
Is the correct approach to read with CL=2 to get the result I am expecting?
I assume you are using the default consistency level (ONE) for the delete. Since 1 + 2 > 2 (i.e. W + R > N, where N is the replication factor) in your case, it is OK.
Local to the replica it will be sub-millisecond. The time is dominated by the app -> coordinator -> replica -> coordinator -> app network hops. Use QUORUM or LOCAL_QUORUM on both the write and the read for consistency across sequential requests like that.
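A sketch of that advice using the DataStax Python driver (the keyspace, table, column, and id value are placeholders taken from the question):

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")  # placeholder keyspace

# Delete and read back at QUORUM so that W + R > RF (2 + 2 > 2 with RF=2).
delete = SimpleStatement("DELETE FROM y WHERE id = %s",
                         consistency_level=ConsistencyLevel.QUORUM)
session.execute(delete, ("xxx",))

select = SimpleStatement("SELECT * FROM y WHERE id = %s",
                         consistency_level=ConsistencyLevel.QUORUM)
rows = session.execute(select, ("xxx",))
assert not list(rows)  # the deleted row should no longer be visible

cluster.shutdown()
```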

invalid expire time in SETEX, sPort: 12702 in Redis

I am getting an error when using a Redis cache time of 0. With a Redis cache time of 1 it works as expected.
How can I set the Redis cache time to 0? Please help.
ErrorMessage
"Message":"An error has occurred.","ExceptionMessage":"invalid expire time in SETEX, sPort: 12702, LastCommand: ","ExceptionType":"ServiceStack.Redis.RedisResponseException"
I want to set 0 as the expire time because I am rendering a dynamic page with many chunks, and the Redis cache time for each chunk comes from a config file. Example: chunk 1 has a Redis cache time of 2 minutes. At certain times I don't want Redis caching at all, and then I go and set the Redis cache time to 0 in the configuration file.
At certain times I don't want Redis caching at all, and then I go and set the Redis cache time to 0 in the configuration file.
It seems you are trying not to store a key by commanding Redis to store a key, which is rather awkward.
If you don't want to change your application code, you could store it for just 1 second, which is the minimum: the SETEX command expects the expire time to be positive, i.e. greater than zero.
Otherwise, you can tweak your code to skip the cache entirely when the TTL is zero, or store the value for 1 millisecond using PSETEX instead of SETEX.
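A minimal redis-py sketch of that idea (the cache_chunk helper and key names are made up):

```python
import redis

r = redis.Redis()

def cache_chunk(key, html, ttl_seconds):
    """Hypothetical helper: skip Redis entirely when the configured TTL is 0."""
    if ttl_seconds <= 0:
        return  # caching disabled for this chunk; don't call SETEX at all
    r.setex(key, ttl_seconds, html)  # SETEX requires a strictly positive time

# Alternative: store it for 1 millisecond with PSETEX instead of skipping.
# r.psetex("chunk:1", 1, "<div>...</div>")

cache_chunk("chunk:1", "<div>...</div>", ttl_seconds=0)    # no-op
cache_chunk("chunk:2", "<div>...</div>", ttl_seconds=120)  # cached for 2 minutes
```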

Hazelcast c++ client, map and TTL

I have an entry (k1, v1) in a map with a TTL of, say, 60 secs.
If I do map.set(k1, v2), the TTL is not impacted, i.e. the entry will still be removed after 60 seconds.
However, if I do map.put(k1, v2), the TTL will cease to exist, i.e. the entry will not be removed after 60 seconds.
Is this understanding correct? I assume it works this way, but could not find it clearly stated in the documentation.
You are correct. There was a bug in map.put when using the configured TTL time. I have just submitted the PR for the fix, with additional tests: https://github.com/hazelcast/hazelcast-cpp-client/pull/164
We mistakenly sent 0 instead of -1 for the TTL; -1 means "use the configured TTL". This was already correct for the set API; the problem was only with the put API.
Thanks for reporting this.
No, both the put and set operations have the same underlying implementation, except that the set operation does not return the old value.
You can take a look at the PutOperation and SetOperation classes; both extend BasePutOperation.
Unless you are setting the TTL on every put/set operation, eviction should be based on the latest TTL value of the entry.
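For what it's worth, a rough sketch of the same put/set-plus-TTL pattern using the Hazelcast Python client rather than the C++ client discussed above (the map name is made up, and the ttl keyword argument is an assumption about that client's API):

```python
import hazelcast

client = hazelcast.HazelcastClient()            # assumes a cluster on localhost
entries = client.get_map("entries").blocking()  # hypothetical map name

entries.put("k1", "v1", ttl=60)  # store with an explicit 60-second TTL
entries.set("k1", "v2")          # overwrite the value without passing a TTL

print(entries.get("k1"))
client.shutdown()
```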

Max value for TTL in cassandra

What is the maximum value we can assign to TTL?
In the Java driver for Cassandra, TTL is set as an int. Does that mean it is limited to Integer.MAX_VALUE (2,147,483,647 seconds)?
The maximum TTL is actually 20 years. From org.apache.cassandra.db.ExpiringCell:
public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in seconds
I think this is verified along both the CQL and Thrift query paths.
I don't know why you need this, but the default TTL is null in Cassandra, which means the data won't be deleted until you force it.
One very powerful feature that Cassandra provides is the ability to expire data that is no longer needed. This expiration is very flexible and works at the level of individual column values. The time to live (or TTL) is a value that Cassandra stores for each column value to indicate how long to keep the value.
The TTL value defaults to null, meaning that data that is written will not expire.
https://www.oreilly.com/library/view/cassandra-the-definitive/9781491933657/ch04.html
The 20-year maximum TTL is no longer correct. As per the Cassandra NEWS file, read this notice:
PLEASE READ: MAXIMUM TTL EXPIRATION DATE NOTICE (CASSANDRA-14092)
The maximum expiration timestamp that can be represented by the storage engine is 2038-01-19T03:14:06+00:00, which means that inserts with TTL that expire after this date are not currently supported. By default, INSERTS with TTL exceeding the maximum supported date are rejected, but it's possible to choose a different expiration overflow policy. See CASSANDRA-14092.txt for more details.
Prior to 3.0.16 (3.0.X) and 3.11.2 (3.11.x) there was no protection against INSERTS with TTL expiring after the maximum supported date, causing the expiration time field to overflow and the records to expire immediately. Clusters in the 2.X and lower series are not subject to this when assertions are enabled. Backed up SSTables can be potentially recovered and recovery instructions can be found on the CASSANDRA-14092.txt file.
If you use or plan to use very large TTLS (10 to 20 years), read CASSANDRA-14092.txt for more information.
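To put those limits side by side, a quick Python check (the remaining-seconds figure obviously depends on when you run it):

```python
from datetime import datetime, timezone

MAX_TTL = 20 * 365 * 24 * 60 * 60  # Cassandra's ExpiringCell cap: 630,720,000 s
INT_MAX = 2**31 - 1                # Integer.MAX_VALUE: 2,147,483,647 s

# Expiration timestamps are capped at 2038-01-19T03:14:06+00:00 (CASSANDRA-14092).
overflow = datetime(2038, 1, 19, 3, 14, 6, tzinfo=timezone.utc)
seconds_left = int((overflow - datetime.now(timezone.utc)).total_seconds())

print(f"MAX_TTL            = {MAX_TTL:>13,d} s")
print(f"Integer.MAX_VALUE  = {INT_MAX:>13,d} s")
print(f"seconds until 2038 = {seconds_left:>13,d} s")
# Whether a 20-year TTL is accepted therefore also depends on how close
# 'now' + TTL gets to the 2038 overflow date.
```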

How to have an EXPIRE that auto renews when TTL reaches 0

I'm building an app where I need to have three scoreboards which I'm implementing with sorted sets and lists. The app is running on node.js using the node_redis (https://github.com/mranney/node_redis) module for the redis client.
The first scoreboard is a 'latest scores' board, which I'm implementing with a list and LPUSH. The second is an all-time high score board, which I'm implementing with a sorted set and the ZADD command.
I'm having trouble implementing a 'high scores this week' board. I was thinking that I should use another sorted set via ZADD with an EXPIRE set for one week. That all works fine, but after the set has expired for the first time, subsequent ZADDs recreate it without an expiry, so the new set lives forever.
Is there a redis command to make an expire auto-renew? (I've been searching for an answer for a couple of hours now, but the answer appears to be no.) I'm coming to the conclusion that I'll need to do this programmatically. During a function call that uses the set, I could check whether the TTL is -1 and reset it there and then. Is this best practice? Am I missing a clever trick somewhere? Do I need to be concerned about the extra database requests?
--EDIT--
I've had a reply to this question on twitter https://twitter.com/redsmin/status/302177241167691777
The suggested solution (if I understand correctly) is to use the EXPIREAT command along with each ZADD:
expireat myscoreboard {{timestamp of the end of the week}}
zadd myscoreboard 1 "one"
This "feels" right to me but I'm new to redis so would appreciate some discussion on this technique or any other ways of solving the problem.
It depends on how you define "one week". There are several ways to define it, for example:
1. "The last 7 days"
2. "Week of the year"
3. "This week, starting on Sunday and ending on Saturday"
The simplest to implement are 2 & 3.
You specify a set whose key name includes the date/time it starts on, and give it an expire of one week. You then simply determine on the client side which week's key you want and grab the data.
For example
zadd scoreboard:weekly:03:March:2013 1 "bob"
Then the following week your keyname would be
zadd scoreboard:weekly:10:March:2013 1 "bob"
When you first create the key, you set the expiry, and that is all; there is no need to re-set it every time. Pseudocode follows:
if (ttl scoreboard:weekly:03:March:2013) == -1:
    expire scoreboard:weekly:03:March:2013 604800
This way you only set the expiration once, get auto-expiration, and can easily pull the weekly scoreboard.
You could implement a rolling week using the same method but you would need to go to a daily key name and calculate what keys to get, then merge them. You could do this using zunionstore.
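A minimal redis-py sketch of the week-named key plus a one-time EXPIREAT (the key naming scheme and Monday-based week boundaries are just one possible choice):

```python
from datetime import datetime, timedelta, timezone

import redis

r = redis.Redis()

def add_score(member, score, now=None):
    now = now or datetime.now(timezone.utc)
    # Name the key after the Monday that starts this week,
    # e.g. scoreboard:weekly:2013-03-04.
    week_start = (now - timedelta(days=now.weekday())).date()
    key = f"scoreboard:weekly:{week_start.isoformat()}"

    r.zadd(key, {member: score})  # ZADD creates the key if it doesn't exist yet
    # Set the expiry only when the key has no TTL yet (first write this week).
    if r.ttl(key) == -1:
        week_end = datetime.combine(week_start + timedelta(days=7),
                                    datetime.min.time(), tzinfo=timezone.utc)
        r.expireat(key, int(week_end.timestamp()))  # EXPIREAT end of the week
    return key

print(add_score("bob", 1))
```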

Resources