Setting TTL/Record Expiry in Hazelcast

Is there any way to set a TTL per record in Hazelcast? Preferably on a Map or Ringbuffer.

I guess you're looking for this:
IMap::put(Key, Value, TTL, TimeUnit)
IMap documentation (a short usage sketch follows the links below):
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#evicting-specific-entries
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#put(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putAsync(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putIfAbsent(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#putTransient(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
http://docs.hazelcast.org/docs/3.6/javadoc/com/hazelcast/core/IMap.html#set(K,%20V,%20long,%20java.util.concurrent.TimeUnit)
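For illustration, here is a minimal Java sketch of per-entry TTL on an IMap; the map name, keys, and values are hypothetical, not from the question:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import java.util.concurrent.TimeUnit;

public class PerEntryTtlExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("records");

        // This entry expires 30 seconds after it is put, regardless of any map-level TTL.
        map.put("key-1", "value-1", 30, TimeUnit.SECONDS);

        // set() behaves the same but does not return the old value, so it is cheaper.
        map.set("key-2", "value-2", 60, TimeUnit.SECONDS);

        hz.shutdown();
    }
}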

Related

Anyway to view Cache entry metadata from Hazelcast (ie: date added, last accessed, etc)?

I'm using Hazelcast with a distributed cache in embedded mode. My cache is defined with an Eviction Policy and an Expiry Policy.
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
        .setName(CACHE_NAME)
        .setKeyType(UserRolesCacheKey.class.getName())
        .setValueType((new String[0]).getClass().getName())
        .setStatisticsEnabled(false)
        .setManagementEnabled(false)
        .setReadThrough(true)
        .setWriteThrough(true)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setBackupCount(1)
        .setAsyncBackupCount(1)
        .setEvictionConfig(new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(1000)
                .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
        .setExpiryPolicyFactoryConfig(
                new ExpiryPolicyFactoryConfig(
                        new TimedExpiryPolicyFactoryConfig(ACCESSED,
                                new DurationConfig(1800, TimeUnit.SECONDS))));
hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);

ICache<UserRolesCacheKey, String[]> userRolesCache =
        hazelcastInstance.getCacheManager().getCache(CACHE_NAME);
MutableCacheEntryListenerConfiguration<UserRolesCacheKey, String[]> listenerConfiguration =
        new MutableCacheEntryListenerConfiguration<>(
                new UserRolesCacheListenerFactory(), null, false, false);
userRolesCache.registerCacheEntryListener(listenerConfiguration);
The problem I am having is that my Listener seems to be firing prematurely in a production environment; the listener is executed (REMOVED) even though the cache has been recently queried.
As the expiry listener fires, I get the CacheEntry, but I'd like to be able to see the reason for the Expiry, whether it was Evicted (due to MaxSize policy), or Expired (Duration). If Expired, I'd like to see the timestamp of when it was last accessed. If Evicted, I'd like to see the number of entries in the cache, etc.
Are these stats/metrics/metadata available anywhere via Hazelcast APIs?
Local cache statistics (entry count, eviction count) are available via ICache#getLocalCacheStatistics(). Note that you need to call setStatisticsEnabled(true) in your cache configuration for statistics to be available, and that the returned CacheStatistics object only reports statistics for the local member.
When seeking information about a single cache entry, you can use the EntryProcessor functionality to unwrap the MutableEntry to the Hazelcast-specific class com.hazelcast.cache.impl.CacheEntryProcessorEntry and inspect it. The Hazelcast-specific implementation provides access to the CacheRecord, which exposes metadata such as creation and last-access time.
Caveat: the Hazelcast-specific implementation may change between versions. Here is an example:
cache.invoke(KEY, (EntryProcessor<String, String, Void>) (entry, arguments) -> {
    CacheEntryProcessorEntry hzEntry = entry.unwrap(CacheEntryProcessorEntry.class);
    // getRecord() does not update the entry's access time
    System.out.println(hzEntry.getRecord().getLastAccessTime());
    return null;
});
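For the cache-wide numbers, a minimal sketch of reading the local statistics; it assumes setStatisticsEnabled(true) was set on the cache configuration and reuses the userRolesCache handle from the question:

import com.hazelcast.cache.CacheStatistics;

// Statistics only cover entries owned by this member.
CacheStatistics stats = userRolesCache.getLocalCacheStatistics();
System.out.println("Owned entries: " + stats.getOwnedEntryCount());
System.out.println("Evictions:     " + stats.getCacheEvictions());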

Hazelcast absolute expiration of items

I am using Hazelcast 3.6.3 on Scala 2.11.8.
I have written this code.
val config = new Config("mycluster")
config.getNetworkConfig.getJoin.getMulticastConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getAwsConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getTcpIpConfig.setMembers(...)
config.getNetworkConfig.getJoin.getTcpIpConfig.setEnabled(true)
val hc = Hazelcast.newHazelcastInstance(config)
hc.getConfig.addMapConfig(new MapConfig()
  .setName("foo")
  .setBackupCount(1)
  .setTimeToLiveSeconds(3600)
  .setAsyncBackupCount(1)
  .setInMemoryFormat(InMemoryFormat.BINARY)
  .setMaxSizeConfig(new MaxSizeConfig(1, MaxSizePolicy.USED_HEAP_SIZE))
)
hc.putValue[(String, Int)]("foo", "1", ("foo", 10))
I notice that when the hour is over, Hazelcast does not remove the items from the cache. The items seem to live forever in the cache.
I don't want sliding expiration; I want absolute expiration, meaning that after 1 hour the item has to be kicked out no matter how many times it was accessed during that hour.
I have done the required googling and I think my code above is correct, but when I look at my server logs I am pretty sure that nothing is removed from the cache.
Sorry, I am not a Scala guy, but can you explain what hc.addTimeToLiveMapConfig does?
Normally you need to add the TTL config to the Config object before starting Hazelcast.
I believe in your case you are starting Hazelcast first and only then updating the config with the TTL. Please try the reverse order (a sketch follows below).
If you prefer not to put this in the configuration, there is an overloaded map.put method that takes a TTL as input. That way you can specify the TTL per entry.
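For illustration, a minimal sketch of the suggested order written in Java rather than Scala: the MapConfig is registered before the instance starts, so the map "foo" picks up the TTL. Key/value types and keys are placeholders.

import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import java.util.concurrent.TimeUnit;

public class AbsoluteExpiryExample {
    public static void main(String[] args) {
        Config config = new Config("mycluster");

        // Register the map-level TTL before the instance exists.
        config.addMapConfig(new MapConfig()
                .setName("foo")
                .setBackupCount(1)
                .setAsyncBackupCount(1)
                .setInMemoryFormat(InMemoryFormat.BINARY)
                .setTimeToLiveSeconds(3600));

        HazelcastInstance hc = Hazelcast.newHazelcastInstance(config);

        IMap<String, String> foo = hc.getMap("foo");
        foo.put("1", "bar");                        // expires after the configured 3600 s
        foo.put("2", "baz", 60, TimeUnit.SECONDS);  // per-entry TTL overrides the map TTL

        hc.shutdown();
    }
}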

Setting TTL using the timestamp field of a record during insertion in Cassandra

I want to set a TTL of 30 days from the time field of a table record(id,name,time) during insertion. For this I am creating a User Defined Function (UDF) bigint fun(rTime,cTime) as:
CREATE FUNCTION fun(rtime timestamp, ctime timestamp) CALLED ON NULL INPUT
RETURNS bigint LANGUAGE java as 'return 2592000-((ctime.toTime() -rtime.toTime())/1000);';
Here, the function fun calculates the time in seconds this data should live; 2592000 is the number of seconds in 30 days.
Now I am trying to use the above function to set the TTL:
INSERT INTO record(id,name,time) VALUES (123,'data123','2016-08-08 06:06:00')
USING TTL fun('2016-08-08 06:06:00',totimestamp(now()));
I am getting an error:
Syntax Exception: ErrorMessage code=2000 ........
Is there any other way to set the TTL based on the record's time field? What is the problem with the above approach?
Function calls are not supported in the USING clause.
In your case, your client has to calculate the appropriate TTL and pass it into the query in seconds (see the sketch below).
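For illustration, a minimal client-side sketch with the DataStax Java driver 3.x. The contact point, keyspace name, and values are placeholders, and it assumes id is an int column and time is a timestamp; USING TTL accepts a bind marker in a prepared statement.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.util.Date;

public class TtlFromRecordTime {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {

            Date recordTime = new Date();  // the record's time field
            long elapsedSeconds = (System.currentTimeMillis() - recordTime.getTime()) / 1000;
            int ttlSeconds = (int) (2592000 - elapsedSeconds);  // 30 days minus elapsed time

            // The TTL is computed on the client and bound like any other value.
            PreparedStatement ps = session.prepare(
                    "INSERT INTO record (id, name, time) VALUES (?, ?, ?) USING TTL ?");
            session.execute(ps.bind(123, "data123", recordTime, ttlSeconds));
        }
    }
}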

Set TTL on an IMap entry

I have an IMap in Hazelcast (key, value) with no TTL set at the time of imap.put(). Now, after an event is triggered, I want to set a TTL for this particular key in the IMap. At the time of this event I don't want to call value = imap.get(key) and then imap.put(key, value, 10, TimeUnit.SECONDS).
So how can I set a TTL for that particular key?
There is no straightforward way to do it other than using the IMap methods. However, I would like to know the reason for avoiding the following calls:
value = imap.get(key);
imap.put(key, value, 10, TimeUnit.SECONDS);
If you still want to achieve this, you can resort to one of the following.
Call imap.set(key, value, 10, TimeUnit.SECONDS) if you already have the value with you. imap.set() is more efficient than imap.put() because it does not return the old value.
If you can accommodate one more IMap: use an additional map ttlMap<key, Boolean>. Whenever you need to set the TTL for an entry in the actual imap, set an entry with ttlMap.set(key, true, 10, TimeUnit.SECONDS). Now add a MapListener to ttlMap using the addEntryListener() method. Whenever an entry from ttlMap is evicted, the entryEvicted(EntryEvent<String, String> arg0) method will be called; evict your entry from the actual imap inside this method (see the sketch after this list).
If you are ready to get your hands dirty, you can modify the source so that the process() method of the EntryProcessor receives a custom Map.Entry with a new method to set the TTL of the key.
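For illustration, a minimal sketch of the second option; the map names, key/value types, and keys are hypothetical, and it relies on the fact that in Hazelcast 3.x a TTL expiry surfaces as an eviction event on the listener:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryEvictedListener;
import java.util.concurrent.TimeUnit;

public class ShadowTtlMapExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> dataMap = hz.getMap("data");
        IMap<String, Boolean> ttlMap = hz.getMap("data-ttl");

        // When a ttlMap entry is evicted (its TTL ran out), remove the matching entry from dataMap.
        EntryEvictedListener<String, Boolean> onEvict = event -> dataMap.delete(event.getKey());
        ttlMap.addEntryListener(onEvict, false);

        dataMap.put("key-1", "value-1");                          // no TTL at put time
        ttlMap.set("key-1", Boolean.TRUE, 10, TimeUnit.SECONDS);  // later: start the 10 s countdown
    }
}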
Hope this helps.
Starting from version 3.11, Hazelcast IMap has a setTtl(K key, long ttl, TimeUnit timeunit) method that does exactly this:
Updates the TTL (time to live) value of the entry specified by key with a new TTL value. The new TTL value is valid starting from the time this operation is invoked, not from the time the entry was created. If the entry does not exist or is already expired, this call has no effect.
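For example, a one-line sketch (the map name and key are hypothetical; requires Hazelcast 3.11 or later):

IMap<String, String> imap = hazelcastInstance.getMap("data");
// Attach a fresh 10-second TTL to an existing entry without reading or rewriting its value.
imap.setTtl("key-1", 10, TimeUnit.SECONDS);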

Can the TTL of a tuple be updated without updating the tuple itself in Cassandra

I am sort of a noob at Cassandra. I was wondering if it is possible to add an expiry to a tuple without actually updating the tuple. I did not specify a TTL during the INSERT of the tuple, and now I just want to update the TTL.
Is this possible?
As far as I can tell there's no way to set only the TTL. You could probably re-set one of the values, which lets you pass in a TTL:
UPDATE TABLE USING TTL 10 SET a_col = a_col WHERE key = key;
See the CQL UPDATE syntax documentation.
Note: keep in mind that this will set the TTL for the a_col column only and will result in a write operation.
Update: this answer is also a valid option.
