I have an IMap in Hazelcast (key, value) with no TTL set at the time of imap.put(). Now, after an event is triggered, I want to set a TTL for this particular key in the IMap. At the time of this event I don't want to call value = imap.get(key) and then imap.put(key, value, 10, TimeUnit.SECONDS).
So how can I set a TTL on that particular key?
There is no straightforward way to do it other than using the IMap methods. However, I would like to know the reason for avoiding the following calls:
value = imap.get(key);
imap.put(key, value, 10, TimeUnit.SECONDS);
If you still want to achieve this, you can resort to one of the following:
Call imap.set(key, value, 10, TimeUnit.SECONDS) if you already have the value with you. imap.set() is more efficient than imap.put() as it doesn't return the old value.
If you can accommodate one more IMap: use an additional map ttlMap<key, Boolean>. Whenever you need to set the TTL for an entry in the actual IMap, call ttlMap.set(key, true, 10, TimeUnit.SECONDS). Now add a MapListener to ttlMap using the addEntryListener() method. Whenever an entry of ttlMap is evicted, the entryEvicted(EntryEvent<String, String> arg0) method will be called; evict the corresponding entry from the actual IMap inside this method (see the sketch after this list).
If you are ready to get your hands dirty, you can modify the source so that the process() method of an EntryProcessor receives a custom Map.Entry with a new method to set the TTL of the key.
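A minimal sketch of the second option, assuming Hazelcast 3.x, an IMap<String, String> for the real data, and made-up map names (actualMap, ttlMap):
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryEvictedListener;
import java.util.concurrent.TimeUnit;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> actualMap = hz.getMap("actualMap"); // hypothetical names
IMap<String, Boolean> ttlMap = hz.getMap("ttlMap");

// Expiry of a shadow entry fires entryEvicted in 3.x; evict the real entry then.
ttlMap.addEntryListener((EntryEvictedListener<String, Boolean>) event ->
        actualMap.evict(event.getKey()), false);

// On the trigger event, arm a 10-second TTL for the key:
ttlMap.set("someKey", Boolean.TRUE, 10, TimeUnit.SECONDS);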
Hope this helps.
Starting from version 3.11, Hazelcast's IMap has a setTtl(K key, long ttl, TimeUnit timeunit) method that does exactly this:
Updates TTL (time to live) value of the entry specified by key with a new TTL value. New TTL value is valid starting from the time this operation is invoked, not since the time the entry was created. If the entry does not exist or is already expired, this call has no effect.
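So on 3.11+ the event handler reduces to a single call (reusing imap and key from the question):
// Returns true if the entry existed and its TTL was updated.
boolean updated = imap.setTtl(key, 10, TimeUnit.SECONDS);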
Related
I have a simple Core Data entity Story that I occasionally update with the latest data from a network call. This network call sometimes updates many, many Story instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false
let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
context.perform {
    var index = 0
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        // Return true to signal that every downloaded story has been handled.
        guard index < downloadedStories.count else { return true }
        let story = managedObject as! Story
        let storyResponse = downloadedStories[index]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
        index += 1
        return false
    })
    batchInsert.resultType = .objectIDs // needed for insertedIDs below
    let result = try? context.execute(batchInsert) as? NSBatchInsertResult
    if let insertedIDs = result?.result as? [NSManagedObjectID] {
        // Merge changes into parent context. Skip save() because not needed for batch insert.
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs], into: [container.viewContext])
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-Optional Boolean without a default value (it was marked as Optional before, though I'm not sure that makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store are not modified (at all) with this configuration of the batch insert context:
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// (By contrast, NSMergeByPropertyObjectTrumpMergePolicy ignores objects
// in the store which have the same unique constraint value, here equal
// `storyURL`, and overwrites all of their properties.)
// To confirm that the batch insert operation does not modify
// existing Story instances (at all), first delete all instances
// where isFavorite == false. Then load all the story data again and
// execute the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with their titles carrying the suffix " (modified)"; but previously favorited stories do not get modified (basically, with this setup, the batch insert won't re-insert existing objects).
So the isFavorite property does not get overwritten, BUT neither do any properties that should change (a new title, for example).
Therefore, if you don't want your objects to get updated, but you want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates, here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest, after you run your batch insert in this way;
or, after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't tons of data;
lastly, you might be able to first batch insert the new data into a temporary store, then manually merge your choice of properties into the main store, and finally delete the temporary store.
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness constraint value), and then re-set those properties during the batch insert.
For this approach, you will want to use a merge policy such as NSMergeByPropertyObjectTrumpMergePolicy so that the updated object gets re-inserted into the store (make sure to fetch all properties that you don't want to lose in advance of the batch insert).
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.
From the documentation of fetchNext(int number) -
"This will conveniently close the Cursor, after the last Record was fetched."
Assuming number=100 and there are 1000 records in total.
Will it close the cursor once the 100th record is fetched, or when the 1000th is?
In other words, what is the "last Record" referred to in the documentation?
Cursor<Record> records = dsl.select...fetchLazy();
while (records.hasNext()) {
    records.fetchNext(100).formatCSV(out);
}
out.close();
This convenience is a historic feature in jOOQ, which will be removed eventually: https://github.com/jOOQ/jOOQ/issues/8884. As with every Closeable resource in Java, you should never rely on this sort of auto closing. It is always better to eagerly close the resource when you know you're done using it. In your case, ideally, wrap the code in a try-with-resources statement.
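For example, the loop from the question wrapped so that the cursor is closed even if formatCSV() throws (query elided as in the question):
try (Cursor<Record> records = dsl.select...fetchLazy()) {
    while (records.hasNext()) {
        records.fetchNext(100).formatCSV(out);
    }
}
out.close();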
What the Javadoc means is that the underlying JDBC ResultSet will be closed as soon as jOOQ's call to ResultSet.next() yields false, i.e. the database returns no more records. So, no. If there are 1000 records in total from your select, and you're only fetching 100, then the cursor will not be closed. If it were, this wouldn't be a "convenience feature", but break all sorts of other API, including the one you've called. It's totally possible to call fetchNext(100) twice, or in a loop, as you did.
I joined a new project about a year ago and have started to do some minor tasks with Hazelcast, including the creation of MapStores and EntryListeners for our IMaps.
From the beginning I have been aware of the difference between using set() and put(), with the latter carrying the weight of deserializing and returning the old value. That is why I would use put() when we needed to access the oldValue in the EntryListeners, and set() otherwise.
However, for the past few weeks my team has been reporting occurrences where map insertions done with set() trigger entryUpdated with a populated oldValue, which "breaks" some of our current logic.
Now I don't know if this was some recent change released by Hazelcast (we are currently using version 3.12.1) or if I've just been doing something wrong from the beginning. Shouldn't I expect that set() would always trigger the listener with an empty oldValue?
There is always an old value, but the writer and listener are independently configurable for whether they receive it.
On a map, the writer can use V Map.put(K,V) to receive the old value.
Or, the writer can use void Map.set(K,V) to not receive the old value.
On a listener, use include-value=true to receive the old and new values, and include-value=false not to. On an insert the old value will be null; on a delete the new value will be null.
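In code, that toggle is the includeValue argument of addEntryListener. A small sketch against 3.12, with a made-up map name:
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryUpdatedListener;

IMap<String, String> map = hz.getMap("myMap"); // hypothetical name

// includeValue = true: events carry both old and new values,
// whether the writer used put() or set().
map.addEntryListener((EntryUpdatedListener<String, String>) event ->
        System.out.println(event.getOldValue() + " -> " + event.getValue()), true);

map.set("k", "v1");
map.set("k", "v2"); // prints "v1 -> v2"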
I'm using Hazelcast with a distributed cache in embedded mode. My cache is defined with an Eviction Policy and an Expiry Policy.
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
.setName(CACHE_NAME)
.setKeyType(UserRolesCacheKey.class.getName())
.setValueType((new String[0]).getClass().getName())
.setStatisticsEnabled(false)
.setManagementEnabled(false)
.setReadThrough(true)
.setWriteThrough(true)
.setInMemoryFormat(InMemoryFormat.OBJECT)
.setBackupCount(1)
.setAsyncBackupCount(1)
.setEvictionConfig(new EvictionConfig()
.setEvictionPolicy(EvictionPolicy.LRU)
.setSize(1000)
.setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
.setExpiryPolicyFactoryConfig(
new ExpiryPolicyFactoryConfig(
new TimedExpiryPolicyFactoryConfig(ACCESSED,
new DurationConfig(
1800,
TimeUnit.SECONDS))));
hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);
ICache<UserRolesCacheKey, String[]> userRolesCache = hazelcastInstance.getCacheManager().getCache(CACHE_NAME);
MutableCacheEntryListenerConfiguration<UserRolesCacheKey, String[]> listenerConfiguration =
new MutableCacheEntryListenerConfiguration<>(
new UserRolesCacheListenerFactory(), null, false, false);
userRolesCache.registerCacheEntryListener(listenerConfiguration);
The problem I am having is that my Listener seems to be firing prematurely in a production environment; the listener is executed (REMOVED) even though the cache has been recently queried.
As the expiry listener fires, I get the CacheEntry, but I'd like to be able to see the reason for the Expiry, whether it was Evicted (due to MaxSize policy), or Expired (Duration). If Expired, I'd like to see the timestamp of when it was last accessed. If Evicted, I'd like to see the number of entries in the cache, etc.
Are these stats/metrics/metadata available anywhere via Hazelcast APIs?
Local cache statistics (entry count, eviction count) are available using ICache#getLocalCacheStatistics(). Notice that you need to setStatisticsEnabled(true) in your cache configuration for statistics to be available. Also, notice that the returned CacheStatistics object only reports statistics for the local member.
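For example (note that this requires setStatisticsEnabled(true), unlike the configuration shown in the question):
CacheStatistics stats = userRolesCache.getLocalCacheStatistics();
// Entry count and eviction count, reported for this member only:
System.out.println("owned entries: " + stats.getOwnedEntryCount());
System.out.println("evictions: " + stats.getCacheEvictions());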
When seeking info on a single cache entry, you can use the EntryProcessor functionality to unwrap the MutableEntry to the Hazelcast-specific class com.hazelcast.cache.impl.CacheEntryProcessorEntry and inspect that one. The Hazelcast-specific implementation provides access to the CacheRecord that provides metadata like creation/accessed time.
Caveat: the Hazelcast-specific implementation may change between versions. Here is an example:
userRolesCache.invoke(KEY, (EntryProcessor<UserRolesCacheKey, String[], Void>) (entry, arguments) -> {
    CacheEntryProcessorEntry hzEntry = entry.unwrap(CacheEntryProcessorEntry.class);
    // getRecord() does not update the entry's access time
    System.out.println(hzEntry.getRecord().getLastAccessTime());
    return null;
});
What is the equivalent of:
INSERT INTO table (myColumn) VALUES (now())
using the Cassandra object-mapping api?
The @Computed annotation doesn't look like it would work, unfortunately.
You can also set the value on your object to a type 1 (time-based) UUID. The JRE doesn't have a standard function for it, but you can use the Java driver util, JUG, cassandra-all, or even write one yourself. This is a little different because you're setting the time as the time of creation, as opposed to the coordinator setting the time when it receives the request, but with an ORM's abstractions you tend to lose some control.
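For example, with the util class shipped in the DataStax Java driver (the entity variable and setter are hypothetical):
import com.datastax.driver.core.utils.UUIDs;

// Client-side equivalent of now(): a type 1 (time-based) UUID.
story.setMyColumn(UUIDs.timeBased()); // hypothetical mapped entity + setter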
Alternatively, there is nothing preventing you from issuing CQL statements while still using the object-mapping API. Maybe even add a query method to do it (in the DataStax mapper, @Query lives on an @Accessor interface method), e.g.:
@Query("UPDATE table SET myColumn = now() WHERE ....")
public ResultSet setNow();
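A fuller sketch of that accessor, using hypothetical names (TableAccessor, and an id primary key column standing in for the elided WHERE clause):
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.mapping.MappingManager;
import com.datastax.driver.mapping.annotations.Accessor;
import com.datastax.driver.mapping.annotations.Query;
import java.util.UUID;

@Accessor
public interface TableAccessor {
    // "id" is a hypothetical primary key column for illustration.
    @Query("UPDATE table SET myColumn = now() WHERE id = ?")
    ResultSet setNow(UUID id);
}

// Usage:
MappingManager manager = new MappingManager(session);
TableAccessor accessor = manager.createAccessor(TableAccessor.class);
accessor.setNow(someId); // someId: the row's primary key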