Setting TTL/record expiry in Hazelcast for individual entries without a put

Is it possible to set a TTL/record expiry on an individual key without doing a put, i.e., without changing the value of the key? Something similar to EXPIRE in Redis.
I can do a "get" followed by a "put" and set the TTL, but that would be inefficient with large values.
import hazelcast

hz = hazelcast.HazelcastClient()
test_map = hz.get_map('test_map')
val = test_map.get(key)        # read the (possibly large) value...
test_map.put(key, val, ttl)    # ...only to write it back unchanged with a new TTL
Note - I am using hazelcast-python-client

You can adjust the TTL without changing anything else by using the map.setTtl() method:
IMap<String, Object> map = hz.getMap("testMap"); // get the map proxy
map.setTtl("keyToModify", 1, TimeUnit.HOURS);    // reset the entry's TTL without touching its value

Related

How to power a windowed virtual list with cursor based pagination?

Take a windowed virtual list with the capability of loading an arbitrary range of rows at any point in the list, as in the following example.
The virtual list provides a callback that is called any time the user scrolls to rows that have not been fetched from the backend yet, passing the start and stop indexes, so that, with an offset-based pagination endpoint, I can fetch the required items without fetching any unnecessary data.
const loadMoreItems = (startIndex, stopIndex) => {
  fetch(`/items?offset=${startIndex}&limit=${stopIndex - startIndex}`);
};
I'd like to replace my offset-based pagination with a cursor-based one, but I can't figure out how to reproduce the above logic with it.
The main issue is that I feel like I would need to download all the items before startIndex just to obtain the cursor needed to fetch the items between startIndex and stopIndex.
What's the correct way to approach this?
After some investigation I found what seems to be the way MongoDB approaches the problem:
https://docs.mongodb.com/manual/reference/method/cursor.skip/#mongodb-method-cursor.skip
Obviously the same approach can be adopted by any other backend implementation.
They provide a skip method that allows skipping an arbitrary number of items after the provided cursor.
This means my sample endpoint would look like the following:
/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}
I then need to figure out the cursor and the skip values.
The following code could work to find the closest available cursor, given that I store them together with the items:
// Limit our search to the items before startIndex
const fragment = items.slice(0, startIndex);
// Find the index of the closest item carrying a cursor
// (reverse() is safe here because slice() returned a copy;
// this assumes at least one earlier item carries a cursor)
const cursorIndex = fragment.length - 1 - fragment.reverse().findIndex(item => item.cursor != null);
// Get the cursor itself
const cursor = items[cursorIndex].cursor;
And of course, I also have a way to know the skip value:
const skip = items.length - 1 - cursorIndex;
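Putting the pieces together, the windowed callback from the offset-based version then becomes (a sketch reusing the cursor and skip values computed above):

const loadMoreItems = (startIndex, stopIndex) => {
  // cursor and skip are derived from the already-loaded items, as shown above
  fetch(`/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}`);
};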

How to retrieve all documents in a CouchDB database without causing out of memory

I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, for each JsonObject in tweets, use
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true)
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an out-of-memory error, since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and looping through the whole database, but I don't know how to write the loop that continues with the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
Version 0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
Changes changes = dbClient.changes()
        .since(null)   // or .since(since) if you want an offset
        .includeDocs(true);

int size = 1;
getCursor("0");        // helper that persists the last-seen sequence
while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // instantiate your object from the row via Gson here
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}
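Alternatively, the paging loop the question asks about can be written directly against _all_docs by using the last key of each page as the startKey of the next. A sketch, assuming LightCouch's View exposes startKey, skip, and limit (check your client version):

final int PAGE_SIZE = 5000;
String startKey = null;
while (true) {
    View view = dbClient.view("_all_docs")
            .includeDocs(true)
            .limit(PAGE_SIZE);
    if (startKey != null) {
        // resume at the last key of the previous page, skipping that row itself
        view = view.startKey(startKey).skip(1);
    }
    List<JsonObject> page = view.query(JsonObject.class);
    for (JsonObject doc : page) {
        // process each tweet document here
    }
    if (page.size() < PAGE_SIZE) {
        break; // last page reached
    }
    // in _all_docs the key is the document ID, i.e. the tweet ID
    startKey = page.get(page.size() - 1).get("_id").getAsString();
}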

Setting values of one map to another depending on a condition

I have a couple of records, rec1 and rec2.
Both have a common key/value, name1.
When name1 is equal in both records, I need to set a few values of rec2 on rec1.
I put them into two nested loops, as below:
rec1.each { r1 ->
    rec2.each { r2 ->
        if (r2.name1 == r1.name1) {
            r1.name2 = r2.name2
            r1.name3 = r2.name3
        }
    }
}
Is there any better way of doing this?
Example (sorry, I am just pasting the contents):
recoRecord : [["CHANNEL":INBOUND, "STOCK_LEVEL":2410.0,
"OFFER_TARIFF_ID":FBUN-MVP-VME-VIRGIN-31-24-04, "P_BAND":P4-6,
"CONTRACT_LENGTH":24.0, "INCENTIVE_POINTS":10.0,
"HANDSET_PKEY_ID":SAM-STD-I9300-1, "CUST_TYPE":MEDIA]]
records : [["MEDIA_SUBSIDY_VALUE":0.0, "CREDIT_CLASS":C5,
"DOM_OTHER_MARGIN":0.0, "isBatchTerminator":false,
"CALL_GROUP_DESC":COMBINED, "DM":20.0, "BLACKBERRY_IND":N,
"PREFERRED_BLACKBERRY":N, "ERROR_ID":0, "CUST_TYPE":MEDIA,
"TARIFF_MRC":30.99, "MOST_USED_TAC":35961404, "FORM_FACTOR":null,
"CAMERA_IND":null, "NEW_MARGIN":22.272501, "MODEL":null,
"IS_MMS_ALLOWANCE":N, "ACTIVE_HANDSET_BANDS":,
"CUST_OUT_OF_ALLOWANCE_PLAN":JV15, "OOB_DOM_VOICE":0.0,
"OOB_DOM_SMS":0.0, "VM_CUST_FLAG":Y, "IB_DATA":0.0,
"CHANNEL_FLAG":INBOUND, "SMS_ALLOWANCE":5000.0, "ROAM_SMS_MARGIN":0.0,
"TARIFF_DESC":30.99 Virgin Media 24 month+1GB 1300mins,
"MARGIN_CHANGE_PCT":0.12691319, "OFFER_VOICE_ALLOWANCE":600,
"MAKE":null, "IS_ONNET_ALLOWANCE":Y, "OFFER_CONTRACT_TERM":24.0,
"PREFERRED_MINUTES":1300, "PREFERRED_ON_NET":Y,
"MOST_USED_IMEI":359614048625860, "DISCOUNT":3.0,
"NetPresentValue":1.15, "RecInd":1, "WIFI_IND":null, "IPHONE_IND":N,
"OFFER_TARIFF_ID":FBUN-MVP-VME-VIRGIN-31-24-04,
"IncentivePoints":-1.0]
When OFFER_TARIFF_ID is the same in both records, I would like to set a few values of the first record on the second record.
You do not need to iterate over both maps; just check whether the value of that particular key matches:
if (r2.'OFFER_TARIFF_ID' == r1.'OFFER_TARIFF_ID') {
    // push the required entries from r1 to r2
}
Although I do not see a valid data structure for the records in your edit, I have treated r1 and r2 as Maps.
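If the record lists are large, indexing one list by the shared key avoids the nested O(n×m) scan entirely. A minimal Groovy sketch, assuming rec1 and rec2 are Lists of Maps with unique OFFER_TARIFF_IDs (the copied field names are just examples from the pasted data):

// Index rec2 by the shared key for O(1) lookups
def byTariffId = rec2.collectEntries { [(it.OFFER_TARIFF_ID): it] }
rec1.each { r1 ->
    def match = byTariffId[r1.OFFER_TARIFF_ID]
    if (match) {
        // copy whichever entries you need from the matching record
        r1.putAll(match.subMap(['NEW_MARGIN', 'DISCOUNT']))
    }
}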

Insert data into Redis (node.js + redis)

How can I insert (store) data like this (node.js + redis)?
var timestamp = new Date().getTime();
client.hmset('room:' + room, {
    'enabled' : true,
    timestamp : {
        'g1' : 0,
        'g2' : 0
    }
});
And how, after that, can I increment g1 or g2?
P.S. When I insert timestamp this way, redis-cli shows the literal string "timestamp" instead of the UNIX time.
You're looking for a combination of HMGET and HMSET. According to the docs:
HMGET key field [field ...]
Returns the values associated with the specified fields in the hash
stored at key.
For every field that does not exist in the hash, a nil value is
returned. Because non-existing keys are treated as empty hashes,
running HMGET against a non-existing key will return a list of nil
values.
HMSET key field value [field value ...]
Sets the specified fields to their respective values in the hash
stored at key. This command overwrites any existing fields in the
hash. If key does not exist, a new key holding a hash is created.
What you want to do, then, is retrieve your value from the hash, perform any operations on it that seem appropriate, and save over the previous value.
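For example, with the node_redis client (ts_g1 is a hypothetical flattened field name; note this read-modify-write is not atomic):

// Fetch the field, bump it in application code, write it back
client.hmget('room:' + room, 'ts_g1', function (err, values) {
    var next = parseInt(values[0], 10) + 1;
    client.hmset('room:' + room, 'ts_g1', next);
});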
Another, possibly better, solution would be to use HINCRBY. Provided you stick with a timestamp, you can increment the field without performing a get operation:
HINCRBY key field increment
Increments the number stored at field in the hash stored at key by
increment. If key does not exist, a new key holding a hash is created.
If field does not exist the value is set to 0 before the operation is
performed.
The range of values supported by HINCRBY is limited to 64 bit signed
integers.
You probably will need to restructure your hash for this to work, though, unless there is a way to drill down to your g1/g2 fields (Stack Overflow community, feel free to edit this answer or comment on it if you know a way). A structure like this should work:
{
    enabled : true,
    timestamp_g1 : 0,
    timestamp_g2 : 0
}
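A minimal node.js sketch of that flattened layout, assuming the classic node_redis client (room is a placeholder). Building the key with bracket notation also fixes the P.S. issue, since object-literal keys are never evaluated:

var redis = require('redis');
var client = redis.createClient();

var room = 'lobby';                 // example room name
var timestamp = new Date().getTime();

var fields = { enabled: 'true' };   // Redis hash values are flat strings/numbers
fields[timestamp + '_g1'] = 0;      // computed key, so the real UNIX time is stored
fields[timestamp + '_g2'] = 0;
client.hmset('room:' + room, fields);

// Later: increment g1 atomically, no get needed
client.hincrby('room:' + room, timestamp + '_g1', 1);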

Query WadPerformanceCountersTable in Increments?

I am trying to query the WadPerformanceCountersTable generated by Azure Diagnostics, which has a PartitionKey based on tick counts, accurate to the minute. This PartitionKey is stored as a string (which I do not have any control over).
I want to be able to query against this table to get data points for every minute, every hour, every day, etc., so I don't have to pull all of the data (I just want a sampling to approximate it). I was hoping to use the modulus operator to do this, but since the PartitionKey is stored as a string and this is an Azure table, I am having issues.
Is there any way to do this?
Non-working example:
var query =
    (from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
     where
         long.Parse(entity.PartitionKey) % interval == 0 && // bad for a variety of reasons
         String.Compare(entity.PartitionKey, partitionKeyEnd, StringComparison.Ordinal) < 0 &&
         String.Compare(entity.PartitionKey, partitionKeyStart, StringComparison.Ordinal) > 0
     select entity)
    .AsTableServiceQuery();
If you just want to get a single row based on two different points in time (now and N units back), you can use the following query, which returns the single row as described here:
// 10 minute span partition key
DateTime now = DateTime.UtcNow;
// Current partition key
string partitionKeyNow = string.Format("0{0}", now.Ticks.ToString());
DateTime tenMinutesSpan = now.AddMinutes(-10);
string partitionKeyTenMinutesBack = string.Format("0{0}", tenMinutesSpan.Ticks.ToString());

// Get a single row sampled from the last 10 minutes
CloudTableQuery<PerformanceCountersEntity> cloudTableQuery =
    (
        from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
        where
            entity.PartitionKey.CompareTo(partitionKeyNow) < 0 &&
            entity.PartitionKey.CompareTo(partitionKeyTenMinutesBack) > 0
        select entity
    ).Take(1).AsTableServiceQuery();
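To approximate the per-minute sampling the question asks for, that same single-row query can simply be repeated once per time bucket. A sketch built on the snippet above (assumes the same ServiceContext and PerformanceCountersEntity types; expect one table round-trip per bucket):

// Sample roughly one counter row per minute over the last hour
var samples = new List<PerformanceCountersEntity>();
DateTime cursor = DateTime.UtcNow.AddHours(-1);
for (int i = 0; i < 60; i++)
{
    string lower = string.Format("0{0}", cursor.Ticks);
    string upper = string.Format("0{0}", cursor.AddMinutes(1).Ticks);
    var sample =
        (from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
         where
             entity.PartitionKey.CompareTo(lower) >= 0 &&
             entity.PartitionKey.CompareTo(upper) < 0
         select entity)
        .Take(1).AsTableServiceQuery().FirstOrDefault();
    if (sample != null)
    {
        samples.Add(sample);
    }
    cursor = cursor.AddMinutes(1);
}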
The only way I can see to do this would be to create a process to keep the Azure table in sync with another version of itself. In this table, I would store the PartitionKey as a number instead of a string. Once done, I could use a method similar to what I wrote in my question to query the data.
However, this is a waste of resources, so I don't recommend it. (I'm not implementing it myself, either.)
