Use zscan on score - node.js

According to this issue: https://github.com/NodeRedis/node_redis/issues/896
I have a zset in which I save tokens (members) against their corresponding timestamps (scores).
Now I want to delete tokens older than a particular timestamp using zscan.
redis.zscan('my_key', cursor[i], 'MATCH', '*', 'COUNT', count, function (err, reply) {
    console.log(err);
    console.log(reply);
});
The problem I am having is that zscan returns all the values irrespective of timestamp.
The 'MATCH' parameter checks the pattern against the elements (tokens).
I want to get all the tokens older than some particular timestamp (score).
For example:
var startingTime = new Date().getTime();
redis.zrangebyscore("iflychat_auth_token", 0, startingTime - 43200000 * 2 * 7, function (error, data) {
// This will return all tokens older than 7 days.
});
Is there a way to use 'MATCH' on the score?
Something like this:
redis.zscan('my_key', cursor[i], 'MATCH', < timestamp, 'COUNT', count, function (err, reply) {
    console.log(err);
    console.log(reply);
});

ZSCAN doesn't have a score range option. The simplest alternative is using Redis' ZREMRANGEBYSCORE, possibly like so:
redis.zremrangebyscore('my_key','-inf', timestamp, function(error,data) { ... });
Note: if you need an exclusive range, i.e. strictly less than timestamp, prepend the value with a ( when sending it.
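For illustration, here is a minimal sketch of that call with an exclusive upper bound, assuming the same my_key and a timestamp variable holding the cutoff score:
redis.zremrangebyscore('my_key', '-inf', '(' + timestamp, function (err, removed) {
    if (err) return console.error(err);
    // ZREMRANGEBYSCORE replies with the number of members removed
    console.log('removed', removed, 'tokens');
});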

Related

How can I filter entries in DynamoDB whose timestamp is more than 1 day old?

I have a Lambda function that queries the DynamoDB table userDetailTable, and I want to filter only the entries whose timestamp (recorded in ms) is more than 1 day (86400000 ms) old when subtracted from new Date().getTime(). Can anyone suggest the right way of doing this?
The table has a GSI on user_status, which has the value 'active' for all entries, and epoch_timestamp (a timestamp in ms) is the attribute used in the filter expression.
In the Lambda I am checking epoch_timestamp and trying to subtract new Date().getTime() from it inside the query, which I am not sure is even possible. Below is the code with my query.
function getUserDetails(callback) {
    var params = {
        TableName: 'userDetailTable',
        IndexName: 'user_status-index',
        KeyConditionExpression: 'user_status = :user_status',
        FilterExpression: `expiration_time - ${new Date().getTime()} > :time_difference`,
        ExpressionAttributeValues: {
            ':user_status': 'active',
            ':time_difference': '86400000' // 1 day in ms
        }
    };
    docClient.query(params, function(err, data) {
        if (err) {
            callback(err, null)
        } else {
            callback(null, data)
        }
    })
}
Here's a rewrite of your code:
function getUserDetails(callback) {
    var params = {
        TableName: 'userDetailTable',
        IndexName: 'user_status-index',
        KeyConditionExpression: 'user_status = :user_status',
        FilterExpression: 'epoch_timestamp > :time_threshold_ms',
        ExpressionAttributeValues: {
            ':user_status': 'active',
            ':time_threshold_ms': Date.now() - 86400000
        }
    };
    docClient.query(params, function(err, data) {
        if (err) {
            callback(err, null)
        } else {
            callback(null, data)
        }
    })
}
Specifically, you cannot compute any date inside the FilterExpression. Instead, you should compare the item's epoch_timestamp attribute with :time_threshold_ms, which you compute once (for all items inspected by the query) in ExpressionAttributeValues.
Please note, though, that you can make this more efficient if you define a GSI that uses epoch_timestamp as its sort key (user_status can remain the partition key). Then, instead of placing the condition in the FilterExpression, you would move it into the KeyConditionExpression.
Also, when you use a FilterExpression you need to check the LastEvaluatedKey of the response. If it is not empty, you need to issue a follow-up query with LastEvaluatedKey copied into the request's ExclusiveStartKey. Why? Due to filtering, it is possible that you will get no results from the "chunk" (or "page") examined by DDB. DDB only examines a single "chunk" per query invocation. Issuing a follow-up query with ExclusiveStartKey tells DDB to inspect the next "chunk".
(see https://dzone.com/articles/query-dynamodb-items-withnodejs for further details on that)
Alternatively, if you do not use filtering, you are advised to pass a Limit value in the request to tell DDB to stop after the desired number of items. However, if you do use filtering, do not pass a Limit value, as it will reduce the size of each "chunk" and you will need many more follow-up queries until you get your data.
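For illustration, a minimal sketch of that follow-up-query loop (queryAll is a hypothetical helper; it assumes the same docClient and the params object from the rewrite above):
function queryAll(params, callback) {
    var items = [];
    (function page(startKey) {
        if (startKey) params.ExclusiveStartKey = startKey;
        docClient.query(params, function (err, data) {
            if (err) return callback(err, null);
            items = items.concat(data.Items);
            if (data.LastEvaluatedKey) {
                // more "chunks" to inspect - continue from where DDB stopped
                page(data.LastEvaluatedKey);
            } else {
                callback(null, items);
            }
        });
    })();
}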
You cannot perform a calculation in the filter expression but you can calculate it outside and use the result with a new inequality.
I think you are looking for items expiring after one day from now.
Something like
FilterExpression: 'expiration_time > :max_time',
ExpressionAttributeValues: {
    ':user_status': 'active',
    ':max_time': new Date().getTime() + 86400000 // 1 day in ms, i.e. one day from now
}

knexjs: why doesn't MySQL return the correct records on a select immediately after an insert?

I insert a record with knexjs into MySQL, but a select run immediately afterwards returns old records. Why? When I add a timeout to delay between the insert and the select functions, the correct records are returned!
function(cb) { // callUser plan
    sd = moment(new Date()).format("YYYY-MM-DD HH:mm:ss");
    ed = moment(new Date()).add(31, 'days').format("YYYY-MM-DD HH:mm:ss");
    space = planDetail.space * 1024 * 1024 * 1024;
    knex('user_plan')
        .insert({
            'username': userDetail.username,
            'plan_id': planDetail.id,
            'created': knex.raw('NOW()'),
            'start_date': sd,
            'end_date': ed,
            'transaction_id': tran.id,
            'space': space,
            'slot': planDetail.slot
        })
        .asCallback(cb);
},
function(res, cb) {
    logger.log(res);
    return cb(null);
},
function(cb) {
    knex.select('*').from('user_plan').where({
        username: self.username,
    })
    .andWhere('start_date', '<', knex.raw('NOW()'))
    .andWhere('end_date', '>', knex.raw('NOW()'))
    .orderBy('id', 'desc')
    .asCallback(function(err, rows) {
        if (err) return cb(err);
        logger.log(rows);
    });
},
Do you have a remote connection to your DB?
In that case, maybe your client clock and server clock are slightly out of sync. From the code I can see that you create the inserted sd and ed on the client side with moment(), and then you compare the selected rows against your DB server's time NOW().
You can also debug this by fetching all the rows, or just the last inserted row (without the date comparison), and confirming that the just-inserted row really is found in the DB directly after the insert.
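If clock skew is indeed the cause, one way around it is to let MySQL compute both dates so that the later start_date < NOW() < end_date comparison uses the same clock. A minimal sketch based on the insert above (DATE_ADD is standard MySQL; adjust the interval to your plan length):
knex('user_plan')
    .insert({
        'username': userDetail.username,
        'plan_id': planDetail.id,
        'created': knex.raw('NOW()'),
        // both boundaries are driven by the DB clock, not the app clock
        'start_date': knex.raw('NOW()'),
        'end_date': knex.raw('DATE_ADD(NOW(), INTERVAL 31 DAY)'),
        'transaction_id': tran.id,
        'space': space,
        'slot': planDetail.slot
    })
    .asCallback(cb);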

Using nodejs 'cassandra-driver' prepared query with IN query

This is a potential newbie question but I couldn't find an answer out in the wild.
I'm currently building a simple user event log of this form (some attributes elided to keep this simple):
CREATE TABLE events.by_hour (
    level tinyint,            /* 60 = FATAL, 10 = TRACE */
    hour int,                 /* in YYYYMMDDHH format */
    insertion_time timeuuid,
    userid TEXT,
    message TEXT,
    PRIMARY KEY ((level, hour), userid, insertion_time))
WITH CLUSTERING ORDER BY (userid ASC, insertion_time ASC);
I would like to make a prepared query to fetch all events, optionally filtered by a given user, over some set of levels, ideally over the course of several hours.
I can easily build a query that queries a single level and hour:
var cassandra = require('cassandra-driver');
var client = new cassandra.Client({contactPoints: ['127.0.0.1']});

var query = 'SELECT * FROM events.by_hour ' +
            'WHERE level IN (?) and hour IN (?) and userid = ?;';
var options = [ 10, 2016032117, 'user_uuid' ];

client.execute(query, options, {
    prepare: true
}, function (err, result) {
    // Do stuff...
    next(err);
});
This works fine for a single hour and level. I would love to specify multiple hours and levels in the above query, but playing with the code I'm unable to get that to work, either by specifying the set as a string or as an array of values.
I think I'm forced to do something like this:
Build the query based on how many levels and hours are needed:
// for 1 level and 2 hours:
query = 'SELECT * FROM events.by_hour ' +
        'WHERE level IN (?) and hour IN (?,?) and userid = ?;';
options = [ 10, 2016032117, 2016032118, 'user_uuid' ];
client.execute(query, options, {
    prepare: true
}, function (err, result) {
    // Do stuff...
    next(err);
});
// for 2 levels and 2 hours:
query = 'SELECT * FROM events.by_hour ' +
        'WHERE level IN (?,?) and hour IN (?,?) and userid = ?;';
options = [ 10, 20, 2016032117, 2016032118, 'user_uuid' ];
client.execute(query, options, {
    prepare: true
}, function (err, result) {
    // Do stuff...
    next(err);
});
I don't love this, but I can still get the benefits of prepared queries even here, since we can just pass in prepare: true. It feels like there should be a better way... but I'm not sure what I'm trying to do is even possible with a single prepared query.
Can anyone lend some wisdom here?
Thanks!
You should use the IN operator followed by the query marker (no parentheses) and provide the parameter for the query marker as an array:
const query = 'SELECT * FROM events.by_hour WHERE level IN ? and ' +
              'hour IN ? and userid = ?';
const params = [ [ 10, 20, 30 ], [ 2016032117, 2016032118 ], 'user_uuid' ];
client.execute(query, params, { prepare: true }, callback);
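For completeness, a small sketch of consuming the result in that callback; result.rows is the driver's array of Row objects, and the column names here follow the schema from the question:
client.execute(query, params, { prepare: true }, function (err, result) {
    if (err) return callback(err);
    // each Row exposes the selected columns by name
    result.rows.forEach(function (row) {
        console.log(row.userid, row.hour, row.message, row.insertion_time.toString());
    });
    callback(null, result.rows);
});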

Mongoose sort by parseFloat(String)?

I want to sort my query result by a float value, but the value stored in MongoDB is of type String. Can I parse the String to a Float and sort on it dynamically, something like a computed sort?
The following are parts of my schema and sort code:
Schema:
var ScenicSpotSchema = new Schema({
    ...
    detail_info: {
        ...
        overall_rating: String,
        ...
    },
});
Sort function:
ScenicSpot.find({'name': new RegExp(req.query.keyword)})
    .sort('-detail_info.overall_rating')
    .skip(pageSize * pageNumber)
    .limit(pageSize)
    .exec(function (err, scenicSpots) {
        if (err) {
            callback(err);
        } else {
            callback(null, scenicSpots);
        }
    });
Any kind of help and advice is appreciated. :)
Mongoose's .sort does not support converting the data type; see: http://mongoosejs.com/docs/api.html#query_Query-sort
It only accepts column names and a sort order.
There are two paths to achieve your goal:
use mapReduce in Mongo to first convert the type and then sort
retrieve all the data from the database and sort it in your Node.js program
But both are terrible and ugly.
If you want to sort a String column but parse it as a Float, the operation has to scan all the data in the collection and cannot use an index, so it is a very slow operation.
So I think the fastest and most correct approach is to convert the String column to a Float in your MongoDB database. Then you can use a normal .sort('-detail_info.overall_rating') to get things done.
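For illustration, a one-off migration along those lines could look like the sketch below. It is an assumption-heavy sketch: it requires MongoDB 4.2+ (pipeline updates with $toDouble), goes through the native collection to bypass schema casting, and you would also change the schema field to overall_rating: Number afterwards.
// Convert the stored rating strings to real numbers, in place (one-off migration).
ScenicSpot.collection.updateMany(
    { 'detail_info.overall_rating': { $type: 'string' } },
    [ { $set: { 'detail_info.overall_rating': { $toDouble: '$detail_info.overall_rating' } } } ],
    function (err, res) {
        if (err) return console.error(err);
        console.log('converted documents:', res.modifiedCount);
    }
);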
Use a collation after the sort in your query; it will then sort the string float numbers correctly in both ascending and descending order.
ScenicSpot.find({'name': new RegExp(req.query.keyword)})
    .sort('-detail_info.overall_rating')
    .collation({ locale: "en-US", numericOrdering: true })
    .skip(pageSize * pageNumber)
    .limit(pageSize)
    .exec(function (err, scenicSpots) {
        if (err) {
            callback(err);
        } else {
            callback(null, scenicSpots);
        }
    });

Store a timestamp for each user in a Redis sorted set

I have a little code:
Add ID:
redis.zadd('onlineusers', time, id, function (err, response) {
    //TODO
});
Is this a correct way to save the current timestamp for a user along with their ID?
Delete ID by KEY:
db.zrem('onlineusers', data.id);
Also, how do I get multiple values (scores) from the sorted set by member keys 1, 2, 3?
Is this a correct way to save the current timestamp for a user along with their ID?
Yes.
You can get the scores of multiple members using MULTI:
function getScores(setKey, values, callback) {
    var multi = db.multi();
    for (var i = 0; i < values.length; ++i) {
        multi.zscore(setKey, values[i]);
    }
    multi.exec(callback);
}
Usage
getScores('onlineusers', [1, 2, 3], function (err, scores) {
    console.log(err, scores);
});
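Note that node_redis returns ZSCORE replies as strings (or null for members that are not in the set), so you may want to convert them once the MULTI has executed; a small sketch:
getScores('onlineusers', [1, 2, 3], function (err, scores) {
    if (err) return console.error(err);
    // ZSCORE replies are strings (or null); convert to numeric timestamps
    var timestamps = scores.map(function (s) { return s === null ? null : Number(s); });
    console.log(timestamps);
});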
