I have records with a time value, and I need to be able to query them for a span of time and return only the records at a given interval.
For example, I may need all the records from 12:00 to 1:00 in 10-minute intervals, giving me 12:00, 12:10, 12:20, 12:30, ... 12:50, 01:00. The interval needs to be a parameter, and it may be any time value: 15 minutes, 47 seconds, 1.4 hours.
I attempted to do this with some kind of reduce, but that is apparently the wrong place for it.
Here is what I have come up with. Comments are welcome.
Created a view for the time field so I can query a range of times. The view outputs the id and the time.
function(doc) {
  emit([doc.rec_id, doc.time], [doc._id, doc.time]);
}
Then I created a list function that accepts a parameter called interval. In the list function I work through the rows and compare the current row's time to the last accepted time. If the span is greater than the interval, I add the row to the output and JSON-ify it.
function(head, req) {
  // default to 30000 ms, i.e. 30 seconds
  var interval = 30000;
  // get the interval from the request (query-string values arrive as strings)
  if (req.query.interval) {
    interval = parseInt(req.query.interval, 10);
  }
  // setup
  var row;
  var rows = [];
  var lastTime = 0;
  // go through the results...
  while (row = getRow()) {
    // if the time from the view is more than the interval
    // past our last accepted time, then add the row
    if (row.value[1] - lastTime > interval) {
      lastTime = row.value[1];
      rows.push(row);
    }
  }
  // JSON-ify!
  send(JSON.stringify({'rows' : rows}));
}
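For reference, such a list function is invoked over HTTP with the interval in the query string, along these lines (the design doc, list, and view names here are hypothetical):
http://localhost:5984/db/_design/app/_list/interval/by_time?interval=600000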
So far this is working well. I will test it against some large data sets to see how the performance is. Any comments on how this could be done better, or would this be the correct way with CouchDB?
CouchDB is relaxed. If this is working for you, then I'd say stick with it and focus on your next top priority.
One quick optimization is to try not to build up a final answer in the _list function, but rather send() little pieces of the answer as you know them. That way, your function can run on an unlimited result size.
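For illustration, a minimal sketch of that streaming shape, using only the standard _list API (getRow()/send()):

function(head, req) {
  var row, first = true;
  send('{"rows":[');
  while (row = getRow()) {
    // stream each row out as soon as we see it, with a comma between rows
    send((first ? '' : ',') + JSON.stringify(row));
    first = false;
  }
  send(']}');
}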
However, as you suspected, you are using a _list function basically to do an ad-hoc query, which could become problematic as your database grows.
I'm not 100% sure what you need, but if you are looking for documents within a time frame, there's a good chance that emit() keys should primarily sort by time. (In your example, the primary (leftmost) sort value is doc.rec_id.)
For a map function:
function(doc) {
  var key = doc.time; // Just sort everything by timestamp.
  emit(key, [doc._id, doc.time]);
}
That will build a map of all documents, ordered by the time field. (I will assume the time value is like JSON.stringify(new Date), i.e. "2011-05-20T00:34:20.847Z".)
To find all documents within a 1-hour interval, just query the map view with ?startkey="2011-05-20T00:00:00.000Z"&endkey="2011-05-20T01:00:00.000Z".
If I understand your "interval" criteria correctly: with 10-minute intervals, given timestamps at 00:00, 00:15, 00:30, 00:45, 00:50, only 00:00, 00:30, 00:50 should be in the final result. You are therefore filtering the normal couch output to cut out unwanted results, and that is a perfect job for a _list function. Simply use req.query.interval and only send() the rows that match the interval.
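A sketch of that filter as a streaming _list function, building on the pattern above (this assumes the view emits ISO-8601 time strings, as in the map function above, and that the runtime's Date can parse them; interval is taken in milliseconds):

function(head, req) {
  var interval = parseInt(req.query.interval, 10) || 30000;
  var row, lastTime = -Infinity, first = true;
  send('{"rows":[');
  while (row = getRow()) {
    // view values are [doc._id, doc.time]; parse the ISO string back to ms
    var t = new Date(row.value[1]).getTime();
    if (t - lastTime > interval) {
      lastTime = t;
      send((first ? '' : ',') + JSON.stringify(row));
      first = false;
    }
  }
  send(']}');
}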
I am using mongoose to query a really big list from MongoDB:
const chat_list = await chat_model.find({}).sort({uuid: 1}); // uuid is an index
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}); // create_time is an index of the message collection, time: t1
// chat_list length is around 2,000, msg_list length is around 90,000
compute(chat_list, msg_list); // time: t2
function compute(chat_list, msg_list) {
  for (let i = 0, len = chat_list.length; i < len; i++) {
    msg_list.filter(msg => msg.uuid === chat_list[i].uuid)
    // consistent handling for every message
  }
}
For the above code, t1 is about 46s and t2 is about 150s.
t2 is really too big, which is weird.
Then I cached these lists to local JSON files:
const chat_list = require('./chat-list.json');
const msg_list = require('./msg-list.json');
compute(chat_list, msg_list); // time: t2
This time, t2 is around 10s.
So here comes the question: 150 seconds vs 10 seconds, why? What happened?
I tried to use a worker to do the compute step after the mongo query, but the time is still much bigger than 10s.
The MongoDB query returns a FindCursor that includes array-ish methods like .filter(), but the result is not an Array.
Use .toArray() on the cursor before filtering to process the MongoDB result set like for like. That might not make the overall process any faster, as the result set still needs to be fetched from MongoDB, but compute() will take a similar time in both tests.
const chat_list = await chat_model
  .find({})
  .sort({uuid: 1})
  .toArray()
const msg_list = await message_model
  .find({}, {content: 1, xxx})
  .sort({create_time: 1})
  .toArray()
Matt typed faster than I did, so some of what was suggested aligns with part of this answer.
I think you are measuring and comparing something different than what you are expecting and implying.
Your expectation is that the compute() function takes around 10 seconds once all of the data is loaded by the application. This is (mostly) demonstrated by your second test, apart from the fact that that test includes the time it takes to load the data from the local files. But you're seeing that there is a difference of 104 seconds (150 - 46) between the completion of message_model.find() and compute() hence leading to the question.
The key thing is that successfully advancing from the find against message_model is not the same thing as retrieving all of the results. As #Matt notes, the find() will return with a cursor object once the initial batch of results are ready. That is very different than retrieving all of the results. So there is more work (apparently ~94 seconds worth) left to do from the two find() operations to further iterate the cursors and retrieve the rest of the results. This additional time is getting reported inside of t2.
As suggested by #Matt, calling .toArray() should shift that time back into t1, as you are expecting. It also sounds like it may be more correct, given the ambiguity between the cursor's and Array's .filter() functions.
There are two other things that catch my attention. The first is: why are you retrieving all of this data client-side to do the filtering there? Perhaps you would like to do this uuid matching inside of the database via $lookup?
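For illustration, a sketch of that server-side join; the collection name 'messages', the output field msgs, and the projection are illustrative assumptions, not taken from the question:

// Hypothetical $lookup pipeline: join each chat to its messages on uuid
// inside MongoDB, so only the joined result crosses the wire.
const joined = await chat_model.aggregate([
  { $lookup: {
      from: 'messages',        // assumed physical collection name behind message_model
      localField: 'uuid',
      foreignField: 'uuid',
      as: 'msgs'
  } },
  { $project: { uuid: 1, 'msgs.content': 1 } }
]);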
Secondly, this comment isn't clear to me:
// create_time is an index of the message collection, time: t1
create_time itself is just a field here, whether it exists in the documents or not, that you are requesting an ascending sort against.
You are taking data from two collections, then comparing IDs with the filter function inside a for loop. The loop executes 2,000 times, and each iteration runs filter() over all 90,000 records.
Take the worst-case scenario: even if none of the 2,000 uuids appear in msg_list, you still perform 2,000 × 90,000 comparisons without getting any data back.
It shouldn't take more than 10 to 15 seconds if you use the code below.
// This will generate an array of the uuids present in message_model
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}).distinct("uuid");
// The query below will match every chat_list document whose uuid is in the msg_list array
const chat_list = await chat_model.find({uuid: {$in: msg_list}}).sort({uuid: 1});
The above does the same as your filter-and-loop code, but it is the proper and fastest way to receive the data you require.
I have a collection which holds documents, with each document having a data observation and the time that the data was captured.
e.g.
{
  _key: ....,
  "data": 26,
  "timecaptured": 1643488638.946702
}
where timecaptured, for now, is a UTC timestamp.
What I want to do is get the duration between consecutive observations. With SQL I could do this with LAG, for example, but with ArangoDB and AQL I am struggling to see how to do it in the database. Effectively I want the difference in timestamps between two documents in time order. I have a lot of data and I don't really want to pull it all into pandas.
Any help really appreciated.
Although the solution provided by CodeManX works, I prefer a different one:
FOR d IN docs
  SORT d.timecaptured
  WINDOW { preceding: 1 } AGGREGATE s = SUM(d.timecaptured), cnt = COUNT(1)
  LET timediff = cnt == 1 ? null : d.timecaptured - (s - d.timecaptured)
  RETURN timediff
We simply calculate the sum of the previous and the current document's timecaptured, and by subtracting the current document's timecaptured from that sum we recover the previous document's timecaptured, from which the requested difference follows. For example, with timestamps 10 and 25, the window sum s is 35, the previous value is 35 - 25 = 10, and the difference is 25 - 10 = 15.
I only use the COUNT to return null for the first document (which has no predecessor). If you are fine with the first document getting its own timestamp as the "difference" (its window sum is just itself, so s - d.timecaptured is 0), you can simply remove it.
However, neither approach is very straightforward or obvious. I have put adding an APPEND aggregate function, usable in WINDOW and COLLECT operations, on my TODO list.
The WINDOW function doesn't give you direct access to the data in the sliding window but here is a rather clever workaround:
FOR doc IN collection
  SORT doc.timecaptured
  WINDOW { preceding: 1 }
    AGGREGATE d = UNIQUE(KEEP(doc, "_key", "timecaptured"))
  LET timediff = doc.timecaptured - d[0].timecaptured
  RETURN MERGE(doc, {timediff})
The UNIQUE() function is available for window aggregations and can be used to get at the desired data (previous document). Aggregating full documents might be inefficient, so a projection should do, but remember that UNIQUE() will remove duplicate values. A document _key is unique within a collection, so we can add it to the projection to make sure that UNIQUE() doesn't remove anything.
The time difference is calculated by subtracting the previous document's timecaptured value from the current document's. In the case of the first record, d[0] is actually equal to the current document and the difference ends up being 0, which I think is sensible. You could also write d[-1].timecaptured - d[0].timecaptured to achieve the same. d[1].timecaptured - d[0].timecaptured, on the other hand, will give you the negated timestamp for the first record, because d[1] is null (no previous document) and evaluates to 0.
There is one risk: UNIQUE() may alter the order of the documents. You could use a subquery to sort by timecaptured again:
LET timediff = doc.timecaptured - (
  FOR dd IN d SORT dd.timecaptured LIMIT 1 RETURN dd.timecaptured
)[0]
But it's not great for performance to use a subquery. Instead, you can use the aggregation variable d to access both documents and calculate the absolute value of the subtraction so that the order doesn't matter:
LET timediff = ABS(d[-1].timecaptured - d[0].timecaptured)
I have a large collection of documents and each is valid for a range of days. The range could be from 1 week up to 1 year. I want to be able to get all the documents that are valid on a specific day.
How would I do that?
As an example say I have the following two documents:
doc1 = {
  // 1 year ago to today
  start_at: "2012-03-22T00:00:00Z",
  end_at: "2013-03-22T00:00:00Z"
}
doc2 = {
  // 2 months ago to today
  start_at: "2013-01-22T00:00:00Z",
  end_at: "2013-03-22T00:00:00Z"
}
And a map function:
(doc) ->
  emit([doc.start_at, doc.end_at], null)
So for a date of 6 months ago I would only get doc1, a date of 1 week ago I would get both documents, and with a date of tomorrow I would receive no documents.
Note that actual resolution needs to be down to the second of the request being made and there are lots of documents, so strategies of emitting a key for every valid second would not be appropriate.
You could call emit for each day in your range, and then you can easily pick out the documents available for a specific day.
function(doc) {
  var day = new Date(doc.start_at),
      end = new Date(doc.end_at).getTime();
  do {
    emit(day);
    day = new Date(day.getFullYear(), day.getMonth(), day.getDate() + 1);
  } while (day.getTime() <= end);
}
Even though you will have lots of documents, if you leave out the value part (2nd param) of your emit, the index will be as small as it could possibly be.
If you need to get more sophisticated, you could try out couchdb-lucene. You can index date fields as date objects and execute range queries with multiple fields in 1 request.
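For instance, with both fields indexed, a single Lucene range query expressing "valid on a given day" might look like this (query expression only; the exact URL and date format depend on your couchdb-lucene setup, so treat this as a sketch):
start_at:[* TO 2012-09-22] AND end_at:[2012-09-22 TO *]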
You can translate the problem into a computational geometry problem. Treat the documents as points in the two-dimensional plane, [x, y] = [start_at, end_at]; the query for those valid at date is then for the points in the rectangle bounded by left = -infinity, right = date (start_at < date) and bottom = date, top = +infinity (end_at > date).
Unfortunately, the CouchDB team underrates the power of computational geometry and does not support multidimensional queries. There is the GeoCouch extension that allows you to do this kind of query as easily as:
http://localhost:5984/places/_design/main/_spatial/points?bbox=0,0,180,90
on a view emitting a spatial value:
emit({ type: "Point", coordinates: [doc.start_at, doc.end_at] }, doc);
The problem is the data type: you get floats in the ranges [-180.0, 180.0]/[-90.0, 90.0], but you need at least an int (UNIX time format). If GeoCouch works for you with ranges bigger than 180.0, and the precision of floating-point operations designed for geographical calculations is sufficient for dates with one-second precision, then your problem is solved :) I am sure that with a few tricks and hacks you could solve this problem efficiently in geo software. If not GeoCouch, then perhaps ElasticSearch (which also supports multidimensional queries), which is easy to use with CouchDB through its river plugin system.
I have a search log with fields for the time, the place, and the query. I want to find the most-queried word from a particular place during a particular time window. All the fields (date, time, query_string) are chararrays. I have the Pig script below, but it does not do what is required.
Data = LOAD 'data' USING CustomPigStorage();
FClients = FILTER Data BY NOT(country IS NULL);
Clients = FOREACH FClients GENERATE date, time, country, query_string AS query;
grp = GROUP Clients BY (query, country, date, time);
wth_count = FOREACH grp GENERATE FLATTEN(group), COUNT(Clients) AS count;
For example, I want the result to be "between 2pm and 3pm, hello was searched 4 times from USA".
I am basically confused by the COUNT() function; I'm relatively new to Pig. I believe my COUNT() here is counting the total number of records I have.
Your query looks correct: COUNT(Clients) returns the number of records in the bag that came from Clients and belong to the group. To see this, you can remove COUNT from the "wth_count" statement, save the results into a file, and then look at it:
wth_count = FOREACH grp GENERATE group, Clients;
STORE wth_count INTO 'path';
Your potential problem might be the fact that you are using the date and time columns in the GROUP BY, which produces too many groups. To mitigate this, you could write a static Java function that takes the date and time and returns a single value for the whole range; for example, 12-07-2012, 14.05.03 is converted into "12-07-2012 14h" and 12-07-2012, 14.05.05 into "12-07-2012 14h". This creates a key that covers the interval between 2pm and 3pm and puts all of those records from Clients into that group's bag.
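If the time strings have a fixed format like 14.05.03, a sketch of the same idea using Pig's built-in SUBSTRING instead of a custom Java UDF (column names taken from the script above):

-- truncate "14.05.03" to the hour "14" and group on that instead of the raw time
Clients = FOREACH FClients GENERATE date, SUBSTRING(time, 0, 2) AS hour, country, query_string AS query;
grp = GROUP Clients BY (query, country, date, hour);
wth_count = FOREACH grp GENERATE FLATTEN(group), COUNT(Clients) AS count;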
I run my bot in a public channel with hundreds of users. Yesterday a person came in and just abused it.
I would like to let anyone use the bot, but if someone spams commands consecutively, and they aren't a bot "owner" like me (for when I debug), I would like to add them to an ignore list which expires in an hour or so.
One way I'm thinking of would be to save all commands by all users in a dictionary such as:
({
  'meder#freenode': [{command: '.weather 20851', timestamp: 209323023}],
  'jack#efnet': [{command: '.seen john'}]
})
I would set up a cron job to flush this out every 24 hours, but I would basically determine whether a person has made X commands in a duration of, say, 15 seconds, and add them to an ignore list.
Actually, as I'm writing this I thought of a better idea: maybe instead of storing each user's commands, just store the bot's commands in a list and keep pushing until it reaches a limit of, say, 15.
var lastCommands = [], limit = 5;

function handleCommand(timeObj, action) {
  if (lastCommands.length < limit) {
    action();
  } else {
    // enumerate through lastCommands and compare the timestamps of all 5 commands
    // if the user is the same for all 5 commands, and...
    // if the timestamps are all within the vicinity of 20 seconds
    // add the user to the ignoreList
  }
}

watch_for('command', function() {
  handleCommand({timestamp: 2093293032, user: user}, function() { message.say('hello there!'); });
});
I would appreciate any advice on the matter.
Here's a simple algorithm:
Every time a user sends a command to the bot, increment a number that's tied to that user. If this is a new user, create the number for them and set it to 1.
When a user's number is incremented to a certain value (say 15), set it to 100.
Every <period> seconds, run through the list and decrement all the numbers by 1. Zero means the user's number can be freed.
Before executing a command and after incrementing the user's counter, check to see if it exceeds your magic max value (15 above). If it does, exit before executing the command.
This lets you rate limit actions and forgive excesses after a while. Divide your desired ban length by the decrement period to find the number to set when a user exceeds your threshold (100 above). You can also add to the number if a particular user keeps sending commands after they've been banned.
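A small sketch of that scheme (the names and the decay period are illustrative, not from any particular bot framework):

var counts = {};                 // user -> counter
var MAX = 15, PENALTY = 100;     // threshold and post-ban value from above

function allowCommand(user) {
  counts[user] = (counts[user] || 0) + 1;
  if (counts[user] === MAX) counts[user] = PENALTY; // trip the ban
  return counts[user] < MAX;                        // false while banned
}

// decay pass: every <period> seconds, decrement and free zeroed entries
setInterval(function() {
  for (var user in counts) {
    if (--counts[user] <= 0) delete counts[user];
  }
}, 1000);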
Well Nathon has already offered a solution, but it's possible to reduce the code that's needed.
var user = {};
user.lastCommandTime = new Date().getTime(); // time the user sent his last command
user.commandCount = 0; // command limit counter
user.maxCommandsPerSecond = 1; // commands allowed per second

function handleCommand(obj, action) {
  var user = obj.user, now = new Date().getTime();
  var timeDifference = now - user.lastCommandTime;
  // refill the allowance based on elapsed time, then charge 1 for this command
  user.commandCount = Math.max(user.commandCount - (timeDifference / 1000 * user.maxCommandsPerSecond), 0) + 1;
  user.lastCommandTime = now;
  if (user.commandCount <= user.maxCommandsPerSecond) {
    console.log('command!');
  } else {
    console.log('flooding');
  }
}

// demo: fire commands with an increasing delay to show the counter recovering
var obj = {user: user};
var e = 0;
function foo() {
  handleCommand(obj, 'foo');
  e += 250;
  setTimeout(foo, 400 + e);
}
foo();
In this implementation, there's no need for a list or a global callback every X seconds; instead we just reduce the commandCount every time there's a new message, based on the time difference to the last command. This also makes it possible to allow different command rates for specific users.
All we need are 3 new properties on the user object :)
Redis
I would use the insanely fast advanced key-value store Redis to write something like this, because:
It is insanely fast.
There is no need for a cron job, because you can set an expiry on keys.
It has atomic operations to increment a key.
You could use redis-cli for prototyping.
I myself really like node_redis as a Redis client. It is a really fast client which can easily be installed using npm.
Algorithm
I think my algorithm would look something like this:
For each user, create a unique key which counts the consecutively executed commands. Also set the expiry to the time after which you no longer flag a user as a spammer. Let's assume the spammer has the nickname x and the expiry is 15 seconds.
Inside redis-cli:
incr x
expire x 15
When you do a get x after 15 seconds, the key no longer exists.
If the value of the key is bigger than the threshold, then flag the user as a spammer:
get x
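A rough node_redis sketch of the same flow, using the callback-style API (the threshold and window values are the assumptions above; the surrounding bot wiring is hypothetical):

var redis = require('redis');
var client = redis.createClient();
var THRESHOLD = 15, WINDOW = 15; // max commands per window, window in seconds

function recordCommand(nick, callback) {
  client.incr(nick, function(err, count) {
    if (count === 1) client.expire(nick, WINDOW); // start the window on the first hit
    callback(count > THRESHOLD);                  // true => flag as spammer
  });
}

// usage sketch: recordCommand('x', function(isSpammer) { /* ignore the user if true */ });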
These answers seem to be going the wrong way about this.
IRC servers will disconnect your client, regardless of whether you're "debugging" or not, if the client or bot is flooding a channel or the server in general.
Make a blanket flood control, using the method #nmichaels has detailed, but on the bot's network connection to the server itself.