I am using Mongoose to query a really big list from MongoDB:
const chat_list = await chat_model.find({}).sort({uuid: 1}); // uuid is an index
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}); // create_time is an index of the message collection, time: t1
// chat_list length is around 2,000, msg_list length is around 90,000
compute(chat_list, msg_list); // time: t2
function compute(chat_list, msg_list) {
for (let i = 0, len = chat_list.length; i < len; i++) {
msg_list.filter(msg => msg.uuid === chat_list[i].uuid)
// consistent handling for every message
}
}
For the above code, t1 is about 46s and t2 is about 150s.
t2 is really too big, which seems weird.
Then I cached these lists to local JSON files:
const chat_list = require('./chat-list.json');
const msg_list = require('./msg-list.json');
compute(chat_list, msg_list); // time: t2
This time, t2 is around 10s.
So here comes the question: 150 seconds vs 10 seconds, why? What happened?
I tried to use a worker to do the compute step after the Mongo query, but the time was still much bigger than 10s.
The MongoDB query returns a FindCursor that includes array-like methods such as .filter(), but the result is not an Array.
Use .toArray() on the cursor before filtering so that you process the MongoDB result set like for like. That might not make the overall process any faster, as the result set still needs to be fetched from MongoDB, but compute() should take a similar amount of time.
const chat_list = await chat_model
    .find({})
    .sort({uuid: 1})
    .toArray()
const msg_list = await message_model
    .find({}, {content: 1, xxx})
    .sort({create_time: 1})
    .toArray()
Matt typed faster than I did, so some of what was suggested aligns with part of this answer.
I think you are measuring and comparing something different than what you are expecting and implying.
Your expectation is that the compute() function takes around 10 seconds once all of the data is loaded by the application. This is (mostly) demonstrated by your second test, apart from the fact that that test includes the time it takes to load the data from the local files. But you're seeing that there is a difference of 104 seconds (150 - 46) between the completion of message_model.find() and compute() hence leading to the question.
The key thing is that successfully advancing from the find against message_model is not the same thing as retrieving all of the results. As @Matt notes, find() will return with a cursor object once the initial batch of results is ready. That is very different from retrieving all of the results. So there is more work (apparently ~94 seconds' worth) left to do after the two find() operations to further iterate the cursors and retrieve the rest of the results. This additional time is getting reported inside of t2.
As suggested by @Matt, calling .toArray() should shift that time back into t1, as you are expecting. It also sounds like it may be more correct, given the ambiguity around the .filter() functions.
There are two other things that catch my attention. The first is: why are you retrieving all of this data client-side to do the filtering there? Perhaps you would like to do this uuid matching inside of the database via $lookup?
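For example, a rough sketch of what that aggregation could look like (the collection name 'messages' and the shared uuid field are assumptions based on your snippets, not something I can verify):
const chat_list = await chat_model.aggregate([
    { $sort: { uuid: 1 } },
    {
        $lookup: {
            from: 'messages',      // assumed name of the collection behind message_model
            localField: 'uuid',
            foreignField: 'uuid',
            as: 'messages'         // each chat document comes back with its messages embedded
        }
    }
]);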
Secondly, this comment isn't clear to me:
// create_time is an index of the message collection, time: t1
create_time itself is a field here, existent or not, that you are requesting an ascending sort against.
You are taking data from two collections, then in a for loop you are comparing IDs using the filter function. What happens is that your loop executes 2,000 times, and each iteration runs the filter function over all 90,000 records.
So take the worst-case scenario: even if none of the 2,000 uuids is present in msg_list, you are still doing 2,000 * 90,000 comparisons without getting any data back.
It won't take more than 10 to 15 seconds if you use the code below.
//This will generate array of uuid present in message_model
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}).distinct("uuid");
// Below query will match all uuid present in msg_list array with chat_list UUID
const chat_list = await chat_model.find({uuid:{$in:msg_list}}).sort({uuid: 1});
The above does the same thing as your filter-and-loop code, but it is a cleaner and much faster way to receive the data you need.
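If you do still need to pair every chat with its messages in application code, one alternative (a sketch only, names are illustrative) is to group the messages by uuid once with a Map, so the work drops from roughly 2,000 * 90,000 filter comparisons to a single pass over each list:
function compute(chat_list, msg_list) {
    // Group msg_list by uuid once: O(messages) instead of O(chats * messages).
    const byUuid = new Map();
    for (const msg of msg_list) {
        const bucket = byUuid.get(msg.uuid);
        if (bucket) bucket.push(msg);
        else byUuid.set(msg.uuid, [msg]);
    }

    for (const chat of chat_list) {
        const msgs = byUuid.get(chat.uuid) || [];
        // consistent handling for every message, as in the original code
    }
}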
I am new to express.js; I've only built some small client/server apps.
Now I want to create a temperature controller with a PID component. I don't understand the architecture of express.js well enough to decide where to place what.
I have a router to get the target value from my web client - that works. And
I have a router to get the current temperature value. And a third router controls the heating element.
Now I need some kind of loop in which, every few seconds, I can compare these values, calculate my output value, and send that value to the heating element.
Where to place what?
Greets, Freisei.
In JavaScript you don't use a loop for this kind of thing; you use an interval, set up with setInterval().
function doTheControlOperation() {
    const setPoint = getSetPoint();    // get the temp set point (getSetPoint is a placeholder for your own code)
    const current = getCurrentTemp();  // get the most recent temperature reading (placeholder as well)
    const diff = current - setPoint;

    if (diff < 0) turnElementOn();     // turn on the element (placeholder)
    else turnElementOff();             // turn off the element (placeholder)
}

var howOften = 10000; // ten seconds in milliseconds
setInterval(doTheControlOperation, howOften);
This code calls doTheControlOperation() every ten seconds.
I'm sure you know this kind of control system usually contains some hysteresis and protection against short-cycling. My example doesn't do any of that, obviously.
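As for where this lives in an Express app: one possible arrangement (just a sketch; the route paths and variable names are assumptions, not anything Express requires) is to keep the shared values in module scope, let your routers update them, and start the interval once when the server boots:
const express = require('express');
const app = express();
app.use(express.json());

// Shared state: the routers write these values, the control loop reads them.
let targetTemp = 20;   // set by the web client (assumed route below)
let currentTemp = 20;  // set by whatever reads the sensor (assumed route below)

app.post('/target', (req, res) => {        // router for the target value
    targetTemp = Number(req.body.value);
    res.sendStatus(204);
});

app.post('/temperature', (req, res) => {   // router for the measured value
    currentTemp = Number(req.body.value);
    res.sendStatus(204);
});

setInterval(() => {
    const diff = currentTemp - targetTemp;
    if (diff < 0) {
        // call whatever drives your heating element here
    } else {
        // turn the element off here
    }
}, 10000); // every ten seconds

app.listen(3000);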
I need to create a new field sid on each document in a collection of about 500K documents. Each sid is unique and based on that record's existing roundedDate and stream fields.
I'm doing so with the following code:
var cursor = db.getCollection('snapshots').find();
var iterated = 0;
var updated = 0;
while (cursor.hasNext()) {
    var doc = cursor.next();

    if (doc.stream && doc.roundedDate && !doc.sid) {
        db.getCollection('snapshots').update({ "_id": doc['_id'] }, {
            $set: {
                sid: doc.stream.valueOf() + '-' + doc.roundedDate,
            }
        });
        updated++;
    }

    iterated++;
}
print('total ' + cursor.count() + ' iterated through ' + iterated + ' updated ' + updated);
It works well at first, but after a few hours and about 100K records it errors out with:
Error: getMore command failed: {
"ok" : 0,
"errmsg": "Cursor not found, cursor id: ###",
"code": 43,
}: ...
EDIT - Query performance:
As @NeilLunn pointed out in his comments, you should not be filtering the documents manually, but use .find(...) for that instead:
db.snapshots.find({
    roundedDate: { $exists: true },
    stream: { $exists: true },
    sid: { $exists: false }
})
Also, using .bulkWrite(), available as of MongoDB 3.2, will be far more performant than doing individual updates.
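For example, a hedged sketch combining both ideas in the shell (the 1,000-document flush size is an arbitrary choice, not a requirement):
var ops = [];

db.snapshots.find({
    roundedDate: { $exists: true },
    stream: { $exists: true },
    sid: { $exists: false }
}).forEach(function (doc) {
    ops.push({
        updateOne: {
            filter: { _id: doc._id },
            update: { $set: { sid: doc.stream.valueOf() + '-' + doc.roundedDate } }
        }
    });

    if (ops.length === 1000) { // flush every 1,000 documents (arbitrary)
        db.snapshots.bulkWrite(ops);
        ops = [];
    }
});

if (ops.length > 0) {
    db.snapshots.bulkWrite(ops);
}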
It is possible that, with that, you are able to execute your query within the 10-minute lifetime of the cursor. If it still takes longer than that, your cursor will expire and you will have the same problem anyway, which is explained below:
What is going on here:
Error: getMore command failed may be due to a cursor timeout, which is related with two cursor attributes:
Timeout limit, which is 10 minutes by default. From the docs:
By default, the server will automatically close the cursor after 10 minutes of inactivity, or if client has exhausted the cursor.
Batch size, which is 101 documents or 16 MB for the first batch, and 16 MB, regardless of the number of documents, for subsequent batches (as of MongoDB 3.4). From the docs:
find() and aggregate() operations have an initial batch size of 101 documents by default. Subsequent getMore operations issued against the resulting cursor have no default batch size, so they are limited only by the 16 megabyte message size.
Probably you are consuming those initial 101 documents and then getting a 16 MB batch, which is the maximum, with a lot more documents. As it is taking more than 10 minutes to process them, the cursor on the server times out and, by the time you are done processing the documents in the second batch and request a new one, the cursor is already closed:
As you iterate through the cursor and reach the end of the returned batch, if there are more results, cursor.next() will perform a getMore operation to retrieve the next batch.
Possible solutions:
I see 5 possible ways to solve this: 3 good ones, with their pros and cons, and 2 bad ones:
👍 Reducing the batch size to keep the cursor alive.
👍 Remove the timeout from the cursor.
👍 Retry when the cursor expires.
👎 Query the results in batches manually.
👎 Get all the documents before the cursor expires.
Note they are not numbered following any specific criteria. Read through them and decide which one works best for your particular case.
1. 👍 Reducing the batch size to keep the cursor alive
One way to solve this is to use cursor.batchSize to set the batch size on the cursor returned by your find query, matching it to what you can process within those 10 minutes:
const cursor = db.collection.find()
    .batchSize(NUMBER_OF_DOCUMENTS_IN_BATCH);
However, keep in mind that setting a very conservative (small) batch size will probably work, but will also be slower, as now you need to access the server more times.
On the other hand, setting it to a value too close to the number of documents you can process in 10 minutes means that it is possible that if some iterations take a bit longer to process for any reason (other processes may be consuming more resources), the cursor will expire anyway and you will get the same error again.
2. 👍 Remove the timeout from the cursor
Another option is to use cursor.noCursorTimeout to prevent the cursor from timing out:
const cursor = db.collection.find().noCursorTimeout();
This is considered a bad practice as you would need to close the cursor manually or exhaust all its results so that it is automatically closed:
After setting the noCursorTimeout option, you must either close the cursor manually with cursor.close() or by exhausting the cursor's results.
As you want to process all the documents in the cursor, you wouldn't need to close it manually, but it is still possible that something else goes wrong in your code and an error is thrown before you are done, thus leaving the cursor opened.
If you still want to use this approach, use a try-catch to make sure you close the cursor if anything goes wrong before you consume all its documents.
Note I don't consider this a bad solution (therefore the 👍), as even though it is considered a bad practice...:
It is a feature supported by the driver. If it were that bad, given that there are alternative ways to get around timeout issues, as explained in the other solutions, it wouldn't be supported.
There are ways to use it safely, it's just a matter of being extra cautious with it.
I assume you are not running this kind of query regularly, so the chance that you start leaving open cursors everywhere is low. If this is not the case, and you really need to deal with these situations all the time, then it does make sense not to use noCursorTimeout.
3. 👍 Retry when the cursor expires
Basically, you put your code in a try-catch and when you get the error, you get a new cursor skipping the documents that you have already processed:
let processed = 0;
let updated = 0;

while (true) {
    const cursor = db.snapshots.find().sort({ _id: 1 }).skip(processed);

    try {
        while (cursor.hasNext()) {
            const doc = cursor.next();

            ++processed;

            if (doc.stream && doc.roundedDate && !doc.sid) {
                db.snapshots.update({
                    _id: doc._id
                }, { $set: {
                    sid: `${ doc.stream.valueOf() }-${ doc.roundedDate }`
                }});

                ++updated;
            }
        }

        break; // Done processing all, exit outer loop
    } catch (err) {
        if (err.code !== 43) {
            // Something else than a timeout went wrong. Abort loop.
            throw err;
        }
    }
}
Note you need to sort the results for this solution to work.
With this approach, you are minimizing the number of requests to the server by using the maximum possible batch size of 16 MB, without having to guess how many documents you will be able to process in 10 minutes beforehand. Therefore, it is also more robust than the previous approach.
4. 👎 Query the results in batches manually
Basically, you use skip(), limit() and sort() to do multiple queries with a number of documents you think you can process in 10 minutes.
I consider this a bad solution because the driver already has the option to set the batch size, so there's no reason to do this manually, just use solution 1 and don't reinvent the wheel.
Also, it is worth mentioning that it has the same drawbacks as solution 1.
5. 👎 Get all the documents before the cursor expires
Probably your code is taking some time to execute due to results processing, so you could retrieve all the documents first and then process them:
const results = new Array(...db.snapshots.find());
This will retrieve all the batches one after another and close the cursor. Then, you can loop through all the documents inside results and do what you need to do.
However, if you are having timeout issues, chances are that your result set is quite large, thus pulling everything in memory may not be the most advisable thing to do.
Note about snapshot mode and duplicate documents
It is possible that some documents are returned multiple times if intervening write operations move them due to a growth in document size. To solve this, use cursor.snapshot(). From the docs:
Append the snapshot() method to a cursor to toggle the "snapshot" mode. This ensures that the query will not return a document multiple times, even if intervening write operations result in a move of the document due to the growth in document size.
However, keep in mind its limitations:
It doesn't work with sharded collections.
It doesn't work with sort() or hint(), so it will not work with solutions 3 and 4.
It doesn't guarantee isolation from insertion or deletions.
Note with solution 5 the time window to have a move of documents that may cause duplicate documents retrieval is narrower than with the other solutions, so you may not need snapshot().
In your particular case, as the collection is called snapshot, probably it is not likely to change, so you probably don't need snapshot(). Moreover, you are doing updates on documents based on their data and, once the update is done, that same document will not be updated again even though it is retrieved multiple times, as the if condition will skip it.
Note about open cursors
To see a count of open cursors use db.serverStatus().metrics.cursor.
It's a bug in mongodb server session management. Fix currently in progress, should be fixed in 4.0+
SERVER-34810: Session cache refresh can erroneously kill cursors that are still in use
(reproduced in MongoDB 3.6.5)
Adding collection.find().batchSize(20) helped me, with only a tiny reduction in performance.
I also ran into this problem, but for me it was caused by a bug in the MongoDB driver.
It happened in the version 3.0.x of the npm package mongodb which is e.g. used in Meteor 1.7.0.x, where I also recorded this issue. It's further described in this comment and the thread contains a sample project which confirms the bug: https://github.com/meteor/meteor/issues/9944#issuecomment-420542042
Updating the npm package to 3.1.x fixed it for me, because I had already taken into account the good advice given here by @Danziger.
When using the Java v3 driver, noCursorTimeout should be set in the FindOptions.
DBCollectionFindOptions options =
    new DBCollectionFindOptions()
        .maxTime(90, TimeUnit.MINUTES)
        .noCursorTimeout(true)
        .batchSize(batchSize)
        .projection(projectionQuery);

cursor = collection.find(filterQuery, options);
In my case, it was a load-balancing issue; I had the same issue running a Node.js service with mongos as a pod on Kubernetes.
The client was using the mongos service with default load balancing.
Changing the Kubernetes service to use sessionAffinity: ClientIP (stickiness) resolved the issue for me.
noCursorTimeout will NOT work
It is now 2021. For
cursor id xxx not found, full error: {'ok': 0.0, 'errmsg': 'cursor id xxx not found', 'code': 43, 'codeName': 'CursorNotFound'}
The official docs say:
Consider an application that issues a db.collection.find() with cursor.noCursorTimeout(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the server closes the session, it also kills the cursor despite the cursor being configured with noCursorTimeout(). When the application requests the next batch of documents, the server returns an error.
That means: even if you have set:
noCursorTimeout=True
a smaller batchSize
you will still get cursor id not found after the default 30 minutes.
How to fix/avoid cursor id not found?
Make sure of two points:
(explicitly) create a new session, and get the db and collection from that session
refresh the session periodically
Code:
(official) JavaScript (mongo shell):
var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start
while (cursor.hasNext()) {
    // Check if more than 5 minutes have passed since the last refresh
    if ( (new Date()-refreshTimestamp)/1000 > 300 ) {
        print("refreshing session")
        db.adminCommand({"refreshSessions" : [sessionId]})
        refreshTimestamp = new Date()
    }

    // process cursor normally
}
(mine) Python:
import logging
from datetime import datetime
import pymongo
mongoClient = pymongo.MongoClient('mongodb://127.0.0.1:27017/your_db_name')
# refresh the session periodically (here: every 8 minutes)
# Note: should be less than 30 minutes = the default Mongo session timeout
# https://docs.mongodb.com/v5.0/reference/method/cursor.noCursorTimeout/
# RefreshSessionPerSeconds = 10 * 60
RefreshSessionPerSeconds = 8 * 60
def mergeHistorResultToNewCollection():
    mongoSession = mongoClient.start_session()  # <pymongo.client_session.ClientSession object at 0x1081c5c70>
    mongoSessionId = mongoSession.session_id  # {'id': Binary(b'\xbf\xd8\xd...1\xbb', 4)}
    mongoDb = mongoSession.client["your_db_name"]  # Database(MongoClient(host=['127.0.0.1:27017'], document_class=dict, tz_aware=False, connect=True), 'your_db_name')
    mongoCollectionOld = mongoDb["collecion_old"]
    mongoCollectionNew = mongoDb['collecion_new']

    # historyAllResultCursor = mongoCollectionOld.find(session=mongoSession)
    historyAllResultCursor = mongoCollectionOld.find(no_cursor_timeout=True, session=mongoSession)

    lastUpdateTime = datetime.now()  # datetime.datetime(2021, 8, 30, 10, 57, 14, 579328)
    for curIdx, oldHistoryResult in enumerate(historyAllResultCursor):
        curTime = datetime.now()  # datetime.datetime(2021, 8, 30, 10, 57, 25, 110374)
        elapsedTime = curTime - lastUpdateTime  # datetime.timedelta(seconds=10, microseconds=531046)
        elapsedTimeSeconds = elapsedTime.total_seconds()  # 2.65892
        isShouldUpdateSession = elapsedTimeSeconds > RefreshSessionPerSeconds
        # if (curIdx % RefreshSessionPerNum) == 0:
        if isShouldUpdateSession:
            lastUpdateTime = curTime
            cmdResp = mongoDb.command("refreshSessions", [mongoSessionId], session=mongoSession)
            logging.info("Called refreshSessions command, resp=%s", cmdResp)

        # do what you want
        existedNewResult = mongoCollectionNew.find_one({"shortLink": "http://xxx"}, session=mongoSession)

    # mongoSession.close()
    mongoSession.end_session()
Reference docs:
MongoDB: ClientSession, refreshSessions
pymongo: find, command
On my server I call two emits at the same time, which looks like this.
if (songs.length > 0) {
    socket.emit('data loaded', songs);
    socket.broadcast.to(opponent).emit('data loaded', songs);
}
One is for the opponent and the other for the player himself.
Once the data is loaded a countdown should appear for both players on my android app. For me it is important that they see the same number at the same time on their screen. To be precise it should run synchronized. How can I do this?
As far as JS timers are concerned, there will be a small amount of difference. We can reduce that difference by compensating for latency, estimated from the difference between the request and response times to the server.
var systemtime; // estimated server time

function syncTime() {
    console.log("syncing time");
    var res = new XMLHttpRequest(); // 'res' was not declared in the original snippet
    var currentTime = (new Date).getTime();

    res.open('HEAD', document.location, false);
    res.onreadystatechange = function() {
        var latency = (new Date).getTime() - currentTime;
        var timestring = res.getResponseHeader("DATE");
        systemtime = new Date(timestring);
        systemtime.setMilliseconds(systemtime.getMilliseconds() + (latency / 2));
    };
    res.send(null);
}
The elapsed time between sending the request and getting back the response needs to be calculated; divide that value by 2. That gives you a rough value for the latency. If you add that to the time value from the server, you'll be closer to the true server time (the difference will be in microseconds).
Reference: http://ejohn.org/blog/accuracy-of-javascript-time/
Hope this helps.
I have made an application where I had the same problem. In that case I solved it by leaving the time control to the server: the server sends the time to the client and the client increments it locally. In your case you could have problems with the connection; if so, you can let the clients increment the time themselves and occasionally send a tick with the correct time to re-sync.
I can give you something like the below, but I have not tested it.
This solution has these steps:
Synchronize the timers for the client and server, so every user has a known difference from the server timer.
For the desired request/response, get each client's time and find its difference from the server time.
Take the smallest difference as the first countdown to be started.
For each response (socket), subtract the smallest difference from that client's difference and have the client start its counter after waiting that long.
The client that gets 0 in the response data will start immediately.
The main problem you may have is the broadcast method, which you can't use if you go with this solution.
This is a post that may help you.
Add the time into the emit message.
Let's say that songs is an object with {"time" : timeString, "songs" : songsList}.
If we assume the devices' time is correct, you can calculate the time needed for the information to travel and then just use the server timer as the main reference.
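For example, a rough sketch of the server side under that assumption (the 3-second lead time is arbitrary):
if (songs.length > 0) {
    // Attach a shared start time so both clients count down to the same server-defined instant.
    const payload = {
        time: Date.now() + 3000, // start the countdown 3 seconds from now (arbitrary)
        songs: songs
    };
    socket.emit('data loaded', payload);
    socket.broadcast.to(opponent).emit('data loaded', payload);
}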
The client would get the time when countdown should start:
var start = false;
var startTime = 0;
var intervalID;
var myTime = new Date().getMilliseconds();
var delay = 1000 - myTime;

setTimeout(function(){
    intervalID = setInterval(function(){
        myTime = new Date().getTime();
        //console.log(myTime); to check if there is a round number of milliseconds
        if (startTime <= myTime && start === true) { startCountdown(); }
    }, 100); //put 1000 to check every second if the second is round
    //or put 100 or 200 if the second is not round
}, delay);

socket.on('data loaded', function(data){
    startTime = data.time;
    start = true;
});
function startCountdown(){
//your time countdown
}
And that works fine when the two clients are in the same time region; otherwise you will need a "time converter" to check whether the time is right despite the time difference, if you strictly need the same numbers.
After the countdown has ended you should call clearInterval(intervalID);
I have records with a time value and need to be able to query them for a span of time and return only records at a given interval.
For example I may need all the records from 12:00 to 1:00 in 10 minute intervals giving me 12:00, 12:10, 12:20, 12:30, ... 12:50, 01:00. The interval needs to be a parameter and it may be any time value. 15 minutes, 47 seconds, 1.4 hours.
I attempted to do this with some kind of reduce, but that is apparently the wrong place to do it.
Here is what I have come up with. Comments are welcome.
I created a view for the time field so I can query a range of times. The view outputs the id and the time.
function(doc) {
    emit([doc.rec_id, doc.time], [doc._id, doc.time])
}
Then I created a list function that accepts a param called interval. In the list function I work through the rows and compare the current row's time to the last accepted time. If the span is greater than or equal to the interval, I add the row to the output and JSON-ify it.
function(head, req) {
    // default to 30000ms or 30 seconds.
    var interval = 30000;

    // get the interval from the request.
    if (req.query.interval) {
        interval = req.query.interval;
    }

    // setup
    var row;
    var rows = [];
    var lastTime = 0;

    // go thru the results...
    while (row = getRow()) {
        // if the time from view is more than the interval
        // from our last time then add it.
        if (row.value[1] - lastTime > interval) {
            lastTime = row.value[1];
            rows.push(row);
        }
    }

    // JSON-ify!
    send(JSON.stringify({'rows' : rows}));
}
So far this is working well. I will test against some large data to see how the performance is. Any comments on how this could be done better or would this be the correct way with couch?
CouchDB is relaxed. If this is working for you, then I'd say stick with it and focus on your next top priority.
One quick optimization is to try not to build up a final answer in the _list function, but rather send() little pieces of the answer as you know them. That way, your function can run on an unlimited result size.
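For example, a sketch of your list function restructured to stream each matching row as it is found instead of buffering them all (same logic as your code above, just emitted incrementally):
function(head, req) {
    // default to 30000ms or 30 seconds.
    var interval = req.query.interval || 30000;
    var lastTime = 0;
    var row;
    var first = true;

    send('{"rows":[');
    while (row = getRow()) {
        if (row.value[1] - lastTime > interval) {
            lastTime = row.value[1];
            // send each matching row immediately instead of pushing it to an array
            send((first ? '' : ',') + JSON.stringify(row));
            first = false;
        }
    }
    send(']}');
}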
However, as you suspected, you are using a _list function basically to do an ad-hoc query which could be problematic as your database size grows.
I'm not 100% sure what you need, but if you are looking for documents within a time frame, there's a good chance that emit() keys should primarily sort by time. (In your example, the primary (leftmost) sort value is doc.rec_id.)
For a map function:
function(doc) {
    var key = doc.time; // Just sort everything by timestamp.
    emit(key, [doc._id, doc.time]);
}
That will build a map of all documents, ordered by the time timestamp. (I will assume the time value is like JSON.stringify(new Date), i.e. "2011-05-20T00:34:20.847Z".)
To find all documents within a 1-hour interval, just query the map view with ?startkey="2011-05-20T00:00:00.000Z"&endkey="2011-05-20T01:00:00.000Z".
If I understand your "interval" criteria correctly, then if you need 10-minute intervals, then if you had 00:00, 00:15, 00:30, 00:45, 00:50, then only 00:00, 00:30, 00:50 should be in the final result. Therefore, you are filtering the normal couch output to cut out unwanted results. That is a perfect job for a _list function. Simply use req.query.interval and only send() the rows that match the interval.