I am using node-mongodb-native 2.0 http://mongodb.github.io/node-mongodb-native/2.0/
With the following Node.js code:
var MongoClient = require('mongodb').MongoClient;
var mongoUrl = 'mongodb://localhost/twitter';

MongoClient.connect(mongoUrl, function(err, db) {
  if (err) return console.error(err);
  var collection = db.collection('tweets');
  collection.find().limit(1000).forEach(function(tweet) {
    console.log(tweet.id);
  }, function(err) {
    if (err) console.error(err);
    db.close();
  });
});
When I set the limit to 1000 (collection.find().limit(1000)), I was able to retrieve the first several hundred records, but then I got the error { [MongoError: cursor is dead] name: 'MongoError', message: 'cursor is dead' } (I have 1 million records in my collection). The program runs fine when I specify 800 as the limit. It is also fine to specify no limit at all (just collection.find()): the script keeps going without any error, reading far more than 1000 records.
What's wrong, and how can I fix it while still using forEach on a cursor?
I have reproduced this issue with a sample data set of smaller documents. The telling part is likely this log line from MongoDB:
2014-10-21T18:50:32.548+0100 [conn50] query twitter.tweets planSummary: COLLSCAN cursorid:30362014860 ntoreturn:200000 ntoskip:0 nscanned:199728 nscannedObjects:199728 keyUpdates:0 numYields:0 locks(micros) r:120400 nreturned:199728 reslen:4194308 120ms
The key piece, I suspect, is reslen:4194308, which looks suspiciously close to the default batch size of 4MiB. I've been in touch with the node.js driver developers and will let you know if this ends up being a bug (update: opened NODE-300, and the fix is confirmed in mongodb-core 1.0.4 as a result).
In the meantime, I'd recommend using the workaround from the comments, namely using a projection to reduce the amount of data in the results and sidestepping the issue.
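For illustration, the workaround would look something like this with the 2.0 driver. The fields option is the driver's projection syntax; the plain-JS project helper below is just my stand-in to show what a projection does to each document (and hence why reslen shrinks):

```javascript
// With the driver, the projection is passed via the fields option:
//   collection.find({}, { fields: { id: 1 } }).limit(1000)
// A plain-JS stand-in for what that projection does to each document:
function project(doc, fields) {
  var out = {};
  Object.keys(fields).forEach(function(key) {
    if (fields[key] && doc.hasOwnProperty(key)) out[key] = doc[key];
  });
  return out;
}

var tweet = { _id: 'abc123', id: 42, text: 'a long tweet body...', user: { name: 'x' } };
// Only the id field survives (the real server would also return _id
// unless you exclude it explicitly with _id: 0).
console.log(project(tweet, { id: 1 }));
```

With only the id field per document, the total result length per batch stays far below the 4MiB boundary that appears to trigger the error.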
Updated Resolution: If you are seeing this (or a similar) issue, please update your mongodb-core version and retry - the new version no longer errors and also runs quite a bit faster. I did this by removing my node_modules and re-running npm install for my test app.
Related
I have a NodeJS application that uses Cluster, WS, and a memcached client to manage two memcached servers.
Under normal load it works like a charm.
But under high load, my application stops fetching data from the memcached servers.
That is, the logs inside the client.get callback are never written to the console when the load is high, so the client never receives its cached value (even though the value is present on the memcached server, and sometimes it even works fine under high load). For a while the application looks dead, doing nothing.
getValue = function(key, callback) {
  console.log(`Calculated server for choose: ${strategy(key, client.servers.length)}`); // works with high load
  console.log(`Try to get from cache by key: ${key}.`); // works with high load
  client.get(key, function(err, data) {
    const isError = err || !data; // doesn't work with high load
    console.log('Data from cache is: ', data); // callback will never be executed
    if (!isError) {
      console.log(`Found data in cache key-value: ${key} - ${data}`);
    } else {
      console.log(`Not found value from cache by key: ${key}`);
    }
    const parsedData = isError ? null : JSON.parse(data.toString());
    callback(isError, parsedData); // and this won't run either
  });
}
And after some time, the socket connection is simply closed (with code 1000 and no errors, as if the user just left):
INFO [ProcessID-100930] Connection close [772003], type [ws], code [1000], message []
Then, after 5-10 seconds, all processes start working again as if nothing had happened, and the memcached client callback executes correctly.
I've been trying for a long time to catch this moment and understand why it happens, but I still don't. I have already tried several memcached clients (memjs currently, also memcached and mc) but get the same behavior under high load.
When fetching data from the memcached server, the callback simply does not fire, and the data is not returned (although, judging by the memcached logs, it was there at that moment).
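To at least make the stall visible in the logs, the get call can be wrapped in a timeout. This is a sketch only: getWithTimeout is a hypothetical helper of mine, and the stub client below stands in for a real memjs/memcached client:

```javascript
// Hypothetical helper: reports a timeout error if the real callback
// never fires within `ms` milliseconds.
function getWithTimeout(client, key, ms, callback) {
  var done = false;
  var timer = setTimeout(function() {
    if (!done) { done = true; callback(new Error('memcached get timed out'), null); }
  }, ms);
  client.get(key, function(err, data) {
    if (done) return; // timeout already reported
    done = true;
    clearTimeout(timer);
    callback(err, data);
  });
}

// Stub standing in for the real client; it never calls back,
// simulating the stall described above.
var stubClient = {
  get: function(key, cb) { /* silence */ }
};

getWithTimeout(stubClient, 'some-key', 100, function(err, data) {
  console.log(err ? err.message : data);
});
```

At least this way the stalled get shows up as an explicit log line instead of silence.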
Can someone suggest what might be going on?
I have an application which checks for new entries in DB2 every 15 seconds on the iSeries using IBM's idb-connector. I have async functions which return the result of the query to socket.io, which emits an event with the data included to the front end. I've narrowed the memory leak down to the async functions. I've read multiple articles on common memory-leak causes and how to diagnose them:
MDN: memory management
Rising Stack: garbage collection explained
Marmelab: Finding And Fixing Node.js Memory Leaks: A Practical Guide
But I'm still not seeing where the problem is. Also, I'm unable to get permission to install node-gyp on the system, which puts most memory-management tools off limits, since memwatch, heapdump, and the like need node-gyp to install. Here's the basic structure of the functions:
const { dbconn, dbstmt } = require('idb-connector'); // require idb-connector

async function queryDB() {
  const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
  // create new promise
  let promise = new Promise(function(resolve, reject) {
    // create new connection
    const connection = new dbconn();
    connection.conn("*LOCAL");
    const statement = new dbstmt(connection);
    statement.exec(sSql, (rows, err) => {
      if (err) {
        throw err;
      }
      let ticks = rows;
      statement.close();
      connection.disconn();
      connection.close();
      resolve(ticks.length); // resolve promise with varying data
    });
  });
  let result = await promise; // await promise
  return result;
};
async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
  setTimeout(getNewData, 2000); // check again in 2 seconds
};
Any ideas on where the leak is? Am I using async/await incorrectly? Or am I creating/destroying DB connections improperly? Any help figuring out why this code is leaky would be much appreciated!!
Edit: Forgot to mention that I have limited control over the backend processes, as they are handled by another team. I'm only retrieving the data they populate the DB with and adding it to a web page.
Edit 2: I think I've narrowed it down to the DB connections not being cleaned up properly. But, as far as I can tell, I've followed the instructions suggested in their GitHub repo.
I don't know the answer to your specific question, but instead of issuing a query every 15 seconds, I might go about this in a different way. Reason being that I don't generally like fishing expeditions when the environment can tell me an event occurred.
So in that vein, you might want to try a database trigger that loads the key of the row into a data queue on add, or even on change or delete if necessary. Then you can just put in an async call to wait for a record on the data queue. This is more real time, and the event handler is only called when a record shows up. The handler can get the specific record from the database since you know its key. Data queues are much faster than database IO, and place little overhead on the trigger.
I see a couple of potential advantages with this method:
You aren't issuing dozens of queries that may or may not return data.
The event would fire the instant a record is added to the table, rather than 15 seconds later.
You don't have to code for the possibility of one or more new records; it will always be exactly one, the one named in the data queue entry.
Yes, you have to close the connection.
Don't assign the result to const data; you don't need the promise, since statement.exec is async by default and hands you the result via its callback.
And keep the setTimeout(getNewData, 2000); // check again in 2 seconds line outside getNewData, otherwise it becomes a recursive infinite loop.
Sample code:
const { dbconn, dbstmt } = require('idb-connector');

const sql = 'SELECT * FROM QIWS.QCUSTCDT';
const connection = new dbconn(); // Create a connection object.
connection.conn('*LOCAL'); // Connect to a database.
const statement = new dbstmt(connection); // Create a statement object from the connection.
statement.exec(sql, (result, error) => {
  if (error) {
    throw error;
  }
  console.log(`Result Set: ${JSON.stringify(result)}`);
  statement.close(); // Clean up the statement object.
  connection.disconn(); // Disconnect from the database.
  connection.close(); // Clean up the connection object.
  return result;
});
async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
  setTimeout(getNewData, 2000); // check again in 2 seconds
};

change to

async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
};
setTimeout(getNewData, 2000); // check again in 2 seconds
The first thing to notice is that the database connection may be left open in case of an error:
if (err) {
  throw err;
}
Also, on success, connection.disconn() and connection.close() return boolean values that indicate whether the operation succeeded (according to the documentation).
Another ever-present possibility is connection objects piling up inside the 3rd-party library.
I would check those.
This was confirmed to be a memory leak in the idb-connector library I was using. Link to the GitHub issue Here. Basically, a C++ array never had its memory deallocated. A new version was released, and the commit can be viewed Here.
I am trying to insert more than 100 records using the batch() method.
client.batch(batchQuery, { prepare: true }, function (err, result) {
  if (err) {
    res.status(404).json({ msg: err });
  } else {
    res.json([result.rows][0]);
  }
});
batchQuery has more than 100 insert queries. It works if there are fewer than 7 records; with more than 10, I get "Batch too large".
You shouldn't use batches for bulk inserts into Cassandra (in contrast to an RDBMS). The error you're getting means that you're inserting data into different partitions, which puts additional load on the node that receives the query. You should use batches only when inserting into the same partition; in that case they are applied as a single mutation.
Otherwise, sending individual insert queries via async execute will be much faster. You just need to avoid sending too many requests at the same time (see this answer).
You can read more about good and bad uses of batches in the documentation and the following answer on SO: 1.
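A sketch of the individual-insert approach with a simple concurrency cap. The executeInsert stub below stands in for the driver's client.execute('INSERT ...', row, { prepare: true }); the chunk size is arbitrary:

```javascript
// Issue inserts in chunks of `limit`, so at most `limit` requests
// are in flight at once.
async function insertAll(rows, limit, executeInsert) {
  for (var i = 0; i < rows.length; i += limit) {
    var chunk = rows.slice(i, i + limit);
    await Promise.all(chunk.map(executeInsert)); // finish this chunk before the next
  }
}

// Stub standing in for the real client.execute call.
var inserted = [];
function executeInsert(row) {
  return new Promise(function(resolve) {
    setImmediate(function() { inserted.push(row); resolve(); });
  });
}

insertAll([1, 2, 3, 4, 5], 2, executeInsert).then(function() {
  console.log('inserted', inserted.length, 'rows');
});
```

Chunking like this is cruder than a sliding-window limiter but is enough to keep the coordinator from being flooded.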
I would like to loop through all documents in a specific collection of my MongoDB. However, every attempt I made failed due to the cursor timing out. Here is my code:
let MongoClient = require('mongodb').MongoClient;
const url = "my connection URI"
let options = { socketTimeoutMS: 120000, connectTimeoutMS: 120000, keepAlive: 100, poolSize: 5 }

MongoClient.connect(url, options, function(err, db) {
  if (err) throw err
  let dbo = db.db("notes")
  let collection = dbo.collection("stats-network-consumption")
  let stream = collection.find({}, { timeout: false }).stream()
  stream.on("data", function(item) {
    printTask(item)
  })
  stream.on('error', function(err) {
    console.error(err)
  })
  stream.on("end", function() {
    console.log("DONE!")
    db.close()
  })
})
The code above runs for about 15 seconds, retrieves between 6000 and 8000 documents, and then throws the following error:
{ MongoError: cursor does not exist, was killed or timed out
at queryCallback (/Volumes/safezone/development/workspace-router/migration/node_modules/mongodb-core/lib/wireprotocol/2_6_support.js:136:23)
at /Volumes/safezone/development/workspace-router/migration/node_modules/mongodb-core/lib/connection/pool.js:541:18
at process._tickCallback (internal/process/next_tick.js:150:11)
name: 'MongoError',
message: 'cursor does not exist, was killed or timed out' }
I need to retrieve around 50000 documents, so I will need to find a way to avoid the cursor timeout.
As seen in the code above, I've tried increasing socketTimeoutMS and connectTimeoutMS, which had no effect on the cursor timeout.
I also tried replacing the stream with a forEach and adding .addCursorFlag('noCursorTimeout', true), which did not help either.
I've tried everything I found about mongodb. I did not try mongoose or alternatives because they use schemas, and I'll later have to update the current type of an attribute (which can be tricky with mongoose schemas).
Having a cursor with no timeout is generally not recommended.
The reason is, the cursor won't ever be closed by the server, so if your app crashes and you restart it, it will open another no-timeout cursor on the server. Recycle your app often enough, and those will add up.
No timeout cursor on a sharded cluster would also prevent chunk migration.
If you need to retrieve big results, the cursor should not time out, since the results are sent in batches and the cursor is reused to fetch the next batch.
The standard cursor timeout is 10 minutes, so it is possible to lose the cursor if you need more than 10 minutes to process a batch.
In your code example, your use of stream() might be interfering with your intent. Try using each() (example here) on the cursor instead.
If you need to monitor a collection for changes, you might want to take a look at Change Streams which is a new feature in MongoDB 3.6.
For example, your code might be modified like this:
let collection = dbo.collection("stats-network-consumption")
let stream = collection.watch()
stream.on("change", function(change) {
  console.log(change)
})
Note that to enable change stream support, the driver you're using must support MongoDB 3.6 features and the watch() method. See Driver Compatibility Page for details.
I am trying to insert 1000000 records into Cassandra with Node.js, but the loop crashes a little while later. Each time, I cannot insert more than about 10000 records. Why does the loop crash? Can anybody help me?
Thanks.
My code looks like:
var helenus = require('helenus'),
    pool = new helenus.ConnectionPool({
      hosts    : ['localhost:9160'],
      keyspace : 'twissandra',
      user     : '',
      password : '',
      timeout  : 3000
    });

pool.on('error', function(err) {
  console.error(err.name, err.message);
});

var i = 0;
pool.connect(function(err, keyspace) {
  if (err) {
    throw err;
  } else {
    while (i < 1000000) {
      i++;
      var str = "tkg" + i;
      var pass = "ktr" + i;
      pool.cql("insert into users (username,password) VALUES (?,?)", [str, pass], function(err, results) {
      });
    }
  }
});

console.log("end");
You're probably overloading the Cassandra queue by attempting to make a million requests all at once! Keep in mind that each request is asynchronous, so it is issued even if the previous one has not completed.
Try using async.eachLimit to limit it to 50-100 requests at a time. The actual maximum concurrent capacity depends on the backend.
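For illustration, here is a minimal eachLimit along the lines of what the async library provides (async.eachLimit has the same call shape). The insertUser stub stands in for the pool.cql insert from the question:

```javascript
// Minimal eachLimit: runs `iteratee` over `items`, at most `limit` at a time.
function eachLimit(items, limit, iteratee, done) {
  var index = 0, active = 0, finished = 0, errored = false;
  function launch() {
    while (active < limit && index < items.length) {
      active++;
      iteratee(items[index++], function(err) {
        active--; finished++;
        if (errored) return;
        if (err) { errored = true; return done(err); }
        if (finished === items.length) return done(null);
        launch(); // a slot freed up, start the next item
      });
    }
  }
  if (items.length === 0) return done(null);
  launch();
}

// Stub standing in for:
//   pool.cql("insert into users (username,password) VALUES (?,?)", [str, pass], cb)
function insertUser(i, callback) {
  setImmediate(function() { callback(null); });
}

eachLimit([1, 2, 3, 4, 5], 2, insertUser, function(err) {
  console.log(err ? err : 'all inserts finished');
});
```

With a cap of 50-100 in-flight inserts, the driver's internal queue stays bounded instead of accumulating a million pending requests.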
Actually, there was no problem. I checked the number of records twice at different times and saw that the write operation continued until the timeout value given in the code was reached. In summary, there is no crash in the code. Thank you Julian H. Lam for the reply.
But another question: how can I increase the write performance of Cassandra? What should I change in cassandra.yaml, or elsewhere?
Thank you.