Enforce limit on mongodb bulk API - node.js

I'd like to delete a large number of old documents from one collection and so it makes sense to use the bulk api. Deleting them is as simple as:
var bulk = db.myCollection.initializeUnorderedBulkOp();
bulk.find({
    _id: {
        $lt: oldestAllowedId
    }
}).remove();
bulk.execute();
The only problem is that this will attempt to delete every single document matching the criteria, and in this case that is millions of documents, so for performance reasons I don't want to delete them all at once. I want to enforce a limit on the operation so that I can do something like bulk.limit(10000).execute(); and space the operations out by a few seconds to prevent locking the database for longer than necessary. However, I have been unable to find any option that can be passed to bulk to limit the number of operations it executes.
Is there a way to limit bulk operations in this manner?
Before anyone mentions it, I know that bulk will split operations into 1000 document chunks automatically, but it will still execute all of those operations sequentially as fast as it can. This results in a much larger performance impact than I can deal with right now.

You can iterate over the array of _id values of the documents that match your query using the .forEach() method. The easiest way to get that array is with the .distinct() method. You then use "bulk" operations to remove your documents in batches.
var bulk = db.myCollection.initializeUnorderedBulkOp();
var count = 0;
var ids = db.myCollection.distinct('_id', { '_id': { '$lt': oldestAllowedId } });

ids.forEach(function(id) {
    bulk.find({ '_id': id }).removeOne();
    count++;

    if (count % 1000 === 0) {
        // Execute per 1000 operations and re-initialize
        bulk.execute();
        // Here you can sleep for a while
        bulk = db.myCollection.initializeUnorderedBulkOp();
    }
});

// flush the remaining queued operations
// (skip if the last batch was already executed, since executing an empty bulk throws)
if (count % 1000 !== 0) {
    bulk.execute();
}
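If you're doing this from Node.js with the native driver rather than the mongo shell, the same batching idea can be written with async/await plus an explicit pause between batches. A minimal sketch, assuming a recent (3.x or later) driver and placeholder database/collection names:

const { MongoClient } = require('mongodb');

// Promise-based pause so the server gets room to breathe between batches.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function deleteInBatches(uri, oldestAllowedId, batchSize = 10000, pauseMs = 3000) {
    const client = await MongoClient.connect(uri);
    try {
        const coll = client.db('mydb').collection('myCollection'); // placeholder names

        // Fetch the matching _id values once, then delete them in chunks.
        const ids = await coll.distinct('_id', { _id: { $lt: oldestAllowedId } });

        for (let i = 0; i < ids.length; i += batchSize) {
            const chunk = ids.slice(i, i + batchSize);
            const res = await coll.deleteMany({ _id: { $in: chunk } });
            console.log('Deleted ' + res.deletedCount + ' documents');
            await sleep(pauseMs); // space the batches out by a few seconds
        }
    } finally {
        await client.close();
    }
}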

Related

fast and efficient pagination in mongodb using express and node

I have a collection named product in MongoDB with more than 2 million products in it. I just want to paginate from the first document to the last; no filter or sorting is needed.
I use skip() and limit(), but the response time grows sharply as the skip() value gets bigger.
app.get('/products', async (req, res) => {
    try {
        var query = isNaN(req.query.page) ? 0 : req.query.page <= 0 ? 0 : parseInt(req.query.page) || 0;
        const productPerPage = 20;
        var totalPages = await Product.countDocuments();
        totalPages = Math.floor(totalPages / productPerPage);
        query = query > totalPages ? totalPages : query;
        const data = await Product.find()
            .sort({ _id: 1 })
            .skip(query * productPerPage)
            .limit(productPerPage);
        res.status(200).render('products', { data, currPage: query, totalPages });
    } catch (error) {
        console.error(error.message);
        res.status(500).send("Internal Server Error");
    }
});
It works properly, but as the database gets larger the response time gets longer.
Using skip and limit to paginate means that the db server needs to load the matching documents, sort them, then apply the skip and limit.
Sorting on {_id:1} allows the server to read the _id values from the index in pre-sorted order instead of doing an in-memory sort, but it still needs to scan all of the preceding values in order to find the first one to return. That is, on the first call it starts at the first document and returns 20; on the second call it reads 40 documents, discards the first 20, and returns 20; and so on, until on the last call it reads all 2 million documents and discards 1,999,980 of them. This is why it is so much slower on the later pages.
There are alternate pagination methods that perform better. For example, instead of requesting just a page number, if the application were to include the previous _id value in the request, the route could query for Product.find({_id:{$gt: ObjectId(req.query.lastseen)}}).sort({_id:1}).limit(productPerPage) which would both have a more predictable runtime, and would not suffer from missing documents if one were deleted between calls.
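For illustration, a minimal Express sketch of that lastseen approach might look like the following (the lastseen query parameter, the Mongoose model, and the render call are assumptions carried over from the question, not a drop-in replacement):

const mongoose = require('mongoose');

app.get('/products', async (req, res) => {
    try {
        const productPerPage = 20;
        // If the client passes the last _id it saw, start after it; otherwise start at the beginning.
        const filter = req.query.lastseen
            ? { _id: { $gt: new mongoose.Types.ObjectId(req.query.lastseen) } }
            : {};
        const data = await Product.find(filter)
            .sort({ _id: 1 })
            .limit(productPerPage);
        // The client sends back the last _id of this page to request the next one.
        const lastseen = data.length ? data[data.length - 1]._id : null;
        res.status(200).render('products', { data, lastseen });
    } catch (error) {
        console.error(error.message);
        res.status(500).send("Internal Server Error");
    }
});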

Massive inserts with pg-promise

I'm using pg-promise and I want to make multiple inserts to one table. I've seen some solutions like Multi-row insert with pg-promise and How do I properly insert multiple rows into PG with node-postgres?, and I could use pgp.helpers.concat in order to concatenate multiple selects.
But now, I need to insert a lot of measurements in a table, with more than 10,000 records, and in https://github.com/vitaly-t/pg-promise/wiki/Performance-Boost says:
"How many records you can concatenate like this - depends on the size of the records, but I would never go over 10,000 records with this approach. So if you have to insert many more records, you would want to split them into such concatenated batches and then execute them one by one."
I read all the article but I can't figure it out how to "split" my inserts into batches and then execute them one by one.
Thanks!
UPDATE
Best is to read the following article: Data Imports.
As the author of pg-promise I was compelled to finally provide the right answer to the question, as the one published earlier didn't really do it justice.
In order to insert massive/infinite number of records, your approach should be based on method sequence, that's available within tasks and transactions.
var cs = new pgp.helpers.ColumnSet(['col_a', 'col_b'], {table: 'tableName'});

// returns a promise with the next array of data objects,
// while there is data, or an empty array when no more data left
function getData(index) {
    if (/*still have data for the index*/) {
        // - resolve with the next array of data
    } else {
        // - resolve with an empty array, if no more data left
        // - reject, if something went wrong
    }
}

function source(index) {
    var t = this;
    return getData(index)
        .then(data => {
            if (data.length) {
                // while there is still data, insert the next bunch:
                var insert = pgp.helpers.insert(data, cs);
                return t.none(insert);
            }
            // returning nothing/undefined ends the sequence
        });
}

db.tx(t => t.sequence(source))
    .then(data => {
        // success
    })
    .catch(error => {
        // error
    });
This is the best approach to inserting massive number of rows into the database, from both performance point of view and load throttling.
All you have to do is implement your getData function according to the logic of your app, i.e. where your large data is coming from, based on the index of the sequence, so that it returns some 1,000 - 10,000 objects at a time, depending on the size of the objects and data availability.
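For illustration only, a getData that pages through a large in-memory array might look like this (the allRecords array and pageSize are hypothetical, not part of the original answer):

// Hypothetical data source: a large in-memory array of row objects.
const allRecords = [/* {col_a: ..., col_b: ...}, ... */];
const pageSize = 5000;

// Resolves with the next chunk for the given sequence index,
// or with an empty array once everything has been consumed,
// which is what ends the sequence in source() above.
function getData(index) {
    const chunk = allRecords.slice(index * pageSize, (index + 1) * pageSize);
    return Promise.resolve(chunk);
}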
See also some API examples:
spex -> sequence
Linked and Detached Sequencing
Streaming and Paging
Related question: node-postgres with massive amount of queries.
And in cases where you need to acquire generated id-s of all the inserted records, you would change the two lines as follows:
// return t.none(insert);
return t.map(insert + 'RETURNING id', [], a => +a.id);
and
// db.tx(t => t.sequence(source))
db.tx(t => t.sequence(source, {track: true}))
just be careful, as keeping too many record id-s in memory can create an overload.
I think the naive approach would work.
Try to split your data into multiple pieces of 10,000 records or less.
I would try splitting the array using the solution from this post.
Then, multi-row insert each array with pg-promise and execute them one by one in a transaction.
Edit: Thanks to @vitaly-t for the wonderful library and for improving my answer.
Also, don't forget to wrap your queries in a transaction, or else it will deplete the connections.
To do this, use the batch function from pg-promise to resolve all queries asynchronously:
// split your array here to get splittedData
// splittedData = [.., [{col_a: 'a1', col_b: 'b1'}, {col_a: 'a2', col_b: 'b2'}]]
var cs = new pgp.helpers.ColumnSet(['col_a', 'col_b'], {table: 'tmp'});

// build one multi-row INSERT statement per chunk
var queries = [];
for (var i = 0; i < splittedData.length; i++) {
    var query = pgp.helpers.insert(splittedData[i], cs);
    queries.push(query);
}

// execute all the inserts inside a single transaction
db.tx(function (t) {
    return t.batch(queries.map(function (q) {
        return t.none(q);
    }));
})
    .then(function (data) {
        // all records inserted successfully!
    })
    .catch(function (error) {
        // error;
    });

Is there any way to optimize this MongoDB query for pagination in Node.js

I am paginating through a very large collection of documents. I was wondering if this is an efficient means of pagination in MongoDB using _ids. My concern is that every time I make the query, the entire collection of ids would need to be sorted before a result could be returned. I saw this document on optimizing queries with indexes, but would this apply to me, since I'm querying on _id, which I would have thought is already indexed?
Pagination function:
function paginateDocs(currId, docsPerPage) {
    const query = currId ? { _id: { $gt: currId } } : {};
    const queryOptions = { limit: docsPerPage, sort: '_id' };
    return mongo.find(query, queryOptions).toArray();
}
You can use cursor.hint(index) to tell MongoDB explicitly which index the query should use for finding and sorting, in this case the default _id index. See https://docs.mongodb.com/manual/reference/method/cursor.hint/
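For illustration, a sketch of how that might look applied to the question's function (mongo is assumed to be the collection object from the question; in practice the query planner will usually pick the _id index on its own, so the hint is mostly explicit documentation):

function paginateDocs(currId, docsPerPage) {
    const query = currId ? { _id: { $gt: currId } } : {};
    return mongo
        .find(query)
        .sort({ _id: 1 })
        .hint({ _id: 1 }) // force the built-in _id index
        .limit(docsPerPage)
        .toArray();
}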

Inserting records without failing on duplicate

I'm inserting a lot of documents in bulk with the latest node.js native driver (2.0).
My collection has an index on the URL field, and I'm bound to get duplicates out of the thousands of lines I insert. Is there a way for MongoDB to not crash when it encounters a duplicate?
Right now I'm batching records 1000 at a time and using insertMany. I've tried various things, including adding {continueOnError: true}. I tried inserting my records one by one, but it's just too slow; I have thousands of workers in a queue and can't really afford the delay.
Collection definition :
self.prods = db.collection('products');
self.prods.ensureIndex({url:1},{unique:true}, function() {});
Insert :
MongoProcessor.prototype._batchInsert = function(coll, items){
    var self = this;
    if(items.length > 0){
        var batch = [];
        var l = items.length;
        for (var i = 0; i < 999; i++) {
            if(i < l){
                batch.push(items.shift());
            }
            if(i === 998){
                coll.insertMany(batch, {continueOnError: true}, function(err, res){
                    if(err) console.log(err);
                    if(res) console.log('Inserted products: ' + res.insertedCount + ' / ' + batch.length);
                    self._batchInsert(coll, items);
                });
            }
        }
    } else {
        self._terminate();
    }
};
I was thinking of dropping the index before the insert, then reindexing using dropDups, but it seems a bit hacky; my workers are clustered and I have no idea what would happen if they try to insert records while another process is reindexing... Does anyone have a better idea?
Edit :
I forgot to mention one thing. The items I insert have a 'processed' field which is set to 'false'. However the items already in the db may have been processed, so the field can be 'true'. Therefore I can't upsert... Or can I select a field to be untouched by upsert?
The 2.6 Bulk API is what you're looking for, which will require MongoDB 2.6+* and node driver 1.4+.
There are 2 types of bulk operations:
Ordered bulk operations. These execute all the operations in order and error out on the first write error.
Unordered bulk operations. These execute all the operations in parallel and aggregate up all the errors. Unordered bulk operations do not guarantee order of execution.
So in your case Unordered is what you want. The previous link provides an example:
MongoClient.connect("mongodb://localhost:27017/test", function(err, db) {
    // Get the collection
    var col = db.collection('batch_write_ordered_ops');
    // Initialize the Unordered Batch
    var batch = col.initializeUnorderedBulkOp();
    // Add some operations to be executed
    batch.insert({a:1});
    batch.find({a:1}).updateOne({$set: {b:1}});
    batch.find({a:2}).upsert().updateOne({$set: {b:2}});
    batch.insert({a:3});
    batch.find({a:3}).remove({a:3});
    // Execute the operations
    batch.execute(function(err, result) {
        console.dir(err);
        console.dir(result);
        db.close();
    });
});
*The docs do state that: "for older servers than 2.6 the API will downconvert the operations. However it’s not possible to downconvert 100% so there might be slight edge cases where it cannot correctly report the right numbers."
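As a follow-up sketch (not from the original answer): on newer drivers the same "keep going past duplicates" behaviour is expressed with insertMany and { ordered: false }, which supersedes the old continueOnError flag. The collection and logging mirror the question; the duplicate-key check on error code 11000 is the usual pattern, though the exact error shape varies by driver version.

// Insert a batch, ignoring duplicate-key violations on the unique url index.
function insertIgnoringDupes(coll, batch, done) {
    coll.insertMany(batch, { ordered: false }, function(err, res) {
        if (err && err.code !== 11000) {
            // A real failure, not just duplicate keys.
            return done(err);
        }
        if (res) {
            console.log('Inserted products: ' + res.insertedCount + ' / ' + batch.length);
        } else {
            // Some documents were duplicates; the rest were still written
            // because the insert was unordered.
            console.log('Batch finished, duplicate-key errors ignored');
        }
        done();
    });
}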

How to bulk save an array of objects in MongoDB?

I have looked a long time and not found an answer. The Node.JS MongoDB driver docs say you can do bulk inserts using insert(docs) which is good and works well.
I now have a collection with over 4,000,000 items, and I need to add a new field to all of them. Usually mongodb can only write 1 transaction per 100ms, which means I would be waiting for days to update all those items. How can I do a "bulk save/update" to update them all at once? update() and save() seem to only work on a single object.
Pseudo-code:
var stuffToSave = [];

db.collection('blah').find({}, function(err, stuff) {
    stuff.toArray().forEach(function(item) {
        item.newField = someComplexCalculationInvolvingALookup();
        stuffToSave.push(item);
    });
});

db.saveButNotSuperSlow(stuffToSave);
Sure, I'll need to put some limit on it, doing something like 10,000 at a time so as not to attempt all 4 million at once, but I think you get the point.
MongoDB allows you to update many documents that match a specific query using a single db.collection.update(query, update, options) call, see the documentation. For example,
db.blah.update(
    { },
    { $set: { newField: someComplexValue } },
    { multi: true }
)
The multi option allows the command to update all documents that match the query criteria. Note that the exact same thing applies when using the Node.JS driver, see that documentation.
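For reference, a sketch of the same update through the Node.JS driver; updateMany is the modern spelling of update with multi: true, and someComplexValue is still a placeholder from the answer above:

// Set the new field on every document with a single server-side command.
db.collection('blah').updateMany(
    {},                                        // match all documents
    { $set: { newField: someComplexValue } },
    function(err, result) {
        if (err) return console.error(err);
        console.log('Modified ' + result.modifiedCount + ' documents');
    }
);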
If you're performing many different updates on a collection, you can wrap them all in a Bulk() builder to avoid some of the overhead of sending multiple updates to the database.
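And if each document needs its own computed value, as in the question's someComplexCalculationInvolvingALookup(), a sketch using bulkWrite (the modern counterpart of the Bulk() builder) to queue one update per document might look like this; the chunking advice and names are assumptions:

// Queue one update per document and send them to the server as one batch.
function saveComputedFields(coll, items, done) {
    var ops = items.map(function(item) {
        return {
            updateOne: {
                filter: { _id: item._id },
                update: { $set: { newField: item.newField } }
            }
        };
    });
    // In practice you would split ops into chunks of e.g. 10,000 per bulkWrite call.
    coll.bulkWrite(ops, { ordered: false }, function(err, result) {
        if (err) return done(err);
        console.log('Modified ' + result.modifiedCount + ' documents');
        done();
    });
}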
