I am working with Node.js and MongoDB and am quite new to both. When I try to insert a large amount of data into MongoDB, the connection to the local database is lost and the insertion stops. I found that the number of connections used for the insert is 5, which is the default. Can someone help me increase the default connection pool size from 5, or suggest another way to insert large amounts of data into MongoDB from Node.js? I am using MongoClient to establish the connection to the database.
The default connection pool size for MongoDB is 5. To increase it, set the poolSize server option:
var MongoClient = require('mongodb').MongoClient;
var mongoServer = require('mongodb').Server;

var serverOptions = {
    'auto_reconnect': true,
    'poolSize': 5
};

var mongoClient = new MongoClient(new mongoServer('localhost', 27017, serverOptions));

// The pool size can be increased by raising the poolSize value.
// e.g. 'poolSize': 100 allows up to 100 connections, provided the server has that many available.
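With newer versions of the Node.js driver you can also pass the pool size straight to MongoClient.connect, and a large insert goes much faster with insertMany than with one insert per document. Below is a minimal sketch along those lines, assuming the 3.x driver (where connect yields a client); the database and collection names are placeholders and docs stands in for your array of documents:

var MongoClient = require('mongodb').MongoClient;

// docs is a placeholder for the (large) array of documents to insert
MongoClient.connect('mongodb://localhost:27017', { poolSize: 100 }, function (err, client) {
    if (err) throw err;
    var collection = client.db('test').collection('bigdata');
    // insertMany sends the documents as one bulk operation instead of one
    // round trip per document; ordered: false continues past individual errors
    collection.insertMany(docs, { ordered: false }, function (err, result) {
        if (err) console.error(err);
        else console.log('inserted', result.insertedCount, 'documents');
        client.close();
    });
});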
I have 4 separate tables in the same database. Would it be better to use mysql2.createConnection() or mysql2.createPool() to bulk insert into each table? I'd like to run the inserts asynchronously.
The code will be executing the inserts from an AWS Lambda, and connections go through RDS Proxy, which handles connection pooling for all connections to the Aurora MySQL database instance.
const mysql2 = require('mysql2');

const connection = mysql2.createConnection({
    host     : 'example.org',
    user     : 'bob',
    password : 'secret'
});
or mysql2.createPool:
const mysql2 = require('mysql2');

const pool = mysql2.createPool({
    connectionLimit : 10,
    host            : 'example.org',
    user            : 'bob',
    password        : 'secret'
});
If you would like to run the inserts asynchronously, you will want createPool.
With createConnection there is only one connection, and all queries executed on that connection are queued, which is not really asynchronous (async from the Node.js perspective, but the queries are executed sequentially).
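A minimal sketch of what that could look like for the four tables, using the pool's promise API so the inserts run concurrently (the table and column names, database name, and the shape of rowsByTable are placeholders; with RDS Proxy in front, connectionLimit only caps this client's side of the pooling):

const mysql2 = require('mysql2');

const pool = mysql2.createPool({
    connectionLimit : 10,
    host            : 'example.org',
    user            : 'bob',
    password        : 'secret',
    database        : 'mydb'
});
const promisePool = pool.promise();

// rowsByTable maps each table name to an array of [col_a, col_b] rows
async function bulkInsertAll(rowsByTable) {
    // each query checks out its own connection from the pool,
    // so the four inserts run in parallel instead of queuing on one connection
    await Promise.all(
        Object.entries(rowsByTable).map(([table, rows]) =>
            promisePool.query('INSERT INTO ?? (col_a, col_b) VALUES ?', [table, rows])
        )
    );
    await promisePool.end();
}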
I use multiple databases, one per request. I call createConnection to choose the database and read/write data. But when everything is done, how can I close this connection?
// create a connection on every request
const connection = mongoose.createConnection('uri');
To close the default connection, just use:
mongoose.connection.close();
Or try it like this:
mongoose.connect('uri');
// ... when everything is done:
mongoose.disconnect();
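Since the question opens a separate connection per request with createConnection, here is a minimal sketch of closing that specific connection when the request is done (the URI, database name, and model are placeholders):

const mongoose = require('mongoose');

async function handleRequest(dbName) {
    // open a connection to the database chosen for this request
    const conn = mongoose.createConnection(`mongodb://localhost:27017/${dbName}`);
    try {
        const User = conn.model('User', new mongoose.Schema({ name: String }));
        return await User.find().lean();
    } finally {
        // closes only this connection, not mongoose's default connection
        await conn.close();
    }
}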
I am trying to get ALL documents from a collection in my Cosmos DB on Azure. The collection contains approximately 50,000 documents.
I get this error: MongoError: cursor does not exist, was killed or timed out when I am doing this:
const mongoose = require('mongoose');

const mongooseOptions = { useNewUrlParser: true };
mongoose.connect(connectionString, mongooseOptions);
mongoose.set('useCreateIndex', true);
mongoose.Promise = global.Promise;

const mongoDB = mongoose.connection;
mongoDB.on('error', console.error.bind(console, 'MongoDB connection error:'));

const Schema = mongoose.Schema;
const MongoEidModelSchema = new Schema({
    uid: { type: String, unique: true },
    eid: { type: String, unique: true }
});
const MongoEidModel = mongoose.model('eids', MongoEidModelSchema);

MongoEidModel.find({}, { timeout: false }).then(data => {
    console.log(data);
    console.log(Object.keys(data).length);
});
When I set a limit of 1000 or 1500 on the find() it works.
I have also tried changing the RU/s on the collection from 400 to 10,000 (in the Azure Portal / console), which also works, but that seems like an expensive solution... doesn't it?
I have also tried fetching the documents with find() in batches, in a recursive loop until there are no more documents left, with a sleep between each iteration (otherwise Cosmos DB gives me "429: Too many requests" after a while).
Is there a way to get ALL 50,000 documents using Node.js and Mongoose without changing the RU/s or doing recursive loops?
Thanks in advance!
/Daniel
To avoid confusion, I assume you're using the MongoDB driver to access Cosmos in Azure?
For MongoDB, there is a query limit of 16 MB (which you may well be shooting past if you are returning 50k documents). See here: https://docs.mongodb.com/manual/reference/limits/
It is possible that the limitation isn't enforced in the node driver (I haven't inspected its source), in which case it's worth consulting the Azure docs: https://learn.microsoft.com/en-us/azure/cosmos-db/faq
The upshot is, you should really use a cursor to walk across the collection when you are dealing with large numbers of documents like this. See here: How can I use a cursor.forEach() in MongoDB using Node.js?
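For instance, with the MongoEidModel from the question, a cursor-based walk could look roughly like this (a sketch; only one batch of documents is held in memory at a time instead of materialising all 50,000 at once):

async function readAll() {
    const cursor = MongoEidModel.find({}).lean().cursor();
    let count = 0;
    for await (const doc of cursor) {
        // process one document at a time here
        count += 1;
    }
    console.log(`processed ${count} documents`);
}

readAll().catch(console.error);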
Hope this helps :)
I am having major performance problems with MongoDB. Simple find() queries are sometimes taking 2,000-3,000 ms to complete in a database with less than 100 documents.
I am seeing this both with a MongoDB Atlas M10 instance and with a cluster that I set up on DigitalOcean on VMs with 4 GB of RAM. When I restart my Node.js app on Heroku, the queries perform well (less than 100 ms) for 10-15 minutes, but then they slow down.
Am I connecting to MongoDB incorrectly or querying incorrectly from Node.js? Please see my application code below. Or is this a lack of hardware resources in a shared VM environment?
Any help will be greatly appreciated. I've done all the troubleshooting I know how to do with explain() and the Mongo shell.
var Koa = require('koa');                          //v2.4.1
var Router = require('koa-router');                //v7.3.0
var MongoClient = require('mongodb').MongoClient;  //v3.1.3

var app = new Koa();
var router = new Router();
app.use(router.routes());

// Connect to MongoDB
async function connect() {
    try {
        var client = await MongoClient.connect(process.env.MONGODB_URI, {
            readConcern: { level: 'local' }
        });
        var db = client.db(process.env.MONGODB_DATABASE);
        return db;
    }
    catch (error) {
        console.log(error);
    }
}

// Add MongoDB to Koa's ctx object
connect().then(db => {
    app.context.db = db;
});

// Get company's collection in MongoDB
router.get('/documents/:collection', async (ctx) => {
    try {
        var query = { company_id: ctx.state.session.company_id };
        var res = await ctx.db.collection(ctx.params.collection).find(query).toArray();
        ctx.body = { ok: true, docs: res };
    }
    catch (error) {
        ctx.status = 500;
        ctx.body = { ok: false };
    }
});

app.listen(process.env.PORT || 3000);
UPDATE
I am using MongoDB Change Streams and standard Server Sent Events to provide real-time updates to the application UI. I turned these off and now MongoDB appears to be performing well again.
Are MongoDB Change Streams known to impact read/write performance?
Change Streams do indeed affect the performance of your server, as noted in this SO question.
As mentioned in the accepted answer there,
The default connection pool size in the Node.js client for MongoDB is 5. Since each change stream cursor opens a new connection, the connection pool needs to be at least as large as the number of cursors.
const mongoConnection = await MongoClient.connect(URL, {poolSize: 100});
(Thanks to MongoDB Inc. for investigating this issue.)
You need to increase your pool size to get back your normal performance.
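As an illustration of that sizing rule, here is a rough sketch that opens one change stream per watched collection and sizes the pool to cover the streams plus headroom for ordinary queries (the database name, collection names, and the headroom of 10 are placeholders):

const MongoClient = require('mongodb').MongoClient;

async function watchCollections(uri, dbName, collectionNames) {
    // one connection is consumed per change stream cursor, plus headroom for normal queries
    const client = await MongoClient.connect(uri, { poolSize: collectionNames.length + 10 });
    const db = client.db(dbName);

    for (const name of collectionNames) {
        db.collection(name).watch().on('change', change => {
            console.log('change on', name, ':', change.operationType);
        });
    }
    return client;
}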
I'd suggest you do more logging work. Queries that slow down a while after a restart might be worse than you think.
For a modern database/web app running on a normal machine, it is not easy to run into performance issues like this if you are doing things right. There may be a memory leak, other unreleased resources, or network congestion.
IMHO, you should first determine whether it's a network problem; you can do that by enabling the slow query log on MongoDB and logging in your own code where each query begins and ends (see the sketch below).
If the network is totally fine and you see no slow MongoDB queries, then something is going wrong inside your own application, and detailed logging should really help pin down where the queries get slow.
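For example, a crude way to time a query in the Koa route from the question could look like this (a sketch reusing the names from the code above; on the server side, the database profiler, e.g. db.setProfilingLevel(1, 100) in the mongo shell, enables the slow query log):

router.get('/documents/:collection', async (ctx) => {
    var query = { company_id: ctx.state.session.company_id };
    var start = Date.now();
    var res = await ctx.db.collection(ctx.params.collection).find(query).toArray();
    // log how long each round trip to MongoDB took, per collection
    console.log('find on', ctx.params.collection, 'took', Date.now() - start, 'ms');
    ctx.body = { ok: true, docs: res };
});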
Hope this helps.
I have a replica set on MongoDB Atlas and this is my mongo shell connection string which connects perfectly:
$ mongo "mongodb://MY_SERVER-shard-00-00-clv3h.mongodb.net:27017,MY_SERVER-shard-00-01-clv3h.mongodb.net:27017,MY_SERVER-shard-00-02-clv3h.mongodb.net:27017/MY_DATABASE?replicaSet=MY_REPLICASET-NAME-shard-0" --ssl --username MY_USERNAME --password MY_PASSWORD --authenticationDatabase MY_ADMIN_DATABASE
How can I convert it for use with Mongoose? How can I build my uri and options variables?
I tried the following without success:
// connection string using mongoose:
var uri = 'mongodb://MY_USER:MY_PASSWORD@' +
    'MY_SERVER-shard-00-00-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-01-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-02-clv3h.mongodb.net:27017/MY_DATABASE';

var options = {
    replset: {
        ssl: true,
        authSource: 'MY_ADMIN_DATABASE',
        rs_name: 'MY_REPLICASET_NAME-shard-0'
    }
};

mongoose.connect(uri, options);
var db = mongoose.connection;
I've tried including user: and pass: in options, removing MY_USER:MY_PASSWORD@ from the uri, and changing rs_name to replicaSet, all without success. It seems that mongoose is not taking the authSource option into account.
Using mongojs, it works fine with the following code:
// connection string using mongojs:
var uri = 'mongodb://MY_USER:MY_PASSWORD@' +
    'MY_SERVER-shard-00-00-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-01-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-02-clv3h.mongodb.net:27017/MY_DATABASE';

var options = {
    ssl: true,
    authSource: 'MY_ADMIN_DATABASE',
    replicaSet: 'MY_REPLICASET_NAME-shard-0'
};

var db = mongojs(uri, '', options);
But I need to use mongoose because it is the ODM in my project.
How can I build my uri and options variables using mongoose?
ON MONGODB 3.4.x
I resolved this issue by putting the options directly in the uri string, as described in the 'Replica Set Connections' section of the documentation (http://mongoosejs.com/docs/connections.html).
// connection string using mongoose:
var uri = 'mongodb://MY_USER:MY_PASSWORD@' +
    'MY_SERVER-shard-00-00-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-01-clv3h.mongodb.net:27017,' +
    'MY_SERVER-shard-00-02-clv3h.mongodb.net:27017/MY_DATABASE?' +
    'ssl=true&replicaSet=MY_REPLICASET_NAME-shard-0&authSource=MY_ADMIN_DATABASE';

mongoose.connect(uri);
var db = mongoose.connection;
Now, it is working fine!
NOTICE WITH MONGODB 3.6
On MongoDB Atlas using version 3.6.x, the connection string changed to use a DNS seed list, making the link shorter:
mongodb+srv://MY_USER:MY_PASSWORD@MY_SERVER.mongodb.net/MY_DATABASE
...if you use this connection string in your application, it will connect successfully, but it will only be able to read and write with Atlas users that have higher-privilege roles (atlasAdmin, readWriteAnyDatabase...).
To work with a specific user whose privilege is limited to readWrite on your database, you will need to keep the same connection string used with MongoDB 3.4, because Mongoose does not recognize the DNS option (mongodb+srv).
P.S. All the new features of MongoDB 3.6.x will continue to work normally!
Add the username and password to the database connection using the standard connection string format:
mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]