Multiple connections to MongoDB in Node.js

In the code below we connect to MongoDB:
var options = {
    db: { native_parser: true },
    server: { poolSize: 5 },
    replset: { rs_name: 'myReplicaSetName' },
    user: 'myUserName',
    pass: 'myPassword'
};
mongoose.connect(uri, options);
Here poolSize is 5, so 5 parallel connections should be able to serve requests.
But if I try to create a second connection, Node gives an error that I'm trying to create a connection which is not closed. So at any one time, only one connection can serve the application.
So what does poolSize: 5 mean, and how does it work?
I need a solution, and a way to increase the pool size when my system scales up.
Thanks in advance.

Mongoose (or rather the mongodb driver it uses) will automatically manage the number of connections to the MongoDB server. You should call mongoose.connect() just once.
If you need a larger number of connections, all you have to do is increase the poolSize property. However, since you're using a replica set, you should set replset.poolSize instead of server.poolSize:
var options = {
    db: { native_parser: true },
    replset: { rs_name: 'myReplicaSetName', poolSize: POOLSIZE },
    user: 'myUserName',
    pass: 'myPassword'
};

Related

Setting the timeout with Knex.js for MSSQL

I use knex.js in a Node environment to run a web server that makes SQL calls. I have a query that takes over 30 seconds to complete, but when it's run through knex, the default timeout seems to be 15 seconds, so I get the following timeout error:
RequestError: Timeout: Request failed to complete in 15000ms
...
How do I change the timeout for mssql queries? The official docs have an example of setting a timeout on a specific query with .timeout(), but this feature doesn't work with mssql. I've also tried everything in this GitHub issue without any luck. After trying all of that, I have this messy-looking connection config:
const connection = require('knex')({
    client: 'mssql',
    connection: {
        host: process.env.NODE_ENV == 'production' ? '172.18.1.66' : 'localhost',
        user: secrets.user,
        password: secrets.password,
        database: 'EdgeView',
        dialect: "mssql",
        options: {
            'enableArithAbort': true,
            'requestTimeout': 150000,
            'idleTimeoutMillis': 150000
        },
        pool: {
            max: 10,
            min: 0,
            idleTimeoutMillis: 150000
        },
        dialectOptions: {
            requestTimeout: 300000,
            options: {
                "requestTimeout": 300000
            }
        }
    }
});
The error did not change; none of these timeout values seems to have had an impact.
The query itself is just a raw query:
let res = connection.raw(GETSQLSTRING(args));
The answer was simply this:
const connection = require('knex')({
    client: 'mssql',
    connection: {
        host: 'edge-sql',
        user: secrets.user,
        password: secrets.password,
        requestTimeout: 600000,
        database: 'EdgeView'
    }
});
Thanks to BGPHiJACK for pointing me towards better documentation.
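For what it's worth, a sketch of how the fixed connection is then used (GETSQLSTRING(args) is the same helper referenced in the question; the wrapper name is hypothetical): with requestTimeout set on the connection, the raw query gets the full 600000 ms before mssql raises a RequestError.
// Hypothetical wrapper around the long-running raw query from the question.
async function runReport(args) {
    try {
        // The connection-level requestTimeout (600000 ms) applies here.
        return await connection.raw(GETSQLSTRING(args));
    } catch (err) {
        // Queries that still exceed the timeout surface as a RequestError.
        console.error('report query failed:', err.message);
        throw err;
    }
}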

Keep Elasticsearch connection alive

I'm looking to keep my Elasticsearch client's connection alive. I've been using the Elastic client and had some great success with it when indexing and searching its datastore, but I want to be able to create a connection to my Elasticsearch nodes and preserve it so that I don't need to create a new connection every time I POST to it.
Having looked at the documentation, I see there's a keep-alive feature in the Swagger documentation, but I've created my client using Node.js and have had no luck finding an equivalent feature.
My client looks something like this:
const client = new Client({
    auth: {
        username: 'aSecret',
        password: 'alsoASecret',
    },
    node: 'localhost:9000',
    maxRetries: 3,
    requestTimeout: 15000,
});
And my index call is very simple right now:
await client.index({
    index: 'my-datastore',
    refresh: true,
    body: eventData,
});
How can I keep my index connection alive so that I can send multiple events to my datastore without having to connect and reconnect?
There is a keepAlive config option in the (legacy) elasticsearch client's constructor, and it's a boolean value. Reference:
keepAlive: Should the connections to the node be kept open forever? This behavior is recommended when you are connecting directly to Elasticsearch.
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
    // The legacy client takes host/httpAuth rather than node/auth.
    host: 'localhost:9000',
    httpAuth: 'aSecret:alsoASecret',
    maxRetries: 3,
    requestTimeout: 15000,
    keepAlive: true
});

How to add a request timeout in Typeorm/Typescript?

Today, the behavior of Typeorm (Postgres) for
getManager().query(...) and
getRepository().createQueryBuilder(...).getMany()
is to wait for a response indefinitely.
Is there a way to introduce a request timeout that I might've missed?
If this is not possible, does Typeorm expose the connection from its pool so that I can implement a timeout mechanism and close the DB connection manually?
To work with a specific connection from the pool, use createQueryRunner. There is no info about it in the docs, but it is documented in the API:
Creates a query runner used to perform queries on a single database connection. Using query runners you can control your queries to execute using a single database connection and manually control your database transactions.
Usage example:
import { EntityManager, getConnection } from "typeorm";

// Runs the callback on a single, dedicated connection checked out of the pool.
const foo = <T>(callback: (em: EntityManager) => Promise<T>): Promise<T> => {
    const connection = getConnection();
    const queryRunner = connection.createQueryRunner();
    return new Promise<T>(async (resolve, reject) => {
        try {
            await queryRunner.connect();
            // add logic for timeout
            const res = await callback(queryRunner.manager);
            resolve(res);
        } catch (err) {
            reject(err);
        } finally {
            // Always return the connection to the pool.
            await queryRunner.release();
        }
    });
};
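One hedged way to fill in the "// add logic for timeout" placeholder is to race the work against a timer; withTimeout below is a hypothetical helper, and note that it only stops waiting on the client side: the statement keeps running on the server unless you also set statement_timeout (see the next answer).
// Hypothetical helper: reject if the promise doesn't settle within timeoutMs.
const withTimeout = <T>(work: Promise<T>, timeoutMs: number): Promise<T> =>
    Promise.race([
        work,
        new Promise<T>((_, reject) =>
            setTimeout(() => reject(new Error('query timed out after ' + timeoutMs + ' ms')), timeoutMs)
        ),
    ]);

// Usage: the query runner is still released by foo's finally block, and the
// caller gets a rejection instead of waiting indefinitely.
foo((em) => withTimeout(em.query('SELECT pg_sleep(10)'), 5000))
    .then((rows) => console.log(rows))
    .catch((err) => console.error(err.message)); // "query timed out after 5000 ms"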
You can change the default behaviour on a per-connection basis by using either statement_timeout or query_timeout. You can read more about all possible configurations in the official node pg driver docs.
What is the difference between a statement and a query?
A statement is any SQL command such as SELECT, INSERT, UPDATE, DELETE.
A query is a synonym for a SELECT statement.
How do you tell TypeORM to use these configurations? Add these parameters under the extra field in ormconfig.js:
{
    type: "postgres",
    name: "default",
    host: process.env.DB_HOST,
    port: 5432,
    username: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
    synchronize: false,
    logging: false,
    entities: [
        "dist/entity/**/*.js"
    ],
    extra: {
        poolSize: 20,
        connectionTimeoutMillis: 2000,
        query_timeout: 1000,
        statement_timeout: 1000
    }
}
Note the use of poolSize here. This creates a pool of 20 connections for the application to use and reuse. connectionTimeoutMillis ensures that if all the connections in the pool are busy executing statements/transactions, a new connection request against the pool will time out after connectionTimeoutMillis ms. More about connection pool configurations of pg-pool here.
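For a rough illustration of what these settings do at runtime (the query below is only an example): a statement that outlives statement_timeout is cancelled by the Postgres server, and the promise from getManager().query rejects, so the caller can handle it instead of waiting forever.
import { getManager } from "typeorm";

async function slowReport(): Promise<any[]> {
    try {
        // pg_sleep(5) deliberately exceeds the 1000 ms statement_timeout above.
        return await getManager().query("SELECT pg_sleep(5)");
    } catch (err) {
        // statement_timeout: the server cancels the statement;
        // query_timeout: the client gives up waiting on its side.
        console.error("query aborted:", err.message);
        throw err;
    }
}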
From the documentation, you can use the maxQueryExecutionTime ConnectionOption.
maxQueryExecutionTime - If query execution time exceed this given max execution time (in milliseconds) then logger will log this query.
ConnectionOptions is the connection configuration you pass to createConnection or define in ormconfig. Note that this only logs slow queries; it does not cancel them.
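A minimal sketch of where that option goes (connection details are placeholders):
import { createConnection } from "typeorm";

createConnection({
    type: "postgres",
    host: "localhost",
    username: "user",
    password: "pass",
    database: "mydb",
    // Log any query slower than 1000 ms via TypeORM's logger.
    maxQueryExecutionTime: 1000,
}).then(() => console.log("connected"));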

mongo read from slave with primaryPreferred and mongoose

I've got a replica set on compose.io.
I've set primaryPreferred in the config, which is fine.
{ replset: {
    rs_name: 'set-53453xxxxxxxxx',
    strategy: 'ping',
    read_secondary: true,
    readPreference: 'primaryPreferred',
    slaveOk: true,
    safe: true,
    socketOptions: { keepAlive: 1 }
}}
But I'd like to read from a secondary for background processes such as stats.
Should I open two connections, one for the primary and one for the slave?
That's what I did, but when connecting to the slave I get a not master and slaveOk=false error when querying.
I know that I can run rs.slaveOk() in the mongo client to allow the query, but I need it in my app connection.
This is how I init my slave connection:
var slaveCon = mongoose.createConnection();
slaveCon.open('slaveURL', { slaveOk: true }, callback);
// this gives me the notMaster error when querying
Any ideas?
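Not a confirmed fix, but one hedged thing to try: tell the driver explicitly that the secondary connection may serve reads, either in the connection string or per query (slaveURL and StatsModel below are placeholders).
// Option 1: put the read preference in the secondary connection's URI.
var slaveCon = mongoose.createConnection(
    'mongodb://slaveURL/mydb?readPreference=secondaryPreferred'
);

// Option 2: keep one connection and set the read preference per query,
// only for the background/stats work.
StatsModel.find({}).read('secondaryPreferred').exec(callback);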

mongodb db.open() returns replicaset error but no error in mongodb log files

NodeJS version: v0.10.29
Mongo version: 2.6.3
NodeJS mongodb module: 1.4.5
We are getting the following error in the callback to db.open:
"Error: No valid replicaset instance servers found"
MongoDB seems to be working fine and there is no error in the MongoDB logs. Restarting the Node.js server solves the problem.
From https://github.com/HabitRPG/habitrpg/issues/2725:
One of the odd things about the Node driver is that the default
timeout for replica set connections is only 1 second, so make sure
you're setting it to something more like 30s like in this example:
{
    options: {
        replset: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } },
        server: { socketOptions: { keepAlive: 1, connectTimeoutMS: 30000 } }
    }
}
I think they meant these as options for use with MongoClient().
I have seen this error when starting both a MongoDB cluster and Node.js at the same time.
Because MongoDB replica sets need to elect a primary and do other handshaking when they start, there can be a delay before the MongoDB instances are available to connect to, which makes the issue you describe more likely to occur.
Increasing the timeout values on the connection, as rakslice's answer details, can prevent this.
It is worth referring to the official MongoDB documentation for connection timeout settings and explanation:
http://docs.mongodb.org/manual/reference/connection-string/#connection-options
To add to rakslice's answer, here is a full example of how you might connect to a replica set with connection timeout values set:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:27017,localhost:27017,localhost:27017/test",
    {
        replset: {
            socketOptions: {
                connectTimeoutMS: 30000
            }
        },
        server: {
            socketOptions: {
                connectTimeoutMS: 500
            }
        }
    },
    function(err, db) {
        if (err) throw err;
        db.collection("things").find({}).toArray(function(err, docs) {
            if (err) throw err;
            console.log(docs);
            db.close();
        });
    }
);
A good article that goes over some of the implications and decisions involved in setting particular timeout values:
http://blog.mongolab.com/2013/10/do-you-want-a-timeout/
