I have created a MongoDB replica set across 3 VirtualBox VMs: manager1, worker1, and worker2. Each replica set member runs in its own container. The replica set works fine and I can log in to the servers:
docker exec -it mongoNode1 bash -c 'mongo -u $MONGO_USER_ADMIN -p $MONGO_PASS_ADMIN --authenticationDatabase "admin"'
MongoDB shell version v3.6.4
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.4
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
rs1:PRIMARY> use medmart_db
switched to db medmart_db
rs1:PRIMARY> db.providers.findOne()
{
"_id" : ObjectId("56ddb50c230e405eafaf7781"),
"provider_id" : 6829973073,
When I run my Node.js application directly, it connects to the replica set. The problem is that when I run the application in a container, I get the following exception, the container stops, and I can no longer connect to the replica set PRIMARY, only to a secondary.
(node:16) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [worker2:27017] on first connect [MongoNetworkError: connection 5 to worker2:27017 timed out]
at Pool.<anonymous> (/home/nupp/app/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/server.js:505:11)
at Pool.emit (events.js:182:13)
at Connection.<anonymous> (/home/nupp/app/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:329:12)
at Object.onceWrapper (events.js:273:13)
at Connection.emit (events.js:182:13)
at Socket.<anonymous> (/home/nupp/app/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:256:10)
at Object.onceWrapper (events.js:273:13)
at Socket.emit (events.js:182:13)
at Socket._onTimeout (net.js:447:8)
at ontimeout (timers.js:427:11)
at tryOnTimeout (timers.js:289:5)
at listOnTimeout (timers.js:252:5)
at Timer.processTimers (timers.js:212:10)
(node:16) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:16) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I opened an issue ticket with mongoose asking for help, but their suggestion doesn't work (they asked me to change from IP addresses to domain names, but I still get the same error).
My connection module is:
'use strict';
const mongoose = require('mongoose');
const getURL = (options) => {
  // Builds: mongodb://user:pass@host1,host2,host3/db?replicaSet=...&authSource=admin
  const hosts = options.servers.join(',');
  return `mongodb://${options.user}:${options.pass}@${hosts}/${options.db}?replicaSet=${options.repl}&authSource=admin`;
};
const connect = (config, mediator) => {
  mediator.once('boot.ready', () => {
    const options = {
      native_parser: true,
      poolSize: 5,
      user: config.user,
      pass: config.pass,
      promiseLibrary: global.Promise,
      autoIndex: false,
      reconnectTries: 30,
      reconnectInterval: 500,
      bufferMaxEntries: 0,
      connectWithNoPrimary: true,
      readPreference: 'ReadPreference.SECONDARY_PREFERRED',
    };
    mongoose.connect(getURL(config), options);
    mongoose.connection.on('error', (err) => {
      mediator.emit('db.error', err);
    });
    mongoose.connection.on('connected', () => {
      mediator.emit('db.ready', mongoose);
    });
  });
};
module.exports = Object.assign({},{connect});
My config file is:
'use strict';
const serverConfig = {
  port: 3000,
};
const dbConfig = {
  db: 'xxxxxxx',
  user: 'xxxxxx',
  pass: 'xxxxxxx',
  repl: 'rs1',
  servers: (process.env.DB_SERVERS) ? process.env.DB_SERVERS.split(' ') : ['192.168.99.100:27017', '192.168.99.101:27017', '192.168.99.102:27017'],
};
module.exports = Object.assign({}, {dbConfig, serverConfig});
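For reference, with the default servers above, getURL(dbConfig) produces a connection string of the form (credentials redacted as in the config):
mongodb://xxxxxx:xxxxxxx@192.168.99.100:27017,192.168.99.101:27017,192.168.99.102:27017/xxxxxxx?replicaSet=rs1&authSource=admin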
So there are a couple of steps to resolving this issue. First and foremost, the hosts file on the VirtualBox machines must be updated with the IP addresses of the MongoDB hostnames. To do that:
> docker-machine ssh manager1
docker@manager1:~$ sudo vi /etc/hosts
#add the following
192.168.99.100 manager1
192.168.99.101 worker1
192.168.99.102 worker2
Repeat for worker1 and worker2.
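These entries matter because the driver discovers replica set members by the hostnames stored in the replica set config (hence worker2:27017 in the error above), so those names must resolve wherever the application runs, including inside its container. A quick sketch to verify resolution from Node (the hostnames are the ones from this setup):
const dns = require('dns').promises;

(async () => {
  for (const host of ['manager1', 'worker1', 'worker2']) {
    try {
      const { address } = await dns.lookup(host);
      console.log(`${host} -> ${address}`);
    } catch (err) {
      console.error(`${host} does not resolve: ${err.code}`);
    }
  }
})();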
If adding those entries fixes the problem, you can skip the second part.
Next, if you get the following error:
Error: ERROR::providers-service::model::ProviderModel::getProviderById: TypeError: Cannot read property 'wireProtocolHandler' of null
at cb (/home/nupp/app/src/model/ProviderModel.js:182:13)
at /home/nupp/app/node_modules/mongoose/lib/model.js:4161:16
at Immediate.Query.base.findOne.call (/home/nupp/app/node_modules/mongoose/lib/query.js:1529:14)
at Immediate.<anonymous> (/home/nupp/app/node_modules/mquery/lib/utils.js:119:16)
at runCallback (timers.js:696:18)
at tryOnImmediate (timers.js:667:5)
at processImmediate (timers.js:649:5)
Remove any options to mongoose.connect. Here's how mine looks now:
mediator.once('boot.ready', () => {
  const options = {};
  mongoose.connect(getURL(config), options);
  mongoose.connection.on('error', (err) => {
    mediator.emit('db.error', err);
  });
  mongoose.connection.on('connected', () => {
    mediator.emit('db.ready', mongoose);
  });
});
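Since the original crash surfaced as an UnhandledPromiseRejectionWarning, it is also worth catching a rejection from the initial connect call itself; a minimal sketch with the same mediator wiring:
mongoose.connect(getURL(config), options).catch((err) => {
  // Without this catch, a failed first connect becomes an unhandled rejection.
  mediator.emit('db.error', err);
});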
Hope my solution can help someone else. Took me a few long days to finally figure this out.
Related
When I try to connect to my MongoDB instance that requires SSL, my Node.js app crashes on the following call:
conn = await mongoose.connect(process.env.DB_HOST, {
  tlsCAFile: __dirname + '/ca-certificate.crt',
  useNewUrlParser: true,
  useUnifiedTopology: true
})
and I get the following error in stderr.log:
events.js:377
throw er; // Unhandled 'error' event
^
Error: read EINVAL
at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
errno: -22,
code: 'EINVAL',
syscall: 'read'
}
The interesting thing is that this works just fine on my local Windows machine, but crashes when deployed to A2 Hosting shared hosting.
Also, I am able to connect successfully (even on A2 Hosting) when connecting without mongoose, like so:
const client = new MongoClient(uri);
try {
  await client.connect();
  const db = client.db('egomenu');
  console.log('connected successfully');
} finally {
  await client.close();
}
I am using mongoose ^6.3.1 and Node 14.20.1 on A2 Hosting.
I believe the error is generated when trying to read the .crt file during connection; however, I cannot figure out what is causing it.
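A minimal way to test that hypothesis, assuming the same certificate path as in the connect() call, is to try reading the file directly:
const fs = require('fs');

try {
  const ca = fs.readFileSync(__dirname + '/ca-certificate.crt', 'utf8');
  console.log('CA file read OK, length:', ca.length);
} catch (err) {
  console.error('Reading CA file failed:', err.code);
}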
Any help would be greatly appreciated :)
I am using a Postgres database for my Express web server.
I am using the 'pg' library to execute queries on this database.
Here is my connection code:
const { Client } = require('pg');

const db = new Client({
  user: 'xxx',
  host: 'xxx',
  database: 'xxx',
  password: 'xxx',
  port: xxx,
})

db.connect(err => {
  if (err) {
    console.error('connection error', err.stack)
  } else {
    console.log('connected')
  }
})
Then, to execute a query, I do this:
db.query(MY_REQUEST, function (err, data) {
  if (err) throw err;
  res.render('hello/world', {
    title: 'Hello',
    data: data.rows
  });
});
It all works perfectly. But after several minutes without using my website, my connection to the db times out, and I get the following error:
node:events:355
throw er; // Unhandled 'error' event
^
Error: Connection terminated unexpectedly
at Connection.<anonymous> (/usr/src/app/node_modules/pg/lib/client.js:132:73)
at Object.onceWrapper (node:events:484:28)
at Connection.emit (node:events:378:20)
at Socket.<anonymous> (/usr/src/app/node_modules/pg/lib/connection.js:58:12)
at Socket.emit (node:events:378:20)
at TCP.<anonymous> (node:net:665:12)
Emitted 'error' event on Client instance at:
at Client._handleErrorEvent (/usr/src/app/node_modules/pg/lib/client.js:319:10)
at Connection.<anonymous> (/usr/src/app/node_modules/pg/lib/client.js:149:16)
at Object.onceWrapper (node:events:484:28)
[... lines matching original stack trace ...]
at TCP.<anonymous> (node:net:665:12)
How can I reconnect automatically when the connection is cut or when a request fails?
You should attach an error handler in order to prevent the unhandled error from crashing your app. It's as simple as:
db.on('error', e => {
  console.error('DB error', e);
});
As to why the error happens, we would need more details; it looks like it could be a connection reset due to an idle timeout.
You can create a function that checks whether you are still connected to the database before you continue with your main logic.
This middle function should check the connection status and reconnect if necessary; run it and wait for its result before any database-related work, and only then continue using the database.
Preferably, make this middle function an async function that returns a promise, and await it wherever you use it; a sketch follows.
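A minimal sketch of that middle function, assuming the same 'pg' Client as in the question (the helper name and reconnect policy are illustrative, not part of pg's API):
const { Client } = require('pg');

let db = null;

async function ensureConnected() {
  if (db) return db;
  const client = new Client({ /* user, host, database, password, port */ });
  // Drop the cached client when the connection dies so the next call reconnects.
  client.on('error', () => { db = null; });
  client.on('end', () => { db = null; });
  await client.connect();
  db = client;
  return db;
}

// Usage: await the helper before every query.
async function runQuery(sql) {
  const client = await ensureConnected();
  const { rows } = await client.query(sql);
  return rows;
}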
I am trying to connect CockroachDB and Node.js using the pg driver. I can establish a connection successfully, but when querying tables it only works when I prefix the table names with the database name; otherwise it throws a relation does not exist error, even though I specify the database name when establishing the connection.
The code that I am using to establish the DB connection:
var pg = require('pg');

var config = {
  user: 'root',
  host: 'localhost',
  database: 'testDB',
  port: 26257
};

var pool = new pg.Pool(config);
const client = await pool.connect();
This line works fine because the table name is prefixed with the database name:
const response = await client.query('select * from testDB.test');
Executing this line raises the following error:
const response = await client.query('select * from test');
(node:12797) UnhandledPromiseRejectionWarning: error: relation "test" does not exist
at Parser.parseErrorMessage (/Users/naveenkumar/Ucars/node-cockroachDB/node_modules/pg-protocol/dist/parser.js:278:15)
at Parser.handlePacket (/Users/naveenkumar/Ucars/node-cockroachDB/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/Users/naveenkumar/Ucars/node-cockroachDB/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.<anonymous> (/Users/naveenkumar/Ucars/node-cockroachDB/node_modules/pg-protocol/dist/index.js:10:42)
at Socket.emit (events.js:314:20)
at addChunk (_stream_readable.js:304:12)
at readableAddChunk (_stream_readable.js:280:9)
at Socket.Readable.push (_stream_readable.js:219:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
at TCP.callbackTrampoline (internal/async_hooks.js:123:14)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:12797) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:12797) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Any kind of help is appreciated. Thanks in advance :)
EDIT: The database name should be all lowercase: unquoted identifiers are normalized to lowercase, so CREATE DATABASE TeSt actually creates a database named test. See https://www.cockroachlabs.com/docs/stable/keywords-and-identifiers.html#rules-for-identifiers
defaultdb> CREATE DATABASE TeSt;
CREATE DATABASE
Time: 166.382ms
defaultdb> SHOW DATABASES;
  database_name | owner
----------------+--------
  defaultdb     | root
  test          | lauren
  testdb        | lauren
With CockroachDB v20.2.2 and pg v8.5.1, I'm not able to reproduce the issue. The below works as expected:
const { Pool } = require("pg");

const config = {
  ...
  database: "testdb",
  ...
};

const pool = new Pool(config);
;(async function () {
  const client = await pool.connect();
  try {
    const res = await client.query("select * from test_table");
    console.log(res.rows);
  } finally {
    client.release(); // return the pooled client rather than closing the connection
  }
})()
TL;DR: Can't connect to an Atlas cluster even after doing exactly what the docs said.
Hi, so I read the getting-started docs for Atlas and everything seemed nice & easy. I followed the steps, created a free cluster, whitelisted my IP, and then tried to connect using their sample app:
const { MongoClient } = require("mongodb");
// Replace the following with your Atlas connection string
const url = "mongodb+srv://<username>:<password>@clustername.mongodb.net/test?retryWrites=true&w=majority&useNewUrlParser=true&useUnifiedTopology=true";
const client = new MongoClient(url);

async function run() {
  try {
    await client.connect();
    console.log("Connected correctly to server");
  } catch (err) {
    console.log(err.stack);
  } finally {
    await client.close();
  }
}

run().catch(console.dir);
But the following error occurred when I executed it with node connect.js:
PS C:\Users\marjo\Documents\mongoDB Atlas> node connect
(node:11352) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
MongoNetworkError: failed to connect to server [remote-doc-shard-00-02-otc5a.mongodb.net:27017] on first connect [MongoError: bad auth Authentication failed.
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\auth\auth_provider.js:46:25
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\auth\scram.js:240:11
at _callback (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:349:5)
at Connection.messageHandler (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:378:5)
at Connection.emit (events.js:315:20)
at processMessage (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connection.js:384:10)
at TLSSocket.<anonymous> (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connection.js:553:15)
at TLSSocket.emit (events.js:315:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:273:9) {
ok: 0,
code: 8000,
codeName: 'AtlasError'
}]
at Pool.<anonymous> (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\topologies\server.js:438:11)
at Pool.emit (events.js:315:20)
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\pool.js:561:14
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\pool.js:1008:9
at callback (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:97:5)
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:396:21
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\auth\auth_provider.js:66:11
at C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\auth\scram.js:240:11
at _callback (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:349:5)
at Connection.messageHandler (C:\Users\marjo\Documents\mongoDB Atlas\node_modules\mongodb\lib\core\connection\connect.js:378:5)
I tried changing the connection string to the one from Atlas (it differed slightly from the one in the docs):
const uri = "mongodb+srv://Marjo:<password>@remote-doc-otc5a.mongodb.net/<dbname>?retryWrites=true&w=majority";
But still the same result. My password had a ! character, so I put %21 in its place. I also filled in the cluster name (Remote-Doc) and the database name (test), but it still failed.
I'd appreciate it if you could help me!
I think you are having a problem with the parsing of your password; maybe it has special characters.
The best way to handle this is to change the way you connect so that the user and password are passed as options.
You can follow the docs and change your MongoClient connection to something like this (note that this snippet uses the legacy, pre-2.x driver API):
const { MongoClient, Server } = require('mongodb');

const mongoclient = new MongoClient(new Server("remote-doc-otc5a.mongodb.net", 27017));

// Listen for when the mongoclient is connected
mongoclient.open(function (err, mongoclient) {
  // Then select a database
  const db = mongoclient.db("dbname");
  // Then you can authorize yourself
  db.authenticate('username', 'password', (err, result) => {
    // On authorized, result === true
    // Not authorized, result === false
    // If authorized, you can use the database in the db variable
  });
});
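With the 3.x driver shown in the question's stack trace, a closer equivalent (a sketch, using the auth option documented for that driver; USERNAME and PASSWORD are placeholders) would be:
const { MongoClient } = require('mongodb');

const client = new MongoClient(
  'mongodb+srv://remote-doc-otc5a.mongodb.net/test?retryWrites=true&w=majority',
  {
    auth: { user: 'USERNAME', password: 'PASSWORD' }, // credentials kept out of the URL
    useNewUrlParser: true,
    useUnifiedTopology: true
  }
);

client.connect().then(() => console.log('connected'));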
And with mongoose you can do something like this:
mongoose.connect('mongodb+srv://remote-doc-otc5a.mongodb.net/test?retryWrites=true&w=majority', {
  user: 'USERNAME',
  pass: 'PASSWORD',
  useNewUrlParser: true,
  useUnifiedTopology: true
})
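Alternatively, if you keep the credentials in the URL, you can percent-encode them with the standard encodeURIComponent so special characters like ! are escaped correctly (a sketch; the password is a placeholder):
const user = encodeURIComponent('Marjo');
const pass = encodeURIComponent('p@ssw0rd!'); // placeholder containing special characters
const uri = `mongodb+srv://${user}:${pass}@remote-doc-otc5a.mongodb.net/test?retryWrites=true&w=majority`;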
Also, check if you are not using the account password instead of the cluster/database password.
You can follow this tutorial to check if you are using the correct one: MongoDB Atlas Setup - Digital Ocean.
I just changed the Atlas password to a simple one with no special characters, and the connection worked! I feel ashamed now
I am trying to connect to my MariaDB database using Node.js, based on this tutorial:
const mariadb = require('mariadb');

const pool = mariadb.createPool({
  host: 'myhost.com',
  user: 'root',
  password: 'password',
  database: 'db_p',
  connectionLimit: 2
});

async function asyncFunction() {
  let conn;
  try {
    console.log('establishing connection')
    conn = await pool.getConnection();
    console.log('established')
    const rows = await conn.query("SHOW TABLES");
    console.log(rows);
  } catch (err) {
    console.log(err)
    throw err;
  } finally {
    if (conn) return conn.end();
  }
}
but all I get is this error:
establishing connection
{ Error: retrieve connection from pool timeout
at Object.module.exports.createError (/Users/jan/Developer/microservice/node_modules/mariadb/lib/misc/errors.js:55:10)
at rejectTimeout (/Users/jan/Developer/microservice/node_modules/mariadb/lib/pool.js:267:16)
at Timeout.rejectAndResetTimeout [as _onTimeout] (/Users/jan/Developer/microservice/node_modules/mariadb/lib/pool.js:287:5)
at ontimeout (timers.js:486:15)
at tryOnTimeout (timers.js:317:5)
at Timer.listOnTimeout (timers.js:277:5)
fatal: false,
errno: 45028,
sqlState: 'HY000',
code: 'ER_GET_CONNECTION_TIMEOUT' }
(node:76515) UnhandledPromiseRejectionWarning: Error: retrieve connection from pool timeout
at Object.module.exports.createError (/Users/jan/Developer/microservice/node_modules/mariadb/lib/misc/errors.js:55:10)
at rejectTimeout (/Users/jan/Developer/microservice/node_modules/mariadb/lib/pool.js:267:16)
at Timeout.rejectAndResetTimeout [as _onTimeout] (/Users/jan/Developer/microservice/node_modules/mariadb/lib/pool.js:287:5)
at ontimeout (timers.js:486:15)
at tryOnTimeout (timers.js:317:5)
at Timer.listOnTimeout (timers.js:277:5)
(node:76515) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:76515) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I have programmed in JS for the last two years, but I'm new to Node.js, and I thought this should work out of the box. Anyone?
The problem was that in phpMyAdmin I hadn't added my home IP address, so I couldn't connect. For those who are just starting out: you can create multiple users with the same name and password, which gives you access from multiple IPs (like localhost, ::1, or 127.0.0.1, which are effectively the same host, but each still needs its own entry, just to be safe).
I added an additional user with the same credentials for my IP, and it solved the problem.
For others with the same error message, particularly if the connection works the first few times but not after that: the error can happen if you don't end the connection with conn.end(). Not the OP's problem, but perhaps others'.
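A minimal sketch of that pattern with the mariadb connector (same pool as in the question; ending the connection in finally guarantees it goes back to the pool even when the query throws):
let conn;
try {
  conn = await pool.getConnection();
  const rows = await conn.query('SELECT 1');
  console.log(rows);
} finally {
  if (conn) conn.end(); // for a pooled connection, end() releases it back to the pool
}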
For me, the problem was solved by adding port: 3307 as another pool-creation parameter.
Port 3306 is the default, but some servers are configured to listen on 3307 instead.