Dockerized MongoDB Replica Set Access Outside Container - node.js

Short Description of the Setup
I have a one-member replica set (why? see bottom). The mongo service is running inside a Docker container. In the replica set config each member is referenced by a hostname; since I am using a Docker container, I use the container's name (mongodb0) as the hostname. The MongoDB port is forwarded to the host machine.
Problem
For debugging purposes I would like to access the db from outside the container. Just using mongosh works, although it does not seem to recognize the instance as part of the replica set. Connecting a Node driver to mongodb0 does not work at all (to be expected, since the hostname is only resolvable inside the container). However, I would expect to be able to connect to localhost:27017, the forwarded port (that's why mongosh is working), but it throws the following error:
MongoServerSelectionError: getaddrinfo ENOTFOUND mongodb0
...
Note: I am using localhost in my Node driver connection string.
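For reference, the failing attempt looks roughly like this (a sketch; driver defaults and no auth assumed):
const { MongoClient } = require('mongodb');
// localhost:27017 is the port forwarded out of the container (no auth assumed).
const client = new MongoClient('mongodb://localhost:27017');
client.connect().catch((err) => {
  // The driver discovers the member hostname (mongodb0) from the replica set
  // config and cannot resolve it outside the container:
  console.error(err); // MongoServerSelectionError: getaddrinfo ENOTFOUND mongodb0
});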
Why am I using a replica set with only one member? Only a replica set allows me to use change streams.

Related

Prisma won't connect to Postgres database on Digital Ocean Kubernetes

I'm trying to run an application in a kubernetes pod that connects to a managed database using Digital Ocean's services. I'm sure the connection string is correct. Running the pod locally on my desktop connects to the database just fine, but when I try to connect from my cloud-managed kubernetes instance I get the error ECONNREFUSED.
Has anyone ever run into something like this before? (I'm using Prisma and Node.js)
(Both the kubernetes cluster and the postgres database are on Digital Ocean)
Here are the pod logs (I turned on all the prisma logging I could)
prisma:info Starting a postgresql pool with 9 connections.
prisma:info Started http server on http://127.0.0.1:42535
PrismaClientKnownRequestError3 [PrismaClientKnownRequestError]:
Invalid `prisma.user.findUnique()` invocation:
connect ECONNREFUSED ::1:42535
at cb (/app/node_modules/@prisma/client/runtime/index.js:36494:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async login (/app/src/gql/Users/resolvers.ts:267:16) {
code: 'ECONNREFUSED',
clientVersion: '2.30.3',
meta: undefined
So, that's what happens when I try to run the "login" mutation, just for an example. Prisma tries to find the user but - it can't connect - and this is the resulting error message.
Again, it's absolutely fine when I run locally in a kubernetes pod on my local machine (using Docker Desktop). I'm sure the user/pass/port is fine because I connect fine with pgAdmin. I'm sure the connection string is correct because it works fine in my local cluster. I'm looking for something - anything - to give a clue why I can't connect from Digital Ocean! I have also tried other database providers with the same result; the error message is always the same, and the only thing that changes is the port on the prisma http server (this is desperation, haha). Does anyone have any insight into what could be going on? What could be different that allows me to connect from Docker Desktop's kubernetes but prevents it on Digital Ocean's cluster?
Found the "solution". Upgraded to prisma v3.
If you can't do that, there's more info here: https://github.com/prisma/prisma/issues/9899

Connecting to documentDB using mongodb 4.x node driver with port forwarding not working

I have locally set up port forwarding to DocumentDB that works successfully with mongodb driver versions 3.x. When I update the mongodb package to 4.x I get a timeout error with the reason ReplicaSetNoPrimary.
The code is very simple:
const MongoClient = require('mongodb').MongoClient;
const client = new MongoClient('mongodb://xxxx:xxxx@localhost:27017');
client.connect(function(err) {
if (err) {
console.log(err);
return;
}
const db = client.db('testdb');
console.log("Connected successfully to server");
client.close();
});
Has anyone been able to connect to DocumentDB locally using port forwarding with the 4.x driver? Am I missing some sort of config options? (Keep in mind I have disabled all TLS and everything else to make it simpler to connect, and, as previously stated, I connect successfully when using the mongodb 3.x packages.)
When connecting to a replica set, the driver:
1. uses the host in the connection string as a seed to make an initial connection
2. runs the hello command (formerly isMaster) on that initial connection to get the full list of host:port replica set members and their current status
3. drops the initial connection
4. connects to each of the members discovered in step 2
5. during operations, automatically monitors all of the members, sending operations to the current primary even if a different node becomes primary
In your scenario, even though you are connecting to localhost, the initial connection returns the host:port pairs that are included in the replica set configuration.
The reason this only now became a problem is that the MongoDB driver specifications changed to use the unified topology by default.
Unified topology permits the driver to automatically detect if it is connecting to a standalone instance, replica set, or sharded cluster, which simplifies the connection process and reduces the administrative overhead required when changing how the database is deployed.
Since your connection is failing, I assume the hostname:port pairs listed in the replica set config are either not resolvable or not reachable from the test host.
To resolve this situation, either:
make it so this machine can resolve the hostnames via DNS or a hosts file, and permit connections to those ports through any firewalls, or
use the directConnection=true connection option to disable topology discovery, as in the sketch below.
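A minimal sketch of the second option with the Node driver (host, port, and omitted credentials are placeholders):
const { MongoClient } = require('mongodb');
// directConnection=true makes the driver talk only to the seed host and
// skip replica set topology discovery:
const client = new MongoClient('mongodb://localhost:27017/?directConnection=true');
client.connect()
  .then(() => console.log('connected to the tunneled host'))
  .catch(console.error);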

Connecting to a mongoDB with a TCP prefix on a NodeJS application

I created a NodeJS app that connects with a MongoDB and it seems to work locally.
Now, after hosting the NodeJS application on a remote machine, I am trying to connect to a mongoDB that was already created on that machine. When I print out some of the environment variables, the only one that seems relevant is:
MONGODB_PORT: 'tcp://172.30.204.90:27017',
I tried connecting like I usually do with
mongoose.connect('mongodb://localhost:27017/metadata/')
and replacing it with mongoose.connect('tcp://172.30.204.90:27017/metadata'), but I get an error that says my URI needs to start with 'mongodb'.
So I tried replacing it with mongoose.connect('mongodb://172.30.204.90:27017/metadata') and it no longer throws any error. But on the MongoDB side I don't see any new connections, and my app does not start up on the machine. What should I be putting in the URI?
Your URI should indeed start with mongodb:
mongoose.connect('mongodb://username:password@host:port/database?options...');
See this page for more information: https://docs.mongodb.com/manual/reference/connection-string/
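For instance, with the host from the question (a sketch; assumes a recent Mongoose version where connect() returns a promise, and that no auth is required):
const mongoose = require('mongoose');
// Host and database are taken from the question; add username/password if auth is enabled.
mongoose.connect('mongodb://172.30.204.90:27017/metadata')
  .then(() => console.log('connected'))
  .catch((err) => console.error('connection failed:', err));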
Did you try to connect to the database from the CLI? Or telnet to it to check that the connection isn't blocked?

MongoDB Connection EC2

I just set up a MongoDB instance running in EC2 using the Bitnami MEAN stack. I am trying to connect to the MongoDB instance in my node application, but I don't know what the URL path would be.
I am familiar with paths that look like this:
mongodb://username:password@candidate.37.mongolayer.com:port/database
But I am unclear how I would figure out the equivalent path for my EC2 instance. I found that there is a mongodb-27017.sock file in one of the directories, but the below didn't work.
mongodb://{USERNAME}:{PASSWORD}@{EC2LINK}/stack/mongodb/tmp/mongodb-27017.sock/{DATABASENAME}
Is there any way to figure out what the path is?
Make sure the mongo service is running: service mongod status
Make sure the port is open in the security group (mongo defaults to 27017).
Use this connection URL (same format as you're used to): mongodb://{USERNAME}:{PASSWORD}@{EC2 INSTANCE IP / HOSTNAME}/{DATABASENAME}. See the sketch below.
Note: changing the port would require specifying it in the connection string.
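For example, with the Node driver (every value below is a placeholder):
const { MongoClient } = require('mongodb');
// Substitute your EC2 host and credentials; a non-default port goes
// right after the host, per the note above.
const client = new MongoClient('mongodb://user:pass@ec2-host.compute.amazonaws.com:27017/mydb');
client.connect()
  .then(() => console.log('connected'))
  .catch(console.error);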
Thanks for the help Reut, your suggestions helped me to narrow things down (I wasn't completely off track).
I finally figured out that my issue was that I needed to change the bind_ip config variable in my mongodb.conf file. The bind_ip variable was set (by default) to 127.0.0.1, which prevents remote connections from reaching the db.
I've since changed that to 0.0.0.0 to allow remote connections.
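For reference, the change looks like this (the exact location of mongodb.conf depends on the Bitnami install):
# Listen on all interfaces instead of loopback only.
# Make sure the security group still restricts who can reach port 27017.
bind_ip = 0.0.0.0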

Setup MongoDB to be reachable from remote application

I have a web server that runs MongoDB. It will save some data that a second application, installed on a different computer, needs to be able to query. The server with MongoDB runs Ubuntu and will use Meteor (currently I'm just doing some tests, so I only have MongoDB installed); the other application is a NodeJS script with MongooseJS.
What should I do to setup that instance of MongoDB to be reachable from remote applications?
I'm actually finding it quite hard to understand and find information on the web. I tried
var mongodb = require('mongodb').MongoClient; // require missing from the snippet as posted
var connection = GLOBAL.database.host;
mongodb.connect('mongodb://' + GLOBAL.database.host);
But it's throwing an error Failed to connect to.... :27017
The host is a virtual machine on Koding that I set up to run these tests. How can I make sure Mongo is accessible, and how can I ping it to see if it is responding to my requests?
By default MongoDB is restricted to allow connections only from 127.0.0.1.
The MongoDB configuration file is located at /etc/mongod.conf. In that file you can find the following two lines:
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 127.0.0.1
Follow the instruction in the comment: comment out the bind_ip line (using the # symbol), restart MongoDB, and try again.
Make sure that you can reach your server on port 27017 (the port that MongoDB uses). You'll have to allow it on your server if you have something like iptables, or allow it in any firewall you may have.
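A quick way to check reachability from the client machine is a bare TCP connection (the hostname is a placeholder):
const net = require('net');
// Attempts a raw TCP connection to the MongoDB port; success means the
// network path is open before the driver even gets involved.
const sock = net.connect(27017, 'your-server-host', () => {
  console.log('port 27017 is reachable');
  sock.end();
});
sock.on('error', (err) => console.error('unreachable:', err.message));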
