Docker - Nodejs to Mongodb connection works but collection is null

I am trying to connect from a NodeJS Docker container to a MongoDB Docker container. I can access the MongoDB from a client such as RoboMongo.
In NodeJS, however, the connection appears to succeed, but the db object is null and I get an error when trying to get a collection.
url = 'mongodb://127.0.0.1:27017/mydb';

router.get('/api/v1/test', function (req, res, next) {
  MongoClient.connect(url, function(err, db) {
    var collection = db.collection('myApptbl');
  });
});
I am getting the below error in my docker logs.
/usr/src/myapp/node_modules/mongodb/lib/mongo_client.js:225
throw err
^
TypeError: Cannot read property 'collection' of null
at /usr/src/myapp/server.js:52:26
at connectCallback (/usr/src/app/node_modules/mongodb/lib/mongo_client.js:315:5)
at /usr/src/myapp/node_modules/mongodb/lib/mongo_client.js:222:11
at _combinedTickCallback (internal/process/next_tick.js:67:7)
at process._tickCallback (internal/process/next_tick.js:98:9)
Can you please help or provide suggestions on why I am getting this error?

Your connection string mongodb://127.0.0.1:27017/mydb is telling your NodeJS app to connect to MongoDB inside its own container, because 127.0.0.1 (localhost) refers to the container itself.
You want to tell it to connect to the MongoDB container instead. Depending on how you've started that container, the connection string will look something like mongodb://mongodb:27017/mydb.
If you've started your containers with something like docker-compose, you should be able to use the name of the service:
...
services:
  mongodb:        # <-- you can use this name in your connection string
    image: xxx
  your-node-app:
    image: xxx
If you're not using docker-compose and are using --link, you will need to reference the name of the link in your connection string.
E.g.,
docker run -d --name mongodb mongo
docker run -d --link mongodb:mongodb your-node-app
Note: The reason this works is that Docker adds an entry to the container's /etc/hosts file that points the service name / link name to the private IP address of the linked container.
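For completeness, here is a minimal sketch of the route above using the service name and an error check, assuming the Compose service is named mongodb as in the snippet (db is null whenever connect fails, which is exactly what produced the TypeError above):

var MongoClient = require('mongodb').MongoClient;

// "mongodb" is the Compose service / link name, not localhost
var url = 'mongodb://mongodb:27017/mydb';

router.get('/api/v1/test', function (req, res, next) {
  MongoClient.connect(url, function (err, db) {
    if (err) return next(err); // without this check, db is null on failure
    var collection = db.collection('myApptbl');
    collection.find({}).toArray(function (err, docs) {
      db.close();
      if (err) return next(err);
      res.json(docs);
    });
  });
});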

Related

Cannot connect to Cloud SQL Proxy via Docker - Error: connect ENOENT

I can't seem to connect to Cloud SQL from my Docker container.
First, here are my file paths: https://imgur.com/a/Nmx41o6
Dockerfile.dev:
FROM node:14-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
Dockerfile.sql:
RUN mkdir /cloudsql
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY ./cloud_sql_proxy ./
COPY ./service_acct.json ./
version: '3.8'
services:
  cloud-sql-proxy:
    build:
      context: .
      dockerfile: DockerFile.sql
    volumes:
      - /cloudsql:/cloudsql
      - /service_acct.json:/app/service_acct.json
    command: ./cloud_sql_proxy -dir=/cloudsql -instances=test-game-199281:us-east1:testgame -credential_file=/app/service_acct.json
  app:
    build:
      context: .
      dockerfile: DockerFile.dev
    env_file:
      - ./.env
    volumes:
      # since we copied root into host in dockerfile, we can map the whole directory with app.
      - "./src:/app/src"
    ports:
      - "5000:5001"
    command: sh -c "npm run dev"
My node index.js file. I don't think there is anything wrong with it; maybe I am entering the wrong connection string format? The password and user are correct as far as I can tell.
const express = require('express');
const { Pool, Client } = require('pg')
const app = express();
require('dotenv').config({path:'../.env'})

const pool = new Pool({
  user: 'postgres',
  host: '/cloudsql/test-game-199281:us-east1:testgame',
  database: 'TestDB',
  password: '********',
  port: 5432
})

app.get('/', (req, res) => {
  pool.connect(function(err, client, done) {
    if (err) {
      console.log("not able to get connection " + err);
      res.status(400).send(err);
      return
    }
    client.query("SELECT * FROM company", [1], (err, result) => {
      done();
      if (err) {
        console.log(err);
        res.status(400).send(err);
      }
      res.status(200).send(result.rows);
    });
  });
});
Error I get:
Hello world listening on port 5001
app_1  | Error: connect ENOENT /cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432
app_1  |     at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
app_1  |   errno: -2,
app_1  |   code: 'ENOENT',
app_1  |   syscall: 'connect',
app_1  |   address: '/cloudsql/test-game-199281:us-east1:testgame/.s.PGSQL.5432'
app_1  | }
SOLVED: I switched to TCP. Screw Unix sockets, so confusing.
You've instructed the Cloud SQL Auth proxy to listen on 0.0.0.0:5432 with the flag -instances=test-game-199281:us-east1:testgame=tcp:0.0.0.0:5432.
But then you've instructed your app to connect to /cloudsql/<INSTANCE_CONNECTION_NAME>, which is a Unix socket.
You need to pick one, and make sure you are consistent between your app and the proxy.
If you use TCP, you'll have to map the port in the container to a port on your machine (or somewhere in your docker-compose network where your app can reach it), and update your app to connect to 127.0.0.1 (or whatever the proxy's Docker IP is in that network). You can check out more on docker-compose networking here.
If you use Unix Domain sockets, you'll need to volume share the folder containing the socket so that both apps can access it. So if it's in /cloudsql, you'll need to share /cloudsql between your proxy container and your app container. You can check out more on docker-compose volumes here.
Cloud SQL's Managing Database Connections page has examples of connecting with both TCP and Unix domain sockets.
You can try connecting via the service name, cloud-sql-proxy:5432, instead of localhost:5432 when connecting between different containers.
Each container has its own isolated network namespace, so you cannot use localhost: it refers to the container's own loopback, not the other container.
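As a minimal sketch of that TCP variant, assuming the proxy command is changed to end in =tcp:0.0.0.0:5432 and the Compose service keeps the name cloud-sql-proxy from the file above (DB_PASSWORD is a hypothetical env var):

const { Pool } = require('pg');

// TCP variant: reach the proxy container by its Compose service name.
const pool = new Pool({
  user: 'postgres',
  host: 'cloud-sql-proxy',           // service name, not localhost
  database: 'TestDB',
  password: process.env.DB_PASSWORD, // hypothetical env var for the password
  port: 5432
});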
The ENOENT error means that the connector cannot find the host, in this case the Unix socket path, to connect to your database.
In your docker-compose file, the Cloud SQL Proxy is listening via TCP, but your code is trying to connect via a Unix socket. Your code can't connect because that socket doesn't exist.
The solution is to configure your proxy to create and listen to a Unix Socket. Change the command to:
/cloud_sql_proxy -instances=INSTANCE_CONNECTION_NAME -dir=/cloudsql -credential_file=/tmp/keys/keyfile.json
No need to expose any ports to connect via Unix sockets. I also suggest building your pool connection with a config object, as specified by pg-pool, rather than a DB URL, to avoid a possible issue where you cannot connect to a Unix socket using a connectionString URL.
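To make the socket visible to the app, both containers also need to share the directory that holds it. A sketch of the volume sharing this implies, using a hypothetical named volume (service names as in the compose file above):

services:
  cloud-sql-proxy:
    # ...build and command as above, with -dir=/cloudsql and no =tcp: suffix
    volumes:
      - cloudsql:/cloudsql
  app:
    # ...build as above
    volumes:
      - cloudsql:/cloudsql   # the app now sees the proxy's socket
volumes:
  cloudsql: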

Docker: Not able to connect to Redis when using docker run instead of docker-compose up

I'm using Docker Toolbox on Windows Home edition.
I'm trying to use Node with Redis using docker-compose. It works well when I run the image using docker-compose up (in the same source directory), but when I try to run it using docker run -it myusername/myimage, my Node app isn't able to connect to Redis,
throwing:
Error: Redis connection to redis-server:6379 failed - getaddrinfo ENOTFOUND redis-server
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26) {
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'redis-server'
}
which I believe is because my Node app is not able to find Redis. Also, even though the app is running when I use docker-compose up, I'm not able to access it on the respective port, i.e. localhost:3000.
This is my docker-compose.yml:
version: '3'
services:
  my_api:
    build: .
    ports:
      - "3000:3000"
    image: my_username/myimage
    links:
      - redis-server:redis-server
  redis-server:
    image: "redis:alpine"
There are two issues I'm facing, and I believe they are interrelated.
EDIT
Could this be because of a virtualization issue on Windows Home edition, since it doesn't use Hyper-V? I've only just tried my hand at Docker, so I don't know much about it, but David's answer makes a lot of sense: it may be because of the various networks, and I need to connect to the right bridge.
Here is what I get when I do docker network ls:
NETWORK ID     NAME                      DRIVER    SCOPE
5802daa117b1   bridge                    bridge    local
7329d018df1b   collect_api_mod_default   bridge    local
5491bfee5551   host                      host      local
be1353789426   none                      null      local
When you run the whole stack in the same docker-compose.yml file, Compose automatically creates a Docker network for you, and this makes cross-service DNS requests work.
If you are trying to manually docker run a container, and you don't specify a --net option at all, you get a thing Docker calls the default bridge network, which is distinctly less useful. You need to make sure your container is attached to the same Docker-internal network as the Redis server.
You can run docker network ls to get a listing of Docker networks; given that docker-compose.yml file, there will probably be one named something like source_directory_default (in the listing above it is collect_api_mod_default). Take that name and pass it to your docker run command (before the image name):
docker run --net source_directory_default -p 3000:3000 my_username/my_api
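With the network listing from the question, that command would presumably be (using the image name from the compose file):

docker run --net collect_api_mod_default -p 3000:3000 my_username/myimage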
Here is a working index.js for the latest versions of Node and Redis (the node-redis v4 API), both running with Docker. Hope it helps:
const express = require('express');
const redis = require('redis');

const app = express()

// node-redis v4: the host and port come from the URL
// (6379 is the Redis default, so it could be omitted)
const client = redis.createClient({
  url: 'redis://redis-server:6379' // redis:// + docker-compose service name
});

client.on('error', (err) => console.log('Redis Client Error', err));
client.on('connect', async () => {
  await client.set('visits', 0)
  console.log('Redis Client Connected');
});

client.connect()

app.get('/', async (req, res) => {
  const value = await client.get('visits');
  await client.set('visits', parseInt(value) + 1);
  res.send('Number of visits: ' + value);
});

app.listen(8081, () => {
  console.log('Listening on port 8081')
})

Connecting to MongoDB in Docker from external app

Is it possible to connect to a docker container running a MongoDB image from an external nodejs application running locally? I've tried connecting via localhost:27017. Here's the docker compose file I'm using:
version: '3'
services:
  mongodb:
    image: 'bitnami/mongodb:3.6.8'
    ports:
      - "27017:27017"
    environment:
      - MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD
      - MONGODB_USERNAME=$MONGODB_USERNAME
      - MONGODB_PASSWORD=$MONGODB_PASSWORD
      - MONGODB_DATABASE=$MONGODB_DATABASE
    volumes:
      - /data/db:/bitnami
I try connecting to it with the following url, with no luck:
mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@localhost:27017
EDIT: Connecting via mongodb://localhost:27017 works, but the authenticated url errors out. I printed out the resulting string and there's nothing obviously wrong with it. I verified that the username and password match the users inside mongo in the docker container.
app.listen(port, () => {
  console.log(`Example app listening on port ${port}!`);
  const url = (() => {
    if (process.env.MONGODB_USERNAME && process.env.MONGODB_PASSWORD) {
      return `mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@localhost:27017/`;
    }
    console.log('could not find environment vars for mongodb');
  })();
  MongoClient.connect(url, (err, client) => {
    if (err) {
      console.log('DB connection error');
    } else {
      console.log("Connected successfully to server");
      client.close();
    }
  });
});
If the external nodejs application is also running in a docker container, then you need to link the containers. Here is an example of a docker run command that links containers. I added environment variables to illustrate what host name and port you would use from inside the container.
docker run -d -it -e DEST_PORT=27017 -e DEST_HOST='mongodb' --link mongodb external-application:latest
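For illustration, the linked app could consume those environment variables when building its connection string; a small sketch using the variable names from that command:

// DEST_HOST and DEST_PORT come from the docker run -e flags above
const host = process.env.DEST_HOST || 'localhost';
const port = process.env.DEST_PORT || 27017;
const url = `mongodb://${host}:${port}/`;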
It's important to always check the output of docker logs <container-name> --tail 25 -f. From my point of view, this looks like an issue with permissions on the '/bitnami/mongodb' directory. Check out sameersbn's comment on how to fix this permission issue.
I'll assume it's the compose specification then. Try the following configuration:
environment:
  MONGODB_ROOT_PASSWORD: $MONGODB_ROOT_PASSWORD
  MONGODB_USERNAME: $MONGODB_USERNAME
  MONGODB_PASSWORD: $MONGODB_PASSWORD
  MONGODB_DATABASE: $MONGODB_DATABASE
volumes:
  - '/data/db:/data/db'
The issue turned out to be that I had changed the password in MONGODB_PASSWORD (it had an @ in it, so I thought it would interfere with the string parsing, and consequently changed it). The problem is that when the container restarts it references the same volume (as it should), so the users were never updated, and as a result I was logging in with the wrong credentials.
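Since an @ inside the password is exactly what breaks naive interpolation into a mongodb:// url, here is a hedged sketch of a guard, using only standard JavaScript (variable names follow the question):

// Percent-encode credentials so characters like '@' ('%40') survive URI parsing
const username = encodeURIComponent(process.env.MONGODB_USERNAME);
const password = encodeURIComponent(process.env.MONGODB_PASSWORD);
const url = `mongodb://${username}:${password}@localhost:27017/`;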

MongoDB auto reconnect with docker + node.js + mongodb

In one container (container1) I have a running mongod daemon. This container is linked to another container with node.js (container2).
When I start containers everything works fine:
docker start container1
docker start container2
When I restart container1, the node.js script in the second container loses its connection to mongodb and can't reconnect, since the IP of the mongodb server has changed.
How can I configure node.js to reconnect using new IP of the mongodb server?
Update: Simplified code, that stops working after container1 is restarted:
var http = require('http')
  , mongodb = require('mongodb');

mongodb.MongoClient.connect('mongodb://username:password@container1:27017/dbname', {uri_decode_auth: true, server: {auto_reconnect: true}}, function(err, db) {
  http.createServer(function(request, response) {
    // Do some work with db and send response
  }).listen(config.port);
});
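A hedged sketch of one common approach: put both containers on a user-defined Docker network instead of using --link. Containers on such a network resolve each other by name through Docker's embedded DNS, so when container1 restarts with a new IP, the hostname container1 still resolves and auto_reconnect can find it (the network and image names below are assumptions, not from the question):

docker network create appnet
docker run -d --name container1 --net appnet mongo          # assuming the official image
docker run -d --name container2 --net appnet my-node-image  # hypothetical app image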

bluemix docker container bind to mongodb ('MongoError', message: 'connect ENETUNREACH')

I have been trying to connect my Docker node.js app to a MongoDB service, to no avail.
I've created a WAS Liberty bridging application ($BRIDGE_APP), with no WAR or code, that is bound to a MongoDB service. It is running with good status.
I have to say that the same code runs correctly in my local Docker container. I am using mongoose to connect to mongo. The only difference in the code is the way of resolving the mongo connection string:
var DB_CONNECT_STRING = 'mongodb://app:password@127.0.0.1:27017/appname';
if (custom.areWeOnBluemix() && custom.doWeHaveServices())
  DB_CONNECT_STRING = custom.getMongoConnectString();
...
console.log('going to connect to mongo#: ' + DB_CONNECT_STRING);
var db = mongoose.createConnection(DB_CONNECT_STRING);
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function (callback) {
  console.log('... db open !!!');
});
I push my image to bluemix with no issues:
ice --local push $REGISTRY/$ORG/$CONTAINER_NAME
I then check the env vars:
cf env $BRIDGE_APP
System-Provided:
{
  "VCAP_SERVICES": {
    "mongodb-2.4": [
      {
        "credentials": {.....
and then I run my container and bind an ip:
ice run --bind $BRIDGE_APP --name $CONTAINER_NAME -p $PORT $REGISTRY/$ORG/$CONTAINER_NAME:latest
sleep 12
ice ip bind $IP $CONTAINER_NAME
...this is almost completely by the book, but for some reason when I check the logs I'm always getting:
ice logs $CONTAINER_NAME
...
going to connect to mongo#: mongodb://c61deb58-45ea-41....
Example app listening at http://0.0.0.0:8080
connection error: { [MongoError: connect ENETUNREACH] name: 'MongoError', message: 'connect ENETUNREACH' }
I have also tried the mongolab service, with no success.
Has anybody tried this type of setup and can provide me an additional clue about what's missing here?
Thanking you in advance.
It has been my experience that networking is not reliable in IBM Containers for about 5 seconds at startup. Try adding a "sleep 10" to your CMD or ENTRYPOINT, or set it up to retry for X seconds before giving up.
Once the networking comes up, it has been reliable for me. But the first few seconds of a container's life have had troubles with DNS, binding, and outgoing traffic.
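A minimal sketch of that retry approach, built around the mongoose call from the question (the one-second delay and attempt count are arbitrary assumptions):

// Retry the initial connection to ride out the flaky first seconds of networking.
function connectWithRetry(attemptsLeft) {
  var db = mongoose.createConnection(DB_CONNECT_STRING);
  db.on('error', function (err) {
    console.error('connection error:', err);
    if (attemptsLeft > 0) {
      setTimeout(function () { connectWithRetry(attemptsLeft - 1); }, 1000);
    }
  });
  db.once('open', function () {
    console.log('... db open !!!');
  });
}
connectWithRetry(30); // roughly 30 seconds of attempts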
I gave a similar answer to a similar question recently. Perhaps your problem is the same as the other poster's.