How to connect NATS server to NodeJS app in Docker

I have a Node app which connects to a NATS server. Below is my docker-compose.yml for the services.
version: "3"
services:
app:
image: node:12.13.1
volumes:
- ./:/app
working_dir: /app
depends_on:
- mongo
- nats
environment:
NODE_ENV: development
ports:
- 3000:3000
command: npm run dev
mongo:
image: mongo
expose:
- 27017
volumes:
- ./data/db:/data/db
nats:
image: 'nats:2.1.2'
# entrypoint: "/gnatsd -DV"
# entrypoint: "/nats-server -p 4222 -m 8888 "
expose:
- "4222"
ports:
- "8222:8222"
hostname: nats-server
app.post('/pets', function(req, res, next) {
  let nats = NATS.connect('nats://nats-server:4222');
  nats.publish('foo', 'Hello World!');
  res.json({ stats: 'success' });
});
The above nodejs snippet gives me:
NatsError: Could not connect to server: Error: getaddrinfo ENOTFOUND nats-server
If I use let nats = NATS.connect() it gives me:
NatsError: Could not connect to server: Error: connect ECONNREFUSED 127.0.0.1:4222
Could you give me some ideas on how to resolve this issue? Thanks.

let nats = NATS.connect('nats://nats:4222');
You need to use the name of the compose service (nats) as the host, together with the internal client port (4222). The hostname: value (nats-server) is not DNS-resolvable from other containers, and 8222 is the NATS monitoring port, not the client port.
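For reference, a minimal sketch of the corrected route under these assumptions (the legacy nats v1.x client API used in the question, and the compose file above):

const NATS = require('nats');

// Connect once at startup rather than on every request.
// 'nats' is the docker-compose service name, 4222 the client port.
const nats = NATS.connect('nats://nats:4222');

nats.on('error', (err) => {
  console.error('NATS connection error:', err);
});

app.post('/pets', function (req, res, next) {
  nats.publish('foo', 'Hello World!');
  res.json({ status: 'success' });
});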

Related

PG Pool is not working inside Docker container

I am running a PERN application and am currently trying to dockerize it. When I run the database as a container and the server and client locally, I have no issues. However, when I containerize the server, client, and database, I am unable to make requests; they result in 404 errors. This is the same behavior that occurs when I pass the pool the wrong port or host, so I'm wondering if I am giving the wrong host and/or port to the pool, or if I should change them when I containerize the server.
This is the Pool instance:
const Pool = require('pg').Pool
const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "localhost",
  port: 4000,
  database: "docker"
})
This is part of the REST API in the server:
const express = require("express")
const router = express.Router()
const pool = require('../database/database.js')

router.get("/:login", async (req, res) => {
  try {
    let loginReq = JSON.parse(decodeURIComponent(req.params.login))
    const user = await pool.query(
      "SELECT user_id,first_name,last_name,email FROM \"user\" where email = $1 and password = $2",
      [loginReq.email, loginReq.password]
    )
    if (user.rows.length) {
      res.json(user.rows[0])
    } else {
      throw new Error('User Not Found')
    }
  } catch (err) {
    res.status(404).json(err.message)
  }
})
These are the Dockerfiles for my client, server, and database.
Client:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["npm","start"]
EXPOSE 3000
Server:
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["node","index.js"]
EXPOSE 5000
Database:
FROM postgres:15.1-alpine
COPY init.sql /docker-entrypoint-initdb.d/
This is my docker-compose.yml
version: "3.8"
services:
client:
build: ./client
ports:
- "3000:3000"
restart: unless-stopped
server:
build: ./server
ports:
- "5000:5000"
restart: unless-stopped
database:
build: ./database
ports:
- "4000:4000"
environment:
- POSTGRES_USER=docker
- POSTGRES_PASSWORD=docker
- POSTGRES_DB=docker
- PGPORT=4000
volumes:
- kurva:/var/lib/postgresql/data
restart: unless-stopped
volumes:
kurva:
I don't understand why the behavior would be different between containerizing the server and running it locally when they all use the same ports. I have tried messing with the host and changing it to 0.0.0.0 but that did not help. Any help would be appreciated!
I found that it was a host issue: I needed to change the host in the pool from localhost to the service name (database), since containers reach each other on the compose network by service name. Additionally, I added depends_on so the server starts after the database service.
This is the updated pg pool:
const pool = new Pool({
  user: 'docker',
  password: 'docker',
  host: "database",
  port: 4000,
  database: "docker"
})
This is the updated docker-compose.yml
version: "3.8"
services:
client:
build: ./client
ports:
- "3000:3000"
restart: unless-stopped
server:
build: ./server
ports:
- "5000:5000"
depends_on:
- database
restart: unless-stopped
database:
build: ./database
ports:
- "4000:4000"
environment:
- POSTGRES_USER=docker
- POSTGRES_PASSWORD=docker
- POSTGRES_DB=docker
- PGPORT=4000
volumes:
- kurva:/var/lib/postgresql/data
restart: unless-stopped
volumes:
kurva:
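As a variation, the host and port can be read from environment variables so the same pool code runs both locally and under Compose. This is only a sketch under that assumption; the variable names follow the libpq convention but are illustrative, not from the original post:

const { Pool } = require('pg')

// Defaults match the Compose setup above; override PGHOST/PGPORT
// (e.g. PGHOST=localhost) to run the server outside Docker.
const pool = new Pool({
  user: process.env.PGUSER || 'docker',
  password: process.env.PGPASSWORD || 'docker',
  host: process.env.PGHOST || 'database',
  port: Number(process.env.PGPORT) || 4000,
  database: process.env.PGDATABASE || 'docker'
})

module.exports = pool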

Connecting to MongoDB with Mongoose in docker-compose containers fails with useUnifiedTopology

I have a NodeJS application that connects to a MongoDB server.
Both the Node application and the MongoDB server run in Docker containers (via docker-compose).
docker-compose.yml:
version: '3'
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    expose:
      - 6379
    restart: always
    container_name: redis-server
  mongo:
    image: "mongo"
    command: mongod --bind_ip_all --replSet rs8
    volumes:
      - c:\mongo\data:/data/db
    ports:
      - "27017:27017"
    expose:
      - 27017
    restart: always
    container_name: mongo-server
  accounts-service:
    depends_on:
      - redis
      - mongo
    build:
      context: .
      dockerfile: GenericNodeJSDockerfile
    container_name: accounts-service
    ports:
      - "8001:8001"
In the Node app, the MongoDB connection looks like this:
const m = require('mongoose')

const connectionConfig = {
  useUnifiedTopology: true,
  useNewUrlParser: true,
};

let connectionString = 'mongodb://mongo:27017/mydb?replicaSet=rs8';

m.connect(connectionString, connectionConfig).then(_ => {
  console.log("Connected to MongoDB")
}).catch(err => {
  console.log("Failed connecting to MongoDB: " + err);
});
And the error that is thrown is:
Failed connecting to MongoDB: MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
But when I comment out the line containing the useUnifiedTopology option, the connection succeeds.
Does anyone have an idea how to fix it?
It turned out that a wrong hosts file configuration was causing this issue. With useUnifiedTopology, the driver discovers the replica set members and then connects to the host names those members advertise, so a bad hosts entry that mis-resolves one of them can break the connection even though the seed host (mongo) is reachable.
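As a diagnostic sketch (assuming a driver new enough to support the directConnection option, i.e. MongoDB Node driver 3.6+): connecting directly to the seed host bypasses replica-set discovery, so if this succeeds while the replicaSet URL fails, resolution of the advertised member host names is the culprit:

const m = require('mongoose')

// Talk to the seed host only, skipping replica-set member discovery.
m.connect('mongodb://mongo:27017/mydb?directConnection=true', {
  useUnifiedTopology: true,
  useNewUrlParser: true,
}).then(() => console.log('Direct connection OK'))
  .catch(err => console.log('Direct connection failed: ' + err));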

Running redis with docker compose

version: '3'
services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    ports:
      - "5674:5432"
    volumes:
      - ./data/db_data:/var/lib/postgresql/data
  redis_cache:
    image: bitnami/redis
    ports:
      - "6379:6379"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  app:
    image: asobooks-api
    ports:
      - 3000:3000
    environment:
      PORT: 3000
      NODE_ENV: prod
      DB_CONNECTION: postgres
      DB_HOST: postgres
      DB_USERNAME: ${DB_USERNAME}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_DATABASE: ${DB_DATABASE}
      DB_PORT: 5432
      REDIS_PORT: 6379
    depends_on:
      - postgres
      - redis_cache
    command: ['node', 'dist/src/main.js']
I have been trying this for about 2 days now and unfortunately, the app container fails to connect to Redis.
Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379
Here is the connection setup:
BullModule.registerQueue({
  name: Queues.EMAIL,
  redis: {
    port: 6379,
    host: 'redis_cache',
  },
})
Please help me with this; it works without Docker but fails when I run it in containers. Thanks.
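No accepted answer is shown here, so the following is only a hedged diagnostic: the error mentions 127.0.0.1, which suggests some part of the app still falls back to the default localhost Redis rather than using the config above. A minimal probe (the file name probe.js is illustrative), run inside the app container with docker-compose exec app node probe.js, can confirm whether redis_cache:6379 is reachable at all:

const Redis = require('ioredis');

// 'redis_cache' is the compose service name; 6379 is the internal port.
const redis = new Redis({ host: 'redis_cache', port: 6379 });

redis.ping()
  .then(pong => { console.log('Redis reachable:', pong); process.exit(0); })
  .catch(err => { console.error('Redis unreachable:', err.message); process.exit(1); });

If the probe succeeds, the problem is in how Bull is configured, for example a second queue or client that does not receive the redis host.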

redis-server is not started

I have to dockerize a Node app which uses redis-server. I used docker-compose with the official Redis image.
But when I run docker-compose up and hit the API, it gives me an error like this:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379
docker-redis | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
I use the ioredis package and pass REDIS_PORT and REDIS_HOST as arguments to new Redis():
const Redis = require('ioredis');
const { promisify } = require('util');

const publisher = new Redis({
  port: process.env.REDIS_PORT,
  host: process.env.REDIS_HOST,
});
Here REDIS_HOST='redis' and REDIS_PORT=6379, and this is my docker-compose file:
version: '3'
services:
  app:
    container_name: docker-redis
    restart: always
    build: .
    ports:
      - '3000:3000'
    links:
      - redis
    # depends_on:
    #   - redis
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - '6379:6379'
    command: ['redis-server', '--bind', 'redis', '--port', '6379']
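No answer is included above, so these are only hedged suggestions. Two things stand out: the app service never receives REDIS_HOST/REDIS_PORT through the compose file, so inside the container the client may fall back to 127.0.0.1, and binding redis-server to the hostname redis is fragile; binding to 0.0.0.0 inside the container is the usual approach. A sketch of the relevant changes to the services section:

  app:
    build: .
    ports:
      - '3000:3000'
    environment:
      REDIS_HOST: redis   # make sure the client actually sees these
      REDIS_PORT: 6379
    depends_on:
      - redis
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    # bind to all interfaces inside the container instead of a hostname
    command: ['redis-server', '--bind', '0.0.0.0', '--port', '6379']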

nodejs doesn't connect to rethinkdb with docker-compose

How do I run RethinkDB with Node.js on a server?
This is my docker-compose.yml:
web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "8000:8000"
    - "8080:8080"
    - "28015:28015"
    - "29015:29015"
  links:
    - "db:redis"
    - "rethink:rethinkdb"
  command: nodemon -L app/bin/www
db:
  image: redis
  ports:
    - "6379"
rethink:
  image: rethinkdb
There is a folder app with the Node.js project and a Dockerfile.
Dockerfile
FROM node:0.10.38
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src/app
ADD package.json package.json
RUN npm install
ADD nodemon.json nodemon.json
In the Node.js project, the connection to RethinkDB looks like this:
module.exports.setup = function() {
  r.connect({ host: dbConfig.host /* 127.0.0.1 */, port: dbConfig.port /* 28015, 29015, or 32783 */ }, function (err, connection) {
    ...
  });
};
And when I run docker-compose up, I get the error:
Could not connect to 127.0.0.1:32783.
Try exposing the ports in your docker-compose.yml (note that depends_on requires the version 2 file format, so the file needs a version and a services key):
version: '2'
services:
  web:
    build: ./app
    volumes:
      - "./app:/src/app"
    ports:
      - "8000:8000"
    depends_on:
      - db
      - rethink
    command: nodemon -L app/bin/www
  db:
    image: redis
    ports:
      - "6379:6379"
  rethink:
    image: rethinkdb
    ports:
      - "8080:8080"
      - "28015:28015"
      - "29015:29015"
Then set dbConfig.host = 'rethink' (the compose service name; without the links alias from the original file, the name rethinkdb no longer resolves) and dbConfig.port = 28015:
module.exports.setup = function() {
  r.connect({ host: dbConfig.host /* 'rethink' */, port: dbConfig.port /* 28015 */ }, function (err, connection) {
    ...
  });
};
Hope you find this helpful.
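For completeness, a standalone sketch of that connection (values taken from the compose file above, assuming the rethinkdb npm client):

const r = require('rethinkdb');

// 'rethink' is the compose service name; 28015 is RethinkDB's client driver port.
r.connect({ host: 'rethink', port: 28015 }, function (err, connection) {
  if (err) {
    console.error('Could not connect to RethinkDB:', err.message);
    return;
  }
  console.log('Connected to RethinkDB');
});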
