How to connect a Node.js TCP-client in container A to a TCP-server in container B with Docker Compose? - node.js

I am trying to connect a TCP-client in container A to a TCP-server in container B. Running docker-compose up results in an ECONNREFUSED error on the client side. Why is that?
The TCP-client looks like this:
var net = require('net');
var client = new net.Socket();
client.connect(1337, function() {
  console.log('Connected');
  client.write('Hello, server! Love, Client.');
});
client.on('data', function(data) {
  console.log('Received: ' + data);
  // client.destroy(); // kill client after server's response
});
client.on('close', function() {
  console.log('Connection closed');
});
The TCP-client Dockerfile looks like this:
FROM node:latest
RUN mkdir /app
WORKDIR /app
ADD . /app
ADD package.json /app
RUN npm install
EXPOSE 1337
ENV PATH /app/node_modules/.bin:$PATH
CMD npm start
The TCP-server looks like this:
var net = require('net');
var server = net.createServer(function(socket) {
  socket.write('Echo server\r\n');
  socket.pipe(socket);
});
server.listen(1337);
The TCP-server Dockerfile looks like this:
FROM node:latest
RUN mkdir /app
WORKDIR /app
ADD . /app
ADD package.json /app
RUN npm install
EXPOSE 1337
ENV PATH /app/node_modules/.bin:$PATH
CMD npm start
The docker-compose.yml looks like this:
version: "3"
services:
tcpclient:
build: ./tcpclient
ports:
- "8000:8000"
depends_on:
- tcpserver
tcpserver:
build: ./tcpserver
ports:
- "8001:1337"
The connection error looks like this:
tcpclient_1 | > http-service@1.0.0 start /app
tcpclient_1 | > node tcpclient.js
tcpclient_1 |
tcpclient_1 | events.js:137
tcpclient_1 | throw er; // Unhandled 'error' event
tcpclient_1 | ^
tcpclient_1 |
tcpclient_1 | Error: connect ECONNREFUSED 127.0.0.1:1337
tcpclient_1 | at Object._errnoException (util.js:1003:13)
tcpclient_1 | at _exceptionWithHostPort (util.js:1024:20)
tcpclient_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1194:14)
tcpclient_1 | npm ERR! code ELIFECYCLE
Help would be greatly appreciated

There are multiple problems here.
localhost inside a container is not the same interface as your host: it refers to the container itself, not to the host or to other containers. Moreover, the server's port is bound to 8001 on the host, not 1337. But I would recommend another approach:
Using links you can reference other containers by their service name.
I would try to:
1) Add to the tcpclient container definition:
links:
  - tcpserver
2) Keep 1337:1337 in the tcp server (why is the client exposing a TCP port, by the way?)
3) Use client.connect(1337, 'tcpserver', function(...){...})
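For reference, a minimal sketch of the adjusted client, assuming the Compose service is named tcpserver as in the compose file above (on a Compose network the service name resolves via the built-in DNS, so links are optional on modern Compose):
var net = require('net');
var client = new net.Socket();
// Connect to the server container by its Compose service name instead of localhost.
client.connect(1337, 'tcpserver', function() {
  console.log('Connected');
  client.write('Hello, server! Love, Client.');
});
client.on('data', function(data) {
  console.log('Received: ' + data);
});
client.on('error', function(err) {
  // depends_on only waits for the server container to start, not for the app inside it,
  // so the first attempt can still be refused; a simple retry or restart policy helps.
  console.log('Connection error: ' + err.message);
});
client.on('close', function() {
  console.log('Connection closed');
});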

Related

Differences in code between local project and Dockerized project break the app

I'm trying to dockerize my current pet project, in which I use NodeJS (ExpressJS) as a backend, React as a frontend and PostgreSQL as a database. On both the backend and the frontend I use TypeScript instead of JavaScript. I'm also using Prisma as the ORM for my database. I decided on a standard three-container architecture: one for the backend, one for the database and one for the frontend app. My Dockerfiles are as follows:
Frontend's Dockerfile
FROM node:alpine
WORKDIR /usr/src/frontend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start"]
Backend's Dockerfile
FROM node:lts
WORKDIR /usr/src/backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
RUN npx prisma generate
CMD ["npm", "run", "dev"]
There's also a .dockerignore file in the backend folder:
node_modules/
and my docker-compose.yml looks like this:
version: '3.9'
services:
  db:
    image: 'postgres'
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'postgres'
      POSTGRES_DB: 'hucuplant'
  server:
    build:
      context: ./backend_express
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/hucuplant?schema=public'
  client:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
After doing a docker-compose up --build everything starts well, but when I try to register a new user on my site I get the following error:
Error:
hucuplant-server-1 | Invalid `prisma.user.findUnique()` invocation in
hucuplant-server-1 | /usr/src/backend/src/routes/Auth.ts:44:57
hucuplant-server-1 |
hucuplant-server-1 | 41 auth.post("/register", async (req: Request, res: Response) => {
hucuplant-server-1 | 42 const { email, username, password } = req.body;
hucuplant-server-1 | 43
hucuplant-server-1 | → 44 const usernameResult: User | null = await prisma.user.findUnique({
hucuplant-server-1 | where: {
hucuplant-server-1 | ? username?: String,
hucuplant-server-1 | ? id?: Int,
hucuplant-server-1 | ? email?: String
hucuplant-server-1 | }
hucuplant-server-1 | })
However, the existing code in my Auth.ts file on line 44 looks like this:
auth.post("/register", async (req: Request, res: Response) => {
const { email, username, password } = req.body;
const usernameResult: User | null = await prisma.user.findUnique({
where: {
username: username,
},
});
When I run my project locally everything works just fine, but when I run the containerized app those things break and differ quite a lot. What is causing that? How do I fix it?

Why can't I access my Docker node app via the browser, when it works within the container?

my docker-compose.yml
version: "3"
services:
client:
ports:
- "3000:3000"
restart: always
container_name: thread_client
build:
context: .
dockerfile: ./client/client.Dockerfile
volumes:
- ./client/src:/app/client/src
- /app/client/node_modules
depends_on:
- api
api:
build:
context: .
dockerfile: ./server/server.Dockerfile
container_name: thread_api
restart: always
ports:
- "3001:3001"
- "3002:3002"
volumes:
- ./server/src:/app/server/src
- /app/server/node_modules
pg_db:
image: postgres:14-alpine
container_name: thread_db
restart: always
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: thread
POSTGRES_USER: postgres
volumes:
- pg_volume:/var/lib/postgresql/data
adminer:
image: adminer
restart: always
depends_on:
- pg_db
ports:
- "9090:8080"
volumes:
pg_volume:
client.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start"]
server.Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY .editorconfig .
COPY .eslintrc.yml .
COPY .lintstagedrc.yml .
COPY .ls-lint.yml .
COPY .npmrc .
COPY .nvmrc .
COPY .prettierrc.yml .
COPY .stylelintrc.yml .
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY ./shared ./shared
RUN npm run install:shared
WORKDIR /app/client
COPY ./client/package.json .
COPY ./client/package-lock.json .
COPY ./client/.eslintrc.yml .
COPY ./client/.npmrc .
COPY ./client/.stylelintrc.yml .
COPY ./client/jsconfig.json .
COPY ./client/.env.example .env
RUN npm install
COPY ./client .
RUN npm run build
WORKDIR /app/server
COPY ./server/package.json .
COPY ./server/package-lock.json .
COPY ./server/.env.example .env
RUN npm install
COPY ./server .
EXPOSE 8654
CMD ["npm", "start"]
The client app is easily accessed in the browser, but the API service is not, and I don't understand why.
server.js
import fastify from 'fastify';
import cors from '@fastify/cors';
import fastifyStatic from '@fastify/static';
import http from 'http';
import Knex from 'knex';
import { Model } from 'objection';
import qs from 'qs';
import { Server as SocketServer } from 'socket.io';
import knexConfig from '../knexfile.js';
import { initApi } from './api/api.js';
import { ENV, ExitCode } from './common/enums/enums.js';
import { socketInjector as socketInjectorPlugin } from './plugins/plugins.js';
import { auth, comment, image, post, user } from './services/services.js';
import { handlers as socketHandlers } from './socket/handlers.js';
const app = fastify({
  querystringParser: str => qs.parse(str, { comma: true })
});
const socketServer = http.Server(app);
const io = new SocketServer(socketServer, {
  cors: {
    origin: '*',
    credentials: true
  }
});
const knex = Knex(knexConfig);
Model.knex(knex);
io.on('connection', socketHandlers);
app.register(cors, {
  origin: "*"
});
app.register(socketInjectorPlugin, { io });
app.register(initApi, {
  services: {
    auth,
    comment,
    image,
    post,
    user
  },
  prefix: ENV.APP.API_PATH
});
const staticPath = new URL('../../client/build', import.meta.url);
app.register(fastifyStatic, {
  root: staticPath.pathname,
  prefix: '/'
});
app.setNotFoundHandler((req, res) => {
  res.sendFile('index.html');
});
const startServer = async () => {
  try {
    await app.listen(ENV.APP.PORT);
    console.log(`Server is listening port: ${ENV.APP.PORT}`);
  } catch (err) {
    app.log.error(err);
    process.exit(ExitCode.ERROR);
  }
};
startServer();
socketServer.listen(ENV.APP.SOCKET_PORT);
So, I have tried curl localhost:3001 in the API container and it works, but I have no idea why the client works fine via the browser while the API doesn't.
How do I debug this to find the right solution?
UPD:
docker inspect (API service container)
"Ports": {
"3001/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3001"
},
{
"HostIp": "::",
"HostPort": "3001"
}
],
"3002/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3002"
},
{
"HostIp": "::",
"HostPort": "3002"
}
]
},
Looking at your comment stating:
i am trying to access to that app via browser by localhost:3001
And the ports part of your docker-compose.yaml.
ports:
  - "8654:3001"
  - "3002:3002"
You are trying to access the application on the wrong port.
With - "8654:3001" you are telling docker-compose to map port 3001 of the container to port 8654 on your host. (documentation)
Try opening http://localhost:8654 in your browser, or change 8654 in the docker-compose.yaml to 3001.
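As a quick sanity check from the Docker host, assuming the compose file still maps "8654:3001" as quoted above, the published host port should answer while the container-internal port is refused:
const http = require('http');
// The left-hand side of "8654:3001" is the host port; 3001 only exists inside the container.
for (const port of [8654, 3001]) {
  http.get({ host: 'localhost', port, path: '/' }, (res) => {
    console.log(`port ${port}: HTTP ${res.statusCode}`);
  }).on('error', (err) => {
    console.log(`port ${port}: ${err.code}`); // expect ECONNREFUSED on the unmapped port
  });
}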

Unable to get information using https a second time (Error: write EPROTO ... final_renegotiate:unsafe legacy renegotiation disabled)

I developed a server which works fine on my system. Then I got a VPS (Virtual Private Server) from my university to deploy the server there too!
To deploy my server on the VPS I used Docker, but I got a strange result when I ran it! I debugged the program and found where the problem is, but I don't know why it occurs or how to fix it.
I use a remote database to get safe primes. The first time I get the information without any problem, but when the server tries to connect to the database a second time, it gets the error below:
node:events:498
throw er; // Unhandled 'error' event
^
Error: write EPROTO 80B9B7E0587F0000:error:0A000152:SSL routines:final_renegotiate:unsafe legacy renegotiation disabled:../deps/openssl/openssl/ssl/statem/extensions.c:907:
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
Emitted 'error' event on ClientRequest instance at:
at TLSSocket.socketErrorListener (node:_http_client:442:9)
at TLSSocket.emit (node:events:520:28)
at emitErrorNT (node:internal/streams/destroy:164:8)
at emitErrorCloseNT (node:internal/streams/destroy:129:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
errno: -71,
code: 'EPROTO',
syscall: 'write'
}
Node.js v17.4.0
npm notice
npm notice New minor version of npm available! 8.3.1 -> 8.5.2
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.5.2>
npm notice Run `npm install -g npm@8.5.2` to update!
npm notice
The simplified server I used to test:
const express = require('express');
const debug = require('debug');
const https = require('https')
const log = debug('app::main-Interface');
const args = process.argv.slice(2);
const app = express();
const port = args[0] || process.env.port || 3000;
function sleep(toSleep){
  return new Promise((resolve, reject)=>{
    setTimeout(() => {
      resolve(true)
    }, toSleep);
  })
}
async function initializeRemotely(lengthOfOrder = 4096){
  return new Promise((resolve, reject)=>{
    https.get(`https://2ton.com.au/getprimes/random/${lengthOfOrder}`,
      (res)=>{
        res.on('data', async (data)=>{
          log('Data received!')
          resolve(true);
        })
      }
    )
  })
}
async function DEBUG(){
  let breakTime = 5000;
  while(true){
    await initializeRemotely()
    log('First operation succeed')
    await sleep(breakTime);
    breakTime *= 2;
  }
}
app.listen(port, async () => {
  log(`Server started listening on port : ${port}`);
  //schedulerPool.MSRulesWatcher(config.get('Times.schedulers'));
  DEBUG()
});
I run this code on my system using the command line below:
$ DEBUG=app::* node server.js
app::main-Interface Server started listening on port : 3000 +0ms
app::main-Interface Data received! +2s
app::main-Interface First operation succeed +4ms
app::main-Interface Data received! +6s
app::main-Interface First operation succeed +1ms
app::main-Interface Data received! +11s
app::main-Interface First operation succeed +2ms
^C
As you can see, it works fine!
The Dockerfile I use to deploy the server is below (./deploy/Dockerfile):
FROM node:alpine
EXPOSE 3000
WORKDIR /interface
COPY package.json .
RUN npm install
COPY . .
And the content of ./docker-compose.yml:
version: "3"
services:
interface:
image: interface
container_name: interface
build:
context: .
dockerfile: ./deploy/Dockerfile
entrypoint: ["npm", "run", "development"]
Then I run the Docker image on the VPS using the commands below:
$ sudo docker-compose build
$ sudo docker-compose up -d
And the server log is shown below:
$ sudo docker logs [container-name]
> export NODE_ENV=development; export DEBUG=app:*; node server.js
2022-02-25T17:06:37.963Z app::main-Interface Server started listening on port : 3000
2022-02-25T17:06:40.992Z app::main-Interface Data received!
2022-02-25T17:06:40.998Z app::main-Interface First operation succeed
node:events:498
throw er; // Unhandled 'error' event
^
Error: write EPROTO 80B991E6EB7F0000:error:0A000152:SSL routines:final_renegotiate:unsafe legacy renegotiation disabled:../deps/openssl/openssl/ssl/statem/extensions.c:907:
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
Emitted 'error' event on ClientRequest instance at:
at TLSSocket.socketErrorListener (node:_http_client:442:9)
at TLSSocket.emit (node:events:520:28)
at emitErrorNT (node:internal/streams/destroy:164:8)
at emitErrorCloseNT (node:internal/streams/destroy:129:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
errno: -71,
code: 'EPROTO',
syscall: 'write'
}
Node.js v17.4.0
npm notice
npm notice New minor version of npm available! 8.3.1 -> 8.5.2
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.5.2>
npm notice Run `npm install -g npm@8.5.2` to update!
npm notice
This indicates the server can connect to the database the first time, but the second time it gets this error.
MY QUESTIONS:
First of all, I'm really curious to find out why this problem occurs.
How can I fix it?
INFORMATION
VPS information:
$ uname -a
Linux vote 5.4.0-26-generic #30-Ubuntu SMP Mon Apr 20 16:58:30 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
My system information:
$ uname -a
Linux milad-pc 5.4.178-1-MANJARO #1 SMP PREEMPT Tue Feb 8 20:03:41 UTC 2022 x86_64 GNU/Linux

Connecting Redis with Docker with Bull with Throng with Node

I have a Heroku app that has a single process. I'm trying to change it so that it has several worker processes in a dedicated queue to handle incoming webhooks. To do so, I am using a Node.JS backend with the Bull and Throng packages, which use Redis. All of this is deployed on Docker.
I've found various tutorials that cover some of this combination, but not all of it, so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't a very detailed error message.
Most of the information I found is here
My worker process file is worker.ts:
import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';
// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;
// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;
const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});
configureWebhooks(webhooks);
async function startWorkers() {
  console.log('starting workers...');
  const queue = new Bull<RedisData>('work', bullOptions);
  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }
      return job.finished();
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}
throng({ workers: workers, start: startWorkers });
In my main server, I have the file Redis.ts:
import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';
export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};
export type RedisData = EmitterWebhookEvent & { signature: string };
let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;
export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}
export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}
(Note: I invoke connectToRedis() before the worker process begins)
My Dockerfile is
# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts
ENV PORT=80
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all
# Bundle app source
COPY . .
# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379
CMD npm-run-all --parallel start start-notification-server start-github-server
and my docker-compose.yml is
version: '3.7'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
Finally, the relevant NPM scripts for my project are
"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",
"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",
The docker container logs are:
> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...

docker-compose: nodejs container not communicating with postgres container

I did find a few people with a slightly different setup but with the same issue, so I hope this doesn't feel like a duplicate question.
My setup is pretty simple and straight-forward. I have a container for my node app and a container for my Postgres database. When I run docker-compose up, the log shows both containers are up and running. The problem is my node app is not connecting to the database.
I can connect to the database using Postbird and it works as it should.
If I create a Docker container only for the database and run the node app directly on my machine, everything works fine. So it's not an issue with the DB or the app but with the setup.
Here's some useful information:
Running a container just for the DB (connects and works perfectly):
> vigna-backend@1.0.0 dev /Users/lucasbittar/Dropbox/Code/vigna/backend
> nodemon src/server.js
[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node -r sucrase/register src/server.js`
Initializing database...
Connecting to DB -> vignadb | PORT: 5432
Executing (default): SELECT 1+1 AS result
Connection has been established successfully -> vignadb
Running a container for each using docker-compose:
Creating network "backend_default" with the default driver
Creating backend_db_1 ... done
Creating backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-07-24 13:23:32.875 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-07-24 13:23:32.881 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-07-24 13:23:32.955 UTC [27] LOG: database system was shut down at 2020-07-23 13:21:09 UTC
db_1 | 2020-07-24 13:23:32.999 UTC [1] LOG: database system is ready to accept connections
app_1 |
app_1 | > vigna-backend@1.0.0 dev /usr/app
app_1 | > npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js
app_1 |
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 | [nodemon] 2.0.2
app_1 | [nodemon] to restart at any time, enter `rs`
app_1 | [nodemon] watching dir(s): *.*
app_1 | [nodemon] watching extensions: js,mjs,json
app_1 | [nodemon] starting `node -r sucrase/register src/server.js`
app_1 | Initializing database...
app_1 | Connecting to DB -> vignadb | PORT: 5432
My database class:
class Database {
  constructor() {
    console.log('Initializing database...');
    this.init();
  }
  async init() {
    let retries = 5;
    while (retries) {
      console.log(`Connecting to DB -> ${databaseConfig.database} | PORT: ${databaseConfig.port}`);
      const sequelize = new Sequelize(databaseConfig);
      try {
        await sequelize.authenticate();
        console.log(`Connection has been established successfully -> ${databaseConfig.database}`);
        models
          .map(model => model.init(sequelize))
          .map(model => model.associate && model.associate(sequelize.models));
        break;
      } catch (err) {
        console.log(`Error: ${err.message}`);
        retries -= 1;
        console.log(`Retries left: ${retries}`);
        // Wait 5 seconds before trying again
        await new Promise(res => setTimeout(res, 5000));
      }
    }
  }
}
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3333
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
db:
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: vignadb
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
app:
build: .
depends_on:
- db
ports:
- "3333:3333"
volumes:
- .:/usr/app
command: npm run dev
package.json (scripts only):
"scripts": {
"dev-old": "nodemon src/server.js",
"dev": "npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js",
"build": "sucrase ./src -d ./dist --transforms imports",
"start": "node dist/server.js"
},
.env:
# Database
DB_HOST=db
DB_USER=postgres
DB_PASS=postgres
DB_NAME=vignadb
DB_PORT=5432
database config:
require('dotenv/config');
module.exports = {
  dialect: 'postgres',
  host: process.env.DB_HOST,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
  define: {
    timestamp: true,
    underscored: true,
    underscoredAll: true,
  },
};
I know I'm messing something up, I just don't know where.
Let me know if I can provide more information.
Thanks!
You should put your two containers on the same network (https://docs.docker.com/compose/networking/) and use the db service name as the host in your Node.js connection string.
Something like: postgres://db:5432/vignadb
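A minimal sketch of what that looks like with Sequelize, assuming the Compose service is named db as in the compose file above (Compose resolves the service name to the container's address on the shared default network):
const Sequelize = require('sequelize');
// The host is the Compose service name, not localhost; inside the app container,
// localhost refers to the app container itself.
const sequelize = new Sequelize('vignadb', 'postgres', 'postgres', {
  host: 'db',
  port: 5432,
  dialect: 'postgres',
});
sequelize.authenticate()
  .then(() => console.log('Connected to vignadb via the "db" service name'))
  .catch((err) => console.error('Connection failed:', err.message));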
