Sequelize CLI, Docker, ConnectionRefusedError [SequelizeConnectionRefusedError] [duplicate] - node.js

I have a CRUD app working on Node on my local machine, with Postgres as the database and knex.js as a query builder, etc.
I have created a Dockerfile and a docker-compose file, and the containers start, but the Node container can't reach the Postgres container. I suspect it has to do with the environment variables, but I am not sure. Here is my Dockerfile:
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm ci --only=production
# Bundle app source
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD [ "npm", "start" ]
This is the docker-compose file:
version: '2'
services:
  postgres:
    image: postgres:alpine
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: app
      POSTGRES_DB: db
  app:
    build: .
    depends_on:
      - "postgres"
    links:
      - "postgres"
    environment:
      DB_PASSWORD: 'password'
      DB_USER: 'app'
      DB_NAME: 'db'
      DB_HOST: 'postgres'
      PORT: 8080
    ports:
      - '8080:8080'
    command: npm start
Also, here is the knex.js file at the project root that handles the DB connections based on the environment:
// Update with your config settings.
module.exports = {
  development: {
    client: 'pg',
    connection: 'postgres://localhost/db'
  },
  test: {
    client: 'pg',
    connection: 'postgres://localhost/test-db'
  }
};
Additionally, when I check the hosts file inside the Node container, I don't see anything mentioning the link to the Postgres container. Any help would be appreciated, thanks.

Your Node application is not connecting because it is trying to connect to itself: you are referencing localhost. Your database is in a second container, which is not local, so you need to reference it by its service name, which would be postgres.
So assuming your application is handling authentication another way, your config would be something like this:
// Update with your config settings.
module.exports = {
  development: {
    client: 'pg',
    connection: 'postgres://postgres/db'
  },
  test: {
    client: 'pg',
    connection: 'postgres://postgres/test-db'
  }
};
If you can, you should use the environment variables you assigned to the app container.
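For example, a minimal sketch of a knexfile that reads the same variables defined in the compose file above (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME are the names used there; the fallback values are only illustrative):

// Sketch only: build the connection from the env vars set in docker-compose.
// The variable names match the compose file above; the defaults are just for local runs.
module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: process.env.DB_HOST || 'localhost',
      user: process.env.DB_USER || 'app',
      password: process.env.DB_PASSWORD || 'password',
      database: process.env.DB_NAME || 'db'
    }
  }
};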

docker-compose creates an internal network shared by the different containers it launches.
Since app and postgres are two separate containers, they are treated as two hosts. When you point app at localhost, it looks for Postgres inside its own container instead of in the postgres container.
You can solve this by replacing localhost with postgres in your knex.js file.

Related

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past it, trying everything I could find online, and I can't find anything that works for me.
I am very new to Docker and am using it for my NodeJS + Express + PostgreSQL + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
db:
image: postgres:14.1-alpine
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
cache:
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
api:
container_name: api
build:
context: .
# target: production
# image: api
depends_on:
- db
- cache
ports:
- 3000:3000
environment:
NODE_ENV: production
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: admin
DB_NAME: postgres
REDIS_HOST: cache
REDIS_PORT: 6379
links:
- db
- cache
volumes:
- ./:/src
volumes:
db:
driver: local
cache:
driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')
const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  log('Redis connected')
})
app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve runs nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, the Docker network:
The containers are exposed to your localhost system, so as a "server" you can access each of them directly through the browser or the command line. But you can only access them that way because their ports are published to the host's default network, which you can inspect, by the way.
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or host names -- the host name being the container_name.
So you need to do two things:
Add a container_name to the redis service in the compose file, just like you did for the API.
Create a network and bind all the services to it. One way of doing that is:
version: "3.8"
Network:
my-network:
name: my-network
services:
....
cache:
container_name: cache
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
networks: # add it in all containers that communicate together
- my-network
Then, and only then, can you use the Redis container name as the host, since the Docker network creates a host name for the service from the container name.
When you deploy the whole compose file, the containers will be created and joined to the network on startup, which allows your API app to communicate with the Redis container via the container name as the host name.
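A minimal sketch of what the client setup might then look like (assuming node-redis v4, where the host and port belong under the socket option rather than at the top level):

const redis = require('redis')

// Sketch only: 'cache' is the compose service/container name used above.
// In node-redis v4 the connection details go under `socket`.
const client = redis.createClient({
  socket: {
    host: 'cache',
    port: 6379
  }
})

client.on('error', (err) => console.error('Redis error', err))

client.connect()
  .then(() => console.log('Redis connected'))
  .catch((err) => console.error('Redis connection failed', err))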
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
An unrelated side note:
I personally used the redis package from npm for some test projects, but I found that ioredis worked much better with TypeScript projects and was more predictable in its behavior.
To avoid problems with Redis, make sure to set a password and use it to connect; sometimes Redis treated my client as read-only and failed to find a read replica, and adding the password solved it for me.
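For illustration, a rough sketch of that password setup (the secret value and URL form here are assumptions, not the asker's actual config; the password would be set with --requirepass on the redis-server command in the compose file):

// Sketch only: assumes the compose command was changed to something like
//   command: redis-server --requirepass my-secret --save 20 1 --loglevel warning
// The password can then be passed in the connection URL (node-redis v4).
const redis = require('redis')

const client = redis.createClient({
  url: 'redis://:my-secret@cache:6379'
})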

Running commands on docker container from the host

Code first, it will be easier to explain what I'm after.
docker-compose.yml
version: '3.4'
services:
  db:
    user: '${UID}:${GID}'
    image: postgres
    container_name: postgres
    ports:
      - '5432:5432'
    restart: always
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: secret
      POSTGRES_DATABASE: foo
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - ./db/data:/var/lib/postgresql/data/db
      - db-init.sh:/docker-entrypoint-initdb.d/:ro
  cache:
    image: redis:alpine
    container_name: redis
    sysctls:
      net.core.somaxconn: '511'
    ports:
      - '6379:6379'
    command: ['--requirepass "secret"']
  api:
    image: node:alpine
    container_name: api
    working_dir: /var/www/app
    command: sh -c "npm start"
    ports:
      - '5000:5000'
    volumes:
      - node_modules:/var/www/app/node_modules
      - .:/var/www/app
    env_file: .env
    depends_on:
      - db
      - cache
volumes:
  node_modules:
Postgres connection settings for the node.js app:
export const {
  POSTGRES_USER = 'root',
  POSTGRES_PASSWORD = 'secret',
  POSTGRES_HOST = 'db',
  POSTGRES_PORT = 5432,
  POSTGRES_DATABASE = 'foo',
} = process.env
Issue:
When using the service or container name (db or postgres) for the POSTGRES_HOST setting of the node app:
I can successfully connect to, and query, the database.
I'm not able to run commands from the host which affect the container. For example, seeding the db won't work:
npx knex --esm seed:run
This makes sense, as the DNS resolution for db / postgres is taken care of by Docker, and those names only have meaning on the network connecting the containers. Commands run from the host and targeting that container will fail, as the host doesn't know how to resolve those names.
On the other hand, when using localhost for the POSTGRES_HOST setting of the node app:
Queries to postgres from api will fail.
Commands run from the host, like npx knex --esm seed:run, will succeed.
Again, this makes perfect sense. Addressing the container as localhost from the host works thanks to the port forwarding in docker-compose.yml. But in the context of a container, localhost refers to that very container: for api, localhost means itself, so it is trying to find a database on localhost:5432, i.e. api:5432.
I want a working inter-container network and also the ability to run commands from the host, addressing those containers. I'm aware of two approaches to achieve that:
Use container / service name as POSTGRES_HOST, and run commands against the containers with:
docker exec -it <container_name> <command>
Assign static IPs to the containers and use those instead of service / container names.
Do I have any other options here?
Since you are exposing the database ports on the host machine, you can do the following.
Use the service or container name (db or postgres) for POSTGRES_HOST; this way it will work for the Docker containers.
When you run the seed command from the host, override POSTGRES_HOST. This can be done this way:
$ export POSTGRES_HOST=127.0.0.1
$ npx knex --esm seed:run
or in one step
$ POSTGRES_HOST=127.0.0.1 npx knex --esm seed:run
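Alternatively, you can keep POSTGRES_HOST as db and run the seed inside the container instead; for example (using the container_name api from the compose file above, assuming knex is available in that container's node_modules):
$ docker exec -it api npx knex --esm seed:run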

Docker, MongoDB won't connect from Node.js

I am facing a weird issue while connecting to MongoDB running in a separate container from my nodejs container; it displays the following error while trying to connect to MongoDB.
My Dockerfile
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["npm","start"]
docker-compose
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    links:
      - mongo
      - redis
  mongo:
    image: mongo
    ports:
      - "49155:49155"
  redis:
    image: "redis:alpine"
mongo config
mongoose: { // MongoDB
  // uri: mongodb://username:password@host:port/database?options
  uri: `mongodb://localhost:27017/${DB_NAME}`,
  options: {
  },
  seed: {
    path: '/api/models/seeds/',
    list: [
      {
        file: 'user.seed',
        schema: 'User',
        plant: 'once' // once - always - never
      },
      {
        file: 'example.seed',
        schema: 'Example',
        plant: 'once'
      }
    ]
  },
},
Issue: [screenshot of the MongoDB connection error]
I have only just started studying Docker, please help.
On creating a new container, Docker will attach that container to a default internal bridge network.
See https://docs.docker.com/network/ and check the network drivers.
To make it available on localhost, you will have to set network_mode: "host" in the mongo section of docker-compose.
indrajeet's answer will work, but since you are venturing into docker-compose, it is better if you treat each of your services/containers as a host. In your case, the error is because your web app is trying to connect to a mongo service running on the same (web) localhost.
Instead you should connect to the mongo service/container; docker-compose conveniently lets your web container access that service/container/host via the hostname "mongo" (the service name you specified in the docker-compose yaml).
My answer is to change the uri line to:
uri: `mongodb://mongo:27017/${DB_NAME}`,
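For illustration, a minimal connection sketch with that URI (assuming Mongoose is called directly; the DB_NAME value here is a hypothetical placeholder, not the asker's actual name):

const mongoose = require('mongoose')

// Sketch only: 'mongo' is the compose service name, so Docker's internal DNS
// resolves it to the MongoDB container.
const DB_NAME = 'mydb' // hypothetical placeholder for the real database name
mongoose.connect(`mongodb://mongo:27017/${DB_NAME}`)
  .then(() => console.log('MongoDB connected'))
  .catch((err) => console.error('MongoDB connection failed', err))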

Link nodejs app to Rethinkdb from another container

I made two containers, one for RethinkDB and one for a nodejs app.
I want to connect my nodejs app to this RethinkDB, but every time I try I get an error:
Error:{"message":"Failed to connect to localhost:58015\nFull error:\n{\"code\":\"ECONNREFUSED\"
But I can connect the same nodejs app, running without Docker, to the RethinkDB with the open port (58015).
My Docker compose config looks like this:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  depends_on:
    - rethink
To connect my app to the db, I set the host and port inside a JS config file:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'localhost',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
I tried with the RethinkDB port (28015) and with my open port (58015), without success.
I also tried to link these two containers with links and network_mode, without success.
None of the solutions I tried work.
I think my Rethink container may not be ready when the nodejs app tries to connect, but I am not sure that is really the problem.
The nodejs app is running with pm2.
How can I make this app connect to my db?
For your config, you should use:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  links:
    - rethink
  depends_on:
    - rethink
and in the JS code:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'rethink',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
As far as I know, one Docker container will not see the other unless specifically linked and using the same net:
docker run \
--name ${NEWAPP} \
--restart=always \
--env MYAPPPAR=${PROJ} \
-v /var/log/docker/node/logs:/usr/src/app/log \
--link myapp_rethink_1:myapp_rethink_1 \
--net myapp_default \
-p ${PORT}:9000 \
-d ${NEWAPP}
So you need both --net and --link:
--link: the format is sourcecontainername:containeraliasname
--net: so that containers can find each other via internal DNS / container name. You can check the networks with 'docker network ls'.
When using newer versions of docker-compose, your services will be configured to run on the same network.
The top-level service name in your docker-compose.yml will become the host name that needs to be specified when connecting to RethinkDB from your app:
# docker-compose.yml
version: '3.2'
services:
  web:
    build: .
    links:
      - db
    ...
  db:
    image: rethinkdb
    ...
Using the example above, you can connect to RethinkDB by using the host named 'db' within the app configuration file:
module.exports = {
  rethinkdb: {
    host: 'db',
    port: 28015
  }
};
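As a quick illustrative sketch, that config could then be used with the official rethinkdb driver like this (the require path to the config is a hypothetical placeholder, not the asker's actual file layout):

const r = require('rethinkdb')
const config = require('./config') // hypothetical path to the config shown above

// Connect using the compose service name 'db' as the host.
r.connect({ host: config.rethinkdb.host, port: config.rethinkdb.port })
  .then((conn) => console.log('RethinkDB connected'))
  .catch((err) => console.error('RethinkDB connection failed', err))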
