I am trying to run a Node.js application on Docker, but I get this error:
could not connect to postgres: Error: connect ECONNREFUSED
When I debug, I can see that my environment file is picked up by the application, and I can access the application's endpoints until it tries to connect to the database.
What can be the reason?
This is my docker-compose.yml file:
version: "3.8"
services:
postgres:
image:postgres:12.4
restart: always
environment:
POSTGRES_USER: workflow
POSTGRES_DB: workflow
POSTGRES_PASSWORD: pass
ports:
- "5429:5432"
expose:
- 5429
networks:
- db
workflow:
image:workflow:0.1.0-SNAPSHOT
environment:
NODE_ENV: prod
env_file:
- ./.env.prod
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
ports:
- "3000:3000"
networks:
- db
depends_on:
- postgres
networks:
db:
volumes:
db-data:
This is my Dockerfile:
FROM node:16.14.2
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm ci --only=production && npm cache clean --force
COPY . .
CMD [ "npm", "run", "start"]
This is .env.prod file:
PORT=3000
DATABASE_URL=postgresql://workflow:pass@postgres:5429/workflow
Here is the related script from package.json:
"scripts": {
"start": "npm run migrate up && node src/app.js"
}
This is the error output:
workflow_1 | Error: connect ECONNREFUSED 172.21.0.2:5429
workflow_1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16) {
workflow_1 | errno: -111,
workflow_1 | code: 'ECONNREFUSED',
workflow_1 | syscall: 'connect',
workflow_1 | address: '172.21.0.2',
workflow_1 | port: 5429
workflow_1 | }
This is from docker ps command:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8971741b19d postgres:12.4 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5429/tcp, 0.0.0.0:5429->5432/tcp workflow_postgres_1
Based on the official Dockerfile of the postgres:12.4 image, it exposes the default port 5432. So unless you changed that port in the Postgres configuration, your application needs to connect to port 5432, not 5429.

Also, look at the
ports:
  - "5429:5432"
section of your docker-compose file. Per the official documentation, the syntax is
ports (HOST:CONTAINER)
so you are mapping container port 5432 to port 5429 on the "outside"/host. All internal Docker traffic still goes to port 5432.
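Putting that together: assuming the compose file stays as it is, the app should connect to the service name on the container port. A corrected `.env.prod` would look like this (the `@` separates the credentials from the host):

```
DATABASE_URL=postgresql://workflow:pass@postgres:5432/workflow
```

The 5429 host mapping is then only needed to reach Postgres from outside Docker, e.g. with psql on the host.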
Related
I am creating an application in Node.js, with TypeScript, Postgres and TypeORM; to simplify development I set it all up to run with Docker. I'm using the "bridge" network and the connections between the containers are ok. The problem is when I want to run a migration using the TypeORM CLI: the CLI simply does not recognize my host, which is set in my ormconfig.json as "postgres". The funny thing is that when I change the host in ormconfig.json to "localhost" the CLI works perfectly, but I lose the connection to the database; and when I set the host to "postgres" the connection to the database works, but the CLI for running migrations does not.
This is the TypeORM CLI error when trying to run migrations:
yarn typeorm migration:run
yarn run v1.22.0
$ ts-node-dev ./node_modules/typeorm/cli.js migration:run
Using ts-node version 8.10.2, typescript version 3.9.7
Error during migration run:
Error: getaddrinfo ENOTFOUND postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:64:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'postgres'
}
error Command failed with exit code 1.
docker-compose.yml:
version: '3.7'
services:
  api_cadunico:
    container_name: api_cadunico
    build:
      context: .
      target: development
    command: yarn start:dev
    restart: unless-stopped
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    env_file:
      - .env
    ports:
      - 3333:3333
    networks:
      - connection_cadunico
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      TZ: America/Fortaleza
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    networks:
      - connection_cadunico
networks:
  connection_cadunico:
    driver: bridge
volumes:
  pgdata:
ormconfig.json:
[
  {
    "name": "default",
    "type": "postgres",
    "host": "postgres",
    "port": 5432,
    "username": "postgres",
    "password": "cadunico",
    "database": "cadunico_atendimento",
    "entities": [
      "./src/entities/*.ts"
    ],
    "migrations": [
      "./src/database/migrations/*.ts"
    ],
    "cli": {
      "migrationsDir": "./src/database/migrations"
    }
  }
]
What do I need to do to run the migrations without having to change the host to localhost and normally use the docker's network host?
Well, this is kind of late. To anyone looking for answers to this question, you have to run all your typeorm commands inside your application's container by accessing it with docker exec -it your-container-name-or-id sh.
If you're using typescript, you need ts-node as devDependency and run typeorm's CLI with ./node_modules/.bin/ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js.
Here are some useful resources:
How to change default IP on mysql using Dockerfile
https://typeorm.io/#/faq/how-to-handle-outdir-typescript-compiler-option
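If you run this often, one convenience (an assumption on my part, not something the question requires) is to wrap that command in a package.json script, so the CLI can be invoked the same way inside the container:

```json
{
  "scripts": {
    "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js"
  }
}
```

Then, from the host: docker exec -it api_cadunico yarn typeorm migration:run (api_cadunico is the container_name from the compose file above).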
No matter what I try I can't seem to get my node app to connect to redis between containers within the same docker-compose yml config. I've seen a lot of similar questions but none of the answers seem to work.
I'm using official images in both cases, not building my own
I am putting "redis" as my host and setting it as hostname in my docker compose YML config
const client = redis.createClient({ host: "redis" });
in my redis.conf I am using bind 0.0.0.0
This is what the console is printing out:
Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 6379
}
This is my docker-compose.yml
version: '3'
services:
  server:
    image: "node:10-alpine"
    working_dir: /usr/src/app
    user: "node"
    command: "npm start"
    volumes:
      - .:/usr/src/app
    ports:
      - '3000:3000'
      - '3001:3001'
      - '9229:9229' # Node debugging port
    environment:
      - IS_DOCKER=1
      - NODE_ENV=DEVELOPMENT
    depends_on:
      - db
  db:
    image: "redis:5.0-alpine"
    expose:
      - '6379'
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command:
      - redis-server
      - /usr/local/etc/redis/redis.conf
    hostname: redis
volumes:
  redis-data:
UPDATE
Here's my redis.conf, it's not much.
bind 0.0.0.0
appendonly yes
appendfilename "my_app.aof"
appendfsync always
UPDATE 2
Things I've noticed and tried:
In my original setup, when I run docker inspect I can see both containers are joined to the same network. When I exec .../bin/bash into the redis container I can successfully ping the server container, but from inside the server container I cannot ping the redis one.
Adding network_mode: bridge to both containers does not work.
I did get one baby step closer by trying out this:
server:
  network_mode: host
redis:
  network_mode: service:host
I'm on a Mac, and in order to get host mode to work you need to do that. It does work in the sense that my server successfully connects to the redis container. However, hitting localhost:3000 does not work, even though I'm forwarding the ports:
version: '3'
services:
  server:
    image: "node:10-alpine"
    #network_mode: bridge
    #links is necessary if you use network_mode: bridge
    #links: [redis]
    networks:
      - default
    working_dir: /usr/src/app
    user: "node"
    command: "npm start"
    volumes:
      - .:/usr/src/app
    ports:
      - '3000:3000'
      - '3001:3001'
      - '9229:9229' # Node debugging port
    environment:
      - IS_DOCKER=1
      - NODE_ENV=DEVELOPMENT
    depends_on:
      - redis
  redis:
    image: "redis:5.0-alpine"
    #container_name: redis
    #network_mode: bridge
    networks:
      - default
    expose:
      - '6379'
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command:
      - redis-server
      - /usr/local/etc/redis/redis.conf
volumes:
  redis-data:
networks:
  default:
Rename the service to the hostname you want to use: redis in your case, instead of db. To make it accessible over the Docker network, either put both services on the same network as above, or use network_mode: bridge together with links: [redis].
Try this to test your network:
docker ps to get the current container id or running name from the server container
docker exec -it id/name /bin/sh
Now you have a shell inside server and should be able to resolve redis via:
ping redis or nc -zv redis 6379
For those who are still getting this error: I found that in newer versions of the redis client (v4 and up) you need to configure the client like this:
const client = createClient({
  socket: {
    host: host,
    port: process.env.REDIS_PORT,
  },
});
That solved my problem. In the docker-compose file you then don't need to specify ports:
version: "3.4"
services:
redis-server:
image: "redis:latest"
restart: always
api:
depends_on:
- redis-server
restart: always
build: .
ports:
- "5000:5000"
I have a docker compose setup with Mongo, Redis and Node.js. Mongo and Redis run fine; the problem is that Node.js can't connect to Redis.

When I test on my Ubuntu laptop with Docker, it works fine, but not when I run Docker on the server (CentOS 7). I'm sure Redis works, because when I do SSH port forwarding I can access it from my Ubuntu machine.
version: '2.1'
services:
  aqua-server:
    image: aqua-server
    build: .
    command: pm2-runtime process.yml
    container_name: "aqua-server"
    working_dir: /app
    environment:
      NODE_ENV: production
      AQUA_MONGO_IP: 172.20.0.1
      AQUA_MONGO_PORT: 3002
      AQUA_MONGO_DATABASE: aqua-server
      AQUA_EXPRESS_PORT: 3000
      AQUA_REDIS_PORT: 3003
      AQUA_REDIS_HOST: 172.20.0.1
    volumes:
      - .:/app/
    ports:
      - 3001:3000
    links:
      - mongodb
      - redis
    depends_on:
      - mongodb
      - redis
    networks:
      frontend:
        ipv4_address: 172.20.0.2
  mongodb:
    image: mongo:4.2.0-bionic
    container_name: "aqua-mongo"
    environment:
      MONGO_LOG_DIR: /dev/null
    volumes:
      - /home/sagara/DockerVolume/MongoLocal:/data/db
    ports:
      - 3002:27017
    networks:
      frontend:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:5.0.5-alpine
    container_name: "aqua-redis"
    ports:
      - 3003:6379
    networks:
      frontend:
        ipv4_address: 172.20.0.4
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
This is my Dockerfile for Node.js:
FROM node:10.16.0-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD pm2-runtime process.yml
This is process.yml:
apps:
  - script: index.js
    instances: 1
    exec_mode: cluster
    name: 'aqua-server'
These are my Docker logs:
aqua-server | Error: Redis connection to 172.20.0.1:3003 failed - connect EHOSTUNREACH 172.20.0.1:3003
aqua-server | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
aqua-server | 2019-09-06T07:48:13: PM2 log: App name:aqua-server id:0 disconnected
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] exited with code [0] via signal [SIGINT]
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] starting in -cluster mode-
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] online
Redis Docker logs:
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * DB loaded from disk: 0.000 seconds
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * Ready to accept connections
Mongo Docker logs:
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on /tmp/mongodb-27017.sock
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on 0.0.0.0
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] waiting for connections on port 27017
That problem occurs because you're connecting the way you would from your Ubuntu host, outside the Docker network, while your Node container is inside that network; the way they communicate needs to change a bit to make it work. Here is the approach I'd recommend:
version: '2.1'
services:
  aqua-server:
    image: aqua-server
    build: .
    command: pm2-runtime process.yml
    container_name: "aqua-server"
    working_dir: /app
    environment:
      NODE_ENV: production
      AQUA_MONGO_IP: mongodb
      AQUA_MONGO_PORT: 27017
      AQUA_MONGO_DATABASE: aqua-server
      AQUA_EXPRESS_PORT: 3000
      AQUA_REDIS_PORT: 6379
      AQUA_REDIS_HOST: redis
    volumes:
      - .:/app/
    ports:
      - 3001:3000
    links:
      - mongodb
      - redis
    depends_on:
      - mongodb
      - redis
    networks:
      frontend:
        ipv4_address: 172.20.0.2
  mongodb:
    image: mongo:4.2.0-bionic
    container_name: "aqua-mongo"
    environment:
      MONGO_LOG_DIR: /dev/null
    volumes:
      - /home/sagara/DockerVolume/MongoLocal:/data/db
    ports:
      - 3002:27017
    networks:
      frontend:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:5.0.5-alpine
    container_name: "aqua-redis"
    ports:
      - 3003:6379
    networks:
      frontend:
        ipv4_address: 172.20.0.4
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
This works because Docker handles DNS for you, and the domain names are the service names. That's why, in the docker-compose.yml above, I changed AQUA_MONGO_IP to mongodb and AQUA_REDIS_HOST to redis.

I also changed the ports. Containers created by that docker-compose.yml stay inside one network and communicate over the ports exposed in their Dockerfiles. So other services can reach aqua-server via port 3000, and aqua-server can reach the mongodb service via port 27017 (the port exposed inside mongodb's Dockerfile with EXPOSE 27017) and the redis service via port 6379.

Hope that helps!
I'm running a React app on Node and Redis via docker compose:
version: "3"
services:
webapp:
build: ./
ports:
- "127.0.0.1:3000:9090"
depends_on:
- redis
command:
npm run start
nginx:
build: ./nginx
ports:
- "80:80"
environment:
- NGINX_HOST=127.0.0.1
- NGINX_PORT=80
command:
service nginx start
redis:
image: redis
ports:
- "6379:6379"
volumes:
- ./data:/data
Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 9090
RUN npm run build_prod
Server.js:
const redisClient = RedisClient.createClient(6379,'redis');
I get a Redis connection refused error when I run docker-compose up --build:
redis_1 | 1:M 15 Nov 13:55:19.865 * Ready to accept connections
webapp_1 |
webapp_1 | > web.globalmap.fatmap.com#0.0.1 start /usr/src/app
webapp_1 | > node ./build/server.js
webapp_1 |
webapp_1 | events.js:193
webapp_1 | throw er; // Unhandled 'error' event
webapp_1 | ^
webapp_1 |
webapp_1 | Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
webapp_1 | at Object._errnoException (util.js:1031:13)
webapp_1 | at _exceptionWithHostPort (util.js:1052:20)
webapp_1 | at TCPConnectWrap.afterConnect [as oncomplete]
I would like to know how to get docker to link both containers correctly.
Looks like redis does not resolve to the correct IP address.
Try a redis URI when creating the client.
// [redis:]//[[user][:password@]][host][:port][/db-number][?db=db-number[&password=bar[&option=value]]]
const redisClient = RedisClient.createClient('redis://redis:6379');
Provide the username, password, and db number if appropriate.
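If the host and port come from separate environment variables, the URI can be assembled from its parts. A small sketch: buildRedisUrl is a hypothetical helper, not part of the redis library:

```javascript
// Assembles a redis:// URI from its parts, following the scheme
// [redis:]//[[user][:password@]][host][:port][/db-number].
// Only `host` is required; everything else is optional.
function buildRedisUrl({ host, port = 6379, user, password, db }) {
  const auth = user || password ? `${user || ""}:${password || ""}@` : "";
  const dbPart = db !== undefined ? `/${db}` : "";
  return `redis://${auth}${host}:${port}${dbPart}`;
}

// buildRedisUrl({ host: "redis" })        -> "redis://redis:6379"
// buildRedisUrl({ host: "redis", db: 2 }) -> "redis://redis:6379/2"
```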
This solved my issue with Node not being able to connect to Redis using docker-compose:
const redis_client = redis.createClient({host: 'redis'});
Then inside my docker-compose.yml file I have the following:
version: '3'
services:
  redis:
    image: redis
  socket:
    container_name: socket
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    links:
      - redis
source: https://github.com/docker-library/redis/issues/45
I haven't had any issues using process.env to access Docker's environment variables.
I have declared the variable in the docker-compose.yml file.
version: '3.7'
services:
  api:
    build: .
    container_name: api-framework
    depends_on:
      - redis
    ports:
      - 8080:6002
    networks:
      - fronted
      - backend
    environment:
      redis_server_addr: redis
  redis:
    container_name: my-redis
    image: "redis:5"
    ports:
      - 6379
    networks:
      - backend
Then, the configuration file has the following line.
redis_host: process.env.redis_server_addr
It works for me.
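For running the same code both inside and outside Docker, the lookup can be given a fallback. A sketch (the localhost default is my assumption, not part of the original setup):

```javascript
// Resolves the Redis host from the env var set in docker-compose
// (redis_server_addr), falling back to localhost for non-Docker runs.
function redisHost(env = process.env) {
  return env.redis_server_addr || "localhost";
}

// redis.createClient({ host: redisHost() });
```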
I am having an issue with docker-compose containers. There is certainly an answer somewhere but it seems I don't have the vocabulary necessary to find it, sorry.
Here is my docker-compose file :
version: '2'
services:
  gateway-linkcs:
    image: node:7
    ports:
      - 3000:3000
    volumes:
      - ../Gateway:/usr/src/app
    working_dir: /usr/src/app
    command: yarn startdev
    networks:
      - back-linkcs
  postgres-user-linkcs:
    build:
      context: ../User
      dockerfile: docker/Dockerfile
    networks:
      - back-linkcs
  dev-user-linkcs:
    image: node:7
    ports:
      - 3001:3001
    volumes_from:
      - postgres-user-linkcs
    volumes:
      - ../User:/usr/src/app
    working_dir: /usr/src/app
    command: /usr/src/app/docker/scripts/start-node.sh
    environment:
      - NODE_ENV=development # change to 'production' for prod
      - GATEWAY_ADDRESS=gateway-linkcs:3000
    networks:
      - back-linkcs
networks:
  back-linkcs:
My node application in dev-user is supposed to use the gateway as a proxy to connect to an API on the internet. dev-user actually finds the gateway and is able to connect to it (using the hostname 'gateway-linkcs'). However, my gateway app cannot connect to the external API:
Error: getaddrinfo ENOTFOUND api.xxx api.xxx:80
at errnoException (dns.js:28:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:73:26)
I have tested starting the gateway outside of Docker (node index.js), and it works perfectly well.
I think I am missing something with docker-compose networks; can someone help?
Thank you very much, Giltho.