I'm running a React app on Node and Redis via docker compose:
version: "3"
services:
webapp:
build: ./
ports:
- "127.0.0.1:3000:9090"
depends_on:
- redis
command:
npm run start
nginx:
build: ./nginx
ports:
- "80:80"
environment:
- NGINX_HOST=127.0.0.1
- NGINX_PORT=80
command:
service nginx start
redis:
image: redis
ports:
- "6379:6379"
volumes:
- ./data:/data
Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 9090
RUN npm run build_prod
Server.js:
const redisClient = RedisClient.createClient(6379,'redis');
I get a Redis connection refused error when I run docker-compose up --build:
redis_1 | 1:M 15 Nov 13:55:19.865 * Ready to accept connections
webapp_1 |
webapp_1 | > web.globalmap.fatmap.com@0.0.1 start /usr/src/app
webapp_1 | > node ./build/server.js
webapp_1 |
webapp_1 | events.js:193
webapp_1 | throw er; // Unhandled 'error' event
webapp_1 | ^
webapp_1 |
webapp_1 | Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
webapp_1 | at Object._errnoException (util.js:1031:13)
webapp_1 | at _exceptionWithHostPort (util.js:1052:20)
webapp_1 | at TCPConnectWrap.afterConnect [as oncomplete]
I would like to know how to get docker to link both containers correctly.
It looks like the redis host name is not resolving to the correct IP address.
Try a Redis URI when creating the client.
// [redis:]//[[user][:password@]][host][:port][/db-number][?db=db-number[&password=bar[&option=value]]]
const redisClient = RedisClient.createClient('redis://redis:6379');
Provide the username, password, and db number if appropriate.
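For example, something along these lines should work (a minimal sketch assuming the classic node_redis package; the event handlers are just for illustration):

const redis = require('redis');

// "redis" is the compose service name; Docker's internal DNS resolves it
// to the redis container's address on the shared network.
const redisClient = redis.createClient('redis://redis:6379');

redisClient.on('error', (err) => console.error('Redis error:', err));
redisClient.on('connect', () => console.log('Connected to Redis'));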
This solved my issue with Node not being able to connect to Redis when using docker-compose.yml:
const redis_client = redis.createClient({host: 'redis'});
Then inside of my docker-compose.yml file I have the following:
version: '3'
services:
  redis:
    image: redis
  socket:
    container_name: socket
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    links:
      - redis
source: https://github.com/docker-library/redis/issues/45
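Note that if you are on node-redis v4 or later the options shape changed; a rough equivalent (a sketch assuming v4's promise-based API) would be:

const { createClient } = require('redis');

// node-redis v4+: host and port live under "socket", and connect() must be called.
const redisClient = createClient({ socket: { host: 'redis', port: 6379 } });

redisClient.on('error', (err) => console.error('Redis error:', err));
redisClient.connect().then(() => console.log('Connected to Redis'));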
I haven't had any issues using process.env to access the environment variables set by Docker.
I declared the variable in the docker-compose.yml file.
version: '3.7'
services:
  api:
    build: .
    container_name: api-framework
    depends_on:
      - redis
    ports:
      - 8080:6002
    networks:
      - fronted
      - backend
    environment:
      redis_server_addr: redis
  redis:
    container_name: my-redis
    image: "redis:5"
    ports:
      - 6379
    networks:
      - backend
Then, the configuration file has the following line.
redis_host: process.env.redis_server_addr
It works for me.
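For completeness, the client side could look roughly like this (a sketch assuming the classic node_redis package; redis_server_addr is the variable declared in the compose file above):

const redis = require('redis');

// redis_server_addr resolves to "redis", the compose service name.
const client = redis.createClient({
  host: process.env.redis_server_addr,
  port: 6379,
});

client.on('error', (err) => console.error('Redis error:', err));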
Related
I am trying to run a Node.js application on Docker. However, I get this error:
could not connect to postgres: Error: connect ECONNREFUSED
When I debug the problem I can see that my environment file is picked up by the application, and I can access the application endpoints until it tries to connect to the database.
What could be the reason?
This is my docker-compose.yml file:
version: "3.8"
services:
postgres:
image:postgres:12.4
restart: always
environment:
POSTGRES_USER: workflow
POSTGRES_DB: workflow
POSTGRES_PASSWORD: pass
ports:
- "5429:5432"
expose:
- 5429
networks:
- db
workflow:
image:workflow:0.1.0-SNAPSHOT
environment:
NODE_ENV: prod
env_file:
- ./.env.prod
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
ports:
- "3000:3000"
networks:
- db
depends_on:
- postgres
networks:
db:
volumes:
db-data:
This is my Dockerfile:
FROM node:16.14.2
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm ci --only=production && npm cache clean --force
COPY . .
CMD [ "npm", "run", "start"]
This is .env.prod file:
PORT=3000
DATABASE_URL=postgresql://workflow:pass@postgres:5429/workflow
Here is the related script from package.json:
"scripts": {
"start": "npm run migrate up && node src/app.js"
}
This is error output:
workflow_1 | Error: connect ECONNREFUSED 172.21.0.2:5429
workflow_1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16) {
workflow_1 | errno: -111,
workflow_1 | code: 'ECONNREFUSED',
workflow_1 | syscall: 'connect',
workflow_1 | address: '172.21.0.2',
workflow_1 | port: 5429
workflow_1 | }
This is from docker ps command:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8971741b19d postgres:12.4 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 5429/tcp, 0.0.0.0:5429->5432/tcp workflow_postgres_1
Based on the official Dockerfile of the postgres:12.4 image, the database exposes the default port 5432. So unless you changed that port in the pg configuration, your application needs to connect to port 5432, not 5429.
Also note the
  ports:
    - "5429:5432"
section of your docker-compose file. According to the official documentation the syntax is
ports (HOST:CONTAINER)
so you are technically mapping port 5429 on the "outside"/host side. All internal Docker traffic still goes to port 5432.
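For illustration, here is a minimal sketch of the connection from inside the compose network (assuming the node-postgres "pg" package); the equivalent fix in .env.prod is pointing DATABASE_URL at postgres:5432, i.e. postgresql://workflow:pass@postgres:5432/workflow:

const { Client } = require('pg');

const client = new Client({
  host: 'postgres',   // compose service name, not localhost
  port: 5432,         // container port, not the published host port 5429
  user: 'workflow',
  password: 'pass',
  database: 'workflow',
});

client.connect()
  .then(() => console.log('connected to postgres'))
  .catch((err) => console.error('postgres connection failed', err));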
I have been trying to dockerize an API, but Redis crashes. Node.js and MongoDB work.
docker-compose.yaml file:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo
    networks:
      - webnet
    ports:
      - '27017:27017'
  redis:
    image: redis
    container_name: redis
    command: ["redis-server","--bind","redis","--port","6379"]
    ports:
      - '6379:6379'
    hostname: redis
  app:
    container_name: password-manager-docker
    restart: always
    build: .
    networks:
      - webnet
    ports:
      - '80:5000'
    links:
      - mongo
      - redis
    environment:
      MONGODB_URI: ${MONGODB_URI}
      clientID: ${clientID}
      clientSecret: ${clientSecret}
      PORT: ${PORT}
      NODE_ENV: ${NODE_ENV}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      JWT_EXPIRE: ${JWT_EXPIRE}
      REFRESH_TOKEN: ${REFRESH_TOKEN}
      JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
      JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
      JWT_COOKIE: ${JWT_COOKIE}
networks:
  webnet:
Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
The error is Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379.
How can I fix this?
By default, all containers in a Compose file join a default network where they can communicate with each other, but if you specify a network for some services then you have to specify it for all of them. So you can either remove the network declaration from the app and mongo services, or add it to the redis service as well.
-- Dockerfile
FROM node:4.4.2
EXPOSE 3000
ENV NODE_ENV test
WORKDIR /app
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --silent
COPY . /app
CMD node server.js
-- DOCKER-COMPOSE
version: "2"
services:
postgresql:
image: postgres:9.5.21
container_name: postgresql
volumes:
- '$HOME/DOCKER_VOLUME/db-data1:$HOME/DOCKER_VOLUME/var/lib/postgres'
environment:
POSTGRES_PASSWORD: <PWD>
POSTGRES_DB: <DB>
POSTGRES_USER: <UERS>
ports:
- '5432:5432'
redis:
image: redis
container_name: redis
ports:
- '6379:6379'
broker:
image: eclipse-mosquitto
container_name: mosquitto
volumes:
- $HOME/DOCKER_VOLUME/etc/mosquitto:$HOME/DOCKER_VOLUME/etc/mosquitto:ro
- $HOME/DOCKER_VOLUME/var/log/mosquitto:$HOME/DOCKER_VOLUME/var/log/mosquitto:rw
ports:
- '1883:1883'
app:
container_name: ms_user_manager
environment:
NODE_ENV: test
build: .
volumes:
- .:/app
- /app/node_modules
depends_on:
- postgresql
- redis
- broker
ports:
- '3000:3000'
I am able to access Postgres and Redis from outside the containers, but the Node.js app is unable to connect to either and gives these errors:
Redis - connection to 0.0.0.0:6379 failed - connect ECONNREFUSED 0.0.0.0:6379"
Postgres - Error: connect ECONNREFUSED 0.0.0.0:5432
Any idea what I am doing wrong or what I missed?
Your Node.js app should try to connect to redis:6379. Change this in your application configuration. At the moment it looks like it is trying to connect to localhost, which probably works when you run your server locally on the host.
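For illustration only, a sketch of that change (assuming the classic node_redis and pg packages; the host names are simply the compose service names from the file above, and the pg credentials are omitted):

const redis = require('redis');
const { Pool } = require('pg');

// Inside the compose network, containers reach each other by service name,
// so "redis" and "postgresql" replace localhost/0.0.0.0 in the app config.
const redisClient = redis.createClient({ host: 'redis', port: 6379 });
const pgPool = new Pool({ host: 'postgresql', port: 5432 });

redisClient.on('error', (err) => console.error('Redis error:', err));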
I have a Docker Compose setup with Mongo, Redis and Node.js.
Mongo and Redis run fine; the problem is that Node.js can't connect to Redis.
When I test with Docker on my Ubuntu laptop it works fine, but not when I run Docker on the server (CentOS 7).
I'm sure Redis works, because when I do SSH port forwarding it works and I can access it from my Ubuntu machine.
version: '2.1'
services:
  aqua-server:
    image: aqua-server
    build: .
    command: pm2-runtime process.yml
    container_name: "aqua-server"
    working_dir: /app
    environment:
      NODE_ENV: production
      AQUA_MONGO_IP: 172.20.0.1
      AQUA_MONGO_PORT: 3002
      AQUA_MONGO_DATABASE: aqua-server
      AQUA_EXPRESS_PORT: 3000
      AQUA_REDIS_PORT: 3003
      AQUA_REDIS_HOST: 172.20.0.1
    volumes:
      - .:/app/
    ports:
      - 3001:3000
    links:
      - mongodb
      - redis
    depends_on:
      - mongodb
      - redis
    networks:
      frontend:
        ipv4_address: 172.20.0.2
  mongodb:
    image: mongo:4.2.0-bionic
    container_name: "aqua-mongo"
    environment:
      MONGO_LOG_DIR: /dev/null
    volumes:
      - /home/sagara/DockerVolume/MongoLocal:/data/db
    ports:
      - 3002:27017
    networks:
      frontend:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:5.0.5-alpine
    container_name: "aqua-redis"
    ports:
      - 3003:6379
    networks:
      frontend:
        ipv4_address: 172.20.0.4
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
This is my Dockerfile for the Node.js app:
FROM node:10.16.0-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD pm2-runtime process.yml
This is process.yml:
apps:
  - script: index.js
    instances: 1
    exec_mode: cluster
    name: 'aqua-server'
These are my Docker logs:
aqua-server | Error: Redis connection to 172.20.0.1:3003 failed - connect EHOSTUNREACH 172.20.0.1:3003
aqua-server | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
aqua-server | 2019-09-06T07:48:13: PM2 log: App name:aqua-server id:0 disconnected
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] exited with code [0] via signal [SIGINT]
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] starting in -cluster mode-
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] online
Redis Docker logs:
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * DB loaded from disk: 0.000 seconds
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * Ready to accept connections
Mongo Docker logs:
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on /tmp/mongodb-27017.sock
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on 0.0.0.0
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] waiting for connections on port 27017
That problem occurs because you're trying to access the services the way you would from your Ubuntu machine, outside of the Docker network. Your Node container is inside that network, so the way the containers communicate needs to be modified a bit to make it work. Here is what I would recommend as best practice:
version: '2.1'
services:
  aqua-server:
    image: aqua-server
    build: .
    command: pm2-runtime process.yml
    container_name: "aqua-server"
    working_dir: /app
    environment:
      NODE_ENV: production
      AQUA_MONGO_IP: mongodb
      AQUA_MONGO_PORT: 27017
      AQUA_MONGO_DATABASE: aqua-server
      AQUA_EXPRESS_PORT: 3000
      AQUA_REDIS_PORT: 6379
      AQUA_REDIS_HOST: redis
    volumes:
      - .:/app/
    ports:
      - 3001:3000
    links:
      - mongodb
      - redis
    depends_on:
      - mongodb
      - redis
    networks:
      frontend:
        ipv4_address: 172.20.0.2
  mongodb:
    image: mongo:4.2.0-bionic
    container_name: "aqua-mongo"
    environment:
      MONGO_LOG_DIR: /dev/null
    volumes:
      - /home/sagara/DockerVolume/MongoLocal:/data/db
    ports:
      - 3002:27017
    networks:
      frontend:
        ipv4_address: 172.20.0.3
  redis:
    image: redis:5.0.5-alpine
    container_name: "aqua-redis"
    ports:
      - 3003:6379
    networks:
      frontend:
        ipv4_address: 172.20.0.4
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
This works because Docker handles the DNS for you: the domain names are the service names, which is why in the docker-compose.yml above I changed AQUA_MONGO_IP to mongodb and AQUA_REDIS_HOST to redis. I changed the ports too. The containers created by this docker-compose.yml stay inside one network and communicate over the ports exposed in their Dockerfiles. So other services can reach aqua-server via PORT=3000, and aqua-server can reach the mongodb service via PORT=27017 (the port exposed inside mongodb's Dockerfile, EXPOSE 27017).
Hope that helps!
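As a quick illustration (a sketch assuming the classic node_redis client; the variable names come from the compose file above), the application then connects using the service name and the container port, not the published host port:

const redis = require('redis');

// AQUA_REDIS_HOST=redis and AQUA_REDIS_PORT=6379 from the compose file above;
// 6379 is the port the redis container listens on, not the published 3003.
const client = redis.createClient({
  host: process.env.AQUA_REDIS_HOST,
  port: Number(process.env.AQUA_REDIS_PORT),
});

client.on('error', (err) => console.error('Redis error:', err));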
I am working with 3 Services:
api-knotain [main api service]
api-mongo [mongo db for api service]
api-redis [redis for api service]
The Dockerfile for api-knotain looks as follows:
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "npm", "start" ]
My docker-compose file is as follows:
version: '3.3'
services:
  api-knotain:
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
Running
docker-compose build
docker-compose up
output:
api-knotain | connecting mongo ...: mongodb://api-mongo:27017/notify
api-knotain | Redis error: Error: Redis connection to api-redis:32770 failed - connect ECONNREFUSED 172.21.0.2:32770
api-knotain | mongo error:MongoNetworkError: failed to connect to server [api-mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017]
api-knotain | Example app listening on port 7777!
Neither Mongo nor Redis can be connected to.
I tried the following things:
use localhost instead of container name
use different ports
use expose vs port
always with the same result
Note: I can connect without issue to both Mongo and Redis through a local CLI at 'localhost:port'.
What am I missing?
Probably the redis and mongo containers start later than your application, and therefore your app will not see them yet. To counter this you must wait for those services to be ready (see the retry sketch after the compose file below).
Also, links is a legacy feature of Docker. You should use depends_on to control startup order, and user-defined networks if you want to isolate your database and redis containers from the external network.
version: '3.3'
services:
  api-knotain:
    depends_on:
      - api-mongo
      - api-redis
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
It looks like depends_on did not work properly in version 3.3 of the compose file format. After updating the version to 3.7, everything works perfectly without any other changes to the compose file.