I have a Docker Compose setup with Mongo, Redis and Node.js.
Mongo and Redis are running fine; the problem is that Node.js can't connect to Redis.
When I test with Docker on my Ubuntu laptop it works fine, but not when I run Docker on the server (CentOS 7).
I'm sure Redis itself works, because when I use SSH port forwarding it works and I can access it from my Ubuntu machine.
version: '2.1'
services:
aqua-server:
image: aqua-server
build: .
command: pm2-runtime process.yml
container_name: "aqua-server"
working_dir: /app
environment:
NODE_ENV: production
AQUA_MONGO_IP: 172.20.0.1
AQUA_MONGO_PORT: 3002
AQUA_MONGO_DATABASE: aqua-server
AQUA_EXPRESS_PORT: 3000
AQUA_REDIS_PORT: 3003
AQUA_REDIS_HOST: 172.20.0.1
volumes:
- .:/app/
ports:
- 3001:3000
links:
- mongodb
- redis
depends_on:
- mongodb
- redis
networks:
frontend:
ipv4_address: 172.20.0.2
mongodb:
image: mongo:4.2.0-bionic
container_name: "aqua-mongo"
environment:
MONGO_LOG_DIR: /dev/null
volumes:
- /home/sagara/DockerVolume/MongoLocal:/data/db
ports:
- 3002:27017
networks:
frontend:
ipv4_address: 172.20.0.3
redis:
image: redis:5.0.5-alpine
container_name: "aqua-redis"
ports:
- 3003:6379
networks:
frontend:
ipv4_address: 172.20.0.4
networks:
frontend:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
This is my Dockerfile for Node.js:
FROM node:10.16.0-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD pm2-runtime process.yml
This is process.yml:
apps:
- script: index.js
instances: 1
exec_mode: cluster
name: 'aqua-server'
These are my Docker logs:
aqua-server | Error: Redis connection to 172.20.0.1:3003 failed - connect EHOSTUNREACH 172.20.0.1:3003
aqua-server | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
aqua-server | 2019-09-06T07:48:13: PM2 log: App name:aqua-server id:0 disconnected
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] exited with code [0] via signal [SIGINT]
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] starting in -cluster mode-
aqua-server | 2019-09-06T07:48:13: PM2 log: App [aqua-server:0] online
Redis Docker logs:
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * DB loaded from disk: 0.000 seconds
aqua-redis | 1:M 06 Sep 2019 07:47:12.512 * Ready to accept connections
Mongo Docker logs:
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on /tmp/mongodb-27017.sock
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] Listening on 0.0.0.0
aqua-mongo | 2019-09-06T07:47:15.895+0000 I NETWORK [initandlisten] waiting for connections on port 27017
That problem occurs because you're connecting the way you would from your Ubuntu machine, outside the Docker network (via the host IP and published port), while your Node container lives inside that network, so the way the containers communicate needs to change a bit. Here is the approach I'd recommend as best practice:
version: '2.1'
services:
aqua-server:
image: aqua-server
build: .
command: pm2-runtime process.yml
container_name: "aqua-server"
working_dir: /app
environment:
NODE_ENV: production
AQUA_MONGO_IP: mongodb
AQUA_MONGO_PORT: 27017
AQUA_MONGO_DATABASE: aqua-server
AQUA_EXPRESS_PORT: 3000
AQUA_REDIS_PORT: 6379
AQUA_REDIS_HOST: redis
volumes:
- .:/app/
ports:
- 3001:3000
links:
- mongodb
- redis
depends_on:
- mongodb
- redis
networks:
frontend:
ipv4_address: 172.20.0.2
mongodb:
image: mongo:4.2.0-bionic
container_name: "aqua-mongo"
environment:
MONGO_LOG_DIR: /dev/null
volumes:
- /home/sagara/DockerVolume/MongoLocal:/data/db
ports:
- 3002:27017
networks:
frontend:
ipv4_address: 172.20.0.3
redis:
image: redis:5.0.5-alpine
container_name: "aqua-redis"
ports:
- 3003:6379
networks:
frontend:
ipv4_address: 172.20.0.4
networks:
frontend:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
This works because Docker handles DNS for you: service names act as hostnames. In the docker-compose.yml above I've changed AQUA_MONGO_IP to mongodb and AQUA_REDIS_HOST to redis, and I've changed the ports as well. The containers created by this docker-compose.yml share one network and communicate over the ports exposed in their Dockerfiles. So other services can reach aqua-server on port 3000, and aqua-server can reach the mongodb service on port 27017 (the port exposed in the mongo image's Dockerfile via EXPOSE 27017) and the redis service on port 6379.
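On the application side, the Node process then just reads those environment variables and connects by service name. A minimal sketch, assuming the classic node_redis (v2/v3) client that the "Redis connection to ... failed" log line suggests and the official mongodb driver; how index.js actually wires this up is my assumption:
// Hypothetical sketch: connect to Redis and Mongo by Compose service name.
const redis = require('redis');
const { MongoClient } = require('mongodb');

const redisClient = redis.createClient({
  host: process.env.AQUA_REDIS_HOST,         // "redis" (the service name), not 172.20.0.1
  port: Number(process.env.AQUA_REDIS_PORT), // 6379, the port exposed by the image
});
redisClient.on('error', (err) => console.error('Redis error:', err));

const mongoUrl = `mongodb://${process.env.AQUA_MONGO_IP}:${process.env.AQUA_MONGO_PORT}/${process.env.AQUA_MONGO_DATABASE}`;
MongoClient.connect(mongoUrl)
  .then(() => console.log('Mongo connected'))
  .catch((err) => console.error('Mongo error:', err));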
Hope that helps!
Related
I created a JS app with Docker Compose, with a frontend, a backend and a common component using Yarn Workspaces. It works on Linux, but I am out of ideas on how to make it work on WSL.
The Docker Compose file:
# Use postgres/example user/password credentials
version: '3.1'
services:
postgres:
image: postgres:latest
# restart: always
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: caddie_app
ports:
- '5432:5432'
backend:
image: node:16
volumes:
- '.:/app'
ports:
- '3001:3001' # Nest
depends_on:
- postgres
working_dir: /app
command: ["yarn", "workspace", "#caddie/backend", "start:dev"]
environment:
# with Docker we listen on the postgres network, but it is reachable at localhost on our host
DATABASE_URL: postgresql://postgres:password@postgres:5432/caddie_app?schema=public
frontend:
image: node:16
volumes:
- '.:/app'
ports:
- '3000:3000' # React
depends_on:
- backend
working_dir: /app
command: ["yarn", "workspace", "#caddie/frontend", "start"]
I can reach the database with DBeaver, I can fetch the React JS scripts on localhost:3000, but I cannot request the NestJS server on localhost:3001.
The NestJS server is listening on 0.0.0.0
await app.listen(3001, '0.0.0.0');
I allowed ports 3000 and 3001 on the firewall. I also tried to request the Node.js server directly through the WSL IP found in ipconfig, but the problem remains. I can't figure out what's wrong.
Thanks!
I have been trying to dockerize an API, but Redis crashes; Node.js and MongoDB work.
docker-compose.yaml file:
version: '3'
services:
mongo:
container_name: mongo
image: mongo
networks:
- webnet
ports:
- '27017:27017'
redis:
image: redis
container_name: redis
command: ["redis-server","--bind","redis","--port","6379"]
ports:
- '6379:6379'
hostname: redis
app:
container_name: password-manager-docker
restart: always
build: .
networks:
- webnet
ports:
- '80:5000'
links:
- mongo
- redis
environment:
MONGODB_URI: ${MONGODB_URI}
clientID: ${clientID}
clientSecret : ${clientSecret}
PORT: ${PORT}
NODE_ENV : ${NODE_ENV}
JWT_SECRET_KEY: ${JWT_SECRET_KEY}
JWT_EXPIRE: ${JWT_EXPIRE}
REFRESH_TOKEN: ${REFRESH_TOKEN}
JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
JWT_COOKIE: ${JWT_COOKIE}
networks:
webnet:
Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
The error is: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379.
How can I fix this?
By default, all containers in a Compose file join a default network on which they can communicate with each other, but once you assign a custom network to some services you have to assign it to every service that needs to talk to the others. So you can either remove the webnet declaration from the app and mongo services or add it to the redis service.
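Once app and redis share a network, the service name redis resolves from inside the app container, so the client should target redis rather than 127.0.0.1 (which the ECONNREFUSED error above points at). A throwaway connectivity probe you could drop into the Node app, purely my own illustration:
// Quick TCP probe: resolves the Compose service name "redis" and connects to port 6379
// from inside the app container. Not part of the asker's API, just a sanity check.
const net = require('net');

const socket = net.connect({ host: 'redis', port: 6379 }, () => {
  console.log('reached redis:6379');
  socket.end();
});
socket.on('error', (err) => console.error('cannot reach redis:6379 -', err.message));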
No matter what I try I can't seem to get my node app to connect to redis between containers within the same docker-compose yml config. I've seen a lot of similar questions but none of the answers seem to work.
I'm using official images in both cases, not building my own
I am putting "redis" as my host and setting it as hostname in my docker compose YML config
const client = redis.createClient({ host: "redis" });
in my redis.conf I am using bind 0.0.0.0
This is what the console is printing out:
Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 6379
}
This is my docker-compose.yml
version: '3'
services:
server:
image: "node:10-alpine"
working_dir: /usr/src/app
user: "node"
command: "npm start"
volumes:
- .:/usr/src/app
ports:
- '3000:3000'
- '3001:3001'
- '9229:9229' # Node debugging port
environment:
- IS_DOCKER=1
- NODE_ENV=DEVELOPMENT
depends_on:
- db
db:
image: "redis:5.0-alpine"
expose:
- '6379'
volumes:
- ./redis.conf:/usr/local/etc/redis/redis.conf
- redis-data:/data
command:
- redis-server
- /usr/local/etc/redis/redis.conf
hostname: redis
volumes:
redis-data:
UPDATE
Here's my redis.conf, it's not much.
bind 0.0.0.0
appendonly yes
appendfilename "my_app.aof"
appendfsync always
UPDATE 2
Things I've noticed and tried:
In my original setup, when I run docker inspect I can see they are both joined to the same network. When I exec .../bin/bash into the redis container I can successfully ping the server container, but from inside the server container I cannot ping the redis one.
network_mode: bridge - adding that to both containers does not work.
I did get one baby step closer by trying out this:
server:
network_mode: host
redis:
network_mode: service:host
I'm on a Mac, and in order to get host mode to work you need to do that. It does work in the sense that my server successfully connects to the redis container. However, hitting localhost:3000 does not work even though I'm forwarding the ports.
version: '3'
services:
server:
image: "node:10-alpine"
#network_mode: bridge
#links is necessary if you use network_mode: bridge
#links: [redis]
networks:
- default
working_dir: /usr/src/app
user: "node"
command: "npm start"
volumes:
- .:/usr/src/app
ports:
- '3000:3000'
- '3001:3001'
- '9229:9229' # Node debugging port
environment:
- IS_DOCKER=1
- NODE_ENV=DEVELOPMENT
depends_on:
- redis
redis:
image: "redis:5.0-alpine"
#container_name: redis
#network_mode: bridge
networks:
- default
expose:
- '6379'
volumes:
- ./redis.conf:/usr/local/etc/redis/redis.conf
- redis-data:/data
command:
- redis-server
- /usr/local/etc/redis/redis.conf
volumes:
redis-data:
networks:
default:
Rename the service to the hostname you want to use: redis in your case instead of db. To make it reachable over the Docker network you have to put both services on the same network, as above, or use network_mode: bridge together with links: [redis] instead.
Try this to test your network:
docker ps to get the container id or running name of the server container
docker exec -it id/name /bin/sh
Now you have a shell inside the server container and should be able to resolve redis via:
ping redis or nc -zv redis 6379
For those who are still getting this error: I found that in newer versions of the redis client (node-redis v4 and up) you need to configure the client like this:
// node-redis v4+: connection options go under `socket`
const { createClient } = require('redis');

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST, // e.g. the Compose service name ("redis-server" below)
    port: process.env.REDIS_PORT,
  },
});
It solved my problem.
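For completeness, a short usage sketch of the client above (my assumption of the calling code; node-redis v4 requires an explicit connect before commands):
// node-redis v4: the client must connect explicitly before issuing commands.
(async () => {
  await client.connect();
  console.log(await client.ping()); // "PONG" once the connection is up
})();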
Then in the docker-compose file you don't need to specify ports for the Redis service:
version: "3.4"
services:
redis-server:
image: "redis:latest"
restart: always
api:
depends_on:
- redis-server
restart: always
build: .
ports:
- "5000:5000"
-- Dockerfile
FROM node:4.4.2
EXPOSE 3000
ENV NODE_ENV test
WORKDIR /app
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --silent
COPY . /app
CMD node server.js
-- DOCKER-COMPOSE
version: "2"
services:
postgresql:
image: postgres:9.5.21
container_name: postgresql
volumes:
- '$HOME/DOCKER_VOLUME/db-data1:$HOME/DOCKER_VOLUME/var/lib/postgres'
environment:
POSTGRES_PASSWORD: <PWD>
POSTGRES_DB: <DB>
POSTGRES_USER: <USER>
ports:
- '5432:5432'
redis:
image: redis
container_name: redis
ports:
- '6379:6379'
broker:
image: eclipse-mosquitto
container_name: mosquitto
volumes:
- $HOME/DOCKER_VOLUME/etc/mosquitto:$HOME/DOCKER_VOLUME/etc/mosquitto:ro
- $HOME/DOCKER_VOLUME/var/log/mosquitto:$HOME/DOCKER_VOLUME/var/log/mosquitto:rw
ports:
- '1883:1883'
app:
container_name: ms_user_manager
environment:
NODE_ENV: test
build: .
volumes:
- .:/app
- /app/node_modules
depends_on:
- postgresql
- redis
- broker
ports:
- '3000:3000'
I am able to access Postgres and Redis from outside the containers, but the Node.js app is unable to connect to either and gives these errors:
Redis - connection to 0.0.0.0:6379 failed - connect ECONNREFUSED 0.0.0.0:6379"
Postgres - Error: connect ECONNREFUSED 0.0.0.0:5432
Any idea what I am doing wrong or what I missed?
Your Node.js app should try to connect to redis:6379. Change this in your application configuration; at the moment it looks like it is trying to connect to localhost, which probably only works when you run your server locally on the host.
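A minimal sketch of what that change could look like, assuming the classic node_redis (v2/v3) API that matches the error format above; the env-variable name is my own illustration:
// Hypothetical sketch: point the client at the Compose service name instead of 0.0.0.0/localhost.
const redis = require('redis');

const client = redis.createClient({
  host: process.env.REDIS_HOST || 'redis', // service name, resolvable inside the Compose network
  port: 6379,
});

client.on('error', (err) => console.error('Redis error:', err));
client.on('ready', () => console.log('Connected to redis:6379'));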
I have a docker-compose file with a Mongo and a Node container. Mongo works great, but the Node Feathers container is not accessible from localhost:3030 (I have also tried 127.0.0.1:3030 and 0.0.0.0:3030).
version: "3"
services:
app:
image: node:lts-alpine
volumes:
- ./feathers-full:/app
working_dir: /app
depends_on:
- mongo
environment:
NODE_ENV: development
command: npm run dev
ports:
- 3030:3030
expose:
- "3030"
mongo:
image: mongo
ports:
- 27017:27017
expose:
- "27017"
volumes:
- ./data/db:/data/db
Are you binding to 127.0.0.1 in the Feathers server? If so, you won't be able to access the server from outside the container: you need to bind to 0.0.0.0. See https://pythonspeed.com/articles/docker-connection-refused/ for an explanation of why.
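A minimal sketch of the fix, assuming a stock Feathers/Express setup rather than the asker's actual src/index.js (which isn't shown):
// Hypothetical sketch: bind to 0.0.0.0 so the port published as 3030:3030 is reachable
// from the host; binding to 127.0.0.1 would keep it container-only.
const feathers = require('@feathersjs/feathers');
const express = require('@feathersjs/express');

const app = express(feathers());

const server = app.listen(3030, '0.0.0.0');
server.on('listening', () => console.log('Feathers app listening on 0.0.0.0:3030'));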