Docker and RabbitMQ: ECONNREFUSED between containers - node.js

I am trying to set up separate Docker containers for RabbitMQ and its consumer, i.e., the process that listens on the queue and performs the necessary tasks. I created the yml file and the Dockerfile.
I am able to run the yml file, but when I check the docker-compose logs I see ECONNREFUSED errors.
NewUserNotification.js:
require('seneca')()
  .use('seneca-amqp-transport')
  .add('action:new_user_notification', function (message, done) {
    // …
    return done(null, {
      pid: process.pid,
      status: `Process ${process.pid} status: OK`
    });
  })
  .listen({
    type: 'amqp',
    pin: ['action:new_user_notification'],
    name: 'seneca.new_user_notification.queue',
    url: process.env.AMQP_RECEIVE_URL,
    timeout: 99999
  });
error message in docker-compose log:
{"notice":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.","code":
"act_execute","err":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"act_execute","syscall":"connect","address":"127.0.0.1",
"port":5672,"eraro":true,"orig":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},
"seneca":true,"package":"seneca","msg":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.",
"details":{"message":"connect ECONNREFUSED 127.0.0.1:5672","pattern":"hook:listen,role:transport,type:amqp","instance":"Seneca/…………/…………/1/3.4.3/-“,
”orig$":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},"isOperational":true,
"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672}
sample docker-compose.yml file:
version: '2.1'
services:
  rabbitmq:
    container_name: "4340_rabbitmq"
    tty: true
    image: rabbitmq:management
    ports:
      - 15672:15672
      - 15671:15671
      - 5672:5672
    volumes:
      - /rabbitmq/lib:/var/lib/rabbitmq
      - /rabbitmq/log:/var/log/rabbitmq
      - /rabbitmq/conf:/etc/rabbitmq/
  account:
    container_name: "account"
    build:
      context: .
      dockerfile: ./Account/Dockerfile
    ports:
      - 3000:3000
    links:
      - "mongo"
      - "rabbitmq"
    depends_on:
      - "mongo"
      - "rabbitmq"
  new_user_notification:
    container_name: "app_new_user_notification"
    build:
      context: .
      dockerfile: ./Account/dev.newusernotification.Dockerfile
    links:
      - "mongo"
      - "rabbitmq"
    depends_on:
      - "mongo"
      - "rabbitmq"
    command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90", "--", "node", "newusernotification.js"]
amqp connection string:
(I tried both ways, with and without a user/pass)
amqp://username:password@rabbitmq:5672
I added the link attribute to the docker-compose file and referenced the name in the .env file (rabbitmq). I tried to run the NewUserNotification.js file from outside the container and it started fine. What could be causing this problem? Connection string issue? docker-compose.yml configuration issue? Other?

It seems the environment variable AMQP_RECEIVE_URL is not constructed properly. According to the error log, the listener is trying to connect to localhost (127.0.0.1), which is not the rabbitmq service container. Below are the modified configurations for a working sample.
1 docker-compose.yml
version: '2.1'
services:
  rabbitmq:
    container_name: "4340_rabbitmq"
    tty: true
    image: rabbitmq:management
    ports:
      - 15672:15672
      - 15671:15671
      - 5672:5672
    volumes:
      - ./rabbitmq/lib:/var/lib/rabbitmq
  new_user_notification:
    container_name: "app_new_user_notification"
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./un.env
    links:
      - rabbitmq
    depends_on:
      - rabbitmq
    command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "120", "--", "node", "newusernotification.js"]
2 un.env
AMQP_RECEIVE_URL=amqp://guest:guest@rabbitmq:5672
Note that I've passed AMQP_RECEIVE_URL as an environment variable to the new_user_notification service using env_file, and got rid of the account service.
3 Dockerfile
FROM node:7
WORKDIR /app
COPY newusernotification.js /app
COPY wait-for-it.sh /app
RUN npm install --save seneca
RUN npm install --save seneca-amqp-transport
4 newusernotification.js: use the same file as in the question (a quick sanity check is sketched after this list).
5 wait-for-it.sh
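As an extra sanity check for steps 2 and 4, you can log the URL the listener will actually use before wiring up Seneca. A minimal sketch; the guest:guest fallback is only an assumption for local testing, not part of the original setup:
// Top of newusernotification.js: verify the env var actually reached the container.
// If this prints "undefined", seneca-amqp-transport will typically fall back to
// localhost (127.0.0.1:5672), which matches the ECONNREFUSED in the log above.
const amqpUrl = process.env.AMQP_RECEIVE_URL || 'amqp://guest:guest@rabbitmq:5672';
console.log('AMQP_RECEIVE_URL =', process.env.AMQP_RECEIVE_URL, '-> using', amqpUrl);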

It is possible that your RabbitMQ service is not fully up at the time the connection is attempted from the consuming service.
If this is the case, in Docker Compose you can wait for services to come up using an image called dadarek/wait-for-dependencies.
1). Add a new service waitforrabbit to your docker-compose.yml
waitforrabbit:
  image: dadarek/wait-for-dependencies
  depends_on:
    - rabbitmq
  command: rabbitmq:5672
2). Include this service in the depends_on section of the service that requires RabbitMQ to be up.
depends_on:
  - waitforrabbit
3). Start up Compose:
docker-compose run --rm waitforrabbit
docker-compose up -d account new_user_notification
Starting compose in this manner will essentially wait for RabbitMQ to be fully up before the connection from the consuming service is made.
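An application-level alternative (or complement) to the wait container is to retry the broker connection from Node itself before handing control to Seneca. A rough sketch using amqplib, which is an extra dependency not used in the question:
const amqp = require('amqplib');

// Poll RabbitMQ until it accepts connections, then continue with normal startup.
async function waitForRabbit(url, attempts = 30, delayMs = 2000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      const conn = await amqp.connect(url);
      await conn.close(); // we only needed to know the broker is reachable
      return;
    } catch (err) {
      console.log(`RabbitMQ not ready (attempt ${i}/${attempts}): ${err.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('RabbitMQ never became reachable');
}

waitForRabbit(process.env.AMQP_RECEIVE_URL)
  .then(() => require('./newusernotification')) // entry point name assumed from the question
  .catch((err) => { console.error(err); process.exit(1); });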

Related

Docker NodeJS app can't connect to postgres database

I am fairly new to Docker and docker-compose. I tried to spin up a few services using Docker, consisting of a Node.js (Nest.js) API, a Postgres DB, and pgAdmin. Before the API (Node.js) app was dockerized I could connect to the Docker database containers, but now that I have also dockerized the Node app, it is not connecting anymore and I am clueless as to why. Is there anything wrong with the way I have set it up?
Here is my docker-compose file
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
This is the Node.js app's Dockerfile, which builds successfully; in the logs I see the app is trying to connect to the database but it can't (no specific error), it just doesn't find the DB.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
I have 2 env files in my project's root directory.
.env
docker.env
As mentioned above, when I remove the "nftapi" service from the compose file and run the Node app with a simple npm start, it connects to the postgres container.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),
The host in the .env file that is used in the TypeORM module is localhost.
When using networks with docker-compose, you should use the name of the service as your hostname.
So in your case the hostname should be postgres, not localhost.
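For example, assuming POSTGRES_HOST comes from the env file the nftapi container reads, pointing it at the service name (or defaulting it in code) is enough. A sketch based on the snippet from the question; the 'postgres' literal is simply the service name from the compose file above:
// Inside the container, the hostname must resolve on the compose network,
// so use the service name "postgres" rather than localhost.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST || 'postgres',
  port: Number(process.env.POSTGRES_PORT) || 5432,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),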
You can read more about it here:
https://docs.docker.com/compose/networking/

How to run one server docker container before client container

I have a fairly complex application with Next.js on the client, GraphQL on the backend, and nginx as a reverse proxy.
I am using Next.js's incremental static regeneration on the index page, so I want my server up and running before my client container starts building, because when I run npm run build it is going to fetch some data from the GraphQL server. Here is my docker-compose file:
version: "3"
services:
mynginx:
container_name: mynginx
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
graphql:
container_name: graphql_server
depends_on:
- mynginx
build:
context: ./server
dockerfile: Dockerfile
mynextjs:
container_name: nextjs_server
depends_on:
- graphql
build:
context: ./client
dockerfile: Dockerfile
Use depends_on with a healthcheck to start a container only when another one is already working:
https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
something like this
services:
  mynginx:
    container_name: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    healthcheck:
      test: ["CMD-SHELL", "wget -O /dev/null http://localhost || exit 1"]
      timeout: 10s
  graphql:
    ...
    depends_on:
      mynginx:
        condition: service_healthy
In simple terms, what you need is to wait until the graphql service has properly finished starting before you run the mynextjs service.
I'm afraid that depends_on (or links) might not work out of the box in this case. The reason is that although Compose does start the graphql service before it starts the mynextjs service, it doesn't wait until the graphql service is in a READY state before starting the dependent services.
Basically, what you need is a way to tell Compose to wait until the graphql service is in a READY state.
The solution is described in Control startup and shutdown order in the Compose documentation.
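If you prefer not to pull in extra tooling, the same wait can be done with a tiny Node script that polls the GraphQL port before the Next.js build/start step runs. A sketch; the graphql hostname and port 4000 are assumptions, since the question does not show which port the server listens on:
// wait-for-graphql.js: keep retrying a TCP connect until the service answers.
const net = require('net');

function waitForPort(host, port, attempts = 30, delayMs = 2000) {
  return new Promise((resolve, reject) => {
    const tryOnce = (left) => {
      const socket = net.connect({ host, port }, () => {
        socket.end();
        resolve();
      });
      socket.on('error', () => {
        socket.destroy();
        if (left <= 1) return reject(new Error(`${host}:${port} never came up`));
        setTimeout(() => tryOnce(left - 1), delayMs);
      });
    };
    tryOnce(attempts);
  });
}

waitForPort('graphql', 4000)
  .then(() => console.log('graphql is up'))
  .catch((err) => { console.error(err.message); process.exit(1); });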
I hope this helps you. Cheers 🍻 !!!
If I understand your problem correctly, you need to delay beginning the build of the image for the frontend container, until the backend containers are running and ready.
The easiest way that comes to mind would be to use profiles to allow starting these independently.
Eg:
version: "3"
services:
mynginx:
container_name: mynginx
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
profiles: ["backend"]
graphql:
container_name: graphql_server
depends_on:
- mynginx
build:
context: ./server
dockerfile: Dockerfile
profiles: ["backend"]
mynextjs:
container_name: nextjs_server
depends_on:
- graphql
build:
context: ./client
dockerfile: Dockerfile
profiles: ["frontend"]
Then chain the startup with something like:
docker-compose --profile backend up -d && docker-compose --profile frontend up -d
Another option might be splitting into two separate compose files, with a shared docker network between them.
ref: https://docs.docker.com/compose/profiles/

docker-compose network error, can't connect to other host

I'm new to Docker and I'm having issues connecting to my managed database cluster on a cloud service, which is separate from the Docker machine and network.
Recently I attempted to use docker-compose, because manually writing the docker run command for every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I have issues connecting to the database, with this error:
Unhandled error event: Error: connect ENOENT %22rediss://default:password@test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENV in the Dockerfile, then everything works fine.
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential data in the code repository by putting all the details in the Dockerfile.
Here is my dockerfile and docker-compose
dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not wrap the values of your environment variables in " quotes in your docker-compose file.
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
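If you want to see the effect from inside the container (or guard against it), note that the surrounding quotes become part of the value itself. A small sketch; the defensive strip is just a stopgap, and fixing the compose file as above is the real solution:
// With the quoted syntax, process.env.REDIS_URL arrives as
// '"rediss://default:password@host:25061"' (quotes included), which is why
// the client cannot parse it as a connection target.
const raw = process.env.REDIS_URL || '';
const redisUrl = raw.replace(/^"+|"+$/g, ''); // strip leading/trailing quotes
console.log({ raw, redisUrl });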

Running a container stops another container

I've created two basic MEAN stack apps with a common database (MongoDB). I've also built Docker images for these apps.
Problem:
When I start a MEAN stack container (example-app-1) using
docker-compose up -d --build
The container runs smoothly and I'm also able to hit the container and view my page locally.
When I try to start another MEAN stack container (example-app-2) using
docker-compose up -d --build
my previous container is stopped and the current container works without any flaw.
Required:
I want both these containers to run simultaneously using a shared database. I need help in achieving this.
docker-compose.yml for example app 1
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for Example app 2
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose will create containers named project_service. The project name, by default, comes from the last component of the directory. So when you start the second docker-compose file, it stops the containers with those names and starts new ones.
Either move these two docker-compose files into separate directories so the container names are different, or run docker-compose with the --project-name flag set to distinct project names (e.g. docker-compose --project-name app2 up -d) so the containers are started with different names.

Docker-Compose: Unable to connect NodeJS with Mongo & Redis [Connection Refused]

I am working with 3 Services:
api-knotain [main api service]
api-mongo [mongo db for api service]
api-redis [redis for api service]
The Dockerfile for api-knotain looks as follows
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "npm", "start" ]
my docker-compose file is as follows:
version: '3.3'
services:
  api-knotain:
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
Running
docker-compose build
docker-compose up
output:
api-knotain | connecting mongo ...: mongodb://api-mongo:27017/notify
api-knotain | Redis error: Error: Redis connection to api-redis:32770 failed - connect ECONNREFUSED 172.21.0.2:32770
api-knotain | mongo error:MongoNetworkError: failed to connect to server [api-mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017]
api-knotain | Example app listening on port 7777!
Neither mongo nor redis can be connected to.
I tried the following things:
use localhost instead of container name
use different ports
use expose vs port
always with the same result
Note:
I can connect without issue to both mongo and redis through the local CLI at 'localhost:port'.
What am I missing?
Probably the redis and mongo containers start later than your application and therefore your app does not see them. To counter this you must wait for those services to be ready.
Also, links is a legacy feature of Docker. You should use depends_on to control startup order, and user-defined networks if you want to isolate your database and redis containers from the external network.
version: '3.3'
services:
  api-knotain:
    depends_on:
      - api-mongo
      - api-redis
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
It looks like depends_on did not work properly in version 3.3 of the compose file format. After updating the version to 3.7, everything works perfectly without any other changes to the compose file.
