I have a dockerized Node.js API running on an EC2 instance. The app itself runs inside the container on port 5000 and is mapped via docker-compose as 5000:5000.
I actually want the API to be reachable on a port I can access via HTTPS, so I tried mapping the ports in docker-compose like this: "443:5000". I added an inbound rule to the EC2 security group allowing HTTPS (443) access from anywhere, but when I enter the IP or DNS name in my browser it does not respond.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
Is there anything I'm missing?
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '443:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
I'm trying to get a basic Node.js and Postgres system up and running, but for some reason the node container keeps receiving a SIGTERM signal and shutting down, only to be started back up by the restart policy and then shut down again. The cycle goes on and on.
What am I missing here? I ran the same code in non-swarm mode and it worked fine: the container was healthy and stayed up. One more thing I noticed in swarm mode is that in spite of asking Docker to keep 1 replica, docker stack services service_name always reports 0/1 replicas.
Posting my Dockerfile and docker-compose.yml here:
# BASE stage
FROM node:14-alpine as base
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile --prod
FROM node:14-alpine
ENV NODE_ENV development
ENV VERSION V1
WORKDIR /usr/src/app
RUN apk --no-cache add curl
COPY src src/
# Other copy commands
COPY --from=base /usr/src/app/node_modules /usr/src/app/node_modules
# check every 5s to ensure this service returns HTTP 200
HEALTHCHECK --interval=5s --timeout=3s --start-period=10s --retries=3 \
CMD curl -fs http://localhost/health
ENTRYPOINT [ "node", "src/index.js" ]
version: "3.7"
services:
api:
image: demo/hobby:v1
deploy:
replicas: 1
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 5
window: 120s
rollback_config:
parallelism: 1
delay: 20s
order: start-first
update_config:
parallelism: 1
delay: 1s
failure_action: rollback
order: start-first
env_file:
- ./.env
ports:
- target: 9200
published: 80
mode: host
networks:
- verse
postgres:
image: "postgres:12.3-alpine"
container_name: "test-db-dev"
networks:
- verse
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_USER=${DB_USER}
expose:
- "5432"
ports:
- "5432:5432"
restart: "unless-stopped"
networks:
verse:
driver: overlay
external: false
I assume that your container is being killed every 30 seconds. If this is true, then this is the cause:
The HEALTHCHECK is trying to curl http://localhost/health (defaults to port 80).
Although you expose the app to the host on port 80, the HEALTHCHECK is performed from the container's perspective, where no service is listening on that port.
Assuming that your node app is listening on port 9200 and that a GET performed on /health returns status 200, the Dockerfile should be built like this:
[...]
HEALTHCHECK --interval=5s --timeout=3s --start-period=10s --retries=3 \
CMD curl -fs http://localhost:9200/health
[...]
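For context, the healthcheck only passes if something inside the same container answers on that port and path. A minimal sketch of such an endpoint, assuming the app really does listen on 9200 and serves /health (adapt it to your actual framework):
// health endpoint sketch; port 9200 and the JSON body are assumptions
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ status: 'ok' }));
  }
  res.writeHead(404);
  res.end();
}).listen(9200);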
Docker newbie here. The docker-compose file builds without any issues, but when I try to open my app at localhost:4200 I get "localhost didn't send any data" in Chrome and "the server unexpectedly dropped the connection" in Safari. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
my-portal:
build: .
ports:
- "4200:4200"
depends_on:
- backend
backend:
build: ./backend
ports:
- "3000:3000"
environment:
POSTGRES_HOST: host.docker.internal
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: mypwd
depends_on:
- db
db:
image: postgres:9.6-alpine
environment:
POSTGRES_DB: mydb
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: mypwd
POSTGRES_HOST: host.docker.internal
ports:
- 5432:5432
restart: always
volumes:
- ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Dockerfiles:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I built the Angular image on its own and ran my app on localhost:4200 it worked fine. Please let me know if I am missing anything.
Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That configuration listens on port 80, so that's the port number you need to use in the ports: directive:
services:
my-portal:
build: .
ports:
- "4200:80" # <-- second port must match nginx image's port
depends_on:
- backend
The EXPOSE directive in the first stage is ignored, so you can delete it. The second FROM nginx:alpine line causes docker build to start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
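For reference, the default server block shipped in the nginx image looks roughly like this (trimmed from the stock default.conf, so treat it as an approximation), which is where the container-side port 80 comes from:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}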
I have a microservice-based Node app. I am using Docker, docker-compose, and Traefik for service discovery.
I have 2 microservices at this moment:
the server app: running at node-app.localhost:8000
the search microservice: running at search-microservice.localhost:8002
The issue is that I can't make a request from one microservice to another.
Here is my docker-compose config:
# all variables used in this file are defined in the .env file
version: "2.2"
services:
node-app-0:
container_name: node-app
restart: always
build: ./backend/server
links:
- ${DB_HOST}
depends_on:
- ${DB_HOST}
ports:
- "8000:3000"
labels:
- "traefik.port=80"
- "traefik.frontend.rule=Host:node-app.localhost"
reverse-proxy:
image: traefik # The official Traefik docker image
command: --api --docker # Enables the web UI and tells Traefik to listen to docker
ports:
- "80:80" # The HTTP port
- "8080:8080" # The Web UI (enabled by --api)
volumes:
- /var/run/docker.sock:/var/run/docker.sock
search-microservice:
container_name: ${CONTAINER_NAME_SEARCH}
restart: always
build: ./backend/search-service
links:
- ${DB_HOST}
depends_on:
- ${DB_HOST}
ports:
- "8002:3000"
labels:
- "traefik.port=80"
- "traefik.frontend.rule=Host:search-microservice.localhost"
volumes:
node-ts-app-volume:
external: true
Both node-app and search-microservice expose port 3000.
Why can't I call http://search-microservice.localhost:8002 from the node app? Calling it from the browser works, though.
Because node-app is a container, and to reach other containers it has to use the service name and the internal (container) port.
In your case that is search-microservice:3000.
To reach the host PC and its published ports instead, you would use the host.docker.internal name together with the external port.
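For example, from code running inside the node-app container, a request to the search service would use the service name and the container port (the /search path below is just illustrative):
// inside the node-app container: service name + internal port, not localhost:8002
const http = require('http');

http.get('http://search-microservice:3000/search?q=test', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
}).on('error', console.error);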
If you want to access other services from a different container by their hostnames, you can use the "extra_hosts" parameter in your docker-compose.yml file. You also have to set the "ipv4_address" parameter under the networks section for each service.
For example;
services:
node-app-1:
container_name: node-app
networks:
apps:
ipv4_address: 10.1.3.1
extra_hosts:
"search-microservice.localhost:10.1.3.2"
node-app-2:
container_name: search-microservice
networks:
apps:
ipv4_address: 10.1.3.2
extra_hosts:
"node-app.localhost:10.1.3.1"
Extra hosts in docker-compose
I am trying to set up separate Docker containers for RabbitMQ and its consumer, i.e., the process that listens on the queue and performs the necessary tasks. I created the yml file and the Dockerfile.
I am able to run the yml file; however, when I check the docker-compose logs I see ECONNREFUSED errors.
NewUserNotification.js:
require('seneca')()
  .use('seneca-amqp-transport')
  .add('action:new_user_notification', function (message, done) {
    // …
    return done(null, {
      pid: process.pid,
      status: `Process ${process.pid} status: OK`
    });
  })
  .listen({
    type: 'amqp',
    pin: ['action:new_user_notification'],
    name: 'seneca.new_user_notification.queue',
    url: process.env.AMQP_RECEIVE_URL,
    timeout: 99999
  });
error message in docker-compose log:
{"notice":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.","code":
"act_execute","err":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"act_execute","syscall":"connect","address":"127.0.0.1",
"port":5672,"eraro":true,"orig":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},
"seneca":true,"package":"seneca","msg":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.",
"details":{"message":"connect ECONNREFUSED 127.0.0.1:5672","pattern":"hook:listen,role:transport,type:amqp","instance":"Seneca/…………/…………/1/3.4.3/-“,
”orig$":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},"isOperational":true,
"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672}
sample docker-compose.yml file:
version: '2.1'
services:
rabbitmq:
container_name: "4340_rabbitmq"
tty: true
image: rabbitmq:management
ports:
- 15672:15672
- 15671:15671
- 5672:5672
volumes:
- /rabbitmq/lib:/var/lib/rabbitmq
- /rabbitmq/log:/var/log/rabbitmq
- /rabbitmq/conf:/etc/rabbitmq/
account:
container_name: "account"
build:
context: .
dockerfile: ./Account/Dockerfile
ports:
- 3000:3000
links:
- "mongo"
- "rabbitmq"
depends_on:
- "mongo"
- "rabbitmq"
new_user_notification:
container_name: "app_new_user_notification"
build:
context: .
dockerfile: ./Account/dev.newusernotification.Dockerfile
links:
- "mongo"
- "rabbitmq"
depends_on:
- "mongo"
- "rabbitmq"
command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90", "--", "node", “newusernotification.js"]
amqp connection string:
(I tried both ways, with and without a user/pass)
amqp://username:password@rabbitmq:5672
I added the links attribute to the docker-compose file and referenced the service name (rabbitmq) in the .env file. I tried running the NewUserNotification.js file from outside the container and it started fine. What could be causing this problem? A connection string issue? A docker-compose.yml configuration issue? Something else?
It seems the environment variable AMQP_RECEIVE_URL is not constructed properly. According to the error log, the listener is trying to connect to localhost (127.0.0.1), which is not the rabbitmq service container's IP. Below are the modified configurations for a working sample; a quick way to verify the variable inside the container is shown after the files.
1 docker-compose.yml
version: '2.1'
services:
rabbitmq:
container_name: "4340_rabbitmq"
tty: true
image: rabbitmq:management
ports:
- 15672:15672
- 15671:15671
- 5672:5672
volumes:
- ./rabbitmq/lib:/var/lib/rabbitmq
new_user_notification:
container_name: "app_new_user_notification"
build:
context: .
dockerfile: Dockerfile
env_file:
- ./un.env
links:
- rabbitmq
depends_on:
- rabbitmq
command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "120", "--", "node", "newusernotification.js"]
2 un.env
AMQP_RECEIVE_URL=amqp://guest:guest@rabbitmq:5672
Note that I've passed AMQP_RECEIVE_URL as an environment variable to the new_user_notification service using env_file, and got rid of the account service.
3 Dockerfile
FROM node:7
WORKDIR /app
COPY newusernotification.js /app
COPY wait-for-it.sh /app
RUN npm install --save seneca
RUN npm install --save seneca-amqp-transport
4 newusernotification.js: use the same file as in the question.
5 wait-for-it.sh
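To confirm what the consumer container actually receives, a quick check at the top of newusernotification.js (or a throwaway script run inside the container) is enough; the fallback value below just documents the expected shape:
// sanity check: the broker URL must point at the compose service name, not localhost
const url = process.env.AMQP_RECEIVE_URL || 'amqp://guest:guest@rabbitmq:5672';
console.log('Connecting to broker at:', url);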
It is possible that your RabbitMQ service is not fully up at the time the connection is attempted from the consuming service.
If this is the case, in Docker Compose, you can wait for services to come up using a container called dadarek/wait-for-dependencies.
1). Add a new service waitforrabbit to your docker-compose.yml
waitforrabbit:
image: dadarek/wait-for-dependencies
depends_on:
- rabbitmq
command: rabbitmq:5672
2). Include this service in the depends_on section of the service that requires RabbitMQ to be up.
depends_on:
- waitforrabbit
3). Startup compose
docker-compose run --rm waitforrabbit
docker-compose up -d account new_user_notification
Starting compose in this manner will essentially wait for RabbitMQ to be fully up before the connection from the consuming service is made.
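If you would rather keep the wait logic in the application itself instead of an extra container, a rough Node sketch using only the built-in net module could look like this (host, port, and retry counts are illustrative):
// wait-for-port.js: polls a TCP port before the consumer starts;
// an application-level alternative to wait-for-it.sh / wait-for-dependencies
const net = require('net');

function waitForPort(host, port, retries = 30, delayMs = 2000) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      const socket = net.connect({ host, port }, () => {
        socket.end();
        resolve();
      });
      socket.on('error', () => {
        socket.destroy();
        if (left <= 0) return reject(new Error(`${host}:${port} not reachable`));
        setTimeout(() => attempt(left - 1), delayMs);
      });
    };
    attempt(retries);
  });
}

// usage: wait for the rabbitmq service, then load the consumer
waitForPort('rabbitmq', 5672)
  .then(() => require('./newusernotification'))
  .catch((err) => { console.error(err); process.exit(1); });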