I'm having a tough time connecting my Node.js backend service to the Redis service (both running in swarm mode).
The backend server fails to start, complaining that it was not able to reach Redis, with the following error:
Yet if I try to reach the Redis service from within my backend container using curl redis-dev.localhost:6379, I get the response curl: (52) Empty reply from server, which implies that the Redis service is reachable from the backend container:
Below is the backend's docker-compose YAML:
version: "3.8"
services:
  backend:
    image: backend-app:latest
    command: node dist/main.js
    ports:
      - 4000:4000
    environment:
      - REDIS_HOST='redis-dev.localhost'
      - APP_PORT=4000
    deploy:
      restart_policy:
        condition: on-failure
      update_config:
        parallelism: 1
        delay: 15s
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.backend-app.rule=Host(`backend-app.localhost`)"
        - "traefik.http.services.backend-app.loadbalancer.server.port=4000"
        - "traefik.docker.network=traefik-public"
    networks:
      - traefik-public
networks:
  traefik-public:
    external: true
    driver: overlay
And the following is my Redis compose file:
version: "3.8"
services:
  redis:
    image: redis:7.0.4-alpine3.16
    hostname: "redis-dev.localhost"
    ports:
      - 6379:6379
    deploy:
      restart_policy:
        condition: on-failure
      update_config:
        parallelism: 1
        delay: 15s
    networks:
      - traefik-public
networks:
  traefik-public:
    external: true
    driver: overlay
The issue was with the backend's Docker Compose file. The REDIS_HOST environment variable should have been redis-dev.localhost instead of 'redis-dev.localhost'. In the list form of environment, the quotes are part of the value, so the backend was trying to resolve a hostname that literally included the quote characters; that's why it complained that it was not able to reach the Redis host.
It's funny how elementary these things can be.
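For illustration, here is a minimal sketch of how those stray quotes surface on the application side, assuming the backend connects with the ioredis client (the variable name mirrors the compose file above):

import Redis from "ioredis";

// With - REDIS_HOST='redis-dev.localhost' in the compose file, the quotes are
// part of the value, so the DNS lookup fails with ENOTFOUND on the literal
// string 'redis-dev.localhost' (quotes included).
const redis = new Redis({
  host: process.env.REDIS_HOST ?? "localhost",
  port: 6379,
});

redis.on("error", (err) => console.error("Redis connection error:", err.message));

redis.ping().then((reply) => console.log(reply)); // "PONG" once REDIS_HOST is unquoted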
Related
I have a microservice-based Node app. I am using Docker, docker-compose, and Traefik for service discovery.
I have 2 microservices at the moment:
the server app, running at node-app.localhost:8000
the search microservice, running at search-microservice.localhost:8002
The issue: I can't make a request from one microservice to the other.
Here is my docker compose config:
# all variables used in this file are defined in the .env file
version: "2.2"
services:
  node-app-0:
    container_name: node-app
    restart: always
    build: ./backend/server
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8000:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:node-app.localhost"
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  search-microservice:
    container_name: ${CONTAINER_NAME_SEARCH}
    restart: always
    build: ./backend/search-service
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8002:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:search-microservice.localhost"
volumes:
  node-ts-app-volume:
    external: true
Both node-app and search-microservice expose port 3000.
Why can't I call http://search-microservice.localhost:8002 from the node app? Calling it from the browser works, though.
Because node-app runs inside a container, it has to use the service name and the internal port to reach other containers.
In your case that is search-microservice:3000.
To reach the host PC and the published ports, you have to use the host.docker.internal name together with the external port.
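As a sketch, a request from the node-app container to the search service would then look like this (assuming axios is used; the /search path and the q parameter are hypothetical, for illustration only):

import axios from "axios";

// Inside the compose network the service name resolves through Docker's DNS,
// and the container's internal port (3000) is used, not the published 8002.
const SEARCH_BASE_URL = "http://search-microservice:3000";

async function search(term: string) {
  // "/search" is a hypothetical endpoint on the search microservice
  const res = await axios.get(`${SEARCH_BASE_URL}/search`, { params: { q: term } });
  return res.data;
}

search("docker").then(console.log).catch(console.error);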
If you want to reach other services from a different container by their hostnames, you can use the extra_hosts parameter in your docker-compose.yml file. You also have to set the ipv4_address parameter under the networks key for each of those services.
For example:
services:
  node-app-1:
    container_name: node-app
    networks:
      apps:
        ipv4_address: 10.1.3.1
    extra_hosts:
      - "search-microservice.localhost:10.1.3.2"
  node-app-2:
    container_name: search-microservice
    networks:
      apps:
        ipv4_address: 10.1.3.2
    extra_hosts:
      - "node-app.localhost:10.1.3.1"

# the apps network must be defined with a fixed subnet for ipv4_address
# to work; the subnet and gateway below are assumed values
networks:
  apps:
    driver: bridge
    ipam:
      config:
        - subnet: 10.1.3.0/24
          gateway: 10.1.3.254
Extra hosts in docker-compose
I am running an application with multiple containers, as below:
feeder - a simple Node.js container from the node:alpine image
api - a Node.js container with Express from the node:alpine image
ui-app - a React app container from the node:alpine image
When I try to call the api service from ui-app, I get the error below:
[image: console log of the error]
I am not sure what is causing the problem.
If I access the service as http://192.168.99.100/ping it works (that is my docker machine's default IP), but if I use the container name, like http://api:3200/ping, it does not work. Please help.
Below is my docker-compose:
version: '3'
services:
  feeder:
    build: ./feeder
    container_name: feeder
    tty: true
    depends_on:
      - redis
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3100:3100"
    networks:
      - hmdanet
  api:
    build: ./api
    container_name: api
    tty: true
    depends_on:
      - feeder
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3200:3200"
    networks:
      hmdanet:
        aliases:
          - "hmda-api"
  ui-app:
    build: ./ui-app
    container_name: ui-app
    tty: true
    depends_on:
      - api
    links:
      - api
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3000:3000"
    networks:
      - hmdanet
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    networks:
      - hmdanet
networks:
  hmdanet:
    driver: bridge
You can only use a service name as a domain name when you are inside a container. In your case it's your browser making the call, and the browser does not know what api is. In your web app, you should have an environment variable like a base URL, set to the IP of your docker machine or to localhost.
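As a sketch, the browser-side code could read that base URL from a build-time environment variable (REACT_APP_API_BASE_URL is an assumed name, following Create React App conventions):

// Browser code runs on the host, not inside the compose network, so it must
// use the docker-machine IP (or localhost) and the published port, never "api".
const API_BASE_URL =
  process.env.REACT_APP_API_BASE_URL ?? "http://192.168.99.100:3200";

async function ping(): Promise<string> {
  const res = await fetch(`${API_BASE_URL}/ping`);
  return res.text();
}

ping().then(console.log).catch(console.error);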
I have containers for two Node.js services and one Nginx container acting as a reverse proxy.
I have Nginx listening on port 80, so it is publicly available via localhost in my browser.
I also use the reverse proxy to proxy_pass to the responsible service:
location /api/v1/service1/ {
    proxy_pass http://service1:3000/;
}
location /api/v1/service2/ {
    proxy_pass http://service2:3000/;
}
In service 1, there is an axios call to service 2 that makes a request to localhost/api/v1/service2.
But it says the connection is refused. I suspect that localhost in service 1 refers to its own container, not to the Docker host.
version: '3'
services:
  service1:
    build: './service1'
    networks:
      - backend
  service2:
    build: './service2'
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
networks:
  backend:
    driver: bridge
Even after using the network, it still says ECONNREFUSED.
Please help.
Try adding depends_on entries for nginx in your docker-compose file, like below:
version: '3'
services:
  service1:
    build: './service1'
    expose:
      - "3000"
    networks:
      - backend
  service2:
    build: './service2'
    expose:
      - "3000"
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
    depends_on:
      - service1
      - service2
networks:
  backend:
    driver: bridge
This makes sure that both services are running before the nginx container attempts to connect to them. Perhaps the connection is refused because the nginx container keeps crashing: when it reads its conf file and tries to connect to the backends, it does not find the two services running.
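Ordering aside, the axios call inside service 1 also has to target a name that resolves on the backend network; localhost inside the service1 container points at service1 itself. A minimal sketch of the container-to-container call, assuming axios (the root path is illustrative):

import axios from "axios";

// "service2" resolves through Docker's DNS on the shared backend network;
// port 3000 is the internal port the service listens on.
async function callService2() {
  const res = await axios.get("http://service2:3000/");
  return res.data;
}

// Alternatively, go through the reverse proxy by its service name:
//   axios.get("http://nginx/api/v1/service2/")

callService2().then(console.log).catch(console.error);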
I have set up a docker stack (see the docker-compose.yml below).
It has 2 services:
- A Node.js (LoopBack) service running in one (or more) containers.
- A MongoDB instance running in another container.
What HOST do I use in my Node application to let it connect to the MongoDB in the other container (currently on the same node, but possibly on a different node if I set up a swarm)?
I have tried bridge, webnet, the host's IP, etc., but no luck.
Note: I am passing the host into the Node.js app with the environment variable "MONGODB_SERVICE_SERVICE_HOST".
Thanks in advance!
version: "3"
services:
  web:
    image: myaccount/loopback-app:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    ports:
      - "8060:3000"
    environment:
      MONGODB_SERVICE_SERVICE_HOST: "webnet"
    depends_on:
      - mongo
    networks:
      - webnet
  mongo-database:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - "/Users/jason/workspace/mongodb/db:/data/db"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
webnet is not the host; it is the network. The host is mongo-database.
So change webnet to mongo-database:
ENV MONGO_URL "mongodb://mongo-database:27017/dbName"
To check communication with mongo-database, enter the Node.js container and try to ping mongo-database:
ping mongo-database
If it works, you know that your server can communicate with your mongo instance.
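As a sketch, the Node side then boils down to building the connection string from that environment variable (shown here with the plain mongodb driver; dbName is a placeholder):

import { MongoClient } from "mongodb";

// The stack's service name is what Docker's DNS resolves on the shared
// network; "dbName" is a placeholder database name.
const host = process.env.MONGODB_SERVICE_SERVICE_HOST ?? "mongo-database";
const url = `mongodb://${host}:27017/dbName`;

async function main() {
  const client = new MongoClient(url);
  await client.connect(); // fails with ENOTFOUND if the host name is wrong
  console.log("connected to", url);
  await client.close();
}

main().catch(console.error);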
How can I configure my DigitalOcean boxes to have the correct firewall settings?
I've followed the official guide for getting DigitalOcean and Docker containers working together.
I have 3 docker nodes that I can see when I run docker-machine ls. I have created a master docker node and have joined the other docker nodes as workers. However, if I attempt to visit the URL of a node, the connection hangs. This setup works locally.
Here is the docker-compose I am using for production.
version: "3"
services:
  api:
    image: "api"
    command: rails server -b "0.0.0.0" -e production
    depends_on:
      - db
      - redis
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    env_file:
      - .env-prod
    networks:
      - apinet
    ports:
      - "3000:3000"
  client:
    image: "client"
    depends_on:
      - api
    deploy:
      restart_policy:
        condition: on-failure
    env_file:
      - .env-prod
    networks:
      - apinet
      - clientnet
    ports:
      - "4200:4200"
      - "35730:35730"
  db:
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
    env_file: .env-prod
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - ~/.docker-volumes/app/mysql/data:/var/lib/mysql/data
  redis:
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - ~/.docker-volumes/app/redis/data:/var/lib/redis/data
  nginx:
    image: app_nginx
    deploy:
      restart_policy:
        condition: on-failure
    env_file: .env-prod
    depends_on:
      - client
      - api
    networks:
      - apinet
      - clientnet
    ports:
      - "80:80"
networks:
  apinet:
    driver: overlay
  clientnet:
    driver: overlay
I'm pretty confident that the problem is with the firewall settings. I'm not sure, however, which ports need to be open. I've consulted this guide.