Communication between microservices with docker-compose and traefik - node.js

I have a microservice based node app. I am using docker, docker-compose and traefik for service discovery.
I have 2 microservices at this moment:
the server app: running at node-app.localhost:8000
the search microservice running at search-microservice.localhost:8002
The issue is that I can't make a request from one microservice to another.
Here is my docker-compose config:
# all variables used in this file are defined in the .env file
version: "2.2"
services:
  node-app-0:
    container_name: node-app
    restart: always
    build: ./backend/server
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8000:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:node-app.localhost"
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  search-microservice:
    container_name: ${CONTAINER_NAME_SEARCH}
    restart: always
    build: ./backend/search-service
    links:
      - ${DB_HOST}
    depends_on:
      - ${DB_HOST}
    ports:
      - "8002:3000"
    labels:
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:search-microservice.localhost"
volumes:
  node-ts-app-volume:
    external: true
Both node-app and search-microservice expose port 3000.
Why can't I call http://search-microservice.localhost:8002 from the node app? Calling it from the browser works, though.

Because node-app runs inside a container, it has to reach other containers by their service name and internal port.
In your case that is search-microservice:3000.
To reach the host PC and its exposed ports instead, you have to use the host.docker.internal hostname together with the external port.
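For illustration, a minimal sketch of such a call from inside the node-app container (assuming Node 18+ with the built-in fetch; the /search route and query parameter are made-up placeholders, since the search service's routes are not shown in the question):

// Inside the node-app container, address the other container by its compose
// service name and the port the process listens on (3000), not by the
// Traefik hostname or the published host port (8002).
const SEARCH_BASE_URL = "http://search-microservice:3000";

async function search(term) {
  // "/search" is a hypothetical route used only to illustrate the URL shape.
  const res = await fetch(`${SEARCH_BASE_URL}/search?q=${encodeURIComponent(term)}`);
  if (!res.ok) {
    throw new Error(`search service responded with ${res.status}`);
  }
  return res.json();
}

search("example").then(console.log).catch(console.error);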

If you want to access other services from a different container using their hostnames, you can use the "extra_hosts" parameter in your docker-compose.yml file. You also have to use the "ipv4_address" parameter under the networks parameter for all services (which in turn requires the network to define a subnet via an ipam configuration).
For example:
services:
  node-app-1:
    container_name: node-app
    networks:
      apps:
        ipv4_address: 10.1.3.1
    extra_hosts:
      - "search-microservice.localhost:10.1.3.2"
  node-app-2:
    container_name: search-microservice
    networks:
      apps:
        ipv4_address: 10.1.3.2
    extra_hosts:
      - "node-app.localhost:10.1.3.1"
Extra hosts in docker-compose

Related

Docker Django services not communicating

I am trying to get my listener service to POST to an API in the app service. The app service is running at 0.0.0.0:8000 and the listener service is trying to POST to this address, but the data isn't getting added to the db.
I am not sure if I have configured this correctly, help would be much appreciated.
docker-compose.yml
services:
  db:
    image: postgres:13.4
    env_file:
      - ./docker/env.db
    ports:
      - 5432:5432
    volumes:
      - db_data:/var/lib/postgresql/data
  app:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    env_file:
      - ./docker/env.db
      - ./.env
    volumes:
      - .:/opt/code
    ports:
      - 8000:8000
      - 3000:3000
    depends_on:
      - db
    command: bash ./scripts/runserver.sh
  listener:
    image: scooprank_app
    env_file:
      - ./.env
    container_name: listener
    command: npm run listen
    depends_on:
      - app
    restart: always
If the services are in the same docker-compose file or on the same Docker network, use the container name in place of the IP address when making GET or POST requests.
If the container name is django_app, then your listener application can send POST requests to http://django_app:8000.
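With the compose file above, the Django container is reachable from the listener by its service name, app. A rough sketch of the listener's call (assuming axios is installed; the /api/items/ path and payload are placeholders, since the real API routes are not shown):

// listener: POST to the Django app by its compose service name ("app")
// and its in-container port (8000), not 0.0.0.0 or localhost.
const axios = require("axios");

const API_BASE_URL = process.env.API_BASE_URL || "http://app:8000";

async function pushItem(item) {
  // "/api/items/" is a hypothetical endpoint used only to illustrate the URL shape.
  const res = await axios.post(`${API_BASE_URL}/api/items/`, item);
  return res.data;
}

pushItem({ name: "example" })
  .then((data) => console.log("created:", data))
  .catch((err) => console.error("request failed:", err.message));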

Azure web app multi container (MEAN app), what is the URL to connect to node backend container from front end container?

Trying to learn to deploy an angular app to an Azure web app using multi-container. The frontend loads fine but can't connect to the backend node container. I want to add the URL of the node backend to my angular frontend but I can't figure out what it is. I've tried https://rojesh.azure.io:3000, https://rojesh.azurewebsites.net:3000, http://server:3000 and more, but nothing seems to work. Website hostname: https://rojesh.azurewebsites.net, and the ACR name is rojesh.azurecr.io, which has 3 images. This is my config file for compose in Azure:
version: '3.3'
services:
  db:
    image: rojesh.azurecr.io/db:latest
    ports:
      - "27017:27017"
    restart: always
    networks:
      - app-network
  server:
    image: rojesh.azurecr.io/server:latest
    depends_on:
      - db
    ports:
      - "3000:3000"
    restart: always
    networks:
      - app-network
  app:
    depends_on:
      - server
    image: rojesh.azurecr.io/app:latest
    environment:
      NGINX_HOST: rojesh.azurewebsites.net
      NGINX_PORT: 80
    ports:
      - "80:80"
    restart: always
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
The app works fine locally using docker compose which is:
version: '3.9'
services:
  docker-app:
    build:
      context: app
      dockerfile: Dockerfile.dev
    ports:
      - '4200:4200'
    volumes:
      - ./app/src:/app/src
  docker-server:
    build:
      context: server
      dockerfile: Dockerfile
    environment:
      PORT: 3000
      MONGODB_URI: mongodb://mongo:27017/rojesh
      JWT_SECRET: secret
    ports:
      - 3000:3000
    depends_on:
      - mongo
    volumes:
      - ./server:/server
  mongo:
    container_name: mongo-server
    image: mongo:latest
    ports:
      - 27017:27017
Thanks @ajkuma-msft. Azure App Service only exposes ports 80 and 443. Incoming requests from clients come in over 443/80 and are mapped to the exposed port of the container.
App Service will attempt to detect which port to bind to your container. If you want to bind a specific port to your container, use the WEBSITES_PORT app setting and configure it with the value of that port.
Web App for Containers currently allows you to expose only one port to the outside world; the container can only listen for HTTP requests on a single port.
From a Docker Compose configuration standpoint: ports other than 80 and 8080 are ignored.
Refer to the Docker Compose options list, which shows the supported and unsupported Docker Compose configuration options.
Refer here

How to host docker-compose Application on MS Azure?

I want to deploy a containerized web app to MS Azure. Therefore I've written the following docker-compose file:
version: "3"
services:
frontend:
image: fitnessappcontainerregistry.azurecr.io/crushit:frontend
build:
context: ./frontend
dockerfile: Dockerfile
networks:
- crushit
ports:
- "80:3000"
restart: unless-stopped
wikihow:
image: fitnessappcontainerregistry.azurecr.io/crushit:backend-wikihow
build:
context: ./backend
dockerfile: ./services/WikiHow Service/Dockerfile
networks:
- crushit
restart: unless-stopped
training:
image: fitnessappcontainerregistry.azurecr.io/crushit:backend-training
build:
context: ./backend
dockerfile: ./services/Training Service/Dockerfile
networks:
- crushit
restart: unless-stopped
eventpublisher:
image: fitnessappcontainerregistry.azurecr.io/crushit:backend-eventpublisher
build:
context: ./backend
dockerfile: ./services/EventPublisher/Dockerfile
networks:
- crushit
restart: unless-stopped
accountservice:
image: fitnessappcontainerregistry.azurecr.io/crushit:backend-account
build:
context: ./backend
dockerfile: ./services/Account Service/Dockerfile
networks:
- crushit
restart: unless-stopped
proxy:
image: fitnessappcontainerregistry.azurecr.io/crushit:backend-proxy
build:
context: ./backend
dockerfile: ./proxy/Dockerfile
networks:
- crushit
ports:
- "5000:5000"
restart: unless-stopped
networks:
crushit:
driver: bridge
For the app to function, two containers (frontend, proxy) have to be exposed to the outside. Additionally, all containers have to be able to communicate with each other (therefore I added a custom network to all services). All container images are stored in an Azure Container Registry.
Now I'm wondering how I can host my app on Azure. I already tried "Web App for Containers", which is the same as "App Services" I think. That didn't fully work, as I was not able to expose a "non-standard" port like 5000.
Is there any other way to host my app on Azure using this docker-compose file, or do I have to use Kubernetes instead?
You can use the docker-compose.yml file to deploy containers to Azure Web App for Containers, but there are a few things you need to know.
First, an Azure Web App can only expose one port to the outside, which means you can expose only one container externally, not two.
Second, if you expose a port other than 80 or 443, you can use the environment variable WEBSITES_PORT to let Azure know. Here is the documentation.
Third, the Web App only supports a subset of the docker-compose options; here you can see the supported and unsupported options.
Finally, all the containers in the Web App can communicate with each other via the port each container exposes, so you don't need to set the network option.
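To illustrate the last point, a rough sketch (not from the question) of how server-side code in the proxy container might reach one of the backend services by its compose service name. The port 3000 and the /workouts route are assumptions, since the backend services' internal ports and routes are not shown in the compose file:

// Inside the proxy container (Node 18+, global fetch): other services in the
// same multi-container Web App are addressed by service name and the port
// their process listens on. "3000" and "/workouts" are illustrative guesses.
async function getWorkouts() {
  const res = await fetch("http://training:3000/workouts");
  if (!res.ok) {
    throw new Error(`training service responded with ${res.status}`);
  }
  return res.json();
}

getWorkouts().then(console.log).catch(console.error);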

Calling service from one docker container to other container is causing net::ERR_NAME_NOT_RESOLVED

I am running an application with multiple containers as below:
feeder - a simple nodejs container from the image node:alpine
api - a nodejs container with expressjs from the image node:alpine
ui-app - a react app container from the image node:alpine
I am trying to call the api service from ui-app and I am getting the error below:
(screenshots of the browser console log showing net::ERR_NAME_NOT_RESOLVED)
Not sure what is causing the problem.
If I access the service as http://192.168.99.100/ping it works (that is my docker machine's default IP),
but if I use the container name, like http://api:3200/ping, it is not working. Please help.
Below is my docker-compose:
version: '3'
services:
  feeder:
    build: ./feeder
    container_name: feeder
    tty: true
    depends_on:
      - redis
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3100:3100"
    networks:
      - hmdanet
  api:
    build: ./api
    container_name: api
    tty: true
    depends_on:
      - feeder
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3200:3200"
    networks:
      hmdanet:
        aliases:
          - "hmda-api"
  ui-app:
    build: ./ui-app
    container_name: ui-app
    tty: true
    depends_on:
      - api
    links:
      - api
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3000:3000"
    networks:
      - hmdanet
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    networks:
      - hmdanet
networks:
  hmdanet:
    driver: bridge
You can only use the service name as a hostname when you are inside a container. In your case it's your browser making the call, and it does not know what api is. In your web app, you should have an env variable, such as a base URL, set to the IP of your docker machine or to localhost.
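A minimal sketch of that idea, assuming a Create React App-style build (the REACT_APP_API_URL variable name is an assumption; the /ping route and the docker machine IP come from the question):

// ui-app (runs in the browser): the API must be addressed by a host the
// browser can resolve, e.g. the docker machine IP, not the container name.
const API_URL = process.env.REACT_APP_API_URL || "http://192.168.99.100:3200";

fetch(`${API_URL}/ping`)
  .then((res) => res.text())
  .then((body) => console.log("api says:", body))
  .catch((err) => console.error("request failed:", err));

The container name (http://api:3200) would only resolve for calls made from another container on the hmdanet network, not from the browser.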

Docker from a container calls to another container (Connection Refused)

I have containers for two NodeJS services and one Nginx container for the reverse proxy.
NGINX listens on port 80, so it's publicly available via localhost in my browser.
I also use the reverse proxy to proxy_pass to the responsible service.
location /api/v1/service1/ {
    proxy_pass http://service1:3000/;
}
location /api/v1/service2/ {
    proxy_pass http://service2:3000/;
}
In service 1, there is an axios call that reaches service 2 by making a request to localhost/api/v1/service2.
But it says the connection is refused. I suspect that localhost in service 1 refers to its own container, not to the docker host.
version: '3'
services:
  service1:
    build: './service1'
    networks:
      - backend
  service2:
    build: './service2'
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
networks:
  backend:
    driver: bridge
Even after using the network, it still says ECONNREFUSED.
Please help.
Try adding depends_on to the nginx service in your docker-compose file, like below:
version: '3'
services:
  service1:
    build: './service1'
    expose:
      - "3000"
    networks:
      - backend
  service2:
    build: './service2'
    expose:
      - "3000"
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
    depends_on:
      - service1
      - service2
networks:
  backend:
    driver: bridge
This makes sure that both services are running before the nginx container attempts to connect to them. Perhaps the connection is refused because the nginx container keeps crashing due to it not finding the two services running when it reads its conf file and connects to the backends.
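As an aside, if service1 needs to reach service2 without going back through the proxy, the same pattern from the earlier answers applies: call it by service name and container port over the shared backend network. A rough sketch, assuming axios is already used in service1 as the question mentions (the root path "/" is a placeholder, since service2's routes are not shown):

// service1 (Node): call service2 directly over the shared "backend" network
// by service name and container port, instead of via localhost.
const axios = require("axios");

async function callService2() {
  // Direct container-to-container call; no need to go through nginx here.
  const res = await axios.get("http://service2:3000/");
  return res.data;
}

callService2()
  .then((data) => console.log("service2 replied:", data))
  .catch((err) => console.error("still refused:", err.message));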
