How to run one server docker container before client container - node.js

I have a fairly complex application: Next.js on the client, a GraphQL server on the backend, and nginx as a reverse proxy.
I am using Next.js incremental static regeneration on the index page, so I want my server up and running before my client container starts building, because when I run npm run build it fetches some data from the GraphQL server. Here is my docker-compose file:
version: "3"
services:
  mynginx:
    container_name: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
  graphql:
    container_name: graphql_server
    depends_on:
      - mynginx
    build:
      context: ./server
      dockerfile: Dockerfile
  mynextjs:
    container_name: nextjs_server
    depends_on:
      - graphql
    build:
      context: ./client
      dockerfile: Dockerfile

Use depends_on with a healthcheck, so that a container starts only when another one is already working:
https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
Something like this:
services:
  mynginx:
    container_name: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    healthcheck:
      test: ["CMD-SHELL", "wget -O /dev/null http://localhost || exit 1"]
      timeout: 10s
  graphql:
    ...
    depends_on:
      mynginx:
        condition: service_healthy

In simple terms, what you need is to wait until the graphql service has properly finished starting before you run the mynextjs service.
I'm afraid that depends_on (or links) might not work out of the box in this case. The reason is that, although Compose does start the graphql service before it starts the mynextjs service, it doesn't wait until the graphql service is in a READY state before starting the dependent services.
Basically, what you need is a way to tell Compose to wait until the graphql service is in a READY state.
The solution is described in Control startup and shutdown order in the Compose documentation.
I hope this helps you. Cheers 🍻 !!!
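Applied to this case, the healthcheck-based approach could look like the sketch below. The health-check URL and port are assumptions (point the check at whatever your GraphQL server actually serves), and note that the condition: form of depends_on is supported in the 2.x file format and again in the newer Compose Specification, but not in the 3.x format:

```yaml
version: "2.1"
services:
  graphql:
    build:
      context: ./server
      dockerfile: Dockerfile
    healthcheck:
      # hypothetical endpoint; replace with one your server exposes
      test: ["CMD-SHELL", "wget -qO /dev/null http://localhost:4000/graphql || exit 1"]
      interval: 5s
      timeout: 10s
      retries: 10
  mynextjs:
    build:
      context: ./client
      dockerfile: Dockerfile
    depends_on:
      graphql:
        # wait until the healthcheck above passes before starting
        condition: service_healthy
```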

If I understand your problem correctly, you need to delay building the image for the frontend container until the backend containers are running and ready.
The easiest way that comes to mind would be to use profiles to allow starting these independently.
Eg:
version: "3"
services:
  mynginx:
    container_name: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    profiles: ["backend"]
  graphql:
    container_name: graphql_server
    depends_on:
      - mynginx
    build:
      context: ./server
      dockerfile: Dockerfile
    profiles: ["backend"]
  mynextjs:
    container_name: nextjs_server
    depends_on:
      - graphql
    build:
      context: ./client
      dockerfile: Dockerfile
    profiles: ["frontend"]
Then chain the startups, something like:
docker-compose --profile backend up -d && docker-compose --profile frontend up -d
Another option might be splitting into two separate compose files, with a shared docker network between them.
ref: https://docs.docker.com/compose/profiles/
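For the split-file option, here is a minimal sketch. The file names and the network name shared_net are assumptions, and the network is created once up front with docker network create shared_net:

```yaml
# docker-compose.backend.yml
version: "3"
services:
  graphql:
    build:
      context: ./server
      dockerfile: Dockerfile
    networks: [shared_net]
networks:
  shared_net:
    external: true   # created beforehand: docker network create shared_net

# docker-compose.frontend.yml
version: "3"
services:
  mynextjs:
    build:
      context: ./client
      dockerfile: Dockerfile
    networks: [shared_net]
networks:
  shared_net:
    external: true
```

Bring the backend up first with docker-compose -f docker-compose.backend.yml up -d, then build and start the frontend with docker-compose -f docker-compose.frontend.yml up -d --build.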

Related

--network=host works in docker build but not in docker-compose

I have a Node.js application, and in Ubuntu, once it runs npm install it gives a timeout error like the one below:
Docker build npm install error network timeout
The solution is adding --network=host:
docker build -t cassiamani/nodeapp --network=host .
But I have a docker-compose.yaml file like below:
version: '3.8'
services:
  nodejs-server:
    network_mode: "host"
    build:
      context: ./api
    ports:
      - "8000:8000"
    container_name: node-api
    volumes:
      - ./api:/usr/src/app/api
      - /usr/src/app/api/node_modules
  react-ui:
    network_mode: "host"
    build:
      context: ./web/web-app
    ports:
      - "3000:3000"
    container_name: react-ui
    stdin_open: true
    volumes:
      - ./web/web-app:/usr/src/app/my-app
      - /usr/src/app/my-app/node_modules
And it still gets stuck on the npm install command; adding network_mode: "host" did not help. Am I missing something here?
The way you've done it specifies the network settings at run time. To specify them at build time, you need network: under the build: section, like this:
build:
  context: ./api
  network: host
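For reference, here is a sketch of how that looks in the full service definition. Note that the network key under build requires Compose file format 3.4 or newer, which your version: '3.8' already satisfies:

```yaml
version: '3.8'
services:
  nodejs-server:
    build:
      context: ./api
      network: host   # used only while building the image, i.e. during npm install
    ports:
      - "8000:8000"
```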

How to host docker-compose Application on MS Azure?

I want to deploy a containerized Web-App to MS Azure. Therefore I've written the following docker-compose file:
version: "3"
services:
  frontend:
    image: fitnessappcontainerregistry.azurecr.io/crushit:frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile
    networks:
      - crushit
    ports:
      - "80:3000"
    restart: unless-stopped
  wikihow:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-wikihow
    build:
      context: ./backend
      dockerfile: ./services/WikiHow Service/Dockerfile
    networks:
      - crushit
    restart: unless-stopped
  training:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-training
    build:
      context: ./backend
      dockerfile: ./services/Training Service/Dockerfile
    networks:
      - crushit
    restart: unless-stopped
  eventpublisher:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-eventpublisher
    build:
      context: ./backend
      dockerfile: ./services/EventPublisher/Dockerfile
    networks:
      - crushit
    restart: unless-stopped
  accountservice:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-account
    build:
      context: ./backend
      dockerfile: ./services/Account Service/Dockerfile
    networks:
      - crushit
    restart: unless-stopped
  proxy:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-proxy
    build:
      context: ./backend
      dockerfile: ./proxy/Dockerfile
    networks:
      - crushit
    ports:
      - "5000:5000"
    restart: unless-stopped
networks:
  crushit:
    driver: bridge
For the app to function, two containers (frontend and proxy) have to be exposed to the outside. Additionally, all containers have to be able to communicate with each other (which is why I added a custom network to all services). All container images are stored in an Azure Container Registry.
Now I'm wondering how I can host my app on Azure. I already tried "Web App for Containers", which I think is the same as "App Services". That didn't fully work, as I was not able to expose a "non-standard" port like 5000.
Is there any other way to host my App on Azure using this docker-compose file, or do I have to use Kubernetes instead?
You can use the docker-compose.yml file to deploy containers in Azure Web App for Containers, but there are a few things you need to know.
First, an Azure Web App can only expose one port to the outside, which means you can only expose one container, not two.
Second, if you expose a port other than 80 or 443, you can use the environment variable WEBSITES_PORT to let Azure know. Here is the documentation.
Third, the Web App only supports a subset of the docker-compose options; see the supported and unsupported options in the documentation.
Finally, all the containers in the Web App can communicate with each other via the ports that the containers expose, so you don't need to set the network option.
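Putting those points together, a trimmed compose file for Web App for Containers might look like the sketch below. This is an illustration, not a verified deployment: only one service publishes a port, the custom network is dropped, and the published port would be paired with a WEBSITES_PORT=5000 app setting:

```yaml
version: "3"
services:
  frontend:
    image: fitnessappcontainerregistry.azurecr.io/crushit:frontend
  proxy:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-proxy
    ports:
      - "5000:5000"   # the single exposed port; set WEBSITES_PORT=5000 in app settings
  wikihow:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-wikihow
  training:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-training
  eventpublisher:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-eventpublisher
  accountservice:
    image: fitnessappcontainerregistry.azurecr.io/crushit:backend-account
```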

Running a container stops another container

I've created two basic MEAN stack apps with a common database (MongoDB). I've also built Docker images for these apps.
Problem:
When I start the first MEAN stack container (example-app-1) using
docker-compose up -d --build
the container runs smoothly, and I'm also able to hit the container and view my page locally.
When I try to start the other MEAN stack container (example-app-2) using
docker-compose up -d --build
my previous container is stopped, and the new container works without any flaw.
Required:
I want both these containers to run simultaneously using a shared database. I need help in achieving this.
docker-compose.yml for example app 1:
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for example app 2:
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose creates containers named project_service. The project name, by default, comes from the last component of the directory. So when you start the second docker-compose from the same directory, it stops the containers with those names and starts new ones.
Either move the two docker-compose files to separate directories so the container names are different, or run docker-compose with the --project-name flag set to distinct project names so the containers are started with different names.
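To see why they collide: the default project name is the basename of the directory containing the compose file (the path below is hypothetical), and container names are derived from it as project_service_1. Passing distinct names with -p/--project-name keeps the two apps apart:

```shell
# Compose derives the default project name from the compose file's
# directory basename, so both apps launched from one directory share it:
basename /some/dir/example-apps    # prints: example-apps

# Distinct project names give the two apps independent container sets
# (not run here; requires the compose files to be present):
# docker-compose -p example_app_1 up -d --build
# docker-compose -p example_app_2 up -d --build
```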

Docker and Rabbitmq: ECONNREFUSED between containers

I am trying to set up separate Docker containers for RabbitMQ and the consumer for the queue, i.e., the process that listens on the queue and performs the necessary tasks. I created the yml file and the Dockerfile.
I am able to run the yml file; however, when I check the docker-compose logs, I see ECONNREFUSED errors.
NewUserNotification.js:
require('seneca')()
  .use('seneca-amqp-transport')
  .add('action:new_user_notification', function(message, done) {
    // …
    return done(null, {
      pid: process.pid,
      status: `Process ${process.pid} status: OK`
    });
  })
  .listen({
    type: 'amqp',
    pin: ['action:new_user_notification'],
    name: 'seneca.new_user_notification.queue',
    url: process.env.AMQP_RECEIVE_URL,
    timeout: 99999
  });
error message in docker-compose log:
{"notice":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.","code":
"act_execute","err":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"act_execute","syscall":"connect","address":"127.0.0.1",
"port":5672,"eraro":true,"orig":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1",
"port":5672},"isOperational":true,"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},
"seneca":true,"package":"seneca","msg":"seneca: Action hook:listen,role:transport,type:amqp failed: connect ECONNREFUSED 127.0.0.1:5672.",
"details":{"message":"connect ECONNREFUSED 127.0.0.1:5672","pattern":"hook:listen,role:transport,type:amqp","instance":"Seneca/…………/…………/1/3.4.3/-",
"orig$":{"cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672},"isOperational":true,
"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5672}
sample docker-compose.yml file:
version: '2.1'
services:
  rabbitmq:
    container_name: "4340_rabbitmq"
    tty: true
    image: rabbitmq:management
    ports:
      - 15672:15672
      - 15671:15671
      - 5672:5672
    volumes:
      - /rabbitmq/lib:/var/lib/rabbitmq
      - /rabbitmq/log:/var/log/rabbitmq
      - /rabbitmq/conf:/etc/rabbitmq/
  account:
    container_name: "account"
    build:
      context: .
      dockerfile: ./Account/Dockerfile
    ports:
      - 3000:3000
    links:
      - "mongo"
      - "rabbitmq"
    depends_on:
      - "mongo"
      - "rabbitmq"
  new_user_notification:
    container_name: "app_new_user_notification"
    build:
      context: .
      dockerfile: ./Account/dev.newusernotification.Dockerfile
    links:
      - "mongo"
      - "rabbitmq"
    depends_on:
      - "mongo"
      - "rabbitmq"
    command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "90", "--", "node", "newusernotification.js"]
amqp connection string (I tried both ways, with and without a user/pass):
amqp://username:password@rabbitmq:5672
I added the links attribute to the docker-compose file and referenced the service name (rabbitmq) in the .env file. I tried to run the NewUserNotification.js file from outside the container and it started fine. What could be causing this problem? A connection-string issue? A docker-compose.yml configuration issue? Something else?
It seems the environment variable AMQP_RECEIVE_URL is not constructed properly. According to the error log, the listener is trying to connect to localhost (127.0.0.1), which is not the rabbitmq service container's IP. Below are the modified configurations for a working sample.
1. docker-compose.yml
version: '2.1'
services:
  rabbitmq:
    container_name: "4340_rabbitmq"
    tty: true
    image: rabbitmq:management
    ports:
      - 15672:15672
      - 15671:15671
      - 5672:5672
    volumes:
      - ./rabbitmq/lib:/var/lib/rabbitmq
  new_user_notification:
    container_name: "app_new_user_notification"
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./un.env
    links:
      - rabbitmq
    depends_on:
      - rabbitmq
    command: ["./wait-for-it.sh", "rabbitmq:5672", "-t", "120", "--", "node", "newusernotification.js"]
2. un.env
AMQP_RECEIVE_URL=amqp://guest:guest@rabbitmq:5672
Note that I've passed AMQP_RECEIVE_URL as an environment variable to the new_user_notification service using env_file, and got rid of the account service.
3. Dockerfile
FROM node:7
WORKDIR /app
COPY newusernotification.js /app
COPY wait-for-it.sh /app
RUN npm install --save seneca
RUN npm install --save seneca-amqp-transport
4. newusernotification.js — use the same file as in the question.
5. wait-for-it.sh
It is possible that your RabbitMQ service is not fully up at the time the connection is attempted from the consuming service.
If this is the case, in Docker Compose you can wait for services to come up using a container called dadarek/wait-for-dependencies.
1) Add a new service waitforrabbit to your docker-compose.yml:
waitforrabbit:
  image: dadarek/wait-for-dependencies
  depends_on:
    - rabbitmq
  command: rabbitmq:5672
2) Include this service in the depends_on section of the service that requires RabbitMQ to be up:
depends_on:
  - waitforrabbit
3) Start up Compose:
docker-compose run --rm waitforrabbit
docker-compose up -d account new_user_notification
Starting Compose in this manner essentially waits for RabbitMQ to be fully up before the connection from the consuming service is made.

Docker-compose networking, access host port from container

This is my compose file:
version: '3'
services:
  web:
    container_name: dash
    build:
      context: .
      dockerfile: Dockerfile
      args:
        webpackVersion: 2.2.1
        nodeVersion: "6.x"
    ports:
      - "3036:3036"
    links:
      - mongodb:dbhost
    depends_on:
      - mongodb
  mongodb:
    container_name: mongodb
    build:
      context: .
      dockerfile: Dockerfile-mongodb
Right now, web has access to the mongodb container where I keep app configs. But I also need to be able to access port 3306 on the local machine where I'm running docker-compose, from web.
I tried to follow the documentation, but I'm new to Docker, so it looks pretty complicated to me how to use networking in docker-compose.
If anyone can help me understand this, I'll be really grateful!
The only way I found to open all host ports is to use network_mode: host.
It should also be possible using networks, but in my case the first solution was enough.
version: '3'
services:
  web:
    container_name: dash
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
      args:
        webpackVersion: 2.2.1
        nodeVersion: "6.x"
    ports:
      - "3036:3036"
    links:
      - mongodb:dbhost
    depends_on:
      - mongodb
  mongodb:
    container_name: mongodb
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile-mongodb
network_mode: host won't work on a Mac, because there Docker itself runs inside a VM, so the host network is the VM's, not your machine's.
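As a side note (not part of the answers above): on Docker Desktop for Mac and Windows, the special DNS name host.docker.internal resolves to the host machine, so a container can reach a host port without host networking. A sketch, with hypothetical environment variable names:

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      # host.docker.internal resolves to the host on Docker Desktop
      - HOST_DB_HOST=host.docker.internal
      - HOST_DB_PORT=3306
```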
