Docker - how to wait for a container to be running - node.js

I want to launch three containers for my web application: a frontend, a backend, and a MongoDB database. To do this I wrote the following docker-compose.yml:
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything comes up.
The problem appears when I run docker logs api -f to monitor the backend: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because the mongodb container is up but not yet accepting connections (it only reaches the "waiting for connections" state after the backend has already tried to connect).
How can I check that mongodb is accepting connections before starting the api container?
Thanks in advance

Several possible solutions in order of preference:
Configure your application to retry after a short delay and eventually time out after too many connection failures (see the sketch after the option 3 example below). This is an ideal solution for portability and can also handle the database restarting after your application is already running and connected.
Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple tcp port check with a script like wait-for-it. Once that check succeeds (or times out and fails), the entrypoint can continue on to launch your application.
Configure docker to retry starting your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less than ideal solution, but extremely easy to implement.
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      condition: on-failure
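And here's a minimal sketch of option 1, using the official mongodb Node driver; the helper name, retry count, and delay are arbitrary values chosen for illustration:

const { MongoClient } = require('mongodb');

// Retry the initial connection a fixed number of times before giving up.
async function connectWithRetry(uri, retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const client = new MongoClient(uri);
      await client.connect();
      return client;
    } catch (err) {
      console.log(`Mongo not ready (attempt ${attempt}/${retries}): ${err.message}`);
      if (attempt === retries) throw err; // too many failures: give up
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

connectWithRetry(process.env.MONGO_URI)
  .then(() => console.log('Connected to MongoDB'))
  .catch(() => process.exit(1));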
Note: looking at your compose file, you have a mix of v2 and v3 syntax, and many options, like depends_on, links, and container_name, are not valid with swarm mode. You are also defining settings like working_dir that should really be set in your Dockerfile instead.

Related

I can't connect a nodejs app to a redis server using docker

Good morning guys.
I'm having a problem connecting a nodejs application, running in a container, to another container that contains a redis server. On my local machine I can connect the application to this redis container without any problem. However, when I bring the application up in a container, I get a timeout error.
I'm new to docker and I don't understand why the application can connect to the redis container when it runs locally on my machine, but that same connection fails when the application itself runs in a container.
I tried using docker-compose, but from what I understand it brings up a second redis server in a new container instead of using the redis container that is already running in Docker.
To connect to redis I'm using the following code:
createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  }
});
Here REDIS_HOST is the address of the redis container running on my server, and REDIS_PORT is the port that container listens on.
To run redis on docker I used the following guide: https://redis.io/docs/stack/get-started/install/docker/
I apologize if my problem was not very clear, I'm still studying docker.
You mentioned you are using Docker Compose. Here's an example showing how to start Redis in a container, make your Node application wait for that container, and use an environment variable in your Node application to specify the name of the host to connect to Redis on. In this example it connects to the container running Redis, which I've called "redis":
version: "3.9"
services:
redis:
container_name: redis_kaboom
image: "redislabs/redismod"
ports:
- 6379:6379
volumes:
- ./redisdata:/data
entrypoint:
redis-server
--loadmodule /usr/lib/redis/modules/rejson.so
--appendonly yes
deploy:
replicas: 1
restart_policy:
condition: on-failure
node:
container_name: node_kaboom
build: .
volumes:
- .:/app
- /app/node_modules
command: sh -c "npm run load && npm run dev"
depends_on:
- redis
ports:
- 8080:8080
environment:
- REDIS_HOST=redis
So in your Node code you'd then use the value of process.env.REDIS_HOST to connect to the right Redis host. Here I'm not using a password or a non-standard port; you could also supply those as environment variables that match the configuration of the Redis container in Docker Compose if you needed to.
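For example, the connection code from the question would then pick the host up from the environment; here's a minimal sketch using the node-redis client, where the fallback values are assumptions for running outside Compose:

const { createClient } = require('redis');

// Host and port come from the environment set in docker-compose.yml;
// the fallbacks only apply when running outside of Compose.
const client = createClient({
  socket: {
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT || 6379)
  }
});

client.on('error', (err) => console.error('Redis client error', err));

client.connect()
  .then(() => console.log(`Connected to Redis at ${process.env.REDIS_HOST}`))
  .catch((err) => { console.error(err); process.exit(1); });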
Disclosure: I work for Redis.

GET request not responding in docker container

I have a fastapi app running in a docker container that is connected to a PostgreSQL DB, which is also running as a container. Both are defined in the docker-compose.yml file.
In the app, I have a POST endpoint that requests data from an external API (https://restcountries.com/v2/all) using the requests library. Once the data is extracted, I try to save it in a table in the DB. When I trigger the endpoint from the docker container, it hangs forever and no data comes back from the API. Yet when I run the same code outside the docker container, it executes instantly and the data is received.
The docker-compose.yml file:
version: "3.6"
services:
backend-api:
build:
context: .
dockerfile: docker/Dockerfile.api
volumes:
- ./:/srv/recruiting/
command: uvicorn --reload --reload-dir "/srv/recruiting/backend" --host 0.0.0.0 --port 8080 --log-level "debug" "backend.main:app"
ports:
- "8080:8080"
networks:
- backend_network
depends_on:
- postgres
postgres:
image: "postgres:13"
volumes:
- postgres_data:/var/lib/postgresql/data/pgdata:delegated
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=pgpw12
expose:
- "5432"
ports:
- "5432:5432"
networks:
- backend_network
volumes:
postgres_data: {}
networks:
backend_network: {}
The code that is making the request:
req = requests.get('https://restcountries.com/v2/all')
json_data = req.json()
Am I missing something or doing something wrong?
I often use python requests inside my python containers and have come across this problem a couple of times. I haven't found a proper solution, but restarting Docker Desktop (restarting it entirely, not just restarting your container) seems to work every time. What I have found helpful in the past is to specify a timeout period when invoking the HTTP call:
requests.request(method="POST", url=url, json=data, headers=headers, timeout=2)
When the server does not send data within the timeout period, an exception is raised. At least you'll be aware of the issue and will waste less time identifying it when it occurs again.

Docker -- restart service if node app crashes

I have a dockerized nodejs service which, unfortunately, due to third-party modules it uses, keeps crashing randomly, roughly once or twice per day. The problem is that I need the service up all the time. I tried using the "restart" option in my compose file, but it doesn't do anything.
Is there anything I am doing wrong, or have missed, in order to restart the docker container if the node process crashes?
Here is my compose snippet
version: "3"
services:
api:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
restart: always

How to access a node container from another (Oracle JET) container?

I want my UI to get data through my node server, with each running in an independent container. How do I connect them using a docker-compose file?
I have three containers that run MongoDB, a node server, and Oracle JET for the UI.
I want to access the node APIs from the Oracle JET UI, and MongoDB from node, so I defined the docker-compose below.
The link between mongo and node works.
version: "2"
services:
jet:
container_name: sam-jet
image: amazus/sam-ui
ports:
- "8000:8000"
links:
- app
depends_on:
- app
app:
container_name: sam-node
restart: always
image: amazus/sam-apis
ports:
- "3000:3000"
links:
- mongo
depends_on:
- mongo
mongo:
container_name: sam-mongo
image: amazus/sam-data
ports:
- "27017:27017"
In my Oracle JET app I defined the URI as "http://localhost:3000/sam", but it didn't work. I also tried "http://app:3000/sam" after reading accessing a docker container from another container, but that doesn't work either.
OJET just needs a working endpoint to fetch data from; it really does not matter where it is hosted or on what stack. Once you are able to get the nodeJS service working and the URL works from a browser, it should work from the JET app as well. There may be CORS-related headers that you need to set up on the server side as well.
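If CORS turns out to be the blocker, here's a minimal sketch of allowing the JET origin in an Express-based node server; the cors package, the origin value, and the /sam handler are assumptions for illustration:

const express = require('express');
const cors = require('cors');

const app = express();

// Allow the JET UI (served from port 8000 on the host) to call this API.
app.use(cors({ origin: 'http://localhost:8000' }));

// Placeholder route standing in for the real /sam API.
app.get('/sam', (req, res) => res.json({ ok: true }));

app.listen(3000, () => console.log('API listening on port 3000'));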

NodeJS, Rabbitmq & Docker: the service using Seneca seems to start before RabbitMQ

I am using Docker to create multiple containers, one of which contains a RabbitMQ instance and another of which contains the node.js service that should respond to queue activity. Looking through the docker-compose logs, I see a lot of ECONNREFUSED errors before the line indicating that RabbitMQ has started in its container. This seems to indicate that RabbitMQ starts after the service that needs it.
As a sidebar, just to eliminate any other possible causes, here is the connection string node.js uses to connect to RabbitMQ:
amqp://rabbitmq:5672
and here is the entry for RabbitMQ in the docker-compose.yaml file:
rabbitmq:
  container_name: "myapp_rabbitmq"
  tty: true
  image: rabbitmq:management
  ports:
    - 15672:15672
    - 15671:15671
    - 5672:5672
  volumes:
    - /rabbitmq/lib:/var/lib/rabbitmq
    - /rabbitmq/log:/var/log/rabbitmq
    - /rabbitmq/conf:/etc/rabbitmq/
service1:
  container_name: "service1"
  build:
    context: .
    dockerfile: ./service1.dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
service2:
  container_name: "service2"
  build:
    context: .
    dockerfile: ./service2/dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
What is the fix for this timing issue?
How could I get RabbitMQ to start before the consuming container starts?
Might this not be a timing issue, but a configuration issue in the docker-compose.yml entry I have listed?
It doesn't look like you have included your complete docker-compose file; I would expect to also see your node container in the compose file. I think the problem is that you need a
depends_on:
  - "rabbitmq"
in the node container section of your docker-compose file.
More info on compose dependencies here: https://docs.docker.com/compose/startup-order/
Note, as that page suggests, you should do this in conjunction with making your app resilient to outages of external services.
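For that resilience part, here's a minimal sketch with the amqplib client that retries the initial connection and reconnects if the broker goes away; the 2-second delay is an arbitrary illustration value:

const amqp = require('amqplib');

// Connect, retrying until RabbitMQ accepts connections, and
// reconnect with a delay whenever the connection later drops.
async function connect() {
  try {
    const conn = await amqp.connect('amqp://rabbitmq:5672');
    conn.on('close', () => setTimeout(connect, 2000)); // broker restarted
    console.log('Connected to RabbitMQ');
    return conn;
  } catch (err) {
    console.log('RabbitMQ not ready, retrying in 2s');
    setTimeout(connect, 2000);
  }
}

connect();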
You need to control the boot-up order of your dependent containers. This is documented here:
https://docs.docker.com/compose/startup-order/
I usually use the wait-for-it.sh script from this project:
https://github.com/vishnubob/wait-for-it
So I would have a command like the one below in my service1:
wait-for-it.sh rabbitmq:5672 -t 90 -- command with args to launch service1
