I have a FastAPI app running in a Docker container that is connected to a PostgreSQL DB, which is also running as a container. Both are defined in the docker-compose.yml file.
In the app, I have a POST endpoint that requests data from an external API (https://restcountries.com/v2/all) using the requests library. Once the data is fetched, I try to save it to a table in the DB. When I trigger the endpoint from inside the Docker container, it takes forever and no data comes back from the API. But when I run the same code outside the Docker container, it executes instantly and the data is received.
The docker-compose.yml file:
version: "3.6"
services:
backend-api:
build:
context: .
dockerfile: docker/Dockerfile.api
volumes:
- ./:/srv/recruiting/
command: uvicorn --reload --reload-dir "/srv/recruiting/backend" --host 0.0.0.0 --port 8080 --log-level "debug" "backend.main:app"
ports:
- "8080:8080"
networks:
- backend_network
depends_on:
- postgres
postgres:
image: "postgres:13"
volumes:
- postgres_data:/var/lib/postgresql/data/pgdata:delegated
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=pgpw12
expose:
- "5432"
ports:
- "5432:5432"
networks:
- backend_network
volumes:
postgres_data: {}
networks:
backend_network: {}
The code that is making the request:
req = requests.get('https://restcountries.com/v2/all')
json_data = req.json()
Am I missing something or doing something wrong?
I often use Python requests inside my Python containers, and have come across this problem a couple of times. I haven't found a proper solution, but restarting Docker Desktop (entirely restarting it, not just restarting your container) seems to work every time. What I have found helpful in the past is to specify a timeout period in the invocation of the HTTP call:
requests.request(method="POST", url=url, json=data, headers=headers, timeout=2)
When the server does not send data within the timeout period, an exception is raised. At least you'll be aware of the issue and will waste less time identifying it when it occurs again.
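As a rough sketch, here is how the GET call from the question could look with a timeout and basic error handling (the 10-second timeout is an arbitrary choice, not a recommendation):

import requests

json_data = None
try:
    # fail fast instead of hanging forever if the container has no outbound connectivity
    req = requests.get('https://restcountries.com/v2/all', timeout=10)
    req.raise_for_status()
    json_data = req.json()
except requests.exceptions.Timeout:
    # the external API did not answer within the timeout period
    pass
except requests.exceptions.RequestException:
    # any other connection or HTTP error
    pass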
Related
I use docker compose for my project. It includes these containers:
Nginx
PostgreSQL
Backend (Node.js)
Frontend (SvelteKit)
I use SvelteKit's load function to send requests to my backend. In short, it sends an HTTP request to the backend container either on the client side or on the server side, which means the request can be sent not only by the browser but also by the container itself.
I can't get both client-side and server-side fetch to work; only one of them works at a time.
I tried these URLs:
http://api.localhost/articles (only client-side request works)
http://api.host.docker.internal/articles (only server-side request works)
http://backend:8080/articles (only server-side request works)
I get this error:
From SvelteKit:
FetchError: request to http://api.localhost/articles failed, reason: connect ECONNREFUSED 127.0.0.1:80
From Nginx:
Timeout error
Docker-compose.yml file:
version: '3.8'
services:
  webserver:
    restart: unless-stopped
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    depends_on:
      - frontend
      - backend
    networks:
      - webserver
    volumes:
      - ./webserver/nginx/conf/:/etc/nginx/conf.d/
      - ./webserver/certbot/www:/var/www/certbot/:ro
      - ./webserver/certbot/conf/:/etc/nginx/ssl/:ro
  backend:
    restart: unless-stopped
    build:
      context: ./backend
      target: development
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - database
      - webserver
    volumes:
      - ./backend:/app
  frontend:
    restart: unless-stopped
    build:
      context: ./frontend
      target: development
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      - webserver
networks:
  database:
    driver: bridge
  webserver:
    driver: bridge
How can I send a server-side request to the Docker container using http://api.localhost/articles as the URL? I also want my container to be accessible to other containers as http://backend:8080 if possible.
Use SvelteKit's externalFetch hook to override the API URL, so that server-side rendering and the browser can use different URLs.
In docker-compose, the containers should be able to access each other by name if they are in the same Docker network.
Your frontend container's SSR should be able to call your backend container by using the URL:
http://backend:8080
The web browser should be able to call your backend by using the URL:
(whatever is configured in your Nginx configuration files)
Naturally, there are many reasons why this could fail. The best way to tackle this is to test the URLs one by one, server by server, using curl and by entering addresses into the web browser's address bar. It's not possible to pinpoint the exact reason why it fails, because the question does not contain enough information or a generally reproducible recipe for the issue.
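For example, two checks of that kind, using the service names and port from the compose file above (this assumes curl is available inside the frontend image; adjust paths to your setup):

# From the host, through Nginx (the compose file publishes port 80)
curl -v http://localhost/articles

# From inside the frontend container, hitting the backend service directly by its compose service name
docker-compose exec frontend curl -v http://backend:8080/articles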
For further information, here is our sample configuration for a dockerised SvelteKit frontend. The internal backend shortcut is defined using hooks and configuration variables. Here is our externalFetch example.
From a docker-compose setup you will be able to curl from one container to another using its DNS name (the service name you gave in the compose file):
curl -X GET http://backend:8080
You can also achieve this by running all of these containers on the host network driver.
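For reference, a minimal sketch of what that could look like for one service (host networking works this way on Linux only, and published ports are ignored in this mode):

backend:
  build:
    context: ./backend
  # the container shares the host's network stack and is reachable on the host's ports
  network_mode: "host"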
Regarding http://api.localhost/articles: you can edit /etc/hosts and specify the IP address your computer should contact when the URL http://api.localhost/articles is used.
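For illustration, a hypothetical /etc/hosts entry on the machine running the browser, pointing api.localhost at the local machine so the request reaches whatever is listening on port 80 there (Nginx in this setup):

# /etc/hosts
127.0.0.1   api.localhost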
I have a dockerized Node.js service which, unfortunately, due to the 3rd-party modules it uses, keeps crashing randomly about once or twice per day. The problem is that I need to have that service up all the time. I tried to use the restart option in my compose file, but it doesn't do anything.
Is there anything I am doing wrong, or have I missed something needed to restart the Docker container when the Node process crashes?
Here is my compose snippet:
version: "3"
services:
api:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
restart: always
I want to launch three containers for my web application.
The containers are: frontend, backend, and Mongo database.
To do this I wrote the following docker-compose.yml:
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything comes up.
The problem is when I run docker logs api -f to monitor the backend status: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because my mongodb container is up but not yet waiting for connections (it becomes ready only after the backend has already tried to connect).
How can I check that mongodb is waiting for connections before starting the api container?
Thanks in advance
Several possible solutions in order of preference:
1. Configure your application to retry after a short delay and eventually time out after too many connection failures. This is an ideal solution for portability and can also be used to handle the database restarting after your application is already running and connected.
2. Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple TCP port check with a script like wait-for-it (see the sketch after this list). Once that check finishes (or times out and fails) you can continue the entrypoint to launch your application.
3. Configure Docker to retry starting your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less than ideal solution, but extremely easy to implement.
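For option 2, a rough sketch of how the api service could launch through wait-for-it (this assumes wait-for-it.sh has been placed in the mounted ./server directory and is executable; the mongodb service name is taken from your compose file):

api:
  image: node
  container_name: api
  working_dir: /node/api
  volumes:
    - ./server:/node/api
  depends_on:
    - mongodb
  # wait until mongodb accepts TCP connections on 27017 (give up after 60s), then start the app
  command: ./wait-for-it.sh mongodb:27017 --timeout=60 -- npm start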
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      condition: on-failure
Note: looking at your compose file, you have a mix of v2 and v3 syntax, and many options like depends_on, links, and container_name are not valid with swarm mode. You are also defining settings like working_dir, which should really be done in your Dockerfile instead.
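For example, the working directory could be baked into a custom image with a Dockerfile along these lines (the paths and start command are assumptions based on the compose file above, not a prescribed layout):

FROM node
# replaces the working_dir setting from the compose file
WORKDIR /node/api
# copy the application code into the image instead of (or in addition to) bind-mounting it
COPY ./server/ ./
CMD ["npm", "start"]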
I am using Docker to create multiple containers, one of which contains a RabbitMQ instance and another contains the Node.js action that should respond to queue activity. Going through the docker-compose logs, I see a lot of ECONNREFUSED errors before I see the line indicating that RabbitMQ has started in its container. This seems to indicate that RabbitMQ is starting after the service that needs it.
As a sidebar, just to eliminate any other possible causes, here is the connection string for node.js to connect to RabbitMQ:
amqp://rabbitmq:5672
and here is the entry for RabbitMQ in the docker-compose.yaml file:
rabbitmq:
  container_name: "myapp_rabbitmq"
  tty: true
  image: rabbitmq:management
  ports:
    - 15672:15672
    - 15671:15671
    - 5672:5672
  volumes:
    - /rabbitmq/lib:/var/lib/rabbitmq
    - /rabbitmq/log:/var/log/rabbitmq
    - /rabbitmq/conf:/etc/rabbitmq/
service1:
  container_name: "service1"
  build:
    context: .
    dockerfile: ./service1.dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
service2:
  container_name: "service2"
  build:
    context: .
    dockerfile: ./service2/dockerfile
  links:
    - mongo
    - rabbitmq
  depends_on:
    - mongo
    - rabbitmq
What is the fix for this timing issue?
How could I get RabbitMQ to start before the consuming container starts?
Might this not be a timing issue, but a configuration issue in the docker-compose.yml entry I have listed?
It doesn't look like you have included a complete docker-compose file. I would expect to also see your node container in the compose. I think the problem is that you need a
depends_on:
  - "rabbitmq"
in the node container part of your docker-compose file.
More info on compose dependencies here: https://docs.docker.com/compose/startup-order/
Note: as this page suggests, you should do this in conjunction with making your app resilient to outages of external services.
You need to control the boot-up order of your dependent containers. The page below documents the same:
https://docs.docker.com/compose/startup-order/
I usually use the wait-for-it.sh script from the project below:
https://github.com/vishnubob/wait-for-it
So I would have a command like the one below in my service1:
wait-for-it.sh rabbitmq:5672 -t 90 -- command with args to launch service1
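In the compose file, that could look something like this for service1 (npm start stands in for whatever normally launches the service, and wait-for-it.sh is assumed to be copied into the image by the Dockerfile):

service1:
  container_name: "service1"
  build:
    context: .
    dockerfile: ./service1.dockerfile
  depends_on:
    - rabbitmq
  # wait up to 90s for RabbitMQ to accept TCP connections before starting the consumer
  command: ./wait-for-it.sh rabbitmq:5672 -t 90 -- npm start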
I have two containers, and I want to know how I can access one container from the other in a Python file in my Django web server.
Docker-compose.yml file
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    volumes:
      - ./web:/code
    ports:
      - "8001:80"
    networks:
      - dock_net
    container_name: con_web
    depends_on:
      - "api"
    links:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./api:/code
    ports:
      - "8000:80"
    networks:
      - dock_net
    container_name: con_api
networks:
  dock_net:
    driver: bridge
Python File:
I can get mail_string from the form:
mail = request.POST.get('mail_string', '')
url = 'http://con_api:8000/api/'+mail+'/?format=json'
resp = requests.get(url=url)
return HttpResponse(resp.text)
I want to send a request to the api container and get a value back, but I don't know its IP address.
Updated Answer
In your Python file, you can use url = 'http://api:8000/'+mail+'/?format=json' (the service name api, plus the port the dev server listens on inside the container). This will enable you to access the URL you are trying to send the request to.
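A rough sketch of the full view with that URL (the view name and the 5-second timeout are illustrative; port 8000 is where runserver listens inside the api container according to the compose file above):

import requests
from django.http import HttpResponse

def lookup_mail(request):  # hypothetical view name
    mail = request.POST.get('mail_string', '')
    # "api" is the compose service name; 8000 is the port the dev server binds inside the container
    url = 'http://api:8000/api/' + mail + '/?format=json'
    resp = requests.get(url=url, timeout=5)
    return HttpResponse(resp.text)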
Original Answer
If the two containers are independent, you can create a network, and when you make both containers part of the same network you can access them using their hostname, which you can specify with --hostname=HOSTNAME.
Another easy way is to use a docker-compose file, which creates a network by default; all the services declared in the file are part of that network. That way you can access other services by their service name, simply like http://container1/, when your docker-compose file is like this:
version: '3'
services:
  container1:
    image: some_image
  container2:
    image: another_or_same_image
Now enter container2 with:
docker exec -it container2 /bin/bash
and run ping container1
You will receive packets and therefore be able to reach the other container.