I have a VPS (Ubuntu 16.04) and deploy a website on it with docker-compose, and it worked fine before. My docker-compose.yml looks like this:
version: '2'
services:
  backend:
    build: ./backend
    restart: always
    command: uwsgi --ini /opt/workspace/backend/uwsgi.ini
  nginx:
    image: nginx:latest
    expose:
      - "80:80"
    restart: always
  redis:
    image: redis:latest
    volumes:
      - redis-data:/data
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
volumes:
  redis-data:
However, it has recently started suffering intermittent DNS failures (every 2-3 days).
The MySQL client raises:
Can't connect to MySQL server on 'xxx.xxx.com' (a host on the internet)
The Redis client raises:
ConnectionError: Error -3 connecting to redis:6379. Temporary failure in name resolution.
When the problem happens, pinging the VPS's IP works, but SSH does not.
What's wrong?
This is not a DNS issue. Check the logs on your server; the server might be too busy to answer at any given point in time. There can be multiple reasons for the server being busy, e.g. it could be kept busy by bots, or some other process might be running.
And since you have a publicly open MySQL port, that is most likely the culprit.
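If that MySQL server is also a box you control, one common way to keep the port from being publicly open is to publish it only on localhost. A compose-style sketch (the service and image names here are illustrative, not from the question):

services:
  mysql:
    image: mysql:5.7
    ports:
      - "127.0.0.1:3306:3306"   # reachable from this host only, not from the internet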
I have a FastAPI app running in a Docker container that is connected to a PostgreSQL DB, which is also running as a container. Both are defined in the docker-compose.yml file.
In the app, I have a POST endpoint that requests data from an external API (https://restcountries.com/v2/all) using the requests library. Once the data is extracted, I try to save it in a table in the DB. When I trigger the endpoint from the Docker container, it takes forever and no data is ever extracted from the API. But when I run the same code outside the Docker container, it executes instantly and the data is received.
The docker-compose.yml file:
version: "3.6"
services:
backend-api:
build:
context: .
dockerfile: docker/Dockerfile.api
volumes:
- ./:/srv/recruiting/
command: uvicorn --reload --reload-dir "/srv/recruiting/backend" --host 0.0.0.0 --port 8080 --log-level "debug" "backend.main:app"
ports:
- "8080:8080"
networks:
- backend_network
depends_on:
- postgres
postgres:
image: "postgres:13"
volumes:
- postgres_data:/var/lib/postgresql/data/pgdata:delegated
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=pgpw12
expose:
- "5432"
ports:
- "5432:5432"
networks:
- backend_network
volumes:
postgres_data: {}
networks:
backend_network: {}
The code that is making the request:
import requests

req = requests.get('https://restcountries.com/v2/all')
json_data = req.json()
Am I missing something or doing something wrong?
I often use Python requests inside my Python containers, and have come across this problem a couple of times. I haven't found a proper solution, but entirely restarting Docker Desktop (not just restarting your container) seems to work every time. What I have found helpful in the past is to specify a timeout in the invocation of the HTTP call:
requests.request(method="POST", url=url, json=data, headers=headers, timeout=2)
When the server does not send data within the timeout period, an exception is raised. At least you'll be aware of the issue and will waste less time identifying when it occurs again.
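Building on that, here is a minimal sketch of the same idea applied to the GET call from the question, with a timeout plus a simple retry loop (the retry count and delay are arbitrary choices, not from the original answer):

import time

import requests

def fetch_countries(retries=3, delay=2):
    """Fetch the country list, retrying on timeouts and connection errors."""
    for attempt in range(1, retries + 1):
        try:
            # the timeout makes a hung connection fail fast instead of stalling forever
            resp = requests.get('https://restcountries.com/v2/all', timeout=5)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError) as exc:
            print(f'Attempt {attempt} failed: {exc}')
            time.sleep(delay)
    raise RuntimeError('API still unreachable after all retries')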
I have a WebApp that runs on a Linux Service Plan as a docker-compose app. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
And I've realized that my app sometimes freezes for a while (usually less than 1 minute). When I check Diagnose (Linux - Number of Running Containers per Host), the chart shows more than 20 containers running.
How could it be possible to have 20+ containers running?
Thanks.
I've created a new service plan (P2v2) for my app (and nothing else), and my app (which has just two containers: .NET 3.1 and nginx) shows 4 containers... but this is not a problem for me... at all...
The problem I found in Application Insights was a method that retrieves a blob to serve an image... blobs are really fast for uploads and downloads, but they are terrible for searching... my method was checking whether the blob exists before sending it to the API, and this (async) process was blocking my API responses... I just removed the check and my app is running as desired (all responses under 1 second, almost all under 250 ms).
Thanks for your help.
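For illustration, the pattern described above looks roughly like this with the azure-storage-blob v12 SDK (the function and its parameters are hypothetical; the real code wasn't posted). Skipping the exists() round trip and treating a missing blob as the exceptional case avoids an extra lookup on every request:

from typing import Optional

from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobClient

def get_image(conn_str: str, container: str, name: str) -> Optional[bytes]:
    blob = BlobClient.from_connection_string(conn_str, container, name)
    # Slow: calling blob.exists() first costs an extra round trip per download.
    # Faster: just attempt the download and handle "not found" as an exception.
    try:
        return blob.download_blob().readall()
    except ResourceNotFoundError:
        return None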
I want to launch three containers for my web application.
The containers are: frontend, backend, and a Mongo database.
To do this I wrote the following docker-compose.yml:
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything comes up.
The problem appears when I run docker logs api -f to monitor the backend status: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because my mongodb container is up but not yet accepting connections (it reaches the "waiting for connections" state only after the backend has already tried to connect).
How can I check that mongodb is in the "waiting for connections" state before starting the api container?
Thanks in advance
Several possible solutions in order of preference:
Configure your application to retry after a short delay and eventually timeout after too many connection failures. This is an ideal solution for portability and can also be used to handle the database restarting after your application is already running and connected.
Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple TCP port check with a script like wait-for-it (see the sketch after this list). Once that check finishes (or times out and fails), you can continue the entrypoint to launch your application.
Configure docker to retry starting your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less than ideal solution, but extremely easy to implement.
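For option 2, a minimal sketch using the wait-for-it script (this assumes wait-for-it.sh has been copied into the image or into the mounted /node/api directory; the 30-second timeout is an arbitrary choice):

api:
  image: node
  working_dir: /node/api
  volumes:
    - ./server:/node/api
  depends_on:
    - mongodb
  command: sh -c "./wait-for-it.sh mongodb:27017 --timeout=30 -- npm start"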
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      condition: on-failure
Note: looking at your compose file, you have a mix of v2 and v3 syntax, and many options like depends_on, links, and container_name are not valid with swarm mode. You are also defining settings like working_dir, which should really be set in your Dockerfile instead.
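For instance, a minimal Dockerfile sketch for the api service that bakes those settings into the image (paths taken from the compose file above):

FROM node
# replaces working_dir: /node/api and command: npm start in the compose file
WORKDIR /node/api
CMD ["npm", "start"]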
I use a VPS for testing my web apps online, and I use Docker to run many web apps on the same server. Here is my docker-compose.yml:
version: "3.7"
services:
gateway:
build:
context: ./gateway
dockerfile: Dockerfile
restart: always
ports:
- 80:3000
networks:
erealm:
ipv4_address: 10.5.0.2
db:
image: mysql/mysql-server:5.5
restart: always
environment:
MYSQL_ROOT_PASSWORD: 4lf483t0
networks:
erealm:
ipv4_address: 10.5.0.3
phpmyadmin:
image: nazarpc/phpmyadmin:latest
environment:
- MYSQL_HOST=10.5.0.3:3306
restart: always
depends_on:
- db
ports:
- 1234:80
networks:
erealm:
ipv4_address: 10.5.0.4
static:
build:
context: ./static
dockerfile: Dockerfile
restart: always
networks:
erealm:
ipv4_address: 10.5.0.5
onlinecv:
build:
context: ./onlinecv
dockerfile: Dockerfile
restart: always
ports:
- 81:3000
networks:
erealm:
ipv4_address: 10.5.0.10
speeqapi:
build:
context: ./speeq/api
dockerfile: Dockerfile
restart: always
environment:
MYSQL_SERVER: 10.5.0.3
MYSQL_PORT: 3306
MYSQL_USER: xxxxxxxxxx
MYSQL_PASSWORD: xxxxxxxxxx
MYSQL_DATABASE: xxxxxxxxxx
depends_on:
- db
networks:
erealm:
ipv4_address: 10.5.0.20
speeqfe:
build:
context: ./speeq/fe
dockerfile: Dockerfile
restart: always
environment:
REACT_APP_API_SERVER: 10.5.0.20:3000
REACT_APP_STATIC_SERVER: 10.5.0.5:3000
ports:
- 82:3000
depends_on:
- db
- static
- speeqapi
networks:
erealm:
ipv4_address: 10.5.0.21
networks:
erealm:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/24
The main idea behind this scheme is to have only HTTP ports open to the world, while all necessary services run protected by Docker's internal network, inaccessible to the outside world.
I use the gateway service to map the HTTP requests coming for the different apps to different ports. So I have my online CV mapped to the CNAME cv.eddealmeida.net and this Speeq app mapped to the CNAME speeq.eddealmeida.net in my DNS zone, both pointing to this server. When my server receives a request for http://cv.eddealmeida.net or http://speeq.eddealmeida.net, the Node/Express-based gateway application (listening on port 80) inspects the HOST header of the request and applies a simple mapping to send the requests to ports 81 and 82 respectively.
Well, everything is running fine except for the internal requests. First I had a problem with internal name resolution, which I solved by giving IPs to all services, as you may see.
Now my internal requests are going to their correct places, but... the fetch requests made by the speeq frontend are stalling. They just keep stalling, over and over again. I tested the API using curl and everything is fine; it answers my command-line requests correctly. So there is no problem with my API/database connection or anything like that. Google Chrome gave me this explanation, but I can't see myself fitting into any of the cases mentioned.
Has anyone ever been in a situation like this and can give me a hint? I've been fighting this for the last 24 hours and have run out of ideas. I double-checked everything and it still won't work.
I have a few suggestions that might help.
1- Regarding the usage of IPs: I would suggest using network aliases instead of fixed IPs; this is the better long-term solution. (On a user-defined network, each compose service name already resolves as a DNS name, so fixed addresses shouldn't be necessary.)
2- I can see that you are using React as the front-end, which runs client-side. I am assuming you are serving static files after building your React application. In that case you need to expose the backend/API on a public address (through port mapping, a domain name pointing to a public IP where your API is listening, or any similar method) so the front-end can reach it when you open it in the browser (which is a different device in your case). So if speeqfe is a React frontend, you need to change the environment variables that point to the other containers to a publicly reachable address for things to work after building the static files; see the sketch below.
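A sketch of both suggestions against the compose file above (the hostname api.eddealmeida.net and port 83 are hypothetical; any name and port reachable from the browser will do):

services:
  speeqapi:
    ports:
      - 83:3000                 # publish the API so browsers can reach it
    networks:
      erealm:
        aliases:
          - speeqapi-internal   # other containers can use this name instead of 10.5.0.20
  speeqfe:
    environment:
      # must be an address the user's browser can resolve and reach,
      # not a Docker-internal IP like 10.5.0.20
      REACT_APP_API_SERVER: http://api.eddealmeida.net:83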
For a couple of weeks I have been trying to fix an issue on my new laptop with a Fedora 28 KDE desktop!
I have two issues:
The container can't connect to the internet.
The container doesn't see my hosts in /etc/hosts.
I tried many solutions: disabling firewalld, flushing iptables, accepting all connections in iptables, enabling firewalld and changing network zones to "trusted"! I also disabled iptables using daemon.json! It is still not working!!
Please, can anyone help? It's becoming a nightmare for me!
UPDATE #1:
Even when I try to build an image, it can't access the internet for some reason! It seems the problem is at the Docker level, not only in containers!
I tried disabling the firewall and changing zones; I also set all connections to the "trusted" zone.
Can anyone help?
UPDATE #2:
When I turn on the firewalld service and set the WiFi connection's zone to 'external', the container/docker is able to access the internet, but the services can't access each other.
Here is my YAML file:
version: "3.4"
services:
nginx:
image: nginx
ports:
- "80:80"
- "443:443"
deploy:
mode: replicated
replicas: 1
networks:
nabed: {}
volumes:
- "../nginx/etc/nginx/conf.d:/etc/nginx/conf.d"
- "../nginx/etc/nginx/ssl:/etc/nginx/ssl"
api:
image: nabed_backend:dev
hostname: api
command: api
extra_hosts:
- "nabed.local:172.17.0.1"
- "cms.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .api.env
networks:
nabed: {}
cms:
image: nabedd/cms:master
hostname: cms
extra_hosts:
- "nabed.local:172.17.0.1"
- "api.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .cms.env
volumes:
- "../admin-panel:/admin-panel"
networks:
nabed: {}
networks:
nabed:
driver: overlay
Inside the API container:
$ curl cms.nabed.local
curl: (7) Failed to connect to cms.nabed.local port 80: Connection timed out
Inside the CMS container:
$ curl api.nabed.local
curl: (7) Failed to connect to api.nabed.local port 80: Connection timed out
UPDATE #3:
I was able to fix the issue by putting my hosts in the extra_hosts option in my YAML file, then switching all my networks to the 'trusted' zone, and then restarting docker and NetworkManager.
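For reference, the firewalld side of that workaround is usually done with commands like these (interface names will differ per machine):

$ sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0
$ sudo firewall-cmd --reload
$ sudo systemctl restart NetworkManager docker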
Note: for people who voted to close this question, please try to help instead.
Try a very dirty solution: start your container on the host network, with the docker run argument --net=host.
I guess there will also be a better solution, but you didn't provide details about how you are starting your containers and which networks are available to them.
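In a compose file, the equivalent of --net=host would be roughly the following (note that network_mode is ignored in swarm mode, so this only applies when starting with plain docker-compose):

services:
  api:
    image: nabed_backend:dev
    network_mode: host   # share the host's network stack, bypassing Docker's bridge and its firewall interactions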