Very slow queries between server and database in docker - node.js

I'm trying to set up some integration tests with docker compose. I'm using docker compose to spin up a postgres database and a nodejs server, and then using jest to run http requests against the server.
For some reason that I can't explain, all SQL queries (even the simplest ones) are extremely slow (over 1s).
It sounds like a communication problem between the two containers, but I can't spot it. Am I doing something wrong?
Here's my docker-compose.yml file. The server is just a simple express app:
version: "3.9"
services:
database:
image: postgres:12
env_file: .env
volumes:
- ./db-data:/var/lib/postgresql/data
healthcheck:
test: pg_isready -U test_user -d test_database
interval: 1s
timeout: 10s
retries: 3
start_period: 0s
server:
build: .
ports:
- "8080:8080"
depends_on:
database:
condition: service_healthy
env_file: .env
environment:
POSTGRES_HOST: database
NODE_ENV: test
init: true
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/healthcheck"]
interval: 1s
timeout: 10s
retries: 3
start_period: 0s
EDIT
I'm using
Docker version 20.10.2, build 2291f61
macOS Big Sur 11.1 (20C69)

Try using a named volume instead of a bind mount for the data folder by changing this:
- ./db-data:/var/lib/postgresql/data
To this:
- db-data:/var/lib/postgresql/data
And adding this section to the end of the compose file:
volumes:
  db-data:
You can read more about bind mounts vs volumes here
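Putting it together with the compose file from the question, only the database service and a top-level volumes section change (a sketch of the relevant parts, not the full file):
services:
  database:
    image: postgres:12
    env_file: .env
    volumes:
      - db-data:/var/lib/postgresql/data # named volume instead of the ./db-data bind mount
volumes:
  db-data: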

I found it. The issue is not in docker, but in the compiler. I'm using ncc and it breaks my postgres client.
I opened an issue in their repo with a minimal reproducible example
https://github.com/vercel/ncc/issues/646
Thanks a lot for your help
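For anyone hitting the same thing before the issue is resolved, one possible workaround (a sketch only, assuming the breakage comes from ncc bundling the postgres driver, which is not confirmed above; the src/index.js entry point and the pg package name are assumptions) is to tell ncc to skip bundling the driver and install it separately where the bundle runs:
# Sketch: keep the postgres client out of the ncc bundle
ncc build src/index.js -o dist --external pg
# pg must then be available next to the bundle at runtime, e.g.:
npm install pg --prefix dist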

Related

Is it possible to mount my existing Mongo DataBase with docker-compose?

After some months of development I got to the point where it is better to dockerize my MERN application. I managed to create the .yaml file and everything is working OK, but the problem is that I already have a big amount of collected data. I want to be able to mount this data into the container, but I don't know how to do it. I've read a lot of stuff, but my data still isn't appearing after composing the application. Here is what my docker-compose.yaml file looks like:
version: '3.9'
services:
  #MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - /mnt/c/temp/mongo/db:/data/db
  #Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
As you can see in this row:
volumes:
  - /mnt/c/temp/mongo/db:/data/db
I am trying to point to the path on my C:\ drive, but this doesn't work. I also tried the same row in:
volumes:
  mongo_db:
(at the bottom of the file) but again without success. Basically, my existing DB is at
C:\data\db
How can I point to this as the source of the MongoDB service?
First, you need to create a dump from the local MongoDB and copy those files to the docker MongoDB. You can use these commands to create and restore it:
mongodump --uri 'mongodb://localhost:27017/yourdatabase' --archive=<your file> --gzip
mongorestore --uri 'mongodb://remotehost:27017/yourdatabase' --archive=<your file> --gzip
You should be able to access the dockerized MongoDB from the local host.
Note: Reference this answer if you don't get it correct.
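For example, with the compose file above publishing MongoDB on host port 2717, the two steps could look like this (a sketch only; dump.gz is a placeholder file name and project-system is the database name taken from the DB_NAME variable in the question):
# Dump the existing local database (served on the default port 27017)
mongodump --uri 'mongodb://localhost:27017/project-system' --archive=dump.gz --gzip
# Restore it into the dockerized MongoDB through the published host port 2717
mongorestore --uri 'mongodb://localhost:2717/project-system' --archive=dump.gz --gzip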
You can make these changes to the path you are mounting to keep the data persistent. Create a new folder C:/data/docker_mongo for the persistent data:
version: '3.9'
services:
  #MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - C:/data/docker_mongo:/data/db
  #Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:

nodejs app cant connect to redis on docker swarm

I'm having a tough time connecting my nodejs backend service to the redis service (both running in swarm mode).
The backend server fails to start as it complains that it was not able to reach redis with the following error:
But on the contrary, if I try to reach the redis service from within my backend container using curl redis-dev.localhost:6379, I get the response curl: (52) Empty reply from server, which implies that the redis service is reachable from the backend container:
Below is the backend's docker-compose yaml:
version: "3.8"
services:
backend:
image: backend-app:latest
command: node dist/main.js
ports:
- 4000:4000
environment:
- REDIS_HOST='redis-dev.localhost'
- APP_PORT=4000
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
labels:
- "traefik.enable=true"
- "traefik.http.routers.backend-app.rule=Host(`backend-app.localhost`)"
- "traefik.http.services.backend-app.loadbalancer.server.port=4000"
- "traefik.docker.network=traefik-public"
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
And the following is my redis compose file:
version: "3.8"
services:
redis:
image: redis:7.0.4-alpine3.16
hostname: "redis-dev.localhost"
ports:
- 6379:6379
deploy:
restart_policy:
condition: on-failure
update_config:
parallelism: 1
delay: 15s
networks:
- traefik-public
networks:
traefik-public:
external: true
driver: overlay
The issue was with the backend's docker compose file. The REDIS_HOST environment variable should have been redis-dev.localhost instead of 'redis-dev.localhost': with the list form of environment, the quotes end up as part of the value. That's why the backend app was complaining that it was not able to reach the redis host.
It's funny how elementary these things can be.
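The corrected environment block would simply be (only the relevant lines of the backend service shown):
    environment:
      - REDIS_HOST=redis-dev.localhost
      - APP_PORT=4000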

postgres image does no update when I update docker-compose file

I am currently facing a problem with docker, docker-compose, and postgres that is driving me insane. I have updated my docker-compose file with a new postgres password and I have updated my sqlalchemy create_all method with a new table model, but none of these changes are taking effect.
When I log in to the database container, it is still using the old password and the table columns have not been updated. I have run all the docker commands I can think of, to no avail:
docker-compose down --volumes
docker rmi $(docker images -a -q)
docker system prune -a
docker-compose build --no-cache
After running these commands I do verify that the docker image is gone. I have no images or containers living on my machine, but the new postgres image is still always created using the previous password. Below is my docker-compose file (I am aware that passwords in docker-compose files are a bad idea; this is a personal project and I intend to change it to pull a secret from KMS down the road):
services:
  api:
    # container_name: rebindme-api
    build:
      context: api/
    restart: always
    container_name: rebindme_api
    environment:
      - API_DEBUG=1
      - PYTHONUNBUFFERED=1
      - DATABASE_URL=postgresql://rebindme:password#db:5432/rebindme
    # context: .
    # dockerfile: api/Dockerfile
    ports:
      - "8443:8443"
    volumes:
      - "./api:/opt/rebindme/api:ro"
    depends_on:
      db:
        condition: service_healthy
    image: rebindme_api
    networks:
      web-app:
        aliases:
          - rebindme-api
  db:
    image: postgres
    container_name: rebindme_db
    # build:
    #   context: ./postgres
    #   dockerfile: db.Dockerfile
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
    environment:
      POSTGRES_USER: rebindme
      POSTGRES_PASSWORD: password
      POSTGRES_DB: rebindme
      #03c72130-a807-491e-86aa-d4af52c2cdda
    healthcheck:
      test: ["CMD", "psql", "postgresql://rebindme:password#db:5432/rebindme"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      web-app:
        aliases:
          - postgres-network
  client:
    container_name: rebindme_client
    build:
      context: client/
    volumes:
      - "./client:/opt/rebindme/client"
      # - nodemodules:/node_modules
    # ports:
    #   - "80:80"
    image: rebindme-client
    networks:
      web-app:
        aliases:
          - rebindme-client
  nginx:
    depends_on:
      - client
      - api
    image: nginx
    ports:
      - "80:80"
    volumes:
      - "./nginx/default.conf:/etc/nginx/conf.d/default.conf"
      - "./nginx/ssl:/etc/nginx/ssl"
    networks:
      web-app:
        aliases:
          - rebindme-proxy
# volumes:
#   database_data:
#     driver: local
#   nodemodules:
#     driver: local
networks:
  web-app:
    # name: db_network
    # driver: bridge
The password commented out under POSTGRES_DB: rebindme is the one that it is still using somehow. I can post more code or whatever else is needed, just let me know. Thanks in advance for your help!
The answer ended up being that the images still existed. The command below did not actually remove all images, just unused ones:
docker system prune -a
I did go ahead and delete the postgres data as Pooya recommended, though I am not sure that was necessary, as I had already done that (which I forgot to mention). The real solution for me was:
docker image ls
docker rmi rebindme-client:latest
docker rmi rebindme-api:latest
Then finally the new config for postgres took.
Because you are mounting the volume manually (a host volume, i.e. a bind mount), docker doesn't remove it when you run docker-compose down --volumes. Postgres only applies POSTGRES_PASSWORD when it initializes an empty data directory, so as long as the old data folder is still there, the old password keeps working. If you don't need the data and want to remove it, you have to delete that folder yourself (how depends on the operating system) and then run docker-compose again.
The docker-compose down -v command only removes the following volume types:
Named volumes
Anonymous volumes
# Linux operating system
rm -rf ./postgres-data
docker-compose build --no-cache
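Alternatively, if you want docker-compose down -v to be able to clean the data up for you, you could switch the bind mount to a named volume (a sketch of just the relevant parts of the compose file from the question):
  db:
    image: postgres
    volumes:
      - postgres-data:/var/lib/postgresql/data # named volume instead of ./postgres-data
volumes:
  postgres-data: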

Where to put test files for webdriverIO testing - using docker container?

I do not understand how to run webdriverIO e2e tests of my nodeJS application.
As you can see my nodeJS application is also running as a docker container.
But now I'm stuck on some very basic things:
So where do I have to put the test files which I want to run? Do I have to copy them into the webdriverio container? If yes, in which folder?
How do I run the tests then?
This is my docker compose setup for all the needed docker containers:
services:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
  hub:
    image: selenium/hub
    ports:
      - 4444:4444
  chrome:
    image: selenium/node-chrome
    ports:
      - 5900
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - hub
  myApp:
    container_name: myApp
    image: 'registry.example.com/project/app:latest'
    restart: always
    links:
      - 'mongodb'
    environment:
      - ROOT_URL=https://example.com
      - MONGO_URL=mongodb://mongodb/db
  mongodb:
    container_name: mongodb
    image: 'mongo:3.4'
    restart: 'always'
    volumes:
      - '/opt/mongo/db/live:/data/db'
I have a complete minimal example in this link: Github
Q. So where do I have to put the test files which I want to run?
Put them in the container that will run Webdriver.io.
Q. Do I have to copy them into the Webdriverio container? If yes, in which folder?
Yes. Put them wherever you want. Then you can run them sequentially, from a command: script if you like.
Q. How do I run the tests then?
With docker-compose up the test container will start and begin to run them all.
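For example, the webdriverio service could mount the spec files from the host and run the suite on startup. This is only a sketch: the ./test folder, the /opt/test mount path and the wdio.conf.js file name are assumptions, and it presumes the wdio CLI is available inside the huli/webdriverio image:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    volumes:
      - ./test:/opt/test # host folder containing wdio.conf.js and the spec files
    working_dir: /opt/test
    command: wdio wdio.conf.js # runs the whole suite when docker-compose up starts the container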

Connection slow from node to mongodb

I'm experimenting with docker and noticed a very slow connection from the nodejs (4.2.3) container to the mongodb (3.2) container.
My setup, very basic, is this (docker-compose):
version: '2'
services:
  web:
    build: ./app
    volumes:
      - "./app:/src/app"
    ports:
      - "80:3000"
    links:
      - "db_cache:redis"
      - "db:mongodb"
    command: nodemon -L app/bin/www
  db_cache:
    image: redis
  db:
    image: mongo
My OS is OSX 10.10 and the docker version is 1.10.2.
The strange thing is that the connection time to the db is always 30 seconds.
Is there any automatic delay?
EDIT:
if I use the IP address of the mongodb container instead of the hostname ("mongodb"), the delay disappears!
Any ideas?
This does not completely solve the problem, but it allows you to restore the normal behavior.
The cause seems to be version 2 of the docker-compose.yml format.
If I remove version 2, the 30-second delay when connecting to mongodb is completely eliminated:
web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "80:3000"
  links:
    - "db_cache:redis"
    - "db:mongodb"
  command: nodemon -L app/bin/www
db_cache:
  image: redis
db:
  image: mongo
I opened an issue here.
