Unable to connect to the Postgres database in Docker Compose - node.js

I am building a Node.js web application. I am using PostgreSQL and Sequelize for the database.
I have a docker-compose.yaml with the following content:
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
    depends_on:
      - postgres
  postgres:
    container_name: app-db
    image: postgres:11-alpine
    ports:
      - '5431:5431'
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=babe_manager
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: ./pgconfig/Dockerfile
volumes:
  app-db-data:
I am using the following credentials to connect to the database.
DB_DIALECT=postgres
DB_HOST=postgres
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
When the application tries to connect to the database, I am getting the following error.
getaddrinfo ENOTFOUND postgres
I tried using this too.
DB_DIALECT=postgres
DB_HOST=localhost
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
This time I am getting the following error.
connect ECONNREFUSED 127.0.0.1:5431
My app server is not within the docker-compose.yaml. I just start it using "nodemon server.js". How can I fix it?

You need to make sure your Postgres container and your Node app are on the same network. Once they reside in a common network, you can use Docker's DNS to resolve the database from the Node application; in your case either postgres (the service name) or app-db (the container name) should work as the hostname. See the Docker documentation on container communication over networks for more.
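If you do run the Node app as a service in the same Compose file (or otherwise attach it to that network), a Sequelize setup along these lines should resolve the service name. This is only a sketch based on the env values in the question; it assumes the postgres image still listens on its default port 5432 inside the container, not on the published 5431.
// Sketch: connecting with Sequelize from a container on the same Compose network.
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize(
  process.env.DB_NAME,      // babe_manager
  process.env.DB_USERNAME,  // docker
  process.env.DB_PASSWORD,  // secret
  {
    dialect: 'postgres',
    host: 'postgres', // Compose service name (or 'app-db', the container name)
    port: 5432,       // the port Postgres listens on inside the container, not the published 5431
  }
);

// Quick connectivity check.
sequelize.authenticate()
  .then(() => console.log('Connected to Postgres'))
  .catch((err) => console.error('Unable to connect:', err));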

Related

Can't access MongoDB container from NodeJS App

I'm running an instance of a web application in my Docker container and am also running a MongoDB container, so that when I launch the web app I can easily connect to the DB on the app's connection page.
The issue is that I'm not sure how to reach the Mongo container from my web app, and I'm not sure if my host/port connection info is correct.
My Docker Setup
Both the mongo and web app services are up and running without errors.
I build the two through docker-compose.yml:
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
ports:
- "27018:27017"
volumes:
grafana-mongo-db: {}
grafana-storage: {}
Issue
With everything up and running, I'm attempting to connect through the web app, but I seem to be using the wrong connection info...
I assumed I should use "hostMachine:port" (roxane:27018), but it's not connecting. Is there something I overlooked here?
There were two changes I had to make to fix this issue:
Modify the bind_ip in mongod.conf by making this change to my docker-compose file:
db:
  container_name: mongo
  image: mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
  volumes:
    - grafana-mongo-db:/var/lib/mongo
  ports:
    - "27018:27017"
  command: mongod --bind_ip 0.0.0.0
I needed to refer to the IP address instead of the hostname in the CLI in my web application. (Thanks to this answer for help with this one.)
Short answer
The db service is on the same network as the web service, not on the host network.
Since you named your services via container_name, you should be able to use the connection string mongodb://mongo:27017
Explanation
By default, Docker containers run on a bridge network, which lets them communicate with each other without being exposed on your host network.
When you use ports in a Compose file, you map an internal container port to a host port:
"27018:27017" => expose container port 27017 on host port 27018.
As a result, you could expose your web frontend without exposing your mongo service at all:
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
volumes:
grafana-mongo-db: {}
grafana-storage: {}

How to connect to Node API docker container from Angular Nginx container

I am currently working on an Angular app using a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend at https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service names as DNS names. You have several hosts/ports available in your docker-compose structure:
db:5432
http://backend:3000
http://myapp
Make sure to use db as POSTGRES_HOST in the environment section of the backend service.
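On the Node side, a node-postgres setup along these lines would pick that up. This is a sketch based on the environment variables in the compose file above, not code from the original project; the user defaults to postgres because the compose file only sets a password.
// Sketch: backend connecting to the db service by its Compose service name.
const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.POSTGRES_HOST || 'db',         // Compose service name of the database
  port: Number(process.env.POSTGRES_PORT) || 5432,
  database: process.env.POSTGRES_DB,               // myDb
  user: 'postgres',                                // default user; the compose file only sets POSTGRES_PASSWORD
  password: process.env.POSTGRES_PASSWORD,         // myPwd
});

pool.query('SELECT 1')
  .then(() => console.log('backend can reach the db service'))
  .catch((err) => console.error('db unreachable:', err));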
Take a look at my repo; I think it is the best way to learn how a similar project works and how to build several apps behind nginx. You can also check my docker-compose.yml: it runs several services that are proxied by nginx and work together.
In the linked nginx/default.conf file you'll find several nginx upstream configurations; take a look at how I used the docker-compose service references there as hosts.
Inside the client/ directory, I also have another nginx acting as the web server of a React.js project.
The server/ directory contains a Node.js API; it connects to Redis and a PostgreSQL database, also built from docker-compose.yml.
If you need to route or redirect traffic to /api, you can use an nginx location block that proxies it to the backend service (for example, proxy_pass http://backend:3000).
I think this use case can be useful for you and other users!

Node can't reach postgres server in docker compose

I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both the normal lts and lts-alpine base images for my Node.js app. I'm using knex, which delegates connecting to the pg library... Does anybody have any idea why it won't even connect? I've tried running it both directly through docker-compose and through Tilt.
By adding:
networks:
  - integration-tests
only to postgres, you create a separate network that only postgres is attached to.
By default, docker-compose creates one network for all the containers defined in the same file, named <project-name>_default. That's why, when using docker-compose, all the containers in the same file can communicate using their service names.
By specifying a network for postgres only, you tell docker-compose not to attach it to that default network.
You have two solutions:
- Remove the networks instruction so postgres falls back to the default network
- Add the networks instruction to all the other containers in your project, or only to those that need to reach postgres
Note: By default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
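Once the api service and postgres share a network again, a knex configuration along these lines should resolve the hostname. This is only a sketch, using the credentials from the compose file above and the internal container port:
// Sketch: knex pointing at the postgres service by name.
const knex = require('knex')({
  client: 'pg',
  connection: {
    host: 'postgres', // Compose service name (also set via hostname:)
    port: 5432,       // internal container port, not necessarily a published one
    user: 'root',
    password: 'root',
    database: 'test',
  },
});

// Simple connectivity check.
knex.raw('SELECT 1')
  .then(() => console.log('connected'))
  .catch((err) => console.error('still unreachable:', err));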

NodeJS 14 in a Docker container can't connect to Postgres DB (in/out docker)

I'm making a React Native app using a REST API (Node.js, Express) and PostgreSQL.
Everything works fine when hosted on my local machine.
Everything works fine when the API is hosted on my machine and PostgreSQL runs in a Docker container.
But when the backend and frontend are both in Docker, the database is reachable from every computer on my local network, but not from the backend.
I'm using docker-compose.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - "8080:8080"
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
volumes:
  db-data:
.env_docker and .env have the same parameters (just the name changes).
Here are my Dockerfiles:
Backend
FROM node:14.1
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Database
FROM postgres:alpine
COPY ./wallnerdb.sql /docker-entrypoint-initdb.d/
I tried changing the hostname in the Postgres connection URL to the name of the Docker container, to my host IP address, and to localhost, but with no results.
It's also the same .env file (in my Node repo, with db_name, password, etc.) that I use locally to connect my backend to the db.
Since you are using Node.js 14 in the Docker container, make sure that you have the latest pg dependency installed:
https://github.com/brianc/node-postgres/issues/2180
Alternatively: Downgrade to Node 12.
Also make sure that both the database and the "backend" are on the same network, and that the backend "depends on" the database:
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - '8080:8080'
    networks:
      - default
    depends_on:
      - wallnerdatabase
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
    networks:
      - default
volumes:
  db-data:
networks:
  default:
This should not be necessary in your case, as pointed out in the comments, since Docker Compose already creates a default network.
The service name wallnerdatabase is the hostname of your database, if not configured otherwise.
I expect the issue to be in the database connection URL, since you did not share it.
Containers on the same network in a docker-compose.yml can reach each other using the service name. In your case the service name of the database is wallnerdatabase, so this is the hostname you should use in the database connection URL.
The database connection URL used in your backend service should be similar to this:
postgres://user:password@wallnerdatabase:5432/dbname
Also make sure that the backend code calls the database using the hostname wallnerdatabase as defined in the docker-compose.yml file.
Here is the reference on Networking in Docker Compose.
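For instance, with node-postgres the backend could build that URL from its env file. This is a sketch with hypothetical variable names, since the actual contents of .env / .env_docker were not shared:
// Sketch: connection URL pointing at the wallnerdatabase service.
const { Client } = require('pg');

// Hypothetical env var names; the real ones live in the OP's .env / .env_docker.
const { DB_USER, DB_PASSWORD, DB_NAME } = process.env;

const client = new Client({
  connectionString: `postgres://${DB_USER}:${DB_PASSWORD}@wallnerdatabase:5432/${DB_NAME}`,
});

client.connect()
  .then(() => console.log('connected to wallnerdatabase'))
  .catch((err) => console.error('connection failed:', err));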
You should access your DB using the service name as the hostname. Here is my working example - https://gitlab.com/gintsgints/vue-fullstack/-/blob/master/docker-compose.yml

Mongo service with specific port exposed in a bridge network docker

I have a sample app which consists of three parts:
mongo database
node api (server side)
angular web app (client side)
The goal is to containerize those three parts and run the app.
To get there, I've created a docker-compose.yml file like the one below:
# docker-compose -f docker-compose.prod.yml build
# docker-compose -f docker-compose.prod.yml up -d
# docker-compose -f docker-compose.prod.yml down
version: '3'
services:
  mongodb:
    image: mongo
    container_name: mongodb-instance-microservices
    ports:
      - "27020:27017"
    networks:
      - microservices-network
  client:
    container_name: client-instance-microservices
    image: client-microservices
    build:
      context: ./client
      dockerfile: prod.dockerfile
    ports:
      - "8080:80"
      - "443:443"
    depends_on:
      - api
    networks:
      - microservices-network
  api:
    container_name: api-instance-microservices
    image: api-microservices
    build:
      context: ./server
      dockerfile: server.dockerfile
    environment:
      - NODE_ENV=production
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    networks:
      - microservices-network
networks:
  microservices-network:
    driver: bridge
On the server side I am running the main app.js, which is trying to connect to MongoDB using this connection string:
mongodb://mongodb-instance-microservices:27020/TestDatabase
The problem is that the server cannot connect to the mongo container.
I tried exposing the default port for mongo like below:
mongodb:
  image: mongo
  container_name: mongodb-instance-microservices
  ports:
    - "27017:27017"
  networks:
    - microservices-network
and updated the connection string in the app.js file like this:
mongodb://mongodb-instance-microservices:27017/TestDatabase
and it works fine.
The question is: how can I expose a different port for the mongo container and still have it work?
When you connect between services over a Docker-internal network, you always connect to the internal port of the service. You don't need to publish ports explicitly (in Compose, with a ports: directive); if you do, the port another container connects to is still the one on the right-hand side of the mapping.
Style-wise, also note that you don't need to manually declare a private network with default options that will only be used within this Compose file (Compose does something very similar for you automatically), and you don't need container_name: just for inter-container connectivity (Compose adds a network alias that matches the service name).
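Concretely, with the original "27020:27017" mapping left in place, the server container should keep targeting the internal port. This is a sketch using the Node mongodb driver and the names from the compose file above, not code from the question:
// Sketch: connecting from the api container while Mongo publishes 27020 on the host.
const { MongoClient } = require('mongodb');

// Inside the Compose network you still use Mongo's internal port 27017;
// the published 27020 only matters for clients on the host machine.
const uri = 'mongodb://mongodb-instance-microservices:27017/TestDatabase';
// The service name works as a hostname too: 'mongodb://mongodb:27017/TestDatabase'

const client = new MongoClient(uri);
client.connect()
  .then(() => console.log('connected to TestDatabase'))
  .catch((err) => console.error('failed to connect:', err));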
