Node.js cannot connect to Postgres DB container

I am currently working on an Angular app with a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend at https://serveripaddress:80, but when trying to log in, the API does not connect to PostgreSQL. I am getting an error message: ERR_CONNECTION_REFUSED. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    ports:
      - 5434:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  backend: # name of the second service
    image: myid/nodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    command: bash -c "sleep 20 && node server.js"
  myapp-portal:
    image: myId/angular-app
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  postgres-data:
The code to connect to the database:
const { Client } = require('pg')
const client = new Client({
  database: process.env.POSTGRES_DB,
  user: 'postgres',
  password: process.env.POSTGRES_PASSWORD,
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT
})
client.connect()
  .then(() => {
    console.log("db connected");
  })
and the docker-compose log for backend:
backend_1 | db connected
When I exec into the database container and connect with psql, I see that my database was created (I loaded it manually with pg_dump) with all the tables and data. My guess is that Node.js is connecting to the default Postgres database created at installation time. I had the same issue on my local machine, but I resolved it by creating a new server group in pgAdmin 4 and creating a new db on port 5434. I would prefer not to do this on the server, as it defeats the purpose of Docker. Another thought is that perhaps Node.js is attempting to connect to the database before it is up; that is why I added 'sleep 20', which worked on my local machine. Any thoughts on how I can fix this? TIA!

If you want to wait for the availability of a host and TCP port, you can use this script: https://github.com/vishnubob/wait-for-it
In your Dockerfile, copy the script into the container and make it executable:
COPY wait-for-it.sh .
RUN chmod +x wait-for-it.sh
Then in your docker-compose, run this script in the service that should wait:
entrypoint: bash -c "./wait-for-it.sh --timeout=0 service_name:service_port && node server.js"
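An alternative to both the fixed sleep 20 and the wait script is to retry the initial connection from Node itself. A minimal sketch using the pg Client from the question (the retry count and delay are arbitrary choices, not from the original post):
const { Client } = require('pg')

// Retry the initial connection instead of sleeping a fixed 20 seconds.
// A fresh Client is created per attempt, since a pg Client cannot be
// reused after a failed connect().
async function connectWithRetry(retries = 10, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const client = new Client({
      database: process.env.POSTGRES_DB,
      user: 'postgres',
      password: process.env.POSTGRES_PASSWORD,
      host: process.env.POSTGRES_HOST,
      port: process.env.POSTGRES_PORT,
    })
    try {
      await client.connect()
      console.log('db connected')
      return client
    } catch (err) {
      console.log(`db not ready (attempt ${attempt}/${retries}): ${err.message}`)
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw new Error('could not connect to Postgres')
}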

Related

Docker postgres connection refused when trying to pg_restore

The internal connection between my Spring Boot app and the postgres db works fine, but when I try to connect to the db through init_database.sh and pg_restore, I get pg_restore: error: connection to server at "db" (192.168.224.2), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections?. In this .sh script I use the same host and port as in application.properties (where, as I said before, the connection works fine).
I simply want to do a pg_restore from init_database.sh in my Docker container.
docker-compose.yml
services:
  db:
    container_name: db
    hostname: db
    image: postgres:14-alpine
    command: -c 'max_connections=250'
    ports:
      - "5430:5432"
    environment:
      POSTGRES_DB: endlessblow_db
      POSTGRES_USER: kuba
      POSTGRES_PASSWORD: pass
    volumes:
      - ./init.dump:/init.dump
      - ./init_database.sh:/docker-entrypoint-initdb.d/init_database.sh
    restart: always
  app:
    container_name: app
    image: 'endlessblow-server:latest'
    build: ./
    ports:
      - "8000:8080"
    depends_on:
      - db
init_database.sh
#!/bin/sh
pg_restore --no-privileges --no-owner -h db -p 5432 -U kuba -d endlessblow_db init.dump
application.properties
spring.datasource.jdbc-url=jdbc:postgresql://db:5432/endlessblow_db
spring.datasource.url=jdbc:postgresql://db:5432/endlessblow_db
spring.datasource.username=kuba
spring.datasource.password=pass
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.platform=postgres
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
How to solve that? Thank you!
In the config of service "app", try the following:
app:
  ...
  depends_on:
    - db
  links:
    - db
Without links, the Docker network will isolate the containers without explicitly exposing them to one another.

ECONNREFUSED 127.0.0.1:5432

I am a beginner in node, nest, and docker, but somehow I got assigned the job of dockerizing all the existing Node.js applications.
I followed a YouTube tutorial and successfully deployed the basic hello world via Docker, but in the next tutorial, when trying to add Postgres to Docker, I am facing some issues connecting to Postgres.
I am using Docker Desktop on Mac.
Here is my docker-compose.yml file:
version: "3.9" # optional since v1.27.0
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://user:password@postgres:5432/db
      NODE_ENV: development
      PORT: 3000
    ports:
      - "8080:3000"
  postgres:
    image: postgres:14.0
    ports:
      - "35000:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
Here is the entire error log
Github Repository of this project
Thank you for helping in advance :)
Your problem is a typo in DATABASE_URL. The code that connects to the database uses DATABSE_URL, but docker-compose uses DATABASE_URL.
You should change url: process.env.DATABSE_URL to url: process.env.DATABASE_URL
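For reference, in a NestJS/TypeORM setup the corrected line would sit in the TypeOrmModule config; a minimal sketch (the surrounding options are assumptions, not from the question):
// app.module.ts (sketch): the variable name must match docker-compose.yml
TypeOrmModule.forRoot({
  type: 'postgres',
  url: process.env.DATABASE_URL, // was: process.env.DATABSE_URL
})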
Make sure your connection string is correct in your docker-compose.yml. Just pass the host, port, user and password separately and let TypeORM handle the connection.
// app.module.ts
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  // environment variables are strings; TypeORM expects a number for the port
  port: parseInt(process.env.POSTGRES_PORT, 10),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
})
And your docker-compose.yml:
# docker-compose.yml
version: '3.9'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
    depends_on:
      - postgres
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=promo-pass
      - POSTGRES_USER=promo-user
      - POSTGRES_DB=promo-api-db
      - POSTGRES_PORT=5432
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: promo-user
      POSTGRES_PASSWORD: promo-pass
      POSTGRES_DB: promo-api-db
In the normal case without Docker, i.e. when you run node and postgresql directly on your development or production machine, you just need to start the postgres service (and enable it if you want).
To start your postgres service, type the following command:
sudo systemctl start postgresql
To enable it:
sudo systemctl enable postgresql
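On systemd-based systems, both steps can also be combined into one command:
sudo systemctl enable --now postgresql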
Note:
"Enable" means the postgres server will start automatically at boot time.
In my environment I am using Red Hat, so if these commands don't work, find the corresponding command for your Linux distribution or your specific OS.
I hope this can help someone else!

Running commands on docker container from the host

Code first, it will be easier to explain what I'm after.
docker-compose.yml
version: '3.4'
services:
  db:
    user: '${UID}:${GID}'
    image: postgres
    container_name: postgres
    ports:
      - '5432:5432'
    restart: always
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: secret
      POSTGRES_DATABASE: foo
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - ./db/data:/var/lib/postgresql/data/db
      - db-init.sh:/docker-entrypoint-initdb.d/:ro
  cache:
    image: redis:alpine
    container_name: redis
    sysctls:
      net.core.somaxconn: '511'
    ports:
      - '6379:6379'
    command: ['--requirepass "secret"']
  api:
    image: node:alpine
    container_name: api
    working_dir: /var/www/app
    command: sh -c "npm start"
    ports:
      - '5000:5000'
    volumes:
      - node_modules:/var/www/app/node_modules
      - .:/var/www/app
    env_file: .env
    depends_on:
      - db
      - cache
volumes:
  node_modules:
Postgres connection settings for the node.js app:
export const {
  POSTGRES_USER = 'root',
  POSTGRES_PASSWORD = 'secret',
  POSTGRES_HOST = 'db',
  POSTGRES_PORT = 5432,
  POSTGRES_DATABASE = 'foo',
} = process.env
issue:
When using the service or container name (db or postgres) for the POSTGRES_HOST setting of the node app:
I can successfully connect to, and query, the database.
I'm not able to run commands from the host which affect the container. For example, seeding the db won't work:
npx knex --esm seed:run
This makes sense, as the DNS resolution for db / postgres is taken care of by Docker, and those names only have meaning on the network connecting the containers. Commands run from the host and targeting that container will fail, as the host doesn't know how to resolve them.
On the other hand, when using localhost for the POSTGRES_HOST setting of the node app:
Queries to postgres from api will fail.
Commands run from the host, like npx knex --esm seed:run, will succeed.
Again, this makes perfect sense. Addressing the container as localhost from the host works thanks to the port forwarding in docker-compose.yml. But in the context of a container, localhost refers to that very container: for api, localhost means itself, and it's trying to find a database on localhost:5432 or api:5432.
I want to have a working inter-container network and also run commands from the host addressing the said containers. I'm aware of two approaches to achieve that:
Use container / service name as POSTGRES_HOST, and run commands against the containers with:
docker exec -it <container_name> <command>
Assign static IPs to the containers and use those instead of the service / container names.
Do I have any other options here?
Since you are exposing the database ports on the host machine, you can do the following:
Use the service or container name (db or postgres) for POSTGRES_HOST; this way it will work for the Docker containers.
When you run the seed command from the host, override POSTGRES_HOST. This can be done this way:
$ export POSTGRES_HOST=127.0.0.1
$ npx knex --esm seed:run
or in one step
$ POSTGRES_HOST=127.0.0.1 npx knex --esm seed:run
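To make this seamless, the knex config itself can default to the compose service name while still honouring a host-side override; a hypothetical knexfile sketch built from the question's settings (the file name and client option are assumptions):
// knexfile.js (sketch): default to the compose service name 'db',
// but let an override like POSTGRES_HOST=127.0.0.1 take precedence
// when commands are run from the host.
module.exports = {
  client: 'pg',
  connection: {
    host: process.env.POSTGRES_HOST || 'db',
    port: process.env.POSTGRES_PORT || 5432,
    user: process.env.POSTGRES_USER || 'root',
    password: process.env.POSTGRES_PASSWORD || 'secret',
    database: process.env.POSTGRES_DATABASE || 'foo',
  },
}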

Can't connect to postgres database inside docker container

My problem is that I have a script that should scrape data and put it into a postgres database; however, it has a problem reaching the postgres container.
When I run my docker-compose, here is the result:
Name                 Command                        State  Ports
------------------------------------------------------------------------------------------
orcsearch_dev-db_1   docker-entrypoint.sh postgres  Up     0.0.0.0:5432->5432/tcp
orcsearch_flask_1    gunicorn wsgi:application ...  Up     0.0.0.0:80->80/tcp, 8000/tcp
We can clearly see that postgres is on port 5432.
This is my python script's database setting (of course I removed the password for obvious reasons):
class App():
    settings = {
        'db_host': 'db',
        'db_user': 'postgres',
        'db_pass': '',
        'db_db': 'orc',
    }
    db = None
    proxies = None
and this is my docker-compose.yml
version: '2'
services:
  flask:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    volumes:
      - ./backend:/app
      - pip-cache:/root/.cache
    ports:
      - "80:80"
    links:
      - "dev-db:db"
    environment:
      - DATABASE_URL=postgresql://postgres@db:5432/postgres
    stdin_open: true
    command: gunicorn wsgi:application -w 1 --bind 0.0.0.0:80 --log-level debug --reload
    networks:
      app:
        aliases:
          - flask
  dev-db:
    image: postgres:9.5
    ports:
      - "5432:5432"
    networks:
      app:
        aliases:
          - dev-db
volumes:
  pip-cache:
    driver: local
networks:
  app:
When I exec into the flask container (exec flask bash) and run the script command, I get this error:
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Obviously postgres is running on this port, and I can't figure out what I'm doing wrong. Any help would be nice!
Probably you are using a DSN instead of a URI, and PostgreSQL thinks that "db" is not a host, because it's hard to tell whether "db" is a host or a path to a socket. To fix it, use a URI instead of a DSN if you use PostgreSQL >= 9.2.
Example of a URI:
postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
https://www.postgresql.org/docs/9.2/static/libpq-connect.html#LIBPQ-CONNSTRING
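With the settings from the question (user postgres, empty password, host db, database orc), such a URI would look like:
postgresql://postgres@db:5432/orc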
In your App class it should be 'db_host': 'dev-db'. It seems that hostname is exposed, not db.
I think the problem is related to the fact that you're using the network and the link together. Try removing the link and changing the postgres address to dev-db, or change the aliases to:
networks:
  app:
    aliases:
      - dev-db
      - db

Link nodejs app to Rethinkdb from another container

I made 2 containers, one for RethinkDB and one for a nodejs app.
I want to connect my nodejs app to this RethinkDB, but every time I try I get an error:
Error:{"message":"Failed to connect to localhost:58015\nFull error:\n{\"code\":\"ECONNREFUSED\"
But I can connect the same nodejs app, running without Docker, to the RethinkDB through the published port (58015).
My Docker compose config looks like this:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015

# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  depends_on:
    - rethink
To connect my app to the db, I set the host and port inside a JS config file:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'localhost',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
I tried with the RethinkDB port (28015) and with my published port (58015), without success.
I also tried to link the two containers with links and network_mode, without success.
None of the solutions I tried worked.
I think my Rethink container may not be ready when the nodejs app tries to connect, but I really don't understand the problem if it's not that.
The nodejs app is running with pm2.
How can I make this app connect to my db?
For your config, you should use:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015

# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  links:
    - rethink
  depends_on:
    - rethink
and in the JS code:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'rethink',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
As far as I know, one Docker container will not see another unless specifically linked and using the same net:
docker run \
  --name ${NEWAPP} \
  --restart=always \
  --env MYAPPPAR=${PROJ} \
  -v /var/log/docker/node/logs:/usr/src/app/log \
  --link myapp_rethink_1:myapp_rethink_1 \
  --net myapp_default \
  -p ${PORT}:9000 \
  -d ${NEWAPP}
So you need both --net and --link:
--link format is sourcecontainername:containeraliasname
--net so that containers can find each other with internal DNS / container name. You can check the network with 'docker network ls'.
When using newer versions of docker-compose, your services will be configured to run on the same network.
The top-level service name in your docker-compose.yml will become the host that needs to be specified when connecting to RethinkDB from your app:
# docker-compose.yml
version: '3.2'
services:
  web:
    build: .
    links:
      - db
    ...
  db:
    image: rethinkdb
    ...
Using the example above, you can connect to RethinkDB by using the host named 'db' within the app configuration file:
module.exports = {
  rethinkdb: {
    host: 'db',
    port: 28015
  }
};
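To illustrate, connecting with the official rethinkdb driver using that config could look like the following sketch (it assumes the 'rethinkdb' npm package and that the config file above is ./config.js):
// Sketch: open a connection using the config module above.
const r = require('rethinkdb')
const config = require('./config')

r.connect(config.rethinkdb, (err, conn) => {
  if (err) throw err
  console.log('connected to rethinkdb at ' + config.rethinkdb.host)
})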
