Node-postgres hangs inside a Docker container when running on an Ubuntu server - node.js

I have an application running across several containers, and I am using docker-compose to run these services. I am using NodeJS with PostgreSQL via the node-postgres library (pg).
When I run the application on my local machine it works perfectly fine. It also builds without any errors on an AWS EC2 Ubuntu server, and when I check the endpoints there, most of them work well. But when I try an endpoint that involves a database call, no response is received. I tried to debug the application by adding some console logs, and I think the problem is with the pg library. When the code reaches pool.query() it just hangs there without giving any error. I tried console.log and try/catch, but every time the code stops at a pool.query() call and hangs. Any idea why this is happening? I have tried for hours to find a solution online, but no luck.
I found a similar issue here: https://github.com/brianc/node-postgres/issues/2069
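In case it helps others hitting the same symptom, here is a minimal debugging sketch (my own, not the poster's code): pg's Pool accepts a connectionTimeoutMillis option, which makes pool.query() reject with an error instead of waiting forever when no connection can be established, and the pool's "error" event surfaces failures on idle clients:
const { Pool } = require("pg");

const pool = new Pool({
  user: "commhawk",
  host: "postgres",
  database: "gov_auth",
  password: "password",
  port: 5432,
  connectionTimeoutMillis: 5000, // reject after 5s instead of hanging
});

// Errors on idle clients are emitted here rather than being swallowed
pool.on("error", (err) => console.error("idle client error", err));

(async () => {
  try {
    const result = await pool.query("SELECT NOW()");
    console.log(result.rows);
  } catch (err) {
    // With the timeout set, the silent hang becomes a visible error here
    console.error("query failed:", err);
  }
})();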
My docker-compose.yml
version: '3'
services:
  postgres:
    container_name: postgres
    image: mdillon/postgis
    environment:
      POSTGRES_MULTIPLE_DATABASES: user_data,gov_auth
      POSTGRES_USER: commhawk
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
      - ./database:/docker-entrypoint-initdb.d
    ports:
      - "5433:5432"
    restart: unless-stopped
  rethinkdb:
    container_name: rethinkdb
    image: rethinkdb:latest
    ports:
      - "3004:8080"
      - "29015:29015"
      - "28015:28015"
    volumes:
      - rethinkdb:/data
  user_data_service:
    build: './user_data_service'
    container_name: uds
    ports:
      - "3002:3000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://commhawk:password@postgres:5432/user_data
    # volumes:
    #   - ./user_data_service:/src
    #   - container_node_modules:/src/node_modules
  gov_authority_service:
    build: './gov_authority_service'
    container_name: gov_auth
    ports:
      - "3001:3000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://commhawk:password@postgres:5432/gov_auth
    # volumes:
    #   - ./gov_authority_service:/src
  socket_service:
    build: './socket_service'
    container_name: socket
    ports:
      - "3003:3000"
    depends_on:
      - rethinkdb
    # volumes:
    #   - ./socket_service:/src
  api_gateway:
    image: express-gateway:latest
    ports:
      - "8080:8080"
    depends_on:
      - user_data_service
      - socket_service
      - gov_authority_service
    volumes:
      - ./api_gateway/gateway.config.yml:/var/lib/eg/gateway.config.yml
volumes:
  postgres:
  rethinkdb:
  # container_node_modules:
Simply, when I run something like the following, it stops without producing any result. When I wrap it in a try/catch, no error is shown either. There is no way to debug it:
let result = await pool.query("SELECT NOW()")
console.log(result)
My node-postgres connection:
const { Pool } = require("pg");

const pool = new Pool({
  user: "commhawk",
  host: "postgres",
  database: "gov_auth",
  password: "password",
  port: "5432"
});

module.exports = pool;
I tried it with a connectionString as well (roughly the sketch below). Still no luck!
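A minimal sketch of that form, assuming the DATABASE_URL from the compose file above:
const { Pool } = require("pg");

// Pool built from the DATABASE_URL environment variable,
// e.g. postgres://commhawk:password@postgres:5432/gov_auth
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

module.exports = pool;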
I can provide more info if needed. Thanks in advance.
UPDATE (SOLUTION)
Okay, so I figured out the error. When I built it on my local machine, it used Node.js 13 and pg 8.0.2. But when I built it on my server (about a month after the local build), since I am using node:latest in my Dockerfile, it pulled the latest version, which is Node.js 14. Node.js 14 is incompatible with pg 8.0.2. That is why the local build runs well and the server build fails.
Solutions (a Dockerfile sketch of the pinning is shown below):
- Downgrade Node.js to 13 to use with pg 8.0.2
- Upgrade pg to 8.0.3 to use with Node.js 14
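Either way, the underlying fix is to pin versions explicitly so a rebuild cannot silently change them. A minimal sketch (exact tags are your choice): pin the base image in the Dockerfile and pin pg in package.json (e.g. "pg": "^8.0.3").
# Pin the Node major version instead of tracking node:latest,
# so a rebuild a month later uses the same runtime as the local build
FROM node:14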

Related

Connecting to a Neo4j driver (in a Docker container) from another Docker container

Normally I'd have an instance of Neo4j running in Docker; then, in a script, I access the driver like so:
self.driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("username", "password"))
I'm now putting this script itself into a Docker container, but I now get the error message:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
What URI (or other parameter) needs changing to access a Neo4j Docker instance from another Docker container?
Within my docker-compose.yml, I have:
version: '3'
services:
  neo4j:
    container_name: neo4j
    image: neo4j:3.5
    restart: always
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=neo4j/start
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
  appgui:
    container_name: appgui
    image: python:3.7.3-slim
    build:
      context: ./APPGUI/
    volumes:
      - ./APPGUI/:/usr/src/app/
    restart: always
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 80:80
    depends_on:
      - neo4j
I also can't access my web app (http://localhost:5000)
Your service can't connect to Neo4j on localhost because it is running inside a Docker container, so localhost points to the container itself instead of your local machine.
In this case, it is best to run both containers with docker-compose and set the depends_on option on the other container. Here is an example docker-compose.yml file from my project:
version: '3.7'
services:
  neo4j:
    image: neo4j:4.1.2
    restart: always
    hostname: neo4jngs
    container_name: neo4jngs
    ports:
      - 7474:7474
      - 7687:7687
  api:
    build:
      context: ./API
    hostname: api
    restart: always
    container_name: api
    ports:
      - 3000:3000
    depends_on:
      - neo4j
As you can see, the api container is a service that will connect to Neo4j. Now you can change the driver settings to:
self.driver = GraphDatabase.driver(uri="bolt://neo4j:7687", auth=("username", "password"))
And you are good to go.
I solved it and it was actually a dumb mistake, but one that could happen to others I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
However, Tomaž Bratanič provided me with some helpful tips that led to the resolution.

Node can't reach postgres server in docker compose

I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both the normal lts and lts-alpine base images for my NodeJS app. I'm using knex, which delegates connecting to the pg library... Does anybody have any idea why it won't even connect? I've tried running it both directly through docker-compose and through Tilt.
By adding:
networks:
  - integration-tests
only to postgres, you create a separate network that only postgres is attached to.
By default, docker-compose creates one network for all the containers in the same file, named <project-name>_default. That's why, when using docker-compose, all the containers in the same file can reach each other by service name.
By specifying a network for postgres, you "ask" docker-compose not to attach it to the default network.
You have 2 solutions:
- Remove the networks instruction to fall back to the default network
- Add the networks instruction to all the other containers in your project, or only to those that need it (see the sketch below)
Note: By default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
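A sketch of the second option, using the service and network names from the compose file above. Note that a user-defined network must also be declared at the top level, or docker-compose will reject the file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
    networks:
      - integration-tests   # attach the app to the same network
  postgres:
    # ... as before ...
    networks:
      - integration-tests
  redis:
    # ... as before ...
    networks:
      - integration-tests
networks:
  integration-tests:        # top-level declaration of the network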

Dockerized MongoDB Failing to connect on Heroku from NodeJs Container

I have a Node web service and I'm a bit new to Docker, so please bear with me. Below is my docker-compose file. I have run this image locally and it works fine, but when trying to deploy it to Heroku I get the following error:
{ MongoError: failed to connect to server [mongo:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo mongo:27017]
I've found where I'm supposed to set the environment flags in Heroku, and I checked that and it's fine. But it still can't find mongo:27017.
What could I be doing wrong?
version: "3"
services:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/charecters
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
    volumes:
      - ./mongo-seed:/mongo-seed
    command:
      - /mongo-seed/script.sh
  app:
    container_name: app
    restart: always
    build: .
    ports:
      - "3000:3000"
    links:
      - mongo
    environment:
      - DB_SERVERS=mongo:27017
links:
  - mongo
is deprecated; try
depends_on:
  - mongo
Also, check that the mongo container name (container_name: mongo) is properly referenced in your DB setup.
Change
command:
  - /mongo-seed/script.sh
and try
command:
  - 'node your-app-file'
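Applied to the compose file above, the app service would then look something like this (a sketch):
services:
  # ... mongo and mongo-seed as before ...
  app:
    container_name: app
    restart: always
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    environment:
      - DB_SERVERS=mongo:27017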

Docker-Compose: Unable to connect NodeJS with Mongo & Redis [Connection Refused]

I am working with 3 Services:
api-knotain [main api service]
api-mongo [mongo db for api service]
api-redis [redis for api service]
The Dockerfile for api-knotain looks as follows
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "npm", "start" ]
my docker-compose file as such:
version: '3.3'
services:
api-knotain:
container_name: api-knotain
restart: always
build: ../notify.apiV2/src
ports:
- "7777:7777"
links:
- api-mongo
- api-redis
environment:
- REDIS_URI=api-redis
- REDIS_PORT=32770
- MONGO_URI=api-mongo
- MONGO_PORT=27017
- RESEED=true
- NODE_TLS_REJECT_UNAUTHORIZED=0
api-mongo:
container_name: api-mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
api-redis:
container_name: api-redis
image: "redis:alpine"
ports:
- "32770:32770"
Running:
docker-compose build
docker-compose up
output:
api-knotain | connecting mongo ...: mongodb://api-mongo:27017/notify
api-knotain | Redis error: Error: Redis connection to api-redis:32770 failed - connect ECONNREFUSED 172.21.0.2:32770
api-knotain | mongo error:MongoNetworkError: failed to connect to server [api-mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017]
api-knotain | Example app listening on port 7777!
Neither mongo nor redis can be connected to.
I tried the following things:
- using localhost instead of the container name
- using different ports
- using expose vs. ports
always with the same result.
Note: I can connect without issue to both mongo and redis from a local CLI using 'localhost:port'.
What am I missing?
Probably the redis and mongo containers start later than your application, and therefore your app will not see them. To counter this, you must wait for those services to be ready.
Also, links is a legacy feature of Docker. You should use depends_on to control startup order, and user-defined networks if you want to isolate your database and redis containers from the external network.
version: '3.3'
services:
  api-knotain:
    depends_on:
      - api-mongo
      - api-redis
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
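Note that depends_on only controls start order; it does not wait for mongo or redis to actually accept connections, so the application may still need to retry on its own. A minimal retry sketch (connectToService is a hypothetical stand-in for whatever client connect call the app actually uses):
// Retry a connect function until it succeeds or attempts run out.
// `connectToService` is a hypothetical placeholder, not a real API.
async function connectWithRetry(connectToService, attempts = 10, delayMs = 2000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await connectToService(); // succeeds once the container is ready
    } catch (err) {
      console.log(`connect attempt ${i} failed, retrying in ${delayMs} ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("could not connect after retries");
}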
It looks like depends_on did not work properly in version 3.3 of docker-compose. After updating the version to 3.7, everything works perfectly without any other changes to the compose file.

Connection slow from node to mongodb

I'm experimenting with Docker and noticed a very slow connection from the nodejs (4.2.3) container to the mongodb (3.2) container.
My setup, very basic, is this (docker-compose):
version: '2'
services:
  web:
    build: ./app
    volumes:
      - "./app:/src/app"
    ports:
      - "80:3000"
    links:
      - "db_cache:redis"
      - "db:mongodb"
    command: nodemon -L app/bin/www
  db_cache:
    image: redis
  db:
    image: mongo
My OS is OSX 10.10 and the Docker version is 1.10.2.
The strange thing is that the connection time to the db is always 30 seconds.
Is there any automatic delay?
EDIT:
If I use the IP address of the mongodb container instead of its DNS name (mongodb), the delay disappears!
Any ideas?
This does not completely solve the problem, but it restores the normal behavior.
The cause seems to be the version 2 format of the docker-compose.yml.
If I remove the version 2 format, the 30-second delay when connecting to mongodb is completely eliminated:
web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "80:3000"
  links:
    - "db_cache:redis"
    - "db:mongodb"
  command: nodemon -L app/bin/www
db_cache:
  image: redis
db:
  image: mongo
I opened an issue here.
