Dockerfile for my node app:
FROM node:latest
COPY . .
RUN npm install
EXPOSE 5000
CMD ["npm", "start"]
docker-compose file:
version: '3'
services:
  pern-todo-backend:
    image: pern-todo-backend
    ports:
      - 5000:5000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:*****#db:5432/pern
      - PORT=5000
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=****
      - POSTGRES_DB=pern
When I try to hit the endpoint from Postman, I get:
{
  "errno": -111,
  "code": "ECONNREFUSED",
  "syscall": "connect",
  "address": "172.26.0.3",
  "port": 5432
}
I also tried updating the host in my pg Pool config to the name of my container:
const pool = new Pool({
  user: 'postgres',
  password: 'subh1994',
  host: 'localhost',
  port: 5432,
  database: 'pern'
})
I'm new to Docker, please help. Thanks!
Can you check the container name of the db service with docker ps?
It can be different from the service name (db); you can try replacing it in the DATABASE_URL connection string.
One weird thing, though: db is getting resolved to 172.26.0.3. If db is the correct container name, you can check the logs with docker logs db to get details on what might be wrong.
On a side note, if you don't want the database exposed to the host, you can skip the ports mapping in the db service.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
https://docs.docker.com/compose/networking/
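So, as a minimal sketch (assuming the service name db from the compose file above resolves on the default network), the Pool config becomes:

const { Pool } = require('pg')

// 'db' is the Compose service name; other containers on the default
// Compose network can resolve it by that name
const pool = new Pool({
  user: 'postgres',
  password: process.env.POSTGRES_PASSWORD, // illustrative; match your compose env
  host: 'db',
  port: 5432,
  database: 'pern'
})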
Related
I am currently working on an Angular app using a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app could be hosted on an external port. I am able to access the Angular frontend at https://serveripaddress:80, but when trying to log in, the API is not connecting to PostgreSQL. I am getting an error message: ERR_CONNECTION_REFUSED. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    ports:
      - 5434:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  backend: # name of the second service
    image: myid/nodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    command: bash -c "sleep 20 && node server.js"
  myapp-portal:
    image: myId/angular-app
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  postgres-data:
The code to connect to the database:
const { Client } = require('pg')

const client = new Client({
  database: process.env.POSTGRES_DB,
  user: 'postgres',
  password: process.env.POSTGRES_PASSWORD,
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT
})

client.connect()
  .then(() => {
    console.log("db connected");
  })
and the docker-compose log for backend:
backend_1 | db connected
When I exec into the database container and connect to psql, I see that my database is created (I used pg_dump manually) with all the tables and data. My guess is that node.js is connecting to the default Postgres database created at installation time. I had the same issue on my local machine, but I resolved it by creating a new server group in pgAdmin4 and creating a new db on port 5434. I prefer not to do this on the server, as it defeats the purpose of Docker. Another thought is that node.js is perhaps attempting to connect to the database before it is up; that is the reason I added the 'sleep 20' line, which worked on my local machine. Any thoughts on how I can fix this? TIA!
If you want to wait for the availability of a host and TCP port, you can use this script: https://github.com/vishnubob/wait-for-it
In your Dockerfile you can copy this file into the container and make it executable:
RUN chmod +x wait-for-it.sh
Then in your docker-compose file, run this script in the service that needs to wait:
entrypoint: bash -c "./wait-for-it.sh --timeout=0 service_name:service_port && node server.js"
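For example, applied to the backend service from the compose file above (a sketch; it assumes wait-for-it.sh was copied into the image's working directory), this replaces the fixed sleep 20:

backend:
  image: myid/nodeapi
  depends_on:
    - db
  entrypoint: bash -c "./wait-for-it.sh --timeout=0 db:5432 && node server.js"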
Code first; it will be easier to explain what I'm after.
docker-compose.yml
version: '3.4'
services:
  db:
    user: '${UID}:${GID}'
    image: postgres
    container_name: postgres
    ports:
      - '5432:5432'
    restart: always
    environment:
      POSTGRES_HOST: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: secret
      POSTGRES_DATABASE: foo
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - ./db/data:/var/lib/postgresql/data/db
      - db-init.sh:/docker-entrypoint-initdb.d/:ro
  cache:
    image: redis:alpine
    container_name: redis
    sysctls:
      net.core.somaxconn: '511'
    ports:
      - '6379:6379'
    command: ['--requirepass "secret"']
  api:
    image: node:alpine
    container_name: api
    working_dir: /var/www/app
    command: sh -c "npm start"
    ports:
      - '5000:5000'
    volumes:
      - node_modules:/var/www/app/node_modules
      - .:/var/www/app
    env_file: .env
    depends_on:
      - db
      - cache
volumes:
  node_modules:
Postgres connection settings for the node.js app:
export const {
  POSTGRES_USER = 'root',
  POSTGRES_PASSWORD = 'secret',
  POSTGRES_HOST = 'db',
  POSTGRES_PORT = 5432,
  POSTGRES_DATABASE = 'foo',
} = process.env
issue:
When using the service or container name (db or postgres) for the POSTGRES_HOST setting of the node app:
I can successfully connect to, and query, the database.
I'm not able to run commands from the host that affect the container. For example, seeding the db won't work:
npx knex --esm seed:run
This makes sense, as DNS resolution for db / postgres is handled by Docker, and those names only have meaning on the network connecting the containers. Commands run from the host targeting that container fail because the host doesn't know how to resolve the names.
On the other hand, when using localhost for the POSTGRES_HOST setting of the node app:
Queries to postgres from api will fail.
Commands run from the host, like npx knex --esm seed:run, will succeed.
Again, this makes perfect sense. Addressing the container as localhost from the host works thanks to the port forwarding in docker-compose.yml. But in the context of a container, localhost refers to that very container: for api, localhost means itself, and it's trying to find a database on localhost:5432, i.e. api:5432.
I want to have a working inter-container network and also run commands from the host addressing said containers. I'm aware of two approaches to achieve that:
Use container / service name as POSTGRES_HOST, and run commands against the containers with:
docker exec -it <container_name> <command>
Assign static IPs to the containers and use those instead of service / container names.
Do I have any other options here?
Since you are exposing the database ports on the host machine, you can do the following:
Use the service or container name (db or postgres) for POSTGRES_HOST; this way it will work for the Docker containers.
When you run the seed command from the host, override POSTGRES_HOST. This can be done like this:
$ export POSTGRES_HOST=127.0.0.1
$ npx knex --esm seed:run
or in one step
$ POSTGRES_HOST=127.0.0.1 npx knex --esm seed:run
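A tiny sketch of why this works with the destructured defaults shown earlier (hypothetical seed-time snippet):

// with the destructuring defaults from the config above:
const { POSTGRES_HOST = 'db' } = process.env

// inside the api container, POSTGRES_HOST is unset -> falls back to 'db'
// on the host, POSTGRES_HOST=127.0.0.1 npx knex --esm seed:run -> '127.0.0.1'
console.log(`connecting to ${POSTGRES_HOST}:5432`)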
I am trying to connect a postgres db service to a nodejs web service using docker compose.
My docker-compose.yml file
version: "3"
services:
web:
build: ./
ports:
- "40000:3000"
depends_on:
- postgres
postgres:
image: kartoza/postgis:9.6-2.4
restart: always
volumes:
- postgresdata:/data/db
environment:
- POSTGRES_PASS=password
- POSTGRES_DBNAME=sticki
- POSTGRES_USER=renga
- ALLOW_IP_RANGE=0.0.0.0/0
ports:
- "1000:5432"
volumes:
postgresdata:
So when I do docker-compose up in my root directory, both services run: I can access the web service at localhost:40000 and the postgres service with Postico at localhost:1000.
But in the Node web service, I have written the following code to access postgres using Sequelize:
const sequelize = new Sequelize('sticki', 'renga', 'password', {
  host: 'postgres',
  dialect: 'postgres',
});
But I get the following error
SequelizeConnectionRefusedError: connect ECONNREFUSED 172.18.0.2:1000
Why is the postgres connection made to 172.18.0.2 instead of localhost (0.0.0.0)? What am I doing wrong?
For your web container, postgres is a DNS name defined in Compose as a service. It is resolved via Docker's internal DNS on the Compose network; that's why it resolves to 172.18.0.2. If you go into the web container and ping postgres, you will get the same IP.
As a fix, configure your node service to connect to host postgres on port 5432, since that is the container port. Port 1000 is the host machine port; if you want to use port 1000, configure the node service to connect to MACHINE_IP:1000.
PS: localhost within a container means the container itself and nothing else.
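A minimal sketch of the corrected Sequelize call (service name as host, container-side port):

const Sequelize = require('sequelize');

// 'postgres' is the Compose service name; 5432 is the port inside
// the container (the "1000:5432" mapping only applies on the host)
const sequelize = new Sequelize('sticki', 'renga', 'password', {
  host: 'postgres',
  port: 5432,
  dialect: 'postgres',
});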
The container name is taken from container_name, which is fixed. In your case you do not have that, so the name is created from the folder where docker-compose.yml lives + _ + the service name + _1.
With this DNS name you can reach one service from another on the default network that docker-compose creates.
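For example (illustrative names, assuming the compose file lives in a folder called myproject):

$ docker ps --format '{{.Names}}'
myproject_web_1
myproject_postgres_1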
Thanks
My node.js express app cannot connect to MongoDB in Docker. I'm not that familiar with Docker.
node.js connection:
import mongodb from 'mongodb';
...
mongodb.MongoClient.connect('mongodb://localhost:27017', ... );
Dockerfile:
FROM node:argon
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: “2”
services:
  web:
    build: .
    volumes:
      — ./:/app
    ports:
      — “3000:3000”
    links:
      — mongo
    mongo:
      image: mongo
      ports:
        — “27017:27017”
Build command: docker build -t NAME .
Run command: docker run -ti -p 3000:3000 NAME
Connection error:
[MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]]
name: 'MongoError',
message: 'failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]'
Try:
mongodb.MongoClient.connect('mongodb://mongo:27017', ... );
Change your docker-compose.yml:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
links:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
And use some docker compose commands:
docker-compose down
docker-compose build
docker-compose up -d mongo
docker-compose up web
Try this.
When using linked docker containers you should use the name of the container; in this case your connection to mongodb should be mongodb.MongoClient.connect('mongodb://mongo:27017', ... ); instead of mongodb.MongoClient.connect('mongodb://localhost:27017', ... );. The reason for changing it to mongo is that you used the links attribute pointing to mongo in your docker-compose.yml, which results in a hostname entry for mongo in the /etc/hosts of the web container. Reference: linking-containers.
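A quick way to verify the name resolution from inside the web container (a sketch; the exact IP will differ):

$ docker-compose exec web getent hosts mongo
172.17.0.2      mongo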
The docker-compose.yml also seems to have an indentation problem: the mongo attribute should be at the same level as web.
version: '2'
services:
  web:
    build: .
    volumes: ['./:/app']
    ports: ['3000:3000']
    links: [mongo]
  mongo:
    image: mongo
    ports: ['27017:27017']
I tried your configuration on my own Docker: what I've done is update docker-compose.yml, then run docker-compose build, then docker-compose up. Logs of my local run.
I am not sure if you still have this question, but the datasources.json should be:
"host": "mongo"
rather than "localhost".
In my logs I see:
mongo | NETWORK [listener] connection accepted from 172.22.0.3:47880 #1 (1 connection now open)
As you can see, docker compose puts mongo on a separate internal network. The 172.22.0.x address is an internal IP the Docker daemon assigns to the composed container for routing, so localhost is now not in the game.
At least, it works for me.
datasources.json
"mongoDS": {
"host": "mongo",
"port": 27017,
...
In my case it works like this: just link to the database container from the command line; db is my running database container.
sudo docker run -it --link db:db1 --publish 4000-4006:4000-4006 --name backend backend:latest
I made two containers, one for RethinkDB and one for a nodejs app.
I want to connect my nodejs app to this RethinkDB, but every time I try I get an error:
Error:{"message":"Failed to connect to localhost:58015\nFull error:\n{\"code\":\"ECONNREFUSED\"
But I can connect the same nodejs app, running without Docker, to the RethinkDB through the open port (58015).
My Docker compose config looks like this:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015

# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  depends_on:
    - rethink
To connect my app to the db, I set the host and port inside a JS config file:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'localhost',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
I tried with the RethinkDB port (28015) and with my open port (58015), without success.
I tried to link these two containers with links and network_mode, without success either.
None of the solutions I tried worked.
I think my Rethink container is not ready when the nodejs app tries to connect, but I really don't understand the problem if that's not it.
The nodejs app is running with pm2.
How can I make this app connect to my db?
For your config, you should use:
# Rethink DB
rethink:
  build: docker/rethinkdb
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015

# NodeJS
nodejs:
  build: docker/nodejs
  container_name: nodejs
  ports:
    - 53000:3000
    - 55000:5000
  links:
    - rethink
  depends_on:
    - rethink
and in the JS code:
database: {
  servers: [
    {
      host: process.env.DB_PORT_28015_TCP_ADDR || 'rethink',
      port: process.env.DB_PORT_28015_TCP_PORT || 28015
    }
  ],
  name: 'atlas'
},
As far as I know, one Docker container will not see the other unless specifically linked and using the same net:
docker run \
  --name ${NEWAPP} \
  --restart=always \
  --env MYAPPPAR=${PROJ} \
  -v /var/log/docker/node/logs:/usr/src/app/log \
  --link myapp_rethink_1:myapp_rethink_1 \
  --net myapp_default \
  -p ${PORT}:9000 \
  -d ${NEWAPP}
So you need both --net and --link:
--link format is sourcecontainername:containeraliasname
--net so that containers can find each other via internal DNS by container name. You can check networks with 'docker network ls'.
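A minimal sketch of that wiring from scratch (container and image names are illustrative):

# user-defined network: the embedded DNS resolves containers by name
docker network create myapp_default
docker run -d --name myapp_rethink_1 --net myapp_default rethinkdb
docker run -d --name backend --net myapp_default \
  --link myapp_rethink_1:myapp_rethink_1 backend:latest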
When using newer versions of docker-compose, your services are configured to run on the same network.
The top-level service name in your docker-compose.yml becomes the hostname to specify when connecting to RethinkDB from your app:
# docker-compose.yml
version: '3.2'
services:
  web:
    build: .
    links:
      - db
    ...
  db:
    image: rethinkdb
    ...
Using the example above, you can connect to RethinkDB using the host named 'db' within the app configuration file:
module.exports = {
  rethinkdb: {
    host: 'db',
    port: 28015
  }
};