I'm trying to use Docker Compose to connect a Node.js container to a Postgres container. I can run the Node server fine, and I can also connect to the Postgres container from my local machine since I've mapped the ports, but I'm unable to get the Node container to connect to the database.
Here's the compose YAML file:
version: "3"
services:
api_dev:
build: ./api
command: sh -c "sleep 5; npm run dev" # I added the sleep in to wait for the DB to load but it doesn't work either way with or without this sleep part
container_name: pc-api-dev
depends_on:
- dba
links:
- dba
ports:
- 8001:3001
volumes:
- ./api:/home/app/api
- /home/app/api/node_modules
working_dir: /home/app/api
restart: on-failure
dba:
container_name: dba
image: postgres
expose:
- 5432
ports:
- '5431:5432'
env_file:
- ./api/db.env
In my Node container, I'm waiting for the Node server to spin up and attempting to connect to the database in the other container like so:
const { Client } = require('pg')

const server = app.listen(app.get('port'), async () => {
  console.log('App running...');
  const client = new Client({
    user: 'db-user',
    host: 'dba', // host is set to the service name of the DB in the compose file
    database: 'db-name',
    password: 'db-pass',
    port: 5431,
  })
  try {
    await client.connect()
    console.log(client) // x - can't see this
    client.query('SELECT NOW()', (err, res) => {
      console.log(err, res) // x - can't see this
      client.end()
    })
    console.log('test') // x - can't see this
  } catch (e) {
    console.log(e) // x - also can't see this
  }
});
After reading up on this today in depth, I've seen that the DB host in the connection code above can't be localhost, as that refers to the currently running container itself, so it must be set to the service name of the container we're connecting to (dba in this case). I've also mapped the ports, and I can see the DB is ready and accepting connections well before my Node server starts.
However, not only can I not connect to the database from Node, I'm also unable to see any success or error console logs from the try/catch. It's as if the connection never resolves and never times out, but I'm not sure.
I've also seen that listen_addresses needs to be updated so other containers can connect to the Postgres container, but I'm struggling to find out how to do this, and how to test it when I can't debug the actual issue due to the lack of logs.
Any direction would be appreciated, thanks.
You are setting the container name, and you can reference that container by it. For example:
db:
  container_name: container_db
And for host:port:
DB_URL: container_db:5432
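Applied to the compose file in the question, this means the Node code should connect to host dba on the container-side port 5432; the 5431 in '5431:5432' is only the host-side mapping and is not visible between containers. A minimal sketch, assuming the same credentials as in the question:

const { Client } = require('pg')

// Inside the Compose network, use the service name and the
// container-side port, not the host-mapped one.
const client = new Client({
  user: 'db-user',
  host: 'dba',   // service name from docker-compose.yml
  database: 'db-name',
  password: 'db-pass',
  port: 5432,    // container port; 5431 is only the host mapping
})

async function main() {
  await client.connect()
  const res = await client.query('SELECT NOW()')
  console.log(res.rows[0])
  await client.end()
}

main().catch(console.error)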
I'm trying to link my Node server with my Mongo database, but it doesn't work.
This is my Docker-Compose.yml:
version: '3.9'
services:
  human:
    build:
      context: ./human-app
    container_name: human
    ports:
      - "3001:3001"
  mongodb:
    image: mongo
    container_name: mongodb
    env_file:
      - ./human-app/srcs/.env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=Pepe
      - MONGO_INITDB_ROOT_PASSWORD=PepePass
      - MONGO_INITDB_DATABASE=Pool
    ports:
      - "27017:27017"
    volumes:
      - ./volumes_mongo:/data/db
And when I run the command:
docker-compose up -d
everything works correctly but separately. I check the status with the command "docker ps -a" and the result is this:
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                      NAMES
9335c5cd4940   mongo                       "docker-entrypoint.s…"   4 seconds ago   Up 3 seconds   0.0.0.0:27017->27017/tcp   mongodb
2a28635d2caa   human-selection-app_human   "node server"            4 seconds ago   Up 3 seconds   0.0.0.0:3001->3001/tcp     human
If I enter my Mongo container and follow these steps, this is what happens:
1. docker exec -it mongodb bash
2. mongo
3. show dbs   # result: no dbs
So I can't link anything to my Node server. Do I perhaps need to create a network for the two services? Can anyone help me? Thanks.
Additionally, this is how I connect from my server to MongoDB:
async _connectDB() {
  //! We create the connection to the database
  const dbUser = process.env.DBUSER;
  const dbPassword = process.env.DBPASSWORD;
  const dbName = process.env.DBNAME;
  const dbUriLocal = `mongodb://${dbUser}:${dbPassword}@127.0.0.1:27017/${dbName}`;
  mongoose.connect(dbUriLocal, { useNewUrlParser: true, useUnifiedTopology: true })
    .then(() => console.log(clc.cyan('Debug: Database connected successfully')))
    .catch(e => console.log(e));
}
Since your application is also running inside a Docker container on the same network as the MongoDB service, you should use MongoDB's service name as the connection address instead of the localhost IP address.
Change your dbUriLocal variable to use the MongoDB service name (in your case it is "mongodb", after the service name in docker-compose.yml) as the connection address instead of "127.0.0.1":
const dbUriLocal = `mongodb://${dbUser}:${dbPassword}@mongodb:27017/${dbName}`;
See "Networking" in the Docker Compose file reference for more information.
I am currently working on an Angular app with a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend at https://serveripaddress:80, but when trying to log in, the API does not connect to PostgreSQL. I am getting the error message ERR_CONNECTION_REFUSED. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    ports:
      - 5434:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  backend: # name of the second service
    image: myid/nodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDatabase
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    command: bash -c "sleep 20 && node server.js"
  myapp-portal:
    image: myId/angular-app
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  postgres-data:
The code to connect to the database:
const { Client } = require('pg')

const client = new Client({
  database: process.env.POSTGRES_DB,
  user: 'postgres',
  password: process.env.POSTGRES_PASSWORD,
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT
})

client.connect()
  .then(() => {
    console.log("db connected");
  })
And the docker-compose log for the backend:
backend_1 | db connected
When I exec into the database container and connect with psql, I see that my database was created (I used pg_dump manually) with all the tables and data. My guess is that Node.js is connecting to the default Postgres database created at installation time. I had the same issue on my local machine, but I resolved it by creating a new server group in pgAdmin4 and creating a new DB on port 5434. I would prefer not to do this on the server, as it defeats the purpose of Docker. Another thought is that perhaps Node.js is attempting to connect to the database before it is even up; that is why I added the 'sleep 20' line, which worked on my local machine. Any thoughts on how I can fix this? TIA!
If you want to wait for the availability of a host and TCP port, you can use this script: https://github.com/vishnubob/wait-for-it
In your Dockerfile, copy this file into the container and make it executable:
COPY wait-for-it.sh .
RUN chmod +x wait-for-it.sh
Then in your Docker Compose file, run this script in the service that should wait:
entrypoint: bash -c "./wait-for-it.sh --timeout=0 service_name:service_port && node server.js"
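Alternatively, if you'd rather not manage a script, the same wait can be done in the Node code itself. A rough sketch with pg, reusing the environment variables from the compose file above (the retry count and delay are arbitrary choices):

const { Client } = require('pg')

// Keep retrying until Postgres accepts the connection.
// A fresh Client is created per attempt, since a pg Client
// cannot be reused after a failed connect().
async function connectWithRetry(retriesLeft = 10) {
  const client = new Client({
    host: process.env.POSTGRES_HOST,
    port: process.env.POSTGRES_PORT,
    database: process.env.POSTGRES_DB,
    user: 'postgres',
    password: process.env.POSTGRES_PASSWORD,
  })
  try {
    await client.connect()
    console.log('db connected')
    return client
  } catch (err) {
    if (retriesLeft === 0) throw err
    console.log(`db not ready (${err.message}), retrying in 2s...`)
    await new Promise(resolve => setTimeout(resolve, 2000))
    return connectWithRetry(retriesLeft - 1)
  }
}

connectWithRetry().catch(console.error)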
This is part of my Node app:
app.configure('development', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-development');
  app.use(express.errorHandler({ dumpExceptions: true }));
  app.set('view options', {
    pretty: true
  });
});

app.configure('test', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-test');
  app.set('view options', {
    pretty: true
  });
});

app.configure('production', function() {
  app.set('db-uri', 'mongodb://localhost/nodepad-production');
});
I edited it to:
app.set('db-uri', 'mongodb://mongoDB:27017/nodepad-development');
Still the same error.
I have already created a container for my Node app, which runs on localhost, but I am unable to connect it to another Mongo container, and because of that I cannot make POST requests to the app.
This is my docker compose file
version: '3'
services:
  apptest:
    container_name: apptest
    restart: always
    image: ekamzf/nodeapp:1.1
    ports:
      - '8080:8080'
    depends_on:
      - mongoDB
  mongoDB:
    container_name: mongoDB
    image: mongo
    volumes:
      - ./data:/usr/src/app
    ports:
      - '27017:27017'
And this is the error I get when I try to register account details in my Node app:
Error: Timeout POST /users
    at null._onTimeout (/usr/src/app/node_modules/connect-timeout/index.js:12:22)
    at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)
What am I missing?
Basically, how should I connect this type of code with MongoDB anyway?
What's the role of 'db-uri'?
Docker Compose aliases service names to host names.
Because you named the container running the mongo image mongoDB, mongodb (case insensitive) will be the name of the host and the name by which you should refer to it from your Node.js container and app.
Replace localhost in the URI with mongodb.
The MongoDB database defaults to port 27017. Unless you've changed the port, you should specify this value.
Add the port to the URI so that you have mongodb:27017.
Optional but good practice: refactor the app to use environment variables rather than hard-coded values (see the sketch after the links below).
This has at least 2 benefits:
a. Your code becomes more flexible;
b. Your Compose file, by then specifying these values, will be clearer.
See the DockerHub documentation for the image here
See MongoDB"s documentation on connection strings here
A Google search returns many examples using Compose, MongoDB and Node.JS
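As a rough sketch of that refactor (the variable names here are illustrative, not prescribed by the image):

// Read connection details from the environment, with local defaults.
const DB_HOST = process.env.DB_HOST || 'localhost';
const DB_PORT = process.env.DB_PORT || '27017';
const DB_NAME = process.env.DB_NAME || 'nodepad-development';

const dbUri = `mongodb://${DB_HOST}:${DB_PORT}/${DB_NAME}`;
app.set('db-uri', dbUri);

Your Compose file would then set DB_HOST=mongodb (and the port and database name) under the apptest service's environment: key.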
Update: repro
I'm fairly confident your issue is related to the timing of the Compose containers: your app tries to connect to the DB before the DB container is ready. This is a common issue with Compose and is not solved by Compose's depends_on. Instead you must find a MongoDB (or other database) solution to this.
In my repro, index.js (see below) introduces an artificial delay before it tries to connect to the database. 5 seconds is sufficient time for the DB container to be ready, and thus this works:
docker-compose build --no-cache
docker-compose up
Then:
docker-compose logs app
Attaching to 60359441_app_1
app_1 | URL: mongodb://mongodb:27017/example
app_1 | Connected
yields Connected, which is good.
Alternatively, to prove the timing issue, you may run the containers separately (you could also remove the sleep function, ensuring the database is ready before the app starts):
HOST=localhost # No DNS naming
PORT=37017     # An arbitrary port to prove the point

# In one session
docker run \
  --interactive --tty --rm \
  --publish=${PORT}:27017 \
  mongo

# In another session
docker run \
  --interactive --tty --rm \
  --net=host \
  --env=HOST=${HOST} --env=PORT=${PORT} --env=DATA=example --env=WAIT=0 \
  app
URL: mongodb://localhost:37017/example
Connected
docker-compose.yaml:
version: "3"
services:
app:
image: app
build:
context: ./app
dockerfile: Dockerfile
environment:
- HOST=mongodb
- PORT=27017
- DATA=example
- WAIT=5000
volumes:
- ${PWD}/app:/app
mongodb:
image: mongo
restart: always
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_SERVER: mongodb
# ME_CONFIG_MONGODB_ADMINUSERNAME: root
# ME_CONFIG_MONGODB_ADMINPASSWORD: example
NB: Because I followed your naming of mongodb, the mongo-express container must be given this host name through ME_CONFIG_MONGODB_SERVER.
NB: The other environment variables shown with the mongo and mongo-express images are the defaults and thus optional.
Dockerfile:
FROM node:13.8.0-slim
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["node","index.js"]
index.js:
// Obtain config from the environment
const HOST = process.env.HOST;
const PORT = process.env.PORT;
const DATA = process.env.DATA;
const WAIT = parseInt(process.env.WAIT, 10);

// Create MongoDB client
var MongoClient = require("mongodb").MongoClient;
let url = `mongodb://${HOST}:${PORT}/${DATA}`;
console.log(`URL: ${url}`);

// Artificially delay the code
setTimeout(function() {
  MongoClient.connect(url, function(err, db) {
    if (!err) {
      console.log("Connected");
    }
  });
}, WAIT);
NB: index.js uses the environment (HOST, PORT, DATA) for its config, which is good practice.
Including mongo-express provides the ability to browse the Mongo server and readily observe what's going on.
What I want to do is have two Docker containers running (MongoDB and Postgres); both of these are working. Now I want to create two services (Node.js instances) in separate containers that will query and exercise their databases. Each service should only access a single database.
Here is my docker-compose.yml:
services:
  # mongodb setup
  web:
    image: node
    build: ./mongodb_service
    ports:
      - 4000:4000
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    expose:
      - '27017'
  web_postgres:
    image: node
    build: ./postgres_service
    ports:
      - "3000:3000"
    depends_on:
      - postgresdb
  # postgres db definition; the env vars should be put in a properties file and passed in somehow
  postgresdb:
    image: postgres
    # the health check confirms that the db is ready to accept connections, not just that the container is up
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    environment:
      - PGUSER=user
      - PGPASSWORD=1234
      - PGPORT=5432
      - PGDATABASE=customerdirectory
    # Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fit your needs.
    ports:
      - "54320:5432"
Currently what is breaking is my web_postgres service. When I run docker-compose up, I notice that the console.log() output from my web service ends up in web_postgres: at the top of the log output, my web_postgres_1 service prints output from the web service.
Is my docker-compose.yml set up correctly to have these containers run independently?
If some piece is missing, please let me know and I will upload it.
Thanks a lot for all your help.
This is the server.js file from my web_postgres service:
const express = require("express");
const { Pool, Client } = require('pg')

const app = express()

// pools will use environment variables
// for connection information
const pool = new Pool({
  user: 'user',
  host: 'localhost',
  database: 'customerdirectory',
  password: '1234',
  port: 5432,
})

pool.query('SELECT NOW()', (err, res) => {
  console.log(err, res)
  pool.end()
})

app.get("/", (req, res) => {
  res.send("Hello from Node.js app \n");
});

app.listen('3000', () => {
  console.log("I hate this damn thingy sometimes")
})
This is the web service's server.js code whose output gets printed:
const express = require("express");
var MongoClient = require('mongodb').MongoClient;

const app = express();

MongoClient.connect("mongodb://mongodb:27017", function(err, db) {
  if (err) throw err;
  db.close()
})

app.get("/", (req, res) => {
  res.send("Hello from Node.js app \n");
});

app.listen('4000', () => {
  console.log("I hate this damn thingy sometimes")
})
You need to remove the image: lines from your docker-compose.yml file. When you specify that both containers have the same image:, Docker Compose takes you at face value and uses the same image for both, even if there are separate build: instructions.
The documentation for image: notes:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
So what happens here is that Compose builds the first service's image, and tags it as node:latest (potentially conflicting with a standard Docker Hub image with this same name); and when the second container starts, it sees that image already exists, so it reuses it.
You can demonstrate this with a simple setup. Create a new empty directory with subdirectories a and b. Create a/Dockerfile:
FROM busybox
WORKDIR /app
RUN echo Hello from a > hello.txt
EXPOSE 80
CMD ["httpd", "-f"]
Similarly create b/Dockerfile with a different string.
Now create this docker-compose.yml file:
version: '3'
services:
  a:
    # image: image
    build: a
    ports: ['8000:80']
  b:
    # image: image
    build: b
    ports: ['8001:80']
You can start this as-is, and curl http://localhost:8000/hello.txt (and similarly for port 8001) to see the two different responses. But if you uncomment the two image: lines, they'll both return the same string. There are also some clues in the Docker Compose output: one path builds both images while the other builds only one, and there's a warning about the named image not existing when you specify the image names.
I'm trying to build a stack with two containers as a first step: one with the app, one with an MS SQL server. Without a stack, running the SQL server in a container and the app locally works fine, but I can't figure out the proper way to make the containerised app connect to the DB.
My stack file is as follows:
version: "3.4"
services:
db:
image: orizon/training-library-sql
ports:
- 1443:1443
networks:
- backend
app:
image: orizon/training-library
ports:
- 4000:4000
networks:
- backend
depends_on:
- db
links:
- db:db
deploy:
replicas: 1
networks:
backend:
The db image is based on microsoft/mssql-server-linux:2017-latest, and it works fine when the app is not in a container and uses 'localhost' as the hostname.
In the Node app, the mssql config is the following:
const config = {
  user: '<username>',
  password: '<password>',
  server: 'db',
  database: 'library',
  options: {
    encrypt: false // Use this if you're on Windows Azure
  }
};
And this is the message I receive from the Node app container:
2018-09-07T10:11:57.404Z app ConnectionError: Failed to connect to db:1433 - getaddrinfo ENOTFOUND db
EDIT
I simplified my stack file and the connectivity now kind of works.
links seems to be deprecated, replaced by depends_on:
version: "3.4"
services:
db:
image: orizon/training-library-sql
ports:
- 1443:1443
app:
image: orizon/training-library
ports:
- 4000:4000
depends_on:
- db
deploy:
replicas: 1
Now the error message has changed, which makes me think it's more of a delay issue: the database container seems to need a bit more time to get ready before the app container starts up.
I guess I'm now looking for a means to delay connecting to the database, either through Docker or in code.
I finally made it work properly.
See the OP for a much simpler and effective stack file.
In addition, I added a retry strategy in my app code to give the MS SQL server time to start properly in its container:
function connectWithRetry() {
  return sql.connect(config, (err) => {
    if (err) {
      debug(`Connection to DB failed, retry in 5s (${chalk.gray(err.message)})`);
      sql.close();
      setTimeout(connectWithRetry, 5000);
    } else {
      debug('Connection to DB is now ready...');
    }
  });
}

connectWithRetry();
The Docker documentation mentions a parameter that should answer this (sequential_deployment: true), but docker stack doesn't allow its usage. The Docker documentation itself advises either managing this issue in code or adding a delay script.