docker-compose: nodejs container not communicating with postgres container - node.js

I did find a few people with a slightly different setup but with the same issue, so I hope this doesn't feel like a duplicate question.
My setup is pretty simple and straightforward. I have a container for my node app and a container for my Postgres database. When I run docker-compose up, the logs show both containers up and running. The problem is my node app is not connecting to the database.
I can connect to the database using Postbird and it works as it should.
If I create a docker container only for the database and run the node app directly on my machine, everything works fine. So it's not an issue with the DB or the app but with the setup.
Here's some useful information:
Running a container just for the DB (connects and works perfectly):
> vigna-backend@1.0.0 dev /Users/lucasbittar/Dropbox/Code/vigna/backend
> nodemon src/server.js
[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node -r sucrase/register src/server.js`
Initializing database...
Connecting to DB -> vignadb | PORT: 5432
Executing (default): SELECT 1+1 AS result
Connection has been established successfully -> vignadb
Running a container for each using docker-compose:
Creating network "backend_default" with the default driver
Creating backend_db_1 ... done
Creating backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-07-24 13:23:32.875 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-07-24 13:23:32.881 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-07-24 13:23:32.955 UTC [27] LOG: database system was shut down at 2020-07-23 13:21:09 UTC
db_1 | 2020-07-24 13:23:32.999 UTC [1] LOG: database system is ready to accept connections
app_1 |
app_1 | > vigna-backend@1.0.0 dev /usr/app
app_1 | > npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js
app_1 |
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 | [nodemon] 2.0.2
app_1 | [nodemon] to restart at any time, enter `rs`
app_1 | [nodemon] watching dir(s): *.*
app_1 | [nodemon] watching extensions: js,mjs,json
app_1 | [nodemon] starting `node -r sucrase/register src/server.js`
app_1 | Initializing database...
app_1 | Connecting to DB -> vignadb | PORT: 5432
My database class:
class Database {
  constructor() {
    console.log('Initializing database...');
    this.init();
  }

  async init() {
    let retries = 5;
    while (retries) {
      console.log(`Connecting to DB -> ${databaseConfig.database} | PORT: ${databaseConfig.port}`);
      const sequelize = new Sequelize(databaseConfig);
      try {
        await sequelize.authenticate();
        console.log(`Connection has been established successfully -> ${databaseConfig.database}`);
        models
          .map(model => model.init(sequelize))
          .map(model => model.associate && model.associate(sequelize.models));
        break;
      } catch (err) {
        console.log(`Error: ${err.message}`);
        retries -= 1;
        console.log(`Retries left: ${retries}`);
        // Wait 5 seconds before trying again
        await new Promise(res => setTimeout(res, 5000));
      }
    }
  }
}
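For context, a class like this is typically exported as a single shared instance; a minimal sketch of that wiring (the file paths and export style below are illustrative, not necessarily my exact files):
// src/database/index.js (illustrative) -- export one shared instance so the
// connect/retry loop starts the first time this module is imported
export default new Database();

// src/server.js (illustrative) -- importing the module is enough to trigger init()
import './database';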
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3333
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
db:
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: vignadb
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
app:
build: .
depends_on:
- db
ports:
- "3333:3333"
volumes:
- .:/usr/app
command: npm run dev
package.json (scripts only):
"scripts": {
  "dev-old": "nodemon src/server.js",
  "dev": "npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js",
  "build": "sucrase ./src -d ./dist --transforms imports",
  "start": "node dist/server.js"
},
.env:
# Database
DB_HOST=db
DB_USER=postgres
DB_PASS=postgres
DB_NAME=vignadb
DB_PORT=5432
database config:
require('dotenv/config');
module.exports = {
  dialect: 'postgres',
  host: process.env.DB_HOST,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
  define: {
    timestamp: true,
    underscored: true,
    underscoredAll: true,
  },
};
I know I'm messing up something I just don't know where.
Let me know if I can provide more information.
Thanks!

You should put your two containers on the same network (https://docs.docker.com/compose/networking/)
and use the db service name as the host in your Node.js connection string.
Something like: postgres://db:5432/vignadb
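For example, a minimal Sequelize sketch that connects using the compose service name as the host (the credentials below just mirror the .env from the question, so treat them as placeholders):
const Sequelize = require('sequelize');

// 'db' is the compose service name; inside the compose network it resolves to the
// postgres container, so don't use localhost or 127.0.0.1 here.
const sequelize = new Sequelize('vignadb', 'postgres', 'postgres', {
  host: 'db',
  port: 5432,
  dialect: 'postgres',
});

sequelize.authenticate()
  .then(() => console.log('Connected through the compose network'))
  .catch((err) => console.error('Still unreachable:', err.message));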

Related

Unable to connect nodejs app to mongodb using docker-compose

Simple Node app and mongo containers created using docker-compose below... What am I missing?
mongodb://user:password@mongo:27017/
version: '3.8'
services:
  mongo:
    image: mongo
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=user
      - MONGO_INITDB_ROOT_PASSWORD=password
  app:
    image: app
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - mongo
I've read several posts on the same issue and the official mongo docker page, and I seem to be doing everything correctly. I keep getting the following message:
app_1 | mongodb://user:password@mongo:27017/ {
app_1 | autoIndex: false,
app_1 | poolSize: 10,
app_1 | bufferMaxEntries: 0,
app_1 | useNewUrlParser: true,
app_1 | useUnifiedTopology: true
app_1 | }
app_1 | MongoDB connection with retry
app_1 | MongoDB connection unsuccessful, retry after 5 seconds. 2
I prepared an interesting test for you:
version: '3.8'
services:
  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=user
      - MONGO_INITDB_ROOT_PASSWORD=password
    healthcheck:
      test: "echo 'db.runCommand(\"ping\").ok'"
      interval: 5s
      timeout: 5s
      retries: 3
  app:
    image: mongo
    command: "mongosh mongodb://user:password@mongo:27017/admin --eval \"printjson(db.test.insertOne({'a': 1}))\""
    ports:
      - "3000:3000"
    depends_on:
      - mongo
It will print
app_1 | {
app_1 | acknowledged: true,
app_1 | insertedId: ObjectId("63696c4e99703eb4ab9fba62")
app_1 | }
but if you change
command: "mongosh mongodb://user:password@mongo:27017/admin --eval \"printjson(db.test.insertOne({'a': 1}))\""
to
command: "mongosh mongodb://user:password@mongo:27017/local --eval \"printjson(db.test.insertOne({'a': 1}))\""
you will see an error, even if you add MONGO_INITDB_DATABASE with the value local to the mongo service.
You can test this by running, in a second console:
docker-compose run app bash
and then trying
mongosh mongodb://user:password@mongo:27017/admin --eval "printjson(db.test.insertOne({'a': 1}))"
which succeeds, and
mongosh mongodb://user:password@mongo:27017/local --eval "printjson(db.test.insertOne({'a': 1}))"
which fails. You can see logs from mongo like
mongo_1 | {"t":{"$date":"2022-11-07T20:40:35.686+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn6","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"user","authenticationDatabase":"local","remote":"172.20.0.3:50382","extraInfo":{},"error":"UserNotFound: Could not find user \"user\" for db \"local\""}}
mongo_1 | {"t":{"$date":"2022-11-07T20:40:35.687+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn6","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"user","authenticationDatabase":"local","remote":"172.20.0.3:50382","extraInfo":{},"error":"UserNotFound: Could not find user \"user\" for db \"local\""}}
mongo_1 | {"t":{"$date":"2022-11-07T20:40:35.688+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn5","msg":"Connection ended","attr":{"remote":"172.20.0.3:50374","uuid":"b77fff1f-b832-4900-9c2d-1e7fd1e79424","connectionId":5,"connectionCount":1}}
mongo_1 | {"t":{"$date":"2022-11-07T20:40:35.699+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn6","msg":"Connection ended","attr":{"remote":"172.20.0.3:50382","uuid":"3995bcbf-706d-4bed-92a2-04736305b7c2","connectionId":6,"connectionCount":0}}
This problem is described in the topic "User not found on MongoDB Docker image with authentication":
you can authenticate with that user and password against the admin db, not against mydb.
You can read more about creating a database with its own user and password here:
How to create a DB for MongoDB container on start up?
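In the Node driver that boils down to authenticating against admin, either by using /admin as the URI path or by adding authSource=admin. A minimal sketch (reusing the user/password from the compose file above, with mydb as a placeholder application database):
const { MongoClient } = require('mongodb');

// Authenticate the root user created by MONGO_INITDB_ROOT_* against the admin db,
// while still selecting the application database in the path.
const uri = 'mongodb://user:password@mongo:27017/mydb?authSource=admin';

MongoClient.connect(uri, { useUnifiedTopology: true })
  .then((client) => {
    console.log('Authenticated via the admin database');
    return client.close();
  })
  .catch((err) => console.error('Auth failed:', err.message));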

Error: Cannot find module '/app/wait-for-it.sh"'

I am trying to dockerize my backend server.
My stack is Node.js (NestJS) with Redis and Postgres.
Here is my Dockerfile:
FROM node:15
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
COPY wait-for-it.sh ./
COPY . .
RUN npm install -g npm@7.22.0
RUN npm install
RUN npm run build
RUN chmod +x ./wait-for-it.sh .
EXPOSE 3333
CMD [ "sh", "-c", "npm run start:prod"]
and here is my docker-compose file:
version: '3.2'
services:
  redis-service:
    image: "redis:alpine"
    container_name: redis-container
    ports:
      - 127.0.0.1:6379:6379
    expose:
      - 6379
  postgres:
    image: postgres:14.1-alpine
    container_name: postgres-container
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=1234
      - DB_NAME = db
    ports:
      - 127.0.0.1:5432:5432
    expose:
      - 5432
    volumes:
      - db:/var/lib/postgresql/data
  oms-be:
    build: .
    ports:
      - 3333:3333
    links:
      - postgres
      - redis-service
    depends_on:
      - postgres
      - redis-service
    environment:
      - DB_HOST=postgres
      - POSTGRES_PASSWORD = 1234
      - POSTGRES_USER=root
      - AUTH_REDIS_HOST=redis-service
      - DB_NAME = db
    command: ["./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
volumes:
  db:
    driver: local
However, when I run docker-compose up
I get this error:
Attaching to oms-be-oms-be-1, postgres-container, redis-container
redis-container | 1:C 05 Jun 2022 00:35:16.730 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-container | 1:C 05 Jun 2022 00:35:16.730 # Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
redis-container | 1:C 05 Jun 2022 00:35:16.730 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis-container | 1:M 05 Jun 2022 00:35:16.731 * monotonic clock: POSIX clock_gettime
redis-container | 1:M 05 Jun 2022 00:35:16.731 * Running mode=standalone, port=6379.
redis-container | 1:M 05 Jun 2022 00:35:16.731 # Server initialized
redis-container | 1:M 05 Jun 2022 00:35:16.731 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-container | 1:M 05 Jun 2022 00:35:16.732 * The AOF directory appendonlydir doesn't exist
redis-container | 1:M 05 Jun 2022 00:35:16.732 * Ready to accept connections
postgres-container |
postgres-container | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-container |
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: starting PostgreSQL 14.1 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres-container | 2022-06-05 00:35:16.824 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres-container | 2022-06-05 00:35:16.827 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-container | 2022-06-05 00:35:16.833 UTC [21] LOG: database system was shut down at 2022-06-05 00:34:36 UTC
postgres-container | 2022-06-05 00:35:16.836 UTC [1] LOG: database system is ready to accept connections
oms-be-oms-be-1 | internal/modules/cjs/loader.js:905
oms-be-oms-be-1 | throw err;
oms-be-oms-be-1 | ^
oms-be-oms-be-1 |
oms-be-oms-be-1 | Error: Cannot find module '/app/wait-for-it.sh"'
oms-be-oms-be-1 | at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)
oms-be-oms-be-1 | at Function.Module._load (internal/modules/cjs/loader.js:746:27)
oms-be-oms-be-1 | at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)
oms-be-oms-be-1 | at internal/main/run_main_module.js:17:47 {
oms-be-oms-be-1 | code: 'MODULE_NOT_FOUND',
oms-be-oms-be-1 | requireStack: []
oms-be-oms-be-1 | }
oms-be-oms-be-1 exited with code 1
I tried building it without wait-for-it.sh and it complained that the server cannot connect to the Postgres DB and Redis, so I added the wait-for-it.sh file to make the app wait until Redis and the Postgres DB are up, but then I got the error above.
Can anyone tell me what I am doing wrong?
I've simplified your Dockerfile and docker-compose.yaml in order to test things out on my system. I have this package.json:
{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "echo \"Example build command\"",
    "start:prod": "sleep inf"
  },
  "author": "",
  "license": "ISC"
}
And this Dockerfile:
FROM node:15
WORKDIR /usr/src/app
COPY package*.json ./
COPY wait-for-it.sh ./
RUN chmod +x ./wait-for-it.sh .
RUN npm install
RUN npm run build
EXPOSE 3333
CMD [ "sh", "-c", "npm run start:prod"]
And this docker-compose.yaml:
version: '3.2'
services:
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_PASSWORD: secret
  oms-be:
    build: .
    ports:
      - 3333:3333
    command: [./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
Note that the command: on the final line has a missing quote. If I try to bring this up using docker-compose up, I see:
oms-be_1 | node:internal/modules/cjs/loader:927
oms-be_1 | throw err;
oms-be_1 | ^
oms-be_1 |
oms-be_1 | Error: Cannot find module '/usr/src/app/wait-for-it.sh"'
oms-be_1 | at Function.Module._resolveFilename (node:internal/modules/cjs/loader:924:15)
oms-be_1 | at Function.Module._load (node:internal/modules/cjs/loader:769:27)
oms-be_1 | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)
oms-be_1 | at node:internal/main/run_main_module:17:47 {
oms-be_1 | code: 'MODULE_NOT_FOUND',
oms-be_1 | requireStack: []
oms-be_1 | }
If I correct the syntax so that we have:
version: '3.2'
services:
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_PASSWORD: secret
  oms-be:
    build: .
    ports:
      - 3333:3333
    command: ["./wait-for-it.sh", "postgres:5432", "--", "sh", "-c", "npm run start:prod"]
Then it runs successfully:
oms-be_1 | wait-for-it.sh: waiting 15 seconds for postgres:5432
oms-be_1 | wait-for-it.sh: postgres:5432 is available after 0 seconds
oms-be_1 |
oms-be_1 | > example@1.0.0 start:prod
oms-be_1 | > sleep inf
oms-be_1 |
The difference in behavior is due to the ENTRYPOINT script in the underlying node:15 image, which includes this logic:
if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
set -- node "$#"
fi
That says, essentially:
IF the first parameter starts with -
OR there is no command matching $1
THEN try starting the command with node
With the missing ", you end up with an argument that doesn't match any valid command, which is why you get an error in which node is trying to run the wait-for-it.sh script.

Connecting Redis with Docker with Bull with Throng with Node

I have a Heroku app that has a single process. I'm trying to change it so that it has several worker processes in a dedicated queue to handle incoming webhooks. To do so, I am using a Node.JS backend with the Bull and Throng packages, which use Redis. All of this is deployed on Docker.
I've found various tutorials that cover some of this combination, but not all of it so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't that detailed of an error message.
Most of the information I found is here
My worker process file is worker.ts:
import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';

// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;

// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;

const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});

configureWebhooks(webhooks);

async function startWorkers() {
  console.log('starting workers...');
  const queue = new Bull<RedisData>('work', bullOptions);
  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }
      return job.finished();
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}

throng({ workers: workers, start: startWorkers });
In my main server, I have the file Redis.ts:
import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';

export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};

export type RedisData = EmitterWebhookEvent & { signature: string };

let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;

export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}

export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}
(Note: I invoke connectToRedis() before the worker process begins)
My Dockerfile is
# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts
ENV PORT=80
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all
# Bundle app source
COPY . .
# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379
CMD npm-run-all --parallel start start-notification-server start-github-server
and my docker-compose.yml is
version: '3.7'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
Finally, the relevant NPM scripts for my project are
"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",
"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",
The docker container logs are:
> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...

Can't authenticate with mongoDB from docker-compose service

What I'm trying to do
I'm trying to set up a docker-compose definition, where I have a mongoDB container, and a nodeJS container that connects to it.
version: "3.9"
services:
events-db:
image: mongo
volumes:
- db-volume:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: $SANDBOX_DB_USER
MONGO_INITDB_ROOT_PASSWORD: $SANDBOX_DB_PASS
MONGO_INITDB_DATABASE: sandboxdb
app:
image: node:15.12.0
user: node
working_dir: /home/node/app
volumes:
- ./:/home/node/app:ro
environment:
MDB_CONNECTION: mongodb://$SANDBOX_DB_USER:$SANDBOX_DB_PASS#events-db:27017/sandboxdb
command: node myapp
depends_on:
- events-db
volumes:
db-volume:
Along with a .env file that declares the credentials (planning to use proper env variables when I deploy this to a production environment):
SANDBOX_DB_USER=myuser
SANDBOX_DB_PASS=myp4ss
Finally, my nodejs script, myapp.js is simply trying to connect, grab a reference to a collection, and insert a document:
require('dotenv').config()
const { MongoClient } = require('mongodb')

async function main () {
  console.log('Connecting')
  const client = new MongoClient(process.env.MDB_CONNECTION, {
    connectTimeoutMS: 10000,
    useUnifiedTopology: true,
  })
  await client.connect()
  const db = client.db()
  const events = db.collection('events')
  console.log('Inserting an event')
  await events.insertOne({
    type: 'foo',
    timestamp: new Date(),
  })
  console.log('Done.')
  process.exit(0)
}

if (require.main === module) {
  main()
}
Result
When I run docker-compose config I see the following output, so I would expect it to work:
$ docker-compose config
services:
  app:
    command: node myapp
    depends_on:
      events-db:
        condition: service_started
    environment:
      MDB_CONNECTION: mongodb://myuser:myp4ss@events-db:27017/sandboxdb
    image: node:15.12.0
    user: node
    volumes:
      - C:\workspace\dcsandbox:/home/node/app:ro
    working_dir: /home/node/app
  events-db:
    environment:
      MONGO_INITDB_DATABASE: sandboxdb
      MONGO_INITDB_ROOT_PASSWORD: myp4ss
      MONGO_INITDB_ROOT_USERNAME: myuser
    image: mongo
    volumes:
      - db-volume:/data/db:rw
version: '3.9'
volumes:
  db-volume: {}
However, when I run docker-compose up I see that my node container is unable to connect to the mongoDB to insert an event:
events-db_1 | {"t":{"$date":"2021-04-07T13:57:36.793+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
app_1 | Connecting
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.811+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.27.0.3:34164","connectionId":1,"connectionCount":1}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.816+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"172.27.0.3:34164","client":"conn1","doc":{"driver":{"name":"nodejs","version":"3.6.6"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"4.19.128-microsoft-standard"},"platform":"'Node.js v15.12.0, LE (unified)"}}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.820+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.27.0.3:34166","connectionId":2,"connectionCount":2}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.822+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"172.27.0.3:34166","client":"conn2","doc":{"driver":{"name":"nodejs","version":"3.6.6"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"4.19.128-microsoft-standard"},"platform":"'Node.js v15.12.0, LE (unified)"}}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.822+00:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn2","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":"myuser#sandboxdb"}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.823+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn2","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","principalName":"myuser","authenticationDatabase":"sandboxdb","client":"172.27.0.3:34166","result":"UserNotFound: Could not find user \"myuser\" for db \"sandboxdb\""}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.824+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn2","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","principalName":"myuser","authenticationDatabase":"sandboxdb","client":"172.27.0.3:34166","result":"UserNotFound: Could not find user \"myuser\" for db \"sandboxdb\""}}
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.826+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"172.27.0.3:34164","connectionId":1,"connectionCount":1}}
app_1 | /home/node/app/node_modules/mongodb/lib/cmap/connection.js:268
app_1 | callback(new MongoError(document));
app_1 | ^
app_1 |
app_1 | MongoError: Authentication failed.
app_1 | at MessageStream.messageHandler (/home/node/app/node_modules/mongodb/lib/cmap/connection.js:268:20)
app_1 | at MessageStream.emit (node:events:369:20)
app_1 | at processIncomingData (/home/node/app/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
app_1 | at MessageStream._write (/home/node/app/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
app_1 | at writeOrBuffer (node:internal/streams/writable:395:12)
app_1 | at MessageStream.Writable.write (node:internal/streams/writable:340:10)
app_1 | at Socket.ondata (node:internal/streams/readable:750:22)
app_1 | at Socket.emit (node:events:369:20)
app_1 | at addChunk (node:internal/streams/readable:313:12)
app_1 | at readableAddChunk (node:internal/streams/readable:288:9) {
app_1 | ok: 0,
app_1 | code: 18,
app_1 | codeName: 'AuthenticationFailed'
app_1 | }
events-db_1 | {"t":{"$date":"2021-04-07T13:57:38.832+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"172.27.0.3:34166","connectionId":2,"connectionCount":0}}
dcsandbox_app_1 exited with code 1
I've put the full output at https://pastebin.com/uNyJ6tiy
and the example code at this repo: https://github.com/akatechis/example-docker-compose-mongo-node-auth
After some more digging, I managed to figure it out. The issue is that the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables simply set the root user's credentials, and the MONGO_INITDB_DATABASE simply sets the initial database for scripts in /docker-entrypoint-initdb.d.
By default, the root user is added to the admin database, so by removing the /sandboxdb part of the connection string, I was able to have my node app authenticate against the admin DB as the root user.
While this doesn't quite accomplish what I wanted initially (to create a separate, non-root user for my database, and use that to authenticate), I think this puts me on the right path to using an init script to set up the user accounts I want to have.
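For the record, the usual way to get that separate non-root user is a small script mounted into /docker-entrypoint-initdb.d (it only runs on first startup with an empty data volume). A sketch, reusing the myuser/myp4ss/sandboxdb values from above as placeholders:
// init-sandboxdb-user.js -- mount as /docker-entrypoint-initdb.d/init-sandboxdb-user.js
// Inside init scripts, `db` points at MONGO_INITDB_DATABASE (sandboxdb here),
// so the user is created in that database rather than in admin.
db.createUser({
  user: 'myuser',
  pwd: 'myp4ss',
  roles: [{ role: 'readWrite', db: 'sandboxdb' }],
});
With a user created that way, the original mongodb://myuser:myp4ss@events-db:27017/sandboxdb connection string should authenticate against sandboxdb directly.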

Docker-compose error react-scripts: not found npm ERR! code ELIFECYCLE

This is the error I got when I typed the command docker-compose up. This is my Node.js application and I'm using MongoDB. My goal is to containerize this application and publish it on Docker Hub.
1. Creating mongo ... done
2. Creating app ... done
3. Attaching to mongo, app
4. mongo | 2020-07-13T05:02:33.356+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
5. mongo | 2020-07-13T05:02:33.360+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
6. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=03ce29ac0ecc
7. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] db version v4.2.8
8. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] git version: 43d25964249164d76d5e04dd6cf38f6111e21f5f
9. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
10. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] allocator: tcmalloc
11. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] modules: none
12. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] build environment:
13. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] distmod: ubuntu1804
14. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] distarch: x86_64
15. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] target_arch: x86_64
16. mongo | 2020-07-13T05:02:33.360+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
17. mongo | 2020-07-13T05:02:33.361+0000 I STORAGE [initandlisten]
18. mongo | 2020-07-13T05:02:33.361+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
19. mongo | 2020-07-13T05:02:33.361+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
20. mongo | 2020-07-13T05:02:33.361+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=471M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
21. mongo | 2020-07-13T05:02:33.863+0000 I STORAGE [initandlisten] WiredTiger message [1594616553:863974][1:0x7fa8ae84db00], txn-recover: Set global recovery timestamp: (0, 0)
22. mongo | 2020-07-13T05:02:33.885+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
23. mongo | 2020-07-13T05:02:33.902+0000 I STORAGE [initandlisten] Timestamp monitor starting
24. mongo | 2020-07-13T05:02:33.910+0000 I CONTROL [initandlisten]
25. mongo | 2020-07-13T05:02:33.910+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
26. mongo | 2020-07-13T05:02:33.910+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
27. mongo | 2020-07-13T05:02:33.910+0000 I CONTROL [initandlisten]
28. mongo | 2020-07-13T05:02:33.911+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 1c3e3ef6-c303-4517-9613-82f840f58488 and options: { uuid: UUID("1c3e3ef6-c303-4517-9613-82f840f58488") }
29. mongo | 2020-07-13T05:02:33.935+0000 I INDEX [initandlisten] index build: done building index _id_ on ns admin.system.version
30. mongo | 2020-07-13T05:02:33.935+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
31. mongo | 2020-07-13T05:02:33.935+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 4.2
32. mongo | 2020-07-13T05:02:33.935+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
33. mongo | 2020-07-13T05:02:33.935+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
34. mongo | 2020-07-13T05:02:33.935+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
35. mongo | 2020-07-13T05:02:33.935+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 03ec4702-b65a-4f88-8080-09ab0b26a7a4 and options: { capped: true, size: 10485760 }
36. mongo | 2020-07-13T05:02:33.954+0000 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
37. mongo | 2020-07-13T05:02:33.955+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
38. mongo | 2020-07-13T05:02:33.955+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
39. mongo | 2020-07-13T05:02:33.957+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
40. mongo | 2020-07-13T05:02:33.957+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
41. mongo | 2020-07-13T05:02:33.957+0000 I NETWORK [listener] Listening on 0.0.0.0
42. mongo | 2020-07-13T05:02:33.957+0000 I NETWORK [listener] waiting for connections on port 27017
43. mongo | 2020-07-13T05:02:33.964+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
44. mongo | 2020-07-13T05:02:33.964+0000 I STORAGE [LogicalSessionCacheRefresh] createCollection: config.system.sessions with provided UUID: 4c715ea5-9f5f-41b3-9101-fd44ce5455a4 and options: { uuid: UUID("4c715ea5-9f5f-41b3-9101-fd44ce5455a4") }
45. mongo | 2020-07-13T05:02:33.980+0000 I INDEX [LogicalSessionCacheRefresh] index build: done building index _id_ on ns config.system.sessions
46. mongo | 2020-07-13T05:02:33.997+0000 I INDEX [LogicalSessionCacheRefresh] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 } using method: Hybrid
47. mongo | 2020-07-13T05:02:33.997+0000 I INDEX [LogicalSessionCacheRefresh] build may temporarily use up to 200 megabytes of RAM
48. mongo | 2020-07-13T05:02:33.997+0000 I INDEX [LogicalSessionCacheRefresh] index build: collection scan done. scanned 0 total records in 0 seconds
49. mongo | 2020-07-13T05:02:33.997+0000 I INDEX [LogicalSessionCacheRefresh] index build: inserted 0 keys from external sorter into index in 0 seconds
50. mongo | 2020-07-13T05:02:34.000+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
51. mongo | 2020-07-13T05:02:34.005+0000 I INDEX [LogicalSessionCacheRefresh] index build: done building index lsidTTLIndex on ns config.system.sessions
52. app |
53. app | > main-application@1.0.0 start /usr/src/app
54. app | > concurrently "npm run server" "npm run client"
55. app |
56. app | [1]
57. app | [1] > main-application@1.0.0 client /usr/src/app
58. app | [1] > npm start --prefix view
59. app | [1]
60. app | [0]
61. app | [0] > main-application@1.0.0 server /usr/src/app
62. app | [0] > nodemon mainserver.js
63. app | [0]
64. app | [0] [nodemon] 2.0.4
65. app | [0] [nodemon] to restart at any time, enter `rs`
66. app | [0] [nodemon] watching path(s): *.*
67. app | [0] [nodemon] watching extensions: js,mjs,json
68. app | [0] [nodemon] starting `node mainserver.js`
69. app | [1]
70. app | [1] > main@0.1.0 start /usr/src/app/view
71. app | [1] > react-scripts start
72. app | [1]
73. app | [1] sh: 1: react-scripts: not found
74. app | [1] npm ERR! code ELIFECYCLE
75. app | [1] npm ERR! syscall spawn
76. app | [1] npm ERR! file sh
77. app | [1] npm ERR! errno ENOENT
78. app | [1] npm ERR! main@0.1.0 start: `react-scripts start`
79. app | [1] npm ERR! spawn ENOENT
80. app | [1] npm ERR!
81. app | [1] npm ERR! Failed at the main@0.1.0 start script.
82. app | [1] npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
83. app | [1] npm WARN Local package.json exists, but node_modules missing, did you mean to install?
84. app | [1]
85. app | [1] npm ERR! A complete log of this run can be found in:
86. app | [1] npm ERR! /root/.npm/_logs/2020-07-13T05_02_35_310Z-debug.log
87. app | [1] npm ERR! code ELIFECYCLE
88. app | [1] npm ERR! errno 1
89. app | [1] npm ERR! main-application@1.0.0 client: `npm start --prefix view`
90. app | [1] npm ERR! Exit status 1
91. app | [1] npm ERR!
92. app | npm ERR! Failed at the main-application@1.0.0 client script.
93. app | [1] npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
94. app | [1]
95. app | [1] npm ERR! A complete log of this run can be found in:
96. app | [1] npm ERR! /root/.npm/_logs/2020-07-13T05_02_35_348Z-debug.log
97. app | [1] npm run client exited with code 1
98. app | [0] (node:98) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
99. app | [0] Server is running on port: 5000
100. mongo | 2020-07-13T05:02:36.010+0000 I NETWORK [listener] connection accepted from 172.19.0.3:45254 #1 (1 connection now open)
101. mongo | 2020-07-13T05:02:36.015+0000 I NETWORK [conn1] received client metadata from 172.19.0.3:45254 conn1: { driver: { name: "nodejs", version: "3.5.9" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.19.76-linuxkit" }, platform: "'Node.js v10.21.0, LE (legacy)" }
102. app | [0] MongoDB Connected
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: .
    ports:
      - '80:3000'
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
.dockerignore
node_modules
npm-debug.log
mainserver.js
var express = require('express')
var cors = require('cors')
var bodyParser = require('body-parser')
var app = express()
const mongoose = require('mongoose')

var port = process.env.PORT || 5000

app.use(bodyParser.json())
app.use(cors())
app.use(
  bodyParser.urlencoded({
    extended: false
  })
)

//copy and paste below into mongodb
const mongoURI = 'mongodb://mongo:27017/MainData'

mongoose
  .connect(
    mongoURI,
    { useNewUrlParser: true }
  )
  .then(() => console.log('MongoDB Connected'))
  .catch(err => console.log(err))

var Users = require('./controller/Users')
var Users2 = require('./controller/Users2')

app.use('/users', Users)
app.use('/users', Users2)

app.listen(port, function() {
  console.log('Server is running on port: ' + port)
})
package.json
{
  "name": "main-application",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "server": "nodemon mainserver.js",
    "client": "npm start --prefix view",
    "start": "concurrently \"npm run server\" \"npm run client\""
  },
  "keywords": [
    "nodejs",
    "jwt",
    "passport",
    "express"
  ],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "alert": "^4.1.1",
    "bcrypt-nodejs": "0.0.3",
    "bcryptjs": "^2.4.3",
    "body-parser": "1.19.0",
    "compare": "^2.0.0",
    "concurrently": "^5.1.0",
    "cors": "^2.8.4",
    "dotenv": "^8.2.0",
    "express": "^4.16.3",
    "express-session": "^1.17.1",
    "express-validator": "^6.6.0",
    "generate-password": "^1.5.1",
    "jsonwebtoken": "^8.5.1",
    "latest-version": "^5.1.0",
    "mongodb": "^3.1.6",
    "mongoose": "^5.2.15",
    "nodemailer": "^6.4.8",
    "nodemon": "^2.0.3",
    "truffle": "^5.1.10"
  }
}
You have to add create-react-app globally in the docker container, i.e. npm install -g create-react-app:
FROM node:10
RUN npm install -g create-react-app
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I found the solution to this. In the root folder, run npm i react-scripts.
