mongoose failing to connect in docker-compose - node.js

I've got a simple docker-compose.yml that looks like this:
services:
  mongodb-service:
    image: mongo:latest
    command: "mongod --bind_ip_all"
  mongodb-seeder:
    image: mongo:latest
    depends_on:
      - mongodb-service
    command: "mongoimport --uri mongodb://mongodb-service:27017/auth --drop --file /tmp/users.json"
  myapp:
    image: myapp:latest
    environment:
      DATABASEURL: mongodb://mongodb-service:27017/auth
    depends_on:
      - mongodb-service
myapp is a nodejs app that uses mongoose to connect to a mongodb database like so:
const mongoose = require('mongoose');

const databaseURL = process.env.DATABASEURL;

async function connectToMongo() {
  try {
    return await mongoose.connect(databaseURL, {
      useUnifiedTopology: true,
      useNewUrlParser: true,
      useCreateIndex: true,
    });
  } catch (error) {
    // logger is the app's own logger instance
    logger.error('MongoDB connect failure.', error);
  }
}
mongodb-seeder works perfectly fine. I can kick it off, it connects to mongodb-service and runs the import without any problems. However, myapp starts up, tries to connect to mongodb-service for 30 seconds, then dies:
ERROR 2020-09-16T12:13:21.427 [MAIN] MongoDB connection error! [Arguments] {
  '0': MongooseError [MongooseTimeoutError]: Server selection timed out after 30000 ms
      ...stacktrace snipped...
      at Function.Module._load (internal/modules/cjs/loader.js:724:14) {
    message: 'Server selection timed out after 30000 ms',
    name: 'MongooseTimeoutError',
    reason: Error: connect ECONNREFUSED 127.0.0.1:27017
        at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1128:14) {
      name: 'MongoNetworkError',
      [Symbol(mongoErrorContextSymbol)]: {}
    },
    [Symbol(mongoErrorContextSymbol)]: {}
  }
}
Note: The IP address in this log says it tried to connect to 127.0.0.1, not mongodb://mongodb-service:27017/auth. No matter what value I put in for DATABASEURL, it keeps printing 127.0.0.1. Any ideas why I can't get mongoose to recognize the hostname I'm giving it? And why would mongoose not be able to connect to a service that's clearly network-visible, since another container (mongodb-seeder) can see it without any problems?
Edit: I'm using mongoose 5.8.7

I was able to solve this on my own; it turned out to be a pretty stupid miss on my part.
The Dockerfile for my app defined an ENTRYPOINT that executed a script which exported DATABASEURL before starting the app, clobbering the value coming from docker-compose. Removing that export allowed my environment: settings to flow down to the nodejs app.
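For anyone else who hits this, the pattern looked roughly like the sketch below. The script name and the exported value are illustrative assumptions rather than the actual project files, but an export like this would also explain why the log kept showing 127.0.0.1 no matter what the compose file said:

#!/bin/sh
# entrypoint.sh (illustrative reconstruction, not the real script)
# This export clobbers whatever DATABASEURL docker-compose injected into the
# container's environment, before the Node.js app ever reads it:
export DATABASEURL=mongodb://127.0.0.1:27017/auth
exec node server.js

Dropping the export (or making it conditional, e.g. export DATABASEURL="${DATABASEURL:-mongodb://127.0.0.1:27017/auth}") lets the compose-provided value win.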

Related

Nodejs Sequelize instance timeout when connecting to mysql container

I have a Node.js server running on an Ubuntu-20.04 Virtual Machine.
I'm using docker compose to setup a mysql container with a production database.
My docker-compose.yml file is like so,
prod_db:
  image: mysql:latest
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_PRODUCTION_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_PRODUCTION_DATABASE}
  ports:
    - ${MYSQL_PRODUCTION_PORT}:3306
Running docker compose up on it appears to work fine,
lockers-prod_db-1 | 2022-08-08T19:18:03.005576Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
And docker container list yields the following,
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33289149af9f mysql:latest "docker-entrypoint.s…" 37 seconds ago Up 35 seconds 33060/tcp, 0.0.0.0:3308->3306/tcp, :::3308->3306/tcp lockers-prod_db-1
But yet when attempting to connect via the Sequelize with the following code,
const config = {
  id: 'production',
  port: process.env.NODE_PORT,
  sqlConfig: {
    port: parseInt(process.env.MYSQL_PRODUCTION_PORT),
    host: process.env.MYSQL_PRODUCTION_HOST,
    user: process.env.MYSQL_PRODUCTION_USER,
    password: process.env.MYSQL_PRODUCTION_PASSWORD,
    database: process.env.MYSQL_PRODUCTION_DATABASE,
    locationsCsvPath: process.env.LOCATIONS_CSV_ABSOLUTE_PATH,
    lockersCsvPath: process.env.LOCKERS_CSV_ABSOLUTE_PATH,
    contractsCsvPath: process.env.CONTRACTS_CSV_ABSOLUTE_PATH
  }
};
const sequelize = new Sequelize({
  dialect: 'mysql',
  host: config.sqlConfig.host,
  port: config.sqlConfig.port,
  password: config.sqlConfig.password,
  username: config.sqlConfig.user,
  database: config.sqlConfig.database,
  models: [Contract, Location, Locker],
  logging: false
});
I get the following error,
/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102
      throw new SequelizeErrors.ConnectionError(err);
      ^

ConnectionError [SequelizeConnectionError]: connect ETIMEDOUT
    at ConnectionManager.connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:102:17)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async ConnectionManager._connect (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:220:24)
    at async /home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:174:32
    at async ConnectionManager.getConnection (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:197:7)
    at async /home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:301:26
    at async MySQLQueryInterface.tableExists (/home/freemaa7/lockers/node_modules/sequelize/lib/dialects/abstract/query-interface.js:102:17)
    at async Function.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/model.js:939:21)
    at async Sequelize.sync (/home/freemaa7/lockers/node_modules/sequelize/lib/sequelize.js:373:9) {
  parent: Error: connect ETIMEDOUT
      at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
      at listOnTimeout (node:internal/timers:559:17)
      at processTimers (node:internal/timers:502:7) {
    errorno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    syscall: 'connect',
    fatal: true
  },
  original: Error: connect ETIMEDOUT
      at Connection._handleTimeoutError (/home/freemaa7/lockers/node_modules/mysql2/lib/connection.js:189:17)
      at listOnTimeout (node:internal/timers:559:17)
      at processTimers (node:internal/timers:502:7) {
    errorno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    syscall: 'connect',
    fatal: true
  }
}
I'm running this on a virtual machine; it works perfectly locally, though. The main difference is that an apache2 instance is running on the VM. I'm starting to think it may be redirecting the TCP requests meant for the container to another address, since it's set up as a reverse proxy. Could this be a possibility?
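One way to test that theory is to rule Sequelize out entirely and try a raw TCP connection from the same environment the Node server runs in. This is a minimal sketch assuming the published port 3308 from the container listing above; substitute your MYSQL_PRODUCTION_HOST/MYSQL_PRODUCTION_PORT values:

// Raw TCP reachability check, independent of Sequelize (illustrative).
const net = require('net');

const host = process.env.MYSQL_PRODUCTION_HOST || '127.0.0.1';
const port = parseInt(process.env.MYSQL_PRODUCTION_PORT || '3308', 10);

const socket = net.connect({ host, port }, () => {
  console.log(`TCP connect to ${host}:${port} succeeded`);
  socket.end();
});
socket.setTimeout(5000, () => {
  console.error('TCP connect timed out'); // same symptom as the ETIMEDOUT above
  socket.destroy();
});
socket.on('error', (err) => console.error('TCP connect failed:', err.code));

If this also times out, the problem is network-level (bind address, firewall, proxy); if it succeeds, the Sequelize configuration (host/port env vars) is the more likely culprit.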

Error: getaddrinfo EAI_AGAIN database at GetAddrInfoReqWrap.onlookup [as oncomplete]

I'm creating an api using docker, postgresql, and nodejs (typescript). I've had this error ever since creating an admin user and nothing seems to be able to fix it:
Error in docker terminal:
Error: getaddrinfo EAI_AGAIN database
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26)
[ERROR] 12:33:32 Error: getaddrinfo EAI_AGAIN database
Error in Insomnia:
{
  "status": "error",
  "message": "Internal server error - Cannot inject the dependency \"categoriesRepository\" at position #0 of \"ListCategoriesUseCase\" constructor. Reason:\n No repository for \"Category\" was found. Looks like this entity is not registered in current \"default\" connection?"
}
I'm following an online course and this same code seems to work for everyone else since there are no reports of this error in the course's forum.
From what I've gathered it seems to be some type of problem when connecting to my database, which is a docker container. Here is my docker-compose.yml file:
version: "3.9"
services:
database_ignite:
image: postgres
container_name: database_ignite
restart: always
ports:
- 5432:5432
environment:
- POSTGRES_USER=something
- POSTGRES_PASSWORD=something
- POSTGRES_DB=rentx
volumes:
- pgdata:/data/postgres
app:
build: .
container_name: rentx
restart: always
ports:
- 3333:3333
- 9229:9229
volumes:
- .:/usr/app
links:
- database_ignite
depends_on:
- database_ignite
volumes:
pgdata:
driver: local
My server.ts file:
import "reflect-metadata";
import express, { Request, Response, NextFunction } from "express";
import "express-async-errors";
import swaggerUi from "swagger-ui-express";
import { AppError } from "#shared/errors/AppError";
import createConnection from "#shared/infra/typeorm";
import swaggerFile from "../../../swagger.json";
import { router } from "./routes";
import "#shared/container";
createConnection();
const app = express();
app.use(express.json());
app.use("/api-docs", swaggerUi.serve, swaggerUi.setup(swaggerFile));
app.use(router);
app.use(
(err: Error, request: Request, response: Response, next: NextFunction) => {
if (err instanceof AppError) {
return response.status(err.statusCode).json({
message: err.message,
});
}
return response.status(500).json({
status: "error",
message: `Internal server error - ${err.message}`,
});
}
);
app.listen(3333, () => console.log("Server running"));
And this is my index.ts file, inside src>modules>shared>infra>http>server.ts:
import { Connection, createConnection, getConnectionOptions } from "typeorm";

export default async (host = "database"): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();

  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
I've tried restarting my containers, remaking them, and accessing my postgres container to check the tables. I've switched every "database" to "localhost", but it's the same every time: the containers run, but the error persists. I've checked the course's repo and my code matches. I've flushed my DNS and that also did nothing.
Here's the admin.ts file that "started it all":
import { hash } from "bcryptjs";
import { v4 as uuidv4 } from "uuid";

import createConnection from "../index";

async function create() {
  const connection = await createConnection("localhost");

  const id = uuidv4();
  const password = await hash("admin", 6);

  await connection.query(`
    INSERT INTO Users (id, name, email, password, "isAdmin", driver_license, created_at)
    VALUES (
      '${id}',
      'admin',
      'admin@rentx.com.br',
      '${password}',
      true,
      '0123456789',
      NOW()
    )
  `);

  await connection.close(); // note: close is a method; the original snippet was missing the parentheses
}

create().then(() => console.log("Administrative user created"));
I would love to know what is causing this error.
It looks like you have a service named database_ignite in your docker-compose.yml file. By default, Docker Compose creates a DNS entry for each service using the service name. Try changing the host in your index.ts file from database to database_ignite:
import { Connection, createConnection, getConnectionOptions } from "typeorm";

export default async (host = "database_ignite"): Promise<Connection> => {
  // Changed "database" to "database_ignite"
  const defaultOptions = await getConnectionOptions();

  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
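One wrinkle worth noting: the admin.ts script above passes "localhost" explicitly, which works when run from the host because the compose file publishes 5432:5432, while code running inside the app container must use the service name. A small sketch of covering both cases with an environment variable; DB_HOST is an assumed name here, not something from the original project:

import { Connection, createConnection, getConnectionOptions } from "typeorm";

// DB_HOST is hypothetical: set DB_HOST=database_ignite in the app service's
// environment in docker-compose.yml; on the host it falls back to localhost.
export default async (
  host = process.env.DB_HOST ?? "localhost"
): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();
  return createConnection(Object.assign(defaultOptions, { host }));
};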

Can't connect to mongo in Docker

I'm practicing Docker with a dummy app but I can't connect to the database. If anyone can give me a clue about what the problem could be, I'd appreciate it. Thanks in advance for any help; if any more info is needed, please let me know.
Here is my Dockerfile for the api (in server/):
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3400
CMD ["npm", "start"]
Here is my connection to db in db.js
const mongoose = require('mongoose');

const uri = 'mongodb://mongo:27017/prueba-docker';

const connect = async () => {
  try {
    await mongoose.connect(uri, {});
    console.log('connected to db');
  } catch (err) {
    console.log('error connecting to db', err);
  }
};

module.exports = { connect };
And here is my docker-compose:
version: "3"
services:
prueba-react:
image: prueba-react
build: ./client/
stdin_open: true
ports:
- "3000:3000"
networks:
- mern-app
prueba-api:
image: prueba-api
build: ./server/
ports:
- "3400:3400"
networks:
- mern-app
depends_on:
- db
db:
image: mongo:4.4-bionic
ports:
- "27017:27017"
networks:
- mern-app
volumes:
- mongo-data:/data/db
networks:
mern-app:
driver: bridge
volumes:
mongo-data:
driver: local
First I found out that I had to stop my local mongod with the following command, because of a port conflict:
sudo systemctl stop mongod
But now I don't understand why it gives an error when I do docker-compose up.
This is the error I get
prueba-api_1 | error connecting to db MongooseServerSelectionError: getaddrinfo EAI_AGAIN mongo
prueba-api_1 | at NativeConnection.Connection.openUri (/usr/app/node_modules/mongoose/lib/connection.js:796:32)
prueba-api_1 | at /usr/app/node_modules/mongoose/lib/index.js:328:10
prueba-api_1 | at /usr/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
prueba-api_1 | at new Promise (<anonymous>)
prueba-api_1 | at promiseOrCallback (/usr/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
prueba-api_1 | at Mongoose._promiseOrCallback (/usr/app/node_modules/mongoose/lib/index.js:1149:10)
prueba-api_1 | at Mongoose.connect (/usr/app/node_modules/mongoose/lib/index.js:327:20)
prueba-api_1 | at Object.connect (/usr/app/db.js:6:20)
prueba-api_1 | at Object.<anonymous> (/usr/app/app.js:6:4)
prueba-api_1 | at Module._compile (internal/modules/cjs/loader.js:1072:14) {
prueba-api_1 | reason: TopologyDescription {
prueba-api_1 | type: 'Unknown',
prueba-api_1 | servers: Map(1) { 'mongo:27017' => [ServerDescription] },
prueba-api_1 | stale: false,
prueba-api_1 | compatible: true,
prueba-api_1 | heartbeatFrequencyMS: 10000,
prueba-api_1 | localThresholdMS: 15,
prueba-api_1 | logicalSessionTimeoutMinutes: undefined
prueba-api_1 | }
In your docker-compose.yml you are creating a network called mern-app and assigning all services to it. To communicate between containers on a docker network, all you have to do is use the service name. Instead of mongodb://mongo:27017/prueba-docker, try connecting to mongodb://db:27017/prueba-docker.
You also mentioned a port conflict. It seems like you are already running mongodb on the host. If you don't want to stop your local mongodb every time you try out your docker-compose, you can map the container to another host port:
ports:
  - "27018:27017"
Or you could simply not expose your mongodb at all if no external application uses it; publishing the port only introduces security risks. (Other containers on the mern-app network still reach it on 27017 either way.)
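Applied to the db.js above, the only change is the hostname in the connection string; a minimal sketch:

const mongoose = require('mongoose');

// "db" matches the service name in docker-compose.yml, so Docker's embedded
// DNS resolves it to the mongo container on the mern-app network.
const uri = 'mongodb://db:27017/prueba-docker';

const connect = async () => {
  try {
    await mongoose.connect(uri, {});
    console.log('connected to db');
  } catch (err) {
    console.log('error connecting to db', err);
  }
};

module.exports = { connect };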

Why is my dockerized nodejs server not able to connect to my dockerized mongodb?

I have a mongodb docker container which is working correctly. I can query it from outside docker using localhost if I expose port 27017. I can query it from a python docker container on the same docker network using the mongo container name.
I also have a nodejs server in a docker container. It is on the same docker network as the mongodb container. If I create a simple test script and place it in the nodejs container, I am able to run it inside the nodejs container and successfully query the mongo container using the mongo containers name.
On a separate server, if I check out the project (identical code to where the problem is happening) and run "docker-compose up", the nodejs container is able to query the mongo container.
However, when running the project locally, the nodejs server code fails to connect to mongo using the container name. I constantly get the error "Connection to database failed, MongoNetworkError: failed to connect to server [sciencedog_db:27017] on first connect [MongoNetworkError: connection timed out]".
Can anyone give me any ideas regarding what to look for? The error seems clear enough, but a test script makes it clear that there is in fact connectivity between the containers. Is there any way that a node server could be configured that would make the connection to the mongo container fail when the network is working? Is there some environmental factor I am missing?
Test script which works when run directly with node:
const { MongoClient } = require("mongodb");

// Replace the uri string with your MongoDB deployment's connection string.
const uri =
  // "mongodb+srv://<user>:<password>@<cluster-url>?retryWrites=true&w=majority";
  "mongodb://sciencedog_db:27017";

const client = new MongoClient(uri);

async function run() {
  try {
    await client.connect();
    const database = client.db('sciencedog');
    const collection = database.collection('users');

    // Query for the user with the username 'daniel'
    const query = { username: 'daniel' };
    const user = await collection.findOne(query);
    console.log(user);
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}

run().catch(console.dir);
Code in the nodejs server which fails (part of a larger module; the application is run with gulp):
const MongoClient = require('mongodb').MongoClient

const host = process.env.MODE === 'development' ? 'localhost' : 'sciencedog_db'
const dbUrl = `mongodb://${host}:27017`
const dbName = 'sciencedog'

var db_client

function getConnection() {
  /**
   * Get a connection to the database.
   * @return {MongoClient} Connection to the database
   */
  return new Promise(function (resolve, reject) {
    if (typeof db_client == 'undefined') {
      console.log("There is no db_client");
      MongoClient.connect(dbUrl).then(
        function (client) {
          console.log("So we got one");
          db_client = client
          console.log("Connected successfully to server");
          resolve(db_client)
        },
        function (err) {
          let err_msg = 'Connection to database failed, ' + err
          console.log(err_msg);
          reject(err_msg)
        }
      )
    } else {
      console.log('Found existing database connection');
      resolve(db_client)
    }
  })
}
My docker-compose file:
version: '3.7'

services:
  sciencedog_python:
    build: .
    container_name: sciencedog_python
    init: true
    stop_signal: SIGINT
    environment:
      - 'PYTHONUNBUFFERED=TRUE'
    networks:
      - sciencedog
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - type: bind
        source: /etc/sciencedog/.env
        target: /etc/sciencedog/.env
        read_only: true
      - type: bind
        source: .
        target: /opt/python_sciencedog/

  sciencedog_node:
    build: ../sciencedog/.
    container_name: sciencedog_node
    ports:
      - 80:8001
    networks:
      - sciencedog
    volumes:
      - type: bind
        source: /etc/sciencedog/.env
        target: /etc/sciencedog/.env
        read_only: true
      - type: bind
        source: ../sciencedog/src/.
        target: /opt/sciencedog_node/src/

  sciencedog_db:
    image: mongo:4.0.4
    container_name: sciencedog_db
    volumes:
      - sciencedog:/data/db
    networks:
      - sciencedog

volumes:
  sciencedog:
    driver: local

networks:
  sciencedog:
    driver: bridge
docker-compose dev extension (enables connection from host, not needed for containers to communicate via docker network):
version: '3.7'

services:
  sciencedog_python:
    ports:
      - 6900:6900
    stdin_open: true
    tty: true

  sciencedog_db:
    ports:
      - 27017:27017
Since the same code is able to connect on another machine, I'll guess the problem is initialization order: when running on the dev machine, the nodejs container starts first and tries to connect to Mongo too early.
A quick way to solve this is to declare a dependency between the nodejs container and the Mongo one:
sciencedog_node:
  networks:
    # etc etc etc
  depends_on:
    - sciencedog_db
Note that even when declaring such dependencies, it's not guaranteed that Mongo will be 100% ready to receive connections (see https://docs.docker.com/compose/startup-order/), so I think you would benefit from configuring a generous timeout for MongoClient:
// example taken from https://stackoverflow.com/questions/39979924/how-to-set-mongoclient-connection-timeout
// note: this nested server.socketOptions shape is for the legacy 2.x driver;
// on the 3.x+ driver (which the test script above uses), the option goes at
// the top level instead: MongoClient.connect(dbUrl, { connectTimeoutMS: 20000 })
MongoClient.connect(dbUrl, {
  server: {
    socketOptions: {
      connectTimeoutMS: 20000
    }
  }
}).then(.....)
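Another belt-and-braces option, since depends_on only orders startup and does not wait for Mongo to actually accept connections, is to retry the initial connect in the app itself. A minimal sketch (the function name, attempt count, and delay are illustrative, not from the original project):

const { MongoClient } = require('mongodb');

// Retry the initial connection so the node container tolerates Mongo still
// booting when docker-compose starts both containers at once.
async function connectWithRetry(url, attempts = 10, delayMs = 2000) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await MongoClient.connect(url);
    } catch (err) {
      console.log(`Mongo not ready (attempt ${i}/${attempts}): ${err.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error(`Could not reach Mongo after ${attempts} attempts`);
}

The existing getConnection() could call this instead of calling MongoClient.connect(dbUrl) directly.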

Can't connect Elasticsearch from my Nodejs app: connect ECONNREFUSED 127.0.0.1:9200

I cannot connect to my Elasticsearch docker server from my NodeJS application.
My code
This is my docker-compose file:
version: "3.7"
services:
backend:
container_name: vstay-api
ports:
- "4000:4000"
build:
context: .
dockerfile: Dockerfile
env_file:
- ./.env
environment:
- DB_URI=mongodb://mongo:27017/vstay-db
- DB_HOST=mongo
- DB_PORT=27017
restart: always
links:
- mongo
- elasticsearch
mongo:
image: mongo:4.2
ports:
- "9000:27017"
container_name: vstay-db
restart: always
volumes:
- "./data/mongo:/data/db"
environment:
- DB_HOST=mongo
- DB_PORT=27017
command: mongod
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
container_name: vstay_elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=datasearch
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- cluster.initial_master_nodes=elasticsearch
ports:
- 9200:9200
- 9300:9300
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- ./data/elastic:/usr/share/elasticsearch/data
kibana:
image: docker.elastic.co/kibana/kibana:7.9.3
container_name: vstay_kibana
logging:
driver: none
elastic.js
const { Client } = require("@elastic/elasticsearch");

module.exports.connectES = () => {
  try {
    const client = new Client({
      node: "http://localhost:9200",
      maxRetries: 5,
      requestTimeout: 60000,
      sniffOnStart: true,
    });

    client.ping(
      {
        // ping usually has a 3000ms timeout
        requestTimeout: Infinity,
        // undocumented params are appended to the query string
        hello: "elasticsearch!",
      },
      function (error) {
        if (error) {
          console.log(error);
          console.trace("elasticsearch cluster is down!");
        } else {
          console.log("All is well");
        }
      }
    );

    return client;
  } catch (error) {
    console.log(error);
    process.exit(0);
  }
};
And index.js to connect:
const { app } = require("./config/express");
const { connect: dbConnect } = require("./config/mongo");
const { connectES } = require("./config/elastic");
const { port, domain, env } = require("./config/vars");

let appInstance;

const startApp = async () => {
  const dbConnection = await dbConnect();
  const ESConnection = await connectES();

  app.locals.db = dbConnection;
  app.locals.ESClient = ESConnection;

  app.listen(port, () =>
    console.log(`Server is listening on ${domain.API} (${env})`)
  );

  return app;
};

appInstance = startApp();

module.exports = { appInstance };
Error
I have a dockerized application (NodeJS and Elasticsearch v7.9.3). The server starts fine, but when I try to create an Elasticsearch Client instance, it shows this error:
ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at ClientRequest.<anonymous> (/app/node_modules/@elastic/elasticsearch/lib/Connection.js:109:18)
    at ClientRequest.emit (events.js:210:5)
    at Socket.socketErrorListener (_http_client.js:406:9)
    at Socket.emit (events.js:210:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 5,
      aborted: false
    }
  }
}
The Elasticsearch and Kibana servers are running; I can reach them in my browser at http://localhost:9200 and http://localhost:5601.
But when I connect from my nodeJS app, it still shows the error. I also tried finding my container IP and using it in place of 'localhost', but that still didn't work.
Can anyone help me to resolve this? Thanks.
My Environment

node version: v10.19.0
@elastic/elasticsearch version: 7.9.3
os: Linux
Environment: Docker
When you run the docker-compose file, the elasticsearch service instance is not available to your backend service at localhost. Change http://localhost:9200 to http://elasticsearch:9200 in your node.js code.
Docker Compose automatically creates a DNS entry for each service, matching the service name.
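Applied to the elastic.js above, only the node URL needs to change; making it configurable keeps non-docker runs working too. A sketch, where ELASTICSEARCH_URL is an assumed variable name, not part of the original project:

const { Client } = require("@elastic/elasticsearch");

// Inside the compose network, the service name "elasticsearch" resolves to the
// ES container; outside docker, fall back to localhost via the env var.
const client = new Client({
  node: process.env.ELASTICSEARCH_URL || "http://elasticsearch:9200",
  maxRetries: 5,
  requestTimeout: 60000,
});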
The following solution works for me:
http://127.0.0.1:9200
