Can't connect to mongoose on test environment - node.js

I have a node application running in docker with mongodb, and it works fine in the development environment. However, I'm creating some tests with mocha and chai, and I can't connect to mongo when I run these tests.
The function I want to test is:
const Interactor = require("interactor");
const Donation = require("../models/donations");

module.exports = class CreateDonation extends Interactor {
  async run(context) {
    this.context = context;
    this.donation = new Donation.Model({
      donationId: context.id,
      status: context.status,
      amount: context.chargeInfo.donatedValue,
      donatorEmail: context.donatorInfo.email,
      source: context.source,
    });
    await this.donation.save();
  }

  rollback() {
    Donation.Model.findOneAndRemove({ donationId: this.context.id });
  }
};
My test:
/* eslint-disable no-unused-vars */
/* eslint-disable no-undef */
const chai = require("chai");
const chaiHttp = require("chai-http");
const CreateDonation = require("../../interactors/create-donation");
require("../../config/db");

const should = chai.should();
const { expect } = chai;

chai.use(chaiHttp);

describe("CreateDonation", () => {
  it("Creates a donation when context passed is correct", async (done) => {
    const context = {
      id: "123123",
      status: "AUTHORIZED",
      chargeInfo: {
        donatedValue: 25.0,
      },
      donatorInfo: {
        email: "test@example.com",
      },
      source: "CREDIT_CARD",
    };
    const result = await CreateDonation.run(context);
    console.log(result);
    done();
  });
});
My db config file:
const mongoose = require("mongoose");
require("dotenv/config");

mongoose
  .connect("mongodb://db:27017/donations", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    reconnectInterval: 5000,
    reconnectTries: 50,
  })
  .then(() => {
    console.log("good");
  })
  .catch((err) => {
    console.log(err);
  });

mongoose.Promise = global.Promise;

module.exports = mongoose;
The error I get from the test above is:
MongooseServerSelectionError: getaddrinfo ENOTFOUND db
What am I doing wrong? Am I missing an import?

When you run your services inside docker with a docker-compose file, each service gets a hostname based on the name you gave it in the docker-compose file.
Example:
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
In this example, the web service can reach the redis db at the redis hostname.
If you change the service name like this:

  db:
    image: "redis:alpine"

the web service must connect to the db host instead.
So, when you run the compose file, the db service is reachable at the db hostname from your app service. But when you run your tests outside docker compose, the db hostname isn't available, and you need to use localhost, because your db is running directly on your OS (or it is running inside a container with port 27017 mapped to the host).
If you're using a unix OS, you can solve your problem by adding an alias in your /etc/hosts file:
127.0.0.1 localhost db
This way you can run your tests while keeping the same db connection string.
Otherwise, and this is the suggested solution, you can use an environment variable to change the connection string at the application startup:
mongoose.connect(process.env.MONGO_URI)
And run it using
MONGO_URI=mongodb://db:27017/donations npm start
Then in the docker compose you can add a fixed environment variable using this code:
    environment:
      - MONGO_URI=mongodb://db:27017/donations
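For example, the db config file from the question could read the variable and fall back to localhost for test runs outside compose (a minimal sketch; the fallback URI is an assumption for local test runs):

const mongoose = require("mongoose");

// MONGO_URI is set by docker compose inside the container;
// outside compose (e.g. local test runs) we assume localhost with the mapped port.
const uri = process.env.MONGO_URI || "mongodb://localhost:27017/donations";

mongoose
  .connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log(`connected to ${uri}`))
  .catch((err) => console.log(err));

module.exports = mongoose;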

Just found out that when testing, I need to use "localhost" in my connection string for mongo (I was using the service name from docker-compose). So with the URI as "mongodb://localhost:27017/donations" it worked. As the answer above explains, the compose service name only resolves inside the docker network, so tests running directly on the host have to go through localhost and the mapped port.

Related

unable to connect to elasticsearch service using AWS lambda

I am running my lambdas on port 4566 using localstack, with the docker-compose file below:
version: "2.1"
services:
  localstack:
    image: "localstack/localstack"
    container_name: "localstack"
    ports:
      - "4566-4620:4566-4620"
      - "127.0.0.1:8055:8080"
    environment:
      - SERVICES=s3,es,dynamodb,apigateway,lambda,sns,sqs,cloudformation
      - DEBUG=1
      - EDGE_PORT=4566
      - DATA_DIR=/var/lib/localstack/data
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
      - LAMBDA_EXECUTOR=docker
      - DYNAMODB_SHARE_DB=1
      - DISABLE_CORS_CHECKS=1
      - AWS_DDB_ENDPOINT=http://localhost:4566
    volumes:
      - "${TMPDIR:-/var/lib/localstack}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - "local"
  elasticsearch:
    container_name: tqd-elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    # volumes:
    #   - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    depends_on:
      - "localstack"
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - "local"
networks:
  local:
    driver: "bridge"
Problem: Not getting any response from elasticsearch while calling it from lambda
This is my lambda code (the helper is defined before it is used, and the client import is included):
const { Client } = require('@elastic/elasticsearch');

module.exports.createIndex = async () => {
  const elasticClient = new Client({
    node: "http://localhost:9200"
  });

  const getIndices = async () => {
    return await elasticClient.indices.create({
      index: "index-from-lambda"
    });
  };

  console.log("before the client call");
  console.log(getIndices().then(res => { console.log(res) }).catch(err => {
    console.log(err)
  }));
  console.log("after the client call");

  return {
    statusCode: 201,
    body: JSON.stringify({
      msg: "index created successfully"
    })
  };
};
Logs from my docker container:
before the client call
Promise { <pending> }
after the client call
After this, even when I go to bash and check whether the index has been created, it returns an empty set of indexes, i.e. no index has been created.
But the same code works fine, i.e. it creates the index on elasticsearch at port 9200, when called from an HTTP server at port 3000 and from a standalone javascript file.
Standalone server code:
const express = require('express');
const app = express();
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

app.listen(3000, () => {
  console.log('listening to the port 3000');
});

const getIndices = async () => {
  return await elasticClient.cat.indices();
};

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}));
And this is the standalone js script:
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

const getIndices = async () => {
  return await elasticClient.cat.indices();
};

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}));
Is this any kind of networking error or docker image error?
This issue has been listed out here.
The problem is with the endpoint: localhost actually addresses your docker container, not your host machine.
If you run your express server on the host, use host.docker.internal as the hostname, which will address the host from inside your docker container.
The same applies to the Elasticsearch image.
Now the code becomes:
const elasticClient = new Client({
  node: "http://host.docker.internal:9200"
});
Rest remains the same.
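One caveat: on Linux, host.docker.internal is not defined inside containers by default. With Docker Engine 20.10+ you can map it yourself in the compose file (a sketch; add it to whichever service needs to reach the host):

services:
  localstack:
    # ... existing configuration ...
    extra_hosts:
      # maps host.docker.internal to the host's gateway IP (Docker 20.10+)
      - "host.docker.internal:host-gateway"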

Error: getaddrinfo EAI_AGAIN database at GetAddrInfoReqWrap.onlookup [as oncomplete]

I'm creating an api using docker, postgresql, and nodejs (typescript). I've had this error ever since creating an admin user and nothing seems to be able to fix it:
Error in docker terminal:
Error: getaddrinfo EAI_AGAIN database
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26)
[ERROR] 12:33:32 Error: getaddrinfo EAI_AGAIN database
Error in Insomnia:
{
  "status": "error",
  "message": "Internal server error - Cannot inject the dependency \"categoriesRepository\" at position #0 of \"ListCategoriesUseCase\" constructor. Reason:\n No repository for \"Category\" was found. Looks like this entity is not registered in current \"default\" connection?"
}
I'm following an online course and this same code seems to work for everyone else since there are no reports of this error in the course's forum.
From what I've gathered it seems to be some type of problem when connecting to my database, which is a docker container. Here is my docker-compose.yml file:
version: "3.9"
services:
  database_ignite:
    image: postgres
    container_name: database_ignite
    restart: always
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=something
      - POSTGRES_PASSWORD=something
      - POSTGRES_DB=rentx
    volumes:
      - pgdata:/data/postgres
  app:
    build: .
    container_name: rentx
    restart: always
    ports:
      - 3333:3333
      - 9229:9229
    volumes:
      - .:/usr/app
    links:
      - database_ignite
    depends_on:
      - database_ignite
volumes:
  pgdata:
    driver: local
My server.ts file:
import "reflect-metadata";
import express, { Request, Response, NextFunction } from "express";
import "express-async-errors";
import swaggerUi from "swagger-ui-express";
import { AppError } from "#shared/errors/AppError";
import createConnection from "#shared/infra/typeorm";
import swaggerFile from "../../../swagger.json";
import { router } from "./routes";
import "#shared/container";
createConnection();
const app = express();
app.use(express.json());
app.use("/api-docs", swaggerUi.serve, swaggerUi.setup(swaggerFile));
app.use(router);
app.use(
(err: Error, request: Request, response: Response, next: NextFunction) => {
if (err instanceof AppError) {
return response.status(err.statusCode).json({
message: err.message,
});
}
return response.status(500).json({
status: "error",
message: `Internal server error - ${err.message}`,
});
}
);
app.listen(3333, () => console.log("Server running"));
And this is my index.ts file, inside src/shared/infra/typeorm:
import { Connection, createConnection, getConnectionOptions } from "typeorm";

export default async (host = "database"): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();

  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
I've tried restarting my containers, remaking them, and accessing my postgres container to check the tables. I've switched every "database" to "localhost", but it's the same every time: the containers run, yet the error persists. I've checked the course's repo and my code matches. I've also flushed my DNS, which did nothing.
Here's the admin.ts file that "started it all":
import { hash } from "bcryptjs";
import { v4 as uuidv4 } from "uuid";
import createConnection from "../index";

async function create() {
  const connection = await createConnection("localhost");

  const id = uuidv4();
  const password = await hash("admin", 6);

  await connection.query(`
    INSERT INTO Users (id, name, email, password, "isAdmin", driver_license, created_at)
    VALUES (
      '${id}',
      'admin',
      'admin@rentx.com.br',
      '${password}',
      true,
      '0123456789',
      NOW()
    )
  `);

  await connection.close();
}

create().then(() => console.log("Administrative user created"));
I would love to know what is causing this error.
It looks like you have a service named database_ignite in your docker-compose.yml file. Docker Compose by default creates a hostname for each service based on the service's name. Try changing the host in your index.ts file from database to database_ignite:
import { Connection, createConnection, getConnectionOptions } from "typeorm";

// Changed "database" to "database_ignite"
export default async (host = "database_ignite"): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();

  return createConnection(
    Object.assign(defaultOptions, {
      host,
    })
  );
};
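If you still need host-run scripts like admin.ts to connect via localhost while the containerized app uses the service name, one option is an environment variable override (a sketch; DB_HOST is a hypothetical variable name, not part of the course code):

import { Connection, createConnection, getConnectionOptions } from "typeorm";

// DB_HOST (hypothetical) lets the same module work inside compose
// (database_ignite) and on the host (localhost) without code changes.
export default async (
  host = process.env.DB_HOST || "database_ignite"
): Promise<Connection> => {
  const defaultOptions = await getConnectionOptions();
  return createConnection(Object.assign(defaultOptions, { host }));
};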

AWS Redis Cluster MOVED Error using redis node library

I have created a Redis MemoryDB cluster with 2 nodes in AWS:
I connect to it using redis node library v4.0.0 like this:
import { createCluster } from 'redis';

(async () => {
  const REDIS_USERNAME = 'test-username';
  const REDIS_PASSWORD = 'test-pass';

  const cluster = createCluster({
    rootNodes: [
      {
        url: `rediss://node1.amazonaws.com:6379`,
      },
      {
        url: `rediss://node2.amazonaws.com:6379`,
      },
    ],
    defaults: {
      url: `rediss://cluster.amazonaws.com:6379`,
      username: REDIS_USERNAME,
      password: REDIS_PASSWORD,
    }
  });

  cluster.on('error', (err) => console.log('Redis Cluster Error', err));

  await cluster.connect();
  console.log('connected to cluster...');

  await cluster.set('key', 'value');
  const value = await cluster.get('key');
  console.log('Value', value);

  await cluster.disconnect();
})();
But sometimes I get the error ReplyError: MOVED 12539 rediss://node2.amazonaws.com:6379 and I cannot get the value from the key.
Do you have any idea if there is something wrong with the configuration of the cluster or with the code using redis node library?
Edit:
I tried it with the ioredis library and it works, so it's something wrong with the redis library.
Node.js Version: 16
Redis Server Version: 6
I had created an issue for the redis library, so it's going to be solved soon with this PR.
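For reference, a minimal ioredis sketch against the same cluster (assuming the same endpoints and credentials; ioredis follows MOVED redirects transparently):

const Redis = require('ioredis');

// One seed node is enough; ioredis discovers the rest of the cluster.
// tls: {} enables TLS, matching the rediss:// endpoints above.
const cluster = new Redis.Cluster(
  [{ host: 'node1.amazonaws.com', port: 6379 }],
  {
    redisOptions: {
      tls: {},
      username: 'test-username',
      password: 'test-pass',
    },
  }
);

(async () => {
  await cluster.set('key', 'value');
  console.log('Value', await cluster.get('key'));
  cluster.disconnect();
})();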

Why is my dockerized nodejs server not able to connect to my dockerized mongodb?

I have a mongodb docker container which is working correctly. I can query it from outside docker using localhost if I expose port 27017. I can query it from a python docker container on the same docker network using the mongo container name.
I also have a nodejs server in a docker container. It is on the same docker network as the mongodb container. If I create a simple test script and place it in the nodejs container, I am able to run it inside the nodejs container and successfully query the mongo container using the mongo container's name.
On a separate server, if I check out the project (identical code to where the problem is happening) and run "docker-compose up", the nodejs container is able to query the mongo container.
However, when running the project locally, the nodejs server code fails to connect to mongo using the container name. I constantly get the error "Connection to database failed, MongoNetworkError: failed to connect to server [sciencedog_db:27017] on first connect [MongoNetworkError: connection timed out]".
Can anyone give me any ideas regarding what to look for? The error seems clear enough, but a test script makes it clear that there is in fact connectivity between the containers. Is there any way that a node server could be configured that would make the connection to the mongo container fail when the network is working? Is there some environmental factor I am missing?
Test script which works when run directly with node:
const { MongoClient } = require("mongodb");

// Replace the uri string with your MongoDB deployment's connection string.
const uri =
  // "mongodb+srv://<user>:<password>@<cluster-url>?retryWrites=true&w=majority";
  "mongodb://sciencedog_db:27017";

const client = new MongoClient(uri);

async function run() {
  try {
    await client.connect();
    const database = client.db('sciencedog');
    const collection = database.collection('users');

    // Query for the user with the username 'daniel'
    const query = { username: 'daniel' };
    const user = await collection.findOne(query);
    console.log(user);
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}

run().catch(console.dir);
Code in nodejs server which fails (part of a larger module), application is run with gulp:
const MongoClient = require('mongodb').MongoClient

const host = process.env.MODE === 'development' ? 'localhost' : 'sciencedog_db'
const dbUrl = `mongodb://${host}:27017`
const dbName = 'sciencedog'

var db_client

function getConnection() {
  /**
   * Get a connection to the database.
   * @return {MongoClient} Connection to the database
   */
  return new Promise(function (resolve, reject) {
    if (typeof db_client == 'undefined') {
      console.log("There is no db_client");
      MongoClient.connect(dbUrl).then(
        function (client) {
          console.log("So we got one");
          db_client = client
          console.log("Connected successfully to server");
          resolve(db_client)
        },
        function (err) {
          let err_msg = 'Connection to database failed, ' + err
          console.log(err_msg);
          reject(err_msg)
        }
      )
    } else {
      console.log('Found existing database connection');
      resolve(db_client)
    }
  })
}
My docker-compose file:
version: '3.7'
services:
  sciencedog_python:
    build: .
    container_name: sciencedog_python
    init: true
    stop_signal: SIGINT
    environment:
      - 'PYTHONUNBUFFERED=TRUE'
    networks:
      - sciencedog
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - type: bind
        source: /etc/sciencedog/.env
        target: /etc/sciencedog/.env
        read_only: true
      - type: bind
        source: .
        target: /opt/python_sciencedog/
  sciencedog_node:
    build: ../sciencedog/.
    container_name: sciencedog_node
    ports:
      - 80:8001
    networks:
      - sciencedog
    volumes:
      - type: bind
        source: /etc/sciencedog/.env
        target: /etc/sciencedog/.env
        read_only: true
      - type: bind
        source: ../sciencedog/src/.
        target: /opt/sciencedog_node/src/
  sciencedog_db:
    image: mongo:4.0.4
    container_name: sciencedog_db
    volumes:
      - sciencedog:/data/db
    networks:
      - sciencedog
volumes:
  sciencedog:
    driver: local
networks:
  sciencedog:
    driver: bridge
docker-compose dev extension (enables connection from host, not needed for containers to communicate via docker network):
version: '3.7'
services:
  sciencedog_python:
    ports:
      - 6900:6900
    stdin_open: true
    tty: true
  sciencedog_db:
    ports:
      - 27017:27017
Since the same code is able to connect on another machine, I'll guess the problem is initialization order: when running on the dev machine, the nodejs container starts first and tries to connect to Mongo too early.
Quick way to solve this is declaring a dependency between the nodejs container and the Mongo one:
  sciencedog_node:
    networks:
      # etc etc etc
    depends_on:
      - sciencedog_db
Note that even when declaring such dependencies, it's not guaranteed that Mongo will be 100% ready to receive connections (see https://docs.docker.com/compose/startup-order/), so I think you would benefit from configuring a generous timeout for MongoClient:
// example taken from https://stackoverflow.com/questions/39979924/how-to-set-mongoclient-connection-timeout
MongoClient.connect(dbUrl, {
  server: {
    socketOptions: {
      connectTimeoutMS: 20000
    }
  }
}).then(.....)
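Note that the nested server.socketOptions shape is from the 2.x driver. If you're on mongodb driver 3.x or later, the equivalent options are plain top-level options (a sketch):

// mongodb driver 3.x+: timeouts are passed as top-level options
MongoClient.connect(dbUrl, {
  connectTimeoutMS: 20000,
  serverSelectionTimeoutMS: 20000,
}).then(/* ... */)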

Automated Testing with Databases

I'm fairly new to automated testing and was wondering how I should go about writing tests for the database. The project I'm working on right now is running PostgreSQL with Sequelize as the ORM on a Node.JS environment. If it matters, I'm also using Jest as the testing library right now.
In my app I use a config module to control configuration settings for different environments. When running tests, process.env.APP_ENV is set to test, which switches the dialect to sqlite. Note that you will not have any data or data persistence, so you will need to populate the database with all the data your tests need.
Include sqlite3
yarn add -D sqlite3
or
npm i -D sqlite3
Config
module.exports = {
  database: {
    name: 'dbname',
    user: 'user',
    password: 'password',
    host: 'host',
    // Use "sqlite" for "test", the connection settings above are ignored
    dialect: process.env.APP_ENV === 'test' ? 'sqlite' : 'mysql',
  },
};
Database/Sequelize
// get our config
const config = require('../config');

// ... code

const instance = new Sequelize(
  config.database.name,
  config.database.user,
  config.database.password,
  {
    host: config.database.host,
    // set the dialect, will be "sqlite" for "test"
    dialect: config.database.dialect,
  }
);
Test Class (Mocha)
const TestUtils = require('./lib/test-utils');

describe('Some Tests', () => {
  let app = null;

  // run before the tests start
  before((done) => {
    // Mock up our services
    TestUtils.mock();

    // these are instantiated after the mocking
    app = require('../server');

    // Populate redis data
    TestUtils.populateRedis(() => {
      // Populate db data
      TestUtils.syncAndPopulateDatabase('test-data', () => {
        done();
      });
    });
  });

  // run code after tests have completed
  after(() => {
    TestUtils.unMock();
  });

  describe('/my/route', () => {
    it('should do something', (done) => {
      return done();
    });
  });
});
Run Tests
APP_ENV=test ./node_modules/.bin/mocha
You could use ENV variables in other ways to set the dialect and connection parameters as well; the above is just an example based on what we have done with a lot of supporting code.
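For instance, tests can point Sequelize at an in-memory SQLite database so nothing persists between runs (a sketch using the config module above):

const { Sequelize } = require('sequelize');
const config = require('../config');

// "sqlite::memory:" gives each test run a fresh, throwaway database;
// otherwise connect normally with the configured credentials.
const instance =
  process.env.APP_ENV === 'test'
    ? new Sequelize('sqlite::memory:')
    : new Sequelize(
        config.database.name,
        config.database.user,
        config.database.password,
        {
          host: config.database.host,
          dialect: config.database.dialect,
        }
      );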
If you're not doing anything particularly complicated on the DB side, take a look at pg-mem:
https://swizec.com/blog/pg-mem-and-jest-for-smooth-integration-testing/
https://github.com/oguimbal/pg-mem
It's really cool in that it tests actual PG syntax and can pick up a bunch of errors that using a different DB or a mock DB won't. However, it's not a perfect implementation and is missing a bunch of features (e.g. triggers, decent "not exists" handling, lots of functions), some of which are easy to work around with the hooks provided and some of which aren't.
For me, having the test DB initialized with the same schema initialization scripts as the real DB is a big win.
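A minimal pg-mem sketch (the schema and data here are hypothetical):

const { newDb } = require('pg-mem');

// In-memory Postgres-compatible database; created fresh per test.
const db = newDb();
db.public.none(`CREATE TABLE users (id serial PRIMARY KEY, name text)`);
db.public.none(`INSERT INTO users (name) VALUES ('alice')`);

// .many() runs a query and returns the rows as plain objects
const rows = db.public.many(`SELECT * FROM users`);
console.log(rows); // [ { id: 1, name: 'alice' } ]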
