Multiple MongoDB Database Connections in Node.js

I have always created a single connection with one connection string. My question is: how do I make multiple connections (to multiple MongoDB instances) when an array of connection strings is given to a Node.js GET API?
Let's say all the connection strings point to the same type of database. For example, my database is named "University" and it is deployed in several different locations. I want to write one common API that returns an array of universities gathered from the different connections. How can I do that?
Example
connectionString1 = mongodb://localhost:27017
connectionString2 = mongodb://localhost:27018
connectionString3 = mongodb://localhost:27019
Now I want to connect with all three connection strings, fetch all records from each of them, and send them back in the response of one common API. How can I do this efficiently? Also, after each query has been retrieved, I need to close the corresponding database connection.
Your input will help me understand this structure better.

Execute your query against each database using an example function named exec, and await the returned promise array with Promise.allSettled. Once the promises have settled, post-process the results (e.g., reduce, maybe sort) to merge them properly.
// each db obtained with client.db(name)
const dbArr = [db1, db2, ...];

// execute the query for the given collection across each db, return a promise array
function exec(coll, query) {
  const p = [];
  for (const db of dbArr) {
    // find() returns a cursor; toArray() turns it into a promise of documents
    p.push(db.collection(coll).find(query).toArray());
  }
  return p;
}

// main
async function fetchUniversitiesBy(filter) {
  try {
    // build the mongo filter doc
    const query = filter;
    const results = await Promise.allSettled(exec('university', query));
    // merge: keep only fulfilled promises and flatten their values;
    // rejected ones can be logged or retried here by checking `status`
    return results
      .filter((r) => r.status === 'fulfilled')
      .reduce((acc, r) => [...acc, ...r.value], []);
  } catch (e) {
    console.log(e);
  } finally {
    // call `close()` on each client here
  }
}
In terms of the 'API', invoke fetchUniversitiesBy inside the handler of your api/universities/get route (or however you have defined it); the request params can be passed through as the filter.
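For illustration, here is a minimal sketch of such a route, assuming the official mongodb driver and Express; the connectionStrings array, the 'University'/'university' names, and the /api/universities path are placeholders, and passing req.query straight through as the filter is a simplification.
// Hypothetical wiring: one client per connection string, queried in parallel,
// merged, and closed once the response data has been gathered.
const express = require('express');
const { MongoClient } = require('mongodb');

const connectionStrings = [
  'mongodb://localhost:27017',
  'mongodb://localhost:27018',
  'mongodb://localhost:27019',
];

const app = express();

app.get('/api/universities', async (req, res) => {
  // open one client per connection string
  const clients = await Promise.all(
    connectionStrings.map((uri) => MongoClient.connect(uri))
  );
  try {
    const dbArr = clients.map((client) => client.db('University'));
    // run the same find() on every database and merge the fulfilled results
    const settled = await Promise.allSettled(
      dbArr.map((db) => db.collection('university').find(req.query).toArray())
    );
    const universities = settled
      .filter((r) => r.status === 'fulfilled')
      .flatMap((r) => r.value);
    res.json(universities);
  } catch (e) {
    res.status(500).json({ error: e.message });
  } finally {
    // close every connection once the response has been assembled
    await Promise.all(clients.map((client) => client.close()));
  }
});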

Related

How to use Redis and Sequelize together

I'm using Redis for caching and Sequelize as my ORM.
I want to cache every query as a key and its result as the value.
Let me give you an example of how I'm trying to do it.
Imagine a user requests all blogs created by himself; normally we would write something like this:
blogs.findAll({where:{author:req.params.id}})
When I want to cache something like this, I add an attribute named model, which for this example would be equal to blog; after that I stringify the object and use it as the key. This way I can easily build the key and check whether the user's response is cached or not. But I don't want to rewrite the code that checks Redis and decides whether to query the database for every request, and I have two models now, so I wrote this piece of code:
for (m in models) {
  models[m].myFindAll = function (options = {}) {
    return new Promise(async function (resolve, reject) {
      try {
        const key = Object.assign({}, options);
        key.model = m;
        key.method = "findAll";
        var result = await redis.get(JSON.stringify(key));
        if (result) {
          resolve(JSON.parse(result));
        }
        result = await models[m].findAll(options);
        redis.set(JSON.stringify(key), JSON.stringify(result));
        resolve(result);
      } catch (err) {
        reject(err);
      }
    });
  };
}
As you can see, I have an object named models that contains every model I have.
First I added User, and after that I added Blog.
My problem is that when I try to use the myFindAll function on the User model it won't work, because it tries to set key.model with the value of the variable m, which at run time will be Blog. I solved it by passing the model name as an argument to my function, but I don't want it that way; I think there should be a better way, but I can't find it. Isn't there some way of accessing the right model through the this object?
Another thing: I tried to use libraries like sequelize-redis-cache and ... but I wanted to do it my way and I don't want to use such a library.
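For reference, one way to get the right model without passing its name (a minimal sketch, not from the original post) is to rely on the fact that a plain function assigned to a model is called with this bound to that model, and to use a block-scoped let in the loop; redis here is assumed to be the same promisified client as in the question.
// Sketch: `this` inside a non-arrow method is the model it was called on,
// so the key can be built from `this.name` instead of the shared loop variable.
for (let m in models) {
  models[m].myFindAll = async function (options = {}) {
    const model = this; // e.g. User when called as User.myFindAll(...)
    const key = JSON.stringify(
      Object.assign({}, options, { model: model.name, method: "findAll" })
    );
    const cached = await redis.get(key); // assumes a promisified redis client
    if (cached) {
      return JSON.parse(cached);
    }
    const result = await model.findAll(options);
    await redis.set(key, JSON.stringify(result));
    return result;
  };
}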

Obtain raw value "val()" data of parent node from a child reference in Cloud Functions Realtime Database

Let's say I'm pointing at a path as follows in my TypeScript/JavaScript Cloud Function:
exports.sendNotification = functions.database.ref('shops/countries/{countryName}/shopAddress/{currentShopAddress}')
  .onWrite((snapshot, context) => {
    // Is it possible to get the raw data value from a parent node via a child reference?
    // For example:
    const countryName = snapshot.before.parent('countryName').val();
    // Then
    const countryId = countryName['countryId'];
  });
I'm a node.js/typescript and firebase cloud functions newbie :)
The data for parent nodes is not automatically passed into your Cloud Function, as that could be a potentially large amount of unneeded data.
If you need it, you'll need to load it yourself. Luckily this is not all that hard:
// `snapshot` in your handler is actually a Change object, so take the ref from its `after` snapshot
const countryRef = snapshot.after.ref.parent.parent;
const countryName = countryRef.key; // also available as context.params.countryName
return countryRef.child('countryId').once('value').then((countryIdSnapshot) => {
  const countryId = countryIdSnapshot.val();
  // ... use countryId here
});
Note that, since you're asynchronously loading additional data, you'll need to return a promise to ensure your function doesn't get shut down prematurely.
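Putting it together, a minimal sketch of the whole handler might look like this; the notification-sending step is left as a placeholder, and the countryId child is assumed to exist as in your question.
// Hypothetical complete handler: onWrite receives a Change object, and returning
// the promise chain keeps the function alive until the extra read completes.
const functions = require('firebase-functions');

exports.sendNotification = functions.database
  .ref('shops/countries/{countryName}/shopAddress/{currentShopAddress}')
  .onWrite((change, context) => {
    const countryRef = change.after.ref.parent.parent;
    const countryName = context.params.countryName;
    return countryRef.child('countryId').once('value').then((countryIdSnapshot) => {
      const countryId = countryIdSnapshot.val();
      // ... send the notification using countryName and countryId here
      return null;
    });
  });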

Google Cloud Datastore, how to query for more results

Straight and simple, I have the following function, using Google Cloud Datastore Node.js API:
fetchAll(query, result = [], queryCursor = null) {
  this.debug(`datastoreService.fetchAll, queryCursor=${queryCursor}`);
  if (queryCursor !== null) {
    query.start(queryCursor);
  }
  return this.datastore.runQuery(query)
    .then((results) => {
      result = result.concat(results[0]);
      if (results[1].moreResults === _datastore.NO_MORE_RESULTS) {
        return result;
      } else {
        this.debug(`results[1] = `, results[1]);
        this.debug(`fetch next with queryCursor=${results[1].endCursor}`);
        return this.fetchAll(query, result, results[1].endCursor);
      }
    });
}
The Datastore API object is in the variable this.datastore;
The goal of this function is to fetch all results for a given query, notwithstanding any limits on the number of items returned per single runQuery call.
I have not yet found any definite hard limits imposed by the Datastore API on this, and the documentation seems somewhat opaque on the point, but I noticed that I always get
results[1] = { moreResults: 'MORE_RESULTS_AFTER_LIMIT' },
indicating that there are still more results to be fetched, while results[1].endCursor remains stuck on a constant value that is passed on again on each iteration.
So, given some simple query that I plug into this function, I keep running the query iteratively, setting the query start cursor (by calling query.start(queryCursor)) to the endCursor obtained in the result of the previous query. My hope is, obviously, to obtain the next batch of results on each successive query in this iteration, but I always get the same value for results[1].endCursor. My question is: why?
Conceptually, I cannot see a difference from this example given in the Google documentation:
// By default, google-cloud-node will automatically paginate through all of
// the results that match a query. However, this sample implements manual
// pagination using limits and cursor tokens.
function runPageQuery (pageCursor) {
  let query = datastore.createQuery('Task')
    .limit(pageSize);
  if (pageCursor) {
    query = query.start(pageCursor);
  }
  return datastore.runQuery(query)
    .then((results) => {
      const entities = results[0];
      const info = results[1];
      if (info.moreResults !== Datastore.NO_MORE_RESULTS) {
        // If there are more results to retrieve, the end cursor is
        // automatically set on `info`. To get this value directly, access
        // the `endCursor` property.
        return runPageQuery(info.endCursor)
          .then((results) => {
            // Concatenate entities
            results[0] = entities.concat(results[0]);
            return results;
          });
      }
      return [entities, info];
    });
}
(except for the fact that I don't specify a limit on the size of the query result myself; I have also tried that, setting it to 1000, which did not change anything.)
Why does my code run into this infinite loop, stuck on each step at the same "endCursor"? And how do I correct this?
Also, what is the hard limit on the number of results obtained per call of datastore.runQuery()? I have not found this information in the Google Datastore documentation thus far.
Thanks.
Looking at the API documentation for the Node.js client library for Datastore, there is a section on that page titled "Paginating Records" that may help you. Here's a direct copy of the code snippet from that section:
var express = require('express');
var app = express();
var NUM_RESULTS_PER_PAGE = 15;

app.get('/contacts', function(req, res) {
  var query = datastore.createQuery('Contacts')
    .limit(NUM_RESULTS_PER_PAGE);
  if (req.query.nextPageCursor) {
    query.start(req.query.nextPageCursor);
  }
  datastore.runQuery(query, function(err, entities, info) {
    if (err) {
      // Error handling omitted.
      return;
    }
    // Respond to the front end with the contacts and the cursoring token
    // from the query we just ran.
    var frontEndResponse = {
      contacts: entities
    };
    // Check if more results may exist.
    if (info.moreResults !== datastore.NO_MORE_RESULTS) {
      frontEndResponse.nextPageCursor = info.endCursor;
    }
    res.render('contacts', frontEndResponse);
  });
});
Maybe you can try using one of the other syntax options (instead of Promises). The runQuery method can take a callback function as an argument, and that callback's parameters include explicit references to the entities array and the info object (which has the endCursor as a property).
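For example, a minimal, untested sketch of your fetchAll rewritten around the callback form might look like the following; Datastore here refers to the constant on the required module, as in the documentation sample above.
// Sketch: callback-style runQuery wrapped in a Promise so callers can still await it.
function fetchAll(datastore, query, result = [], queryCursor = null) {
  if (queryCursor !== null) {
    query.start(queryCursor);
  }
  return new Promise((resolve, reject) => {
    datastore.runQuery(query, (err, entities, info) => {
      if (err) {
        return reject(err);
      }
      result = result.concat(entities);
      if (info.moreResults === Datastore.NO_MORE_RESULTS) {
        return resolve(result);
      }
      // keep paginating from the cursor reported by the last batch
      resolve(fetchAll(datastore, query, result, info.endCursor));
    });
  });
}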
And there are limits and quotas imposed on calls to the Datastore API as well. Here are links to official documentation that address them in detail:
Limits
Quotas

How to control serial and parallel control flow with mapped functions?

I've drawn a simple flow chart, which basically crawls some data from the internet and loads it into the database. So far I had thought I was at peace with promises; however, I now have an issue that I've been working on for at least three days without making a single step of progress.
Here is the flow chart:
Consider there is a static string array like so: const courseCodes = ["ATA", "AKM", "BLG", ... ].
I have a fetch function; it basically does an HTTP request followed by parsing, and afterwards it returns an array of objects.
fetch works perfectly, invoking its callback with the expected object array; it even worked with Promises, which was far tidier.
The fetch function should be invoked with every element of the courseCodes array as its parameter. This task should be performed in parallel, since the separate fetch calls do not affect each other.
As a result, there should be a results array in the callback (or in the Promise's resolve parameter), which contains an array of arrays of objects. With those results, I should invoke my loadCourse function, passing the objects in the results array as its parameter. Those tasks should be performed serially, because loadCourse basically queries the database to check whether a similar object exists and adds it if it does not.
How can I perform this kind of task in Node.js? I could not maintain the asynchronous flow in a scenario like this. I've failed with the caolan/async library and with the bluebird and q promise libraries.
Try something like this, if you can follow it:
const courseCodes = ["ATA", "AKM", "BLG", ... ]

//stores the tasks to be performed.
var parallelTasks = [];
var serialTasks = [];

//keeps track of courses fetched & results.
var courseFetchCount = 0;
var results = {};

//your fetch function; `done` is called once fetching & parsing has finished.
function fetch(course_code, done) {
  //your code to fetch & parse goes here.
  //store the result for each course in the results object.
  results[course_code] = 'whatever result comes from your fetch & parse code...';
  done();
}

//your load function.
function loadCourse(results) {
  for (var index in results) {
    var result = results[index]; //result for a single course
    var task = (
      function (result) {
        return function () {
          saveToDB(result);
        };
      }
    )(result);
    serialTasks.push(task);
  }
  //execute serial tasks for saving results to the database.
  nextInSerial(null);
}

//pseudo function to save a result to the database.
function saveToDB(result) {
  //your code to store in db here.
}

//checks if fetch() is complete for all course codes in your array
//and then starts the serial tasks for saving results to the database.
function CheckIfAllCoursesFetched() {
  courseFetchCount++;
  if (courseFetchCount === courseCodes.length) {
    //now process courses serially
    loadCourse(results);
  }
}

//helper function that executes the remaining serial tasks one after another.
function nextInSerial(err) {
  if (err) throw new Error(err.message);
  var nextSerialTask = serialTasks.shift();
  if (nextSerialTask) {
    nextSerialTask();
    nextInSerial(null);
  }
}

//build the parallel tasks for fetching.
for (var index in courseCodes) {
  var course_code = courseCodes[index];
  var task = (
    function (course_code) {
      return function () {
        //CheckIfAllCoursesFetched runs once fetch has finished for this course.
        fetch(course_code, CheckIfAllCoursesFetched);
      };
    }
  )(course_code);
  parallelTasks.push(task);
}

//start executing the parallel tasks.
for (var task_index in parallelTasks) {
  parallelTasks[task_index]();
}
Or you may refer to the nimble npm module.
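Alternatively, since you mention that Promises worked well for you, here is a minimal Promise-based sketch of the same flow (parallel fetches via Promise.all, then serial saves chained one after another); fetchCourse and saveToDB are placeholders standing in for your own fetch/parse and database code.
// Sketch: fetch all course codes in parallel, then save the results serially.
const courseCodes = ["ATA", "AKM", "BLG"];

function fetchCourse(courseCode) {
  // placeholder: return a promise that resolves to the parsed object array for this course
  return Promise.resolve([{ courseCode }]);
}

function saveToDB(courseObjects) {
  // placeholder: return a promise that resolves once the objects are stored (or skipped)
  return Promise.resolve();
}

// parallel: one fetch per course code
Promise.all(courseCodes.map(fetchCourse))
  .then((resultsPerCourse) => {
    // serial: chain the saves so each one starts only after the previous one finishes
    return resultsPerCourse.reduce(
      (chain, courseObjects) => chain.then(() => saveToDB(courseObjects)),
      Promise.resolve()
    );
  })
  .then(() => console.log('all courses loaded'))
  .catch((err) => console.error(err));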

How to cache a mongoose query in memory?

I have the following queries, which start with the getById method firing; once that runs and extracts data from another document, it saves the result into the race document.
I want to be able to cache the data for ten minutes after I save it. I have taken a look at the cacheman library and I am not sure it is the right tool for the job. What would be the best way to approach this?
getById: function(opts, callback) {
  var id = opts.action;
  var raceData = {};
  var self = this;
  this.getService().findById(id, function(err, resp) {
    if (err)
      callback(null);
    else {
      raceData = resp;
      self.getService().getPositions(id, function(err, positions) {
        self.savePositions(positions, raceData, callback);
      });
    }
  });
},

savePositions: function(positions, raceData, callback) {
  var race = [];
  _.each(positions, function(item) {
    _.each(item.position, function(el) {
      race.push(el);
    });
  });
  raceData.positions = race;
  this.getService().modelClass.update({ '_id': raceData._id }, { 'positions': raceData.positions }, callback(raceData));
}
I have recently coded and published a module called Monc. You can find the source code over here. It provides several useful methods to store, delete, and retrieve data held in memory.
You may use it to cache Mongoose queries with simple chaining, such as:
test.find({}).lean().cache().exec(function(err, docs) {
  //docs are fetched into the cache.
});
Otherwise you may need to take a look at the core of Mongoose and override the prototype in order to provide a way to use cacheman as you originally suggested.
Create a node module and force it to extend Mongoose as:
monc.hellocache(mongoose, {});
Inside your module you should extend Mongoose.Query.prototype:
exports.hellocache = module.exports.hellocache = function(mongoose, options, Aggregate) {
  //require cacheman's in-memory store
  var CachemanMemory = require('cacheman-memory');
  var cache = new CachemanMemory();
  var m = mongoose;
  m.execAlter = function(caller, args) {
    //do your caching logic here
  };
  m.Query.prototype.exec = function(arg1, arg2) {
    return m.execAlter.call(this, 'exec', arguments);
  };
};
Take a look at Monc's source code, as it may be a good reference on how to extend and chain Mongoose methods.
I will explain with the npm redis package, which stores key/value pairs in the cache server. The keys are queries, and Redis stores only strings.
We have to make sure that keys are unique and consistent, so the key should contain the query as well as the name of the model the query is applied to.
When you query, inside the mongoose library there is
function Query(conditions, options, model, collection) {} //constructor function
which is responsible for the query. Inside this constructor,
Query.prototype.exec = function exec(op, callback) {}
is the function responsible for executing queries, so we have to patch it to perform these tasks:
first, check whether we have any cached data related to the query;
if yes, respond to the request right away with the cached data and return;
if not, run the query against the database, store the result in the cache, and then respond.
const redis = require("redis");
const redisUrl = "redis://127.0.0.1:6379";
const client = redis.createClient(redisUrl);
const util = require("util");

//client.get does not return a promise, so promisify it
client.get = util.promisify(client.get);

//keep a reference to the original exec
const exec = mongoose.Query.prototype.exec;

//mongoose code is written using classical prototype inheritance for setting up objects and classes inside the library.
mongoose.Query.prototype.exec = async function() {
  //create a unique and consistent key
  const key = JSON.stringify(
    Object.assign({}, this.getQuery(), {
      collection: this.mongooseCollection.name
    })
  );
  //see if we have a value for the key in redis
  const cachedValue = await client.get(key);
  //if we do, return it as mongoose documents;
  //the exec function is expected to return mongoose documents
  if (cachedValue) {
    const doc = JSON.parse(cachedValue);
    return Array.isArray(doc)
      ? doc.map(d => new this.model(d))
      : new this.model(doc);
  }
  //otherwise run the original exec, cache the result, and return it
  const result = await exec.apply(this, arguments);
  client.set(key, JSON.stringify(result), "EX", 6000); //saved to the cache server; make sure EX is capitalised and the expiry is given in seconds
  return result;
};
If we store values as an array of objects, we need to make sure that each object is individually converted to a mongoose document.
this.model is the model attached to the Query; calling new this.model(doc) converts a plain object into a mongoose document.
Note that if you are storing nested values, use client.hset and client.hget instead of client.set and client.get.
Now that we have monkey-patched Query.prototype.exec, you do not need to export this function; wherever you have a query operation in your code, mongoose will run the code above.
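For instance, under the assumption that the patch above is required once at startup and that a Blog model already exists, any ordinary query now goes through the cache; the require path and model name are hypothetical.
// Hypothetical usage: the first call hits MongoDB and populates Redis,
// a repeat call within the expiry window is served from the cache.
const Blog = require('./models/blog');

async function listBlogsByAuthor(authorId) {
  // awaiting the query calls exec() under the hood, which checks Redis first
  return Blog.find({ author: authorId });
}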
