Data goes null after a few successful requests - Node service app pool caching - node.js

I have a service built with Node and Express, using MongoDB as the database. The service is hosted on IIS.
The application has a side filter-panel section. Since the filters' master data does not change often (its size is in KBs), I use a basic in-memory Node caching technique (no npm package) to avoid hitting the database on every page-load request. Below is the sample Node code:
// main index.js file
// SetFiltersList() is called when the Node service is first initialized on IIS, or when the app pool recycles.
(async () => {
  await init.SetFiltersList();
})();

// init.js (utility file)
let filtersList = null; // module-level variable that holds the cached list of filters
const SetFiltersList = async () => {
  // This is a MongoDB database call
  const result = await defaultState.DEFAULT_STATE.GET("FiltersList");
  filtersList = result.filters;
}
// Get filters call
const getFiltersList = () => filtersList;
module.exports = {
  FiltersList: getFiltersList
};

// Controller.js
const GETFILTERLIST = async (req, res, next) => {
  res.send(init.FiltersList());
}

// Controller route
approuter.route('/GetFilterList/')
  .get(Controller.GETFILTERLIST);
Problem
After a few calls, the filters start returning null. Strangely, when I recycle the application pool, the filters come back for some time, and then this repeats after a period of time.
Any thoughts on what's going wrong here and how I can overcome it?

Related

Custom Computed Etag for Express.js

I'm working on a simple local image server that provides images to a web application along with some JSON. The web application has pagination that issues a GET request "/images?page=X&limit=200" to an Express.js server, which returns the JSON files in a single array.
I want to take advantage of the browser's internal caching, so that if a user goes back to a previous page, Express.js returns an ETag. I was wondering how this could be achieved with Express.js? For this application, I really just want the computation of the ETag to take in three parameters: the page, the directory, and the limit (it doesn't need to consider the whole JSON body). Also, this application is for local use only, so I want the server to do the heavy lifting, since I figured it would be faster than the browser. I did see https://www.npmjs.com/package/etag, which seems promising, but I'm not sure how to use it with Express.js.
Here's a boilerplate of the express.js code I have below:
var express = require('express');
var app = express();
var fs = require('fs');
app.get('/', async (req, res) => {
  let promises = [];
  let directory = fs.readdirSync("mypath");
  let page = parseInt(req.query.page);
  let limit = parseInt(req.query.limit);
  for (let i = 0; i < limit; ++i) {
    promises.push(new Promise((resolve) => {
      fs.readFile(directory[i + page * limit], (err, data) => {
        // format the data so it is easy to use in the UI
        resolve(JSON.parse(data));
      });
    }));
  }
  let results = await Promise.all(promises);
  // compute an etag here and attach it to the results
  res.send(results);
});
app.listen(3000);
When your server sends an ETag to the client, it must also be prepared to check the ETag that the client sends back to the server in the If-None-Match header in a subsequent "conditional" request.
If it matches, the server shall respond with status 304; otherwise there is no benefit in using ETags.
var serverEtag = "<compute from page, directory and limit>";
var clientEtag = req.get("If-None-Match");
if (clientEtag === serverEtag) res.status(304).end();
else {
  // Your code from above
  res.set("ETag", serverEtag);
  res.send(results);
}
The computation of the serverEtag could be based on the time of the last modification in the directory, so that it changes whenever any of the images in that directory changes. Importantly, this could be done without carrying out the fs.readFile statements from your code.
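For example, here is a sketch of such a computation (the md5 hash and the use of the newest mtime are assumptions for illustration, not something the answer prescribes):
var fs = require('fs');
var path = require('path');
var crypto = require('crypto');

// Sketch: derive an ETag from page, limit, and the newest modification
// time in the directory, so it changes whenever any image changes.
function computeEtag(dirPath, page, limit) {
  var latestMtime = fs.readdirSync(dirPath)
    .map((name) => fs.statSync(path.join(dirPath, name)).mtimeMs)
    .reduce((a, b) => Math.max(a, b), 0);
  var hash = crypto
    .createHash('md5')
    .update(dirPath + ':' + page + ':' + limit + ':' + latestMtime)
    .digest('hex');
  return '"' + hash + '"'; // ETags are quoted strings per RFC 7232
}
Note that this only stats the directory entries; it never reads file contents, which is what makes it much cheaper than hashing the response body.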

Can you keep a PostgreSQL connection alive from within a Next.js API?

I'm using Next.js for my side project. I have a PostgreSQL database hosted on ElephantSQL. Inside the Next.js project, I have a GraphQL API set up, using the apollo-server-micro package.
Inside the file where the GraphQL API is set up (/api/graphql), I import a database helper-module. Inside that, I set up a pool connection and export a function which uses a client from the pool to execute a query and return the result. This looks something like this:
// import node-postgres module
import { Pool } from 'pg'
// set up pool connection using environment variables with a maximum of three active clients at a time
const pool = new Pool({ max: 3 })
// query function which uses the next available client to execute a single query and return the rows on success
export async function queryPool(query) {
  let payload
  try {
    // pool.query transparently checks out a client, runs the query, and releases the client back to the pool
    const res = await pool.query(query)
    payload = res.rows
  } catch (e) {
    console.error(e)
  }
  return payload
}
The problem I'm running into is that the Next.js API doesn't (always) keep the connection alive, but rather opens a new one (either for every connected user, or maybe even for every API query), which results in the database quickly running out of connections.
I believe that what I'm trying to achieve is possible for example in AWS Lambda (by setting context.callbackWaitsForEmptyEventLoop to false).
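For reference, here is a minimal sketch of what that looks like in a Lambda handler (illustrative only, not part of my Next.js project):
// Module scope survives across warm invocations, so the pool gets reused
const { Pool } = require('pg')
const pool = new Pool({ max: 3 })

exports.handler = async (event, context) => {
  // Tell Lambda not to wait for the pool's idle sockets before freezing the container
  context.callbackWaitsForEmptyEventLoop = false
  const res = await pool.query('SELECT NOW()')
  return { statusCode: 200, body: JSON.stringify(res.rows[0]) }
}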
It is very possible that I don't have a proper understanding of how serverless functions work and this might not be possible at all but maybe someone can suggest me a solution.
I have found a package called serverless-postgres and I wonder if that might be able to solve it but I'd prefer to use the node-postgres package instead as it has much better documentation. Another option would probably be to move away from the integrated API functionality entirely and build a dedicated backend-server, which maintains the database connection but obviously this would be a last resort.
I haven't stress-tested this yet, but it appears that the MongoDB Next.js example solves this problem by attaching the database connection to global in a helper function. The important bit in their example is here.
Since the pg connection is a bit more abstract than mongodb, it appears this approach just takes a few lines for us pg enthusiasts:
// eg, lib/db.js
const { Pool } = require("pg");

if (!global.db) {
  global.db = { pool: null };
}

export function connectToDatabase() {
  if (!global.db.pool) {
    console.log("No pool available, creating new pool.");
    global.db.pool = new Pool();
  }
  return global.db;
}
then in, eg, our API route, we can just:
// eg, pages/api/now
export default async (req, res) => {
  const { pool } = connectToDatabase();
  try {
    const time = (await pool.query("SELECT NOW()")).rows[0].now;
    res.end(`time: ${time}`);
  } catch (e) {
    console.error(e);
    res.status(500).end("Error");
  }
};
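One note on the sketch above: new Pool() with no arguments works because node-postgres falls back to the standard PG* environment variables (PGHOST, PGUSER, PGPASSWORD, PGDATABASE, PGPORT), which lines up with the environment-variable configuration mentioned in the question.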

How to properly use dataloaders across multiple users?

In DataLoader's documentation on caching per request, the following example is given to show how to use dataloaders with Express.
function createLoaders(authToken) {
  return {
    users: new DataLoader(ids => genUsers(authToken, ids)),
  }
}

var app = express()

app.get('/', function(req, res) {
  var authToken = authenticateUser(req)
  var loaders = createLoaders(authToken)
  res.send(renderPage(req, loaders))
})

app.listen()
I'm confused about passing authToken to the genUsers batch function. How should a batch function be composed so that it uses authToken and returns the results corresponding to each user?
What the example is saying is that genUsers should use the credentials of the current request's user (identified by their auth token) to ensure they can only fetch data that they're allowed to see. Essentially, the loader gets initialised at the start of the request, is discarded at the end, and is never recycled between requests.
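For illustration, a sketch of what such a batch function might look like (fetchUsersForViewer is a hypothetical helper standing in for your backend call, not part of DataLoader):
async function genUsers(authToken, ids) {
  // One backend call for the whole batch, authenticated as the current viewer
  const users = await fetchUsersForViewer(authToken, ids)
  const byId = new Map(users.map(user => [user.id, user]))
  // DataLoader requires one result per requested id, in the same order as ids
  return ids.map(id => byId.get(id) || new Error('No user with id ' + id))
}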

FeathersJS mount service at root

I am building a microservices architecture with FeathersJS, and I only need one service per application, so I want to mount that service at the root (/) of each app.
I tried using / as the path when generating the (Mongoose) service and deleting the app.use('/', express.static(app.get('public'))); line from app.js. It works as it should when I access the base path (it lists all entries), but when I try /421jsadi23o1sj to find a single entry, it returns 404.
I think it is treating that ID as a service path and looking for a service registered there.
This is my businesses.service.js file:
const createService = require('feathers-mongoose');
const createModel = require('../../models/businesses.model');
const hooks = require('./businesses.hooks');
module.exports = function (app) {
  const Model = createModel(app);
  const paginate = app.get('paginate');

  const options = {
    name: 'businesses',
    Model,
    paginate
  };

  // Initialize our service with any options it requires
  app.use('/', createService(options));

  // Get our initialized service so that we can register hooks and filters
  const service = app.service('/');
  service.hooks(hooks);

  app.publish(() => {
    // Here you can add event publishers to channels set up in `channels.js`
    // To publish only for a specific event use `app.publish(eventname, () => {})`
    // e.g. to publish all service events to all authenticated users use
    // return app.channel('authenticated');
  });
};
Has anyone come across this issue? Any ideas about how it can be solved?

NodeJS Express Dependency Injection and Database Connections

Coming from a non-Node background, my first instinct is to define my service like this:
MyService.js
module.exports = function MyService(dbConnection) {
  // service uses the db
}
Now, I want one open db connection per request, so I define in middleware:
res.locals.db = openDbConnection();
And in some consuming Express API code:
api.js
var MyService = require('./services/MyService')
...
router.get('/foo/:id?', function (req, res) {
  var service = new MyService(res.locals.db);
});
Now, given that Node's preferred method of dependency injection is the require(...) statement, it seems that I shouldn't be using the constructor of MyService to inject the db.
So let's say I want to have
var db = require('db');
at the top of MyService and then use it somehow, like db.current... but how would I tie the db to the current res.locals object now that db is a module itself? What's a recommended way of handling this kind of thing in Node?
Updated Answer: 05/02/15
If you want to attach a DB connection to each request object and then use that connection in your service, the connection will have to be passed to MyService somehow. The example below shows one way of doing that. If we try to use db.current or something to that effect, we'll be storing state in our DB module. In my experience, that will lead to trouble.
Alternatively, I lay out the approach I've used (and still use) in this previous answer. What this means for this example is the following:
// api.js
var MyService = require('./services/MyService')
...
router.get('/foo/:id?', function (req, res) {
  MyService.performTask(req.params.id);
});

// MyService.js
var db = require('db');
module.exports = {
  performTask: function(id) {
    var connection = db.getOpenConnection();
    // Do whatever you want with the connection.
  }
}
With this approach, we've decoupled the DB module from the api/app/router modules and only the module that actually uses it will know it exists.
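For illustration, the db module above might look something like this (the pg driver and the getOpenConnection name are assumptions for the sketch, not a specific library's API):
// db.js
var pg = require('pg');

// Module-level state: every require('db') in the process shares this pool
var pool = new pg.Pool({ max: 10 });

module.exports = {
  getOpenConnection: function () {
    // Hand back the shared pool; callers can run pool.query(...) directly
    return pool;
  }
};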
Previous Answer: 05/01/15
What you're talking about could be done using an express middleware. Here's what it might look like:
var db = require('db');

// Attach a DB connection to each request coming in
router.use(function (req, res, next) {
  res.locals.db = db.getOpenConnection();
  next();
});

// Later on..
router.get('/foo/:id?', function (req, res) {
  // We should now have something attached to res.locals.db!
  var service = new MyService(res.locals.db);
});
I personally have never seen something like new MyService in Express applications before. That doesn't mean it can't be done, but you might consider an approach like this:
// router.js
var MyService = require('MyService');

router.get('/foo/:id?', function (req, res) {
  MyService.foo(res.locals.db);
});

// MyService.js
module.exports.foo = function (connection) {
  // I have a connection!
}
