Sails.js requires server restart after running command to refresh database - node.js

From this question, "Sails js using models outside web server", I learned how to run a command from the terminal to update records. However, when I do this, the changes don't show up until I restart the server. I'm using the sails-disk adapter and v0.9.

According to the source code, an application using the sails-disk adapter loads the data from the file only once, when the corresponding Waterline collection is created. After that, all updates and destroys happen in memory; the data is then dumped to the file, but never re-read.
So what's happening in your case is that once your server is running, it doesn't matter that you're changing the DB file (.tmp/disk.db) from your CLI instance: the lifted Sails server won't know about the changes until it's restarted.
Long story short, the solution is simple: use another adapter. I would suggest checking out sails-mongo or sails-redis (though the latter is still in development), since both Mongo and Redis have data auto-expiry functionality (http://docs.mongodb.org/manual/tutorial/expire-data/, http://redis.io/commands/expire). Besides, sails-disk is not production-suitable anyway, so sooner or later you would need something else.
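For example, MongoDB's auto-expiry is just a TTL index on a date field; a minimal sketch in the mongo shell, assuming a hypothetical records collection with a createdAt field, so documents are removed roughly five minutes after creation:
db.records.createIndex({ createdAt: 1 }, { expireAfterSeconds: 300 })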

One way to accomplish deleting "expired records" over time is by rolling your own "cron-like job" in /config/bootstrap.js. In pseudo-code it would look something like this:
module.exports.bootstrap = function (cb) {
  setInterval(function () {
    // <insert model delete code here>
  }, 300000);
  cb();
};
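A minimal sketch of what that delete code might look like, assuming a hypothetical Record model with a createdAt attribute (the model name and the one-hour cutoff are made up):
module.exports.bootstrap = function (cb) {
  // every 5 minutes, destroy records older than one hour (hypothetical Record model)
  setInterval(function () {
    Record.destroy({ createdAt: { '<': new Date(Date.now() - 60 * 60 * 1000) } }, function (err) {
      if (err) sails.log.error(err); // log instead of letting the error take down the server
    });
  }, 300000);
  cb();
};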
The downside to this approach is that if it throws, it will stop the server. You might also take a look at kue.

Related

How to prevent Mocha from preserving require cache between test files?

I am running my integration test cases in separate files for each API.
Before it begins I start the server along with all services, like databases. When it ends, I close all connections. I use Before and After hooks for that purpose. It is important to know that my application depends on an enterprise framework where most "core work" is written and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs I get a lot of errors related to database connections. I tried to fix it in many different ways; most of them failed because of the limitations the Framework imposed on me.
While debugging, I found out that Mocha actually loads all the files first, which means that all code written before the hooks and the describe calls is executed. So when the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact on this Framework because many objects are actually singletons, so if an after hook closes a connection to a database, it closes the connection inside the singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote some very ugly code that helps me until I can refactor the Framework. It goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
  Object.keys(require.cache).forEach(function (key) {
    delete require.cache[key];
  });
}

before(() => {
  clearRequireCache();
});
It is working, but it seems like very bad practice, and I don't want this in the code.
As a second idea, I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file.
"scripts": {
"test-integration" : "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution for this problem, so I can get rid of that code and postpone the refactoring a bit.

Mongoose - Keep local database in sync with remote database

I have access to two separate databases that I'd like to keep in sync, a new one and an existing one, which will be in separate physical locations. The new one is going to be used to service an external API, so to cut down on request time, I think it makes sense to only query the local database for API requests.
My initial approach was to use mongoose.createConnection and limit the local collection to minor metadata and directly access the remote collection, but that's what I'm now looking to avoid.
Another approach might be to use mongoose.createConnection to periodically query the remote db and update the local one, but it could be costly if I want to make frequent updates.
There are ways to cut down the cost - for example, there is a lastUpdated property in the relevant collection on the existing database, which could be used to limit the remote query to recently updated records such as:
RemoteCollection.find({
  lastUpdated: { $gte: Date.now() - lookbackPeriod }
})
But I'm wondering if there's any native functionality in mongoose/MongoDB that can be used to make the updates more efficiently. I also thought about mongodump and mongorestore to keep a full local copy of the records I need, but that also seems costly.
Any help is appreciated.
After a bit of reading and thanks to Jake's comment, it looks like it's working. I need to do some more setup, but the code below should work and is based on this section from the docs:
https://mongoosejs.com/docs/models.html#change-streams
The first step would be to start mongod with the --replSet flag:
mongod --replSet "rs0" --bind_ip localhost,<hostname(s)|ip address(es)>
Then close and restart mongo and run rs.initiate() on the database. You can then check the status of the replica set with rs.status(). If that command works and returns a result, the replica set functionality should be there.
Then within Node, you can do something like this:
// The docs reference creating a new model but you can just import an existing one
const RemotePerson = require('./models/RemotePerson');
const LocalPerson = require('./models/LocalPerson');

RemotePerson.watch().on('change', data => {
  if (data.operationType === "insert") {
    LocalPerson.create(data.fullDocument);
  } else if (data.operationType === "update") {
    // documentKey is an object like { _id: ... }, so pass the _id itself
    LocalPerson.findByIdAndUpdate(data.documentKey._id, {
      $set: data.updateDescription.updatedFields
    }).exec(); // exec() so the query actually runs without a callback
  }
});
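If deletes also need to propagate, the same change stream can handle them; a minimal sketch (assumes Mongoose 5+, where findByIdAndDelete exists; it could just as well be an extra branch in the handler above):
RemotePerson.watch().on('change', data => {
  if (data.operationType === "delete") {
    // the remote document is gone, so drop the local copy as well
    LocalPerson.findByIdAndDelete(data.documentKey._id).exec();
  }
});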

Feathers JS nested Routing or creating alternate services

The project I'm working on uses the Feathers JS framework on the server side. Many of the services have hooks (or middleware) that make other calls and attach data before sending the result back to the client. I have a new feature that needs to query a database, but only for a few specific things, so I don't want to use the already built-out "find" method for this query: that "find" method has many hooks and calls to other databases that fetch data I don't need for this new feature.
My two solutions so far:
I could use the standard "find" query and just write if statements in all the hooks that check for a specific string parameter passed in from the client side, so these hooks are deactivated on this specific call. But that seems tedious, especially if I find the same need in several other services that have already been built out.
I could initialize a second service below my main service. So if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
and then attach a whole new set of hooks for that service. Basically it's a whole new service whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers, I'm not sure whether that is a performant or optimal solution.
Is there a better solution than those options, or is option 1 or option 2 the most correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');

app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.
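For example, a server-side caller could then opt out of population by passing the flag in params (a sketch: populateEntries and the flag come from the snippet above, the data passed to create is made up, and a browser client would need a hook that maps a query flag into params):
// runs populateEntries as usual
app.service('myservice').create({ text: 'hello' });

// skips population, because params.populate === false
app.service('myservice').create({ text: 'hello' }, { populate: false });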

How to handle Node js app errors to prevent crashing

I am new to Node and to what I would call real server-side programming (vs. PHP). I was setting up a user database with MongoDB, Mongoose and a simple Mongoose user plugin that came with a schema and password handling. You can add validation to Mongoose fields like so:
schema.path('email').validate(function (email) {
  if (this.skipValidation) return true
  return email.trim().length
}, 'Please provide a valid email')
(This is not my code.) I noticed, though, that when I passed an invalid or blank email, .trim() failed and the entire server crashed. This is very worrisome to me because things like this don't happen in your good ol' WAMP stack: if you have a bug, 99.9% of the time it's just the browser that is affected.
Now that I am delving into lower level programming, do I have to be paranoid about every incoming variable to a simple function? Is there a tried-and-true error system I should follow?
Just check the variable before calling trim on it, for example that it is not null:
if (!email) {
  return false;
}
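Dropped into the validator from the question, a minimal sketch would be (the only change is the guard, which fails validation for a missing value instead of crashing):
schema.path('email').validate(function (email) {
  if (this.skipValidation) return true;
  if (!email) return false; // missing/blank value: invalid, but no crash
  return email.trim().length > 0;
}, 'Please provide a valid email');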
And if you want to keep your app running forever, use PM2 instead.
If you are interested in running forever, read this interesting post http://devo.ps/blog/goodbye-node-forever-hello-pm2/
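A minimal usage sketch (app.js stands in for your entry point):
pm2 start app.js   # restarts the process automatically if it crashes
pm2 logs           # tail the captured output and error logs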
You may consider using forever to keep your node.js program running. Even if it crashes, it restarts automatically and the error is logged as well.
Note: Although you could actually catch all exceptions to prevent Node.js from crashing, it is not recommended.
One of our strategies is to make use of Node.js Domain to handle errors - http://nodejs.org/api/domain.html
You should set up an error-logging module like Winston, which, once configured, produces useful error/exception logs.
Have a look at this answer for how to catch errors within your Node implementation; it is specific to Express but still relevant.
Once you catch exceptions, you prevent unexpected crashes.
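In Express, that typically means registering an error-handling middleware after your routes; a minimal sketch (the four-argument signature is what marks it as an error handler, and the logging call is just a placeholder for something like Winston):
app.use(function (err, req, res, next) {
  console.error(err.stack); // swap in winston.error(...) or a similar logger
  res.status(500).send('Something went wrong');
});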

Temporary File Download

Is there a service that creates basically a one-time download of a file, preferably something I can use from NodeJS?
I've done some research on FilePicker, and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with NodeJS, but I'm using Meteor at the same time so many Node things probably will conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// create a link
downloads.insert({ url: "/mydownload.zip", downloaded: false });

Meteor.Router.add('/file/:id', 'GET', function (id) {
  var download = downloads.findOne(id);
  if (download) {
    if (download.downloaded) {
      this.response.send("You've already downloaded me");
    } else {
      // mark it as used so the link only works once
      downloads.update(download._id, { $set: { downloaded: true } });
      // I guess you could just redirect, or stream the file for an extra layer of surety
      this.response.redirect(download.url);
    }
  }
});
On the client you can use /file/{{_id}}, with the _id of the document from downloads, as the link.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using the security policies to restrict downloading the file to a specific time interval.
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require('fs');
The fs variable should be declared on the server only. The fs package is used by Meteor and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete the file after a certain amount of time by using Meteor.setTimeout.
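A minimal sketch of that idea, server side only (fileContents and the ten-minute lifetime are made-up placeholders):
var name = Meteor.uuid() + '.zip'; // hard-to-guess file name
var path = '/tmp/' + name;

fs.writeFileSync(path, fileContents); // whatever data you need to serve

// remove the file after ten minutes so the link stops working
Meteor.setTimeout(function () {
  fs.unlinkSync(path);
}, 10 * 60 * 1000);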
The question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: they offer a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
This, in combination with something like $.download(), will make for a seamless experience, and in turn prevents unauthorized downloads.
