sails: disable `blueprints actions` in production, since it creates a huge security footprint? - node.js

Getting acquainted with Sails for Node.
One thing I need to get used to is the 'automagic' way in which routes for controller methods are set up using blueprints.
For example, from the docs, if action blueprints are enabled (which they are by default), GET, POST, PUT, and DELETE routes will be generated for every one of a controller's actions.
E.g., from the docs, when you've got a controller method `EmailController.send`, the following routes are created:
* `GET /email/send/:id?`
* `POST /email/send/:id?`
* `PUT /email/send/:id?`
* `DELETE /email/send/:id?`
The docs specifically state that actions are enabled by default and are OK for production; however, you must take great care not to inadvertently expose unsafe controller logic to GET requests.
Normally I would write a controller method for ONE specific HTTP verb (e.g. POST). That's clearly not compatible with this automagic wiring, since these methods would be exposed to GETs (and PUTs and DELETEs) as well, which would leave a huge security footprint, imho.
So: what's the practical use of enabling these actions? To me, it seems like a huge security risk. On the other hand, I can (theoretically) imagine writing all controller methods with conditional logic to discriminate between HTTP verbs, but for most controller methods this just doesn't make sense.
So help me out: what's the advantage of working with these actions, which Sails seems to nudge me towards? Or is it just a way to get going quickly, but really not meant for production?
Thanks for helping me wrap my head around this.

Action blueprints automatically create routes to all the available controller methods. I personally turn them off and do my routing manually (see the sketch after the list below).
RESTful blueprints automatically generate the controller methods themselves, which then get routes created for them by the action blueprints. I believe these are the REST defaults:
* GET /boat/:id? -> BoatController.find
* POST /boat -> BoatController.create
* PUT /boat/:id -> BoatController.update
* DELETE /boat/:id -> BoatController.destroy
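If you do turn action blueprints off and route manually, a minimal sketch (assuming a Sails 0.x-style setup and the EmailController example from the question) could look like this:

// config/blueprints.js -- switch off the automatic per-action routes
module.exports.blueprints = {
  actions: false,   // no GET/POST/PUT/DELETE route per controller action
  rest: true,       // keep or disable the RESTful CRUD routes as you see fit
  shortcuts: false  // the /model/create?... style routes; avoid these in production
};

// config/routes.js -- bind each action to exactly one verb yourself
module.exports.routes = {
  'POST /email/send': 'EmailController.send'
};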

Related

Node.js express app architecture with testing

I'm creating a new project with automated testing.
It uses plain Express.
The question is how to organize the code so that it can be tested properly (with mocha).
Almost every controller needs access to the database in order to fetch some data before proceeding. But while testing, reaching the actual database is unwanted.
There are two ways as I see it:
* Stubbing the functions that read from or write to the database.
* Building two separate controller instances: one wired up for the real endpoints, another one for the tests.
Just like that:
let myController = new TargetController(AuthService, DatabaseService, ...);
myController.targetMethod();

let myTestController = new TargetController(FakeAuthService, FakeDatabaseService, ...);
myTestController.targetMethod(); // this method will use fake services that don't have any remote-connection functionality
Every dependency passed in is stored in a private field by the controller's constructor. The controller methods then use those fields without caring whether they hold real or fake services, i.e. whether the call is a test or a production one.
Is that a good approach, or should it be redone?
Alright, it's considered good practice, as it is actually the dependency injection pattern.
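A minimal sketch of that pattern, using the hypothetical names from the question (TargetController, FakeDatabaseService, etc.), could look like this:

class TargetController {
  constructor(authService, databaseService) {
    // every dependency is kept on the instance, so the methods below
    // don't care whether they were given real or fake services
    this.authService = authService;
    this.databaseService = databaseService;
  }

  async targetMethod(id) {
    const user = await this.databaseService.findById(id);
    return this.authService.isAllowed(user) ? user : null;
  }
}

// in the app: new TargetController(realAuthService, realDatabaseService)
// in a mocha test, pass plain fakes (or sinon stubs) instead:
const fakeDb = { findById: async () => ({ id: 1, name: 'test' }) };
const fakeAuth = { isAllowed: () => true };
const controller = new TargetController(fakeAuth, fakeDb);
// assert.deepEqual(await controller.targetMethod(1), { id: 1, name: 'test' });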

AsyncAPI - How to distribute definition over different files

I am thinking of using AsyncAPI in my project for documenting the RabbitMQ messaging system.
What I need to do is, rather than creating a single yaml/json file for all the messages in the application, I want to create the AsyncAPI definition for each message in its own file, very much like it's done in Swagger.
I am using Swagger 2.0 on a Node Express server for the REST API definitions. For the definition of the APIs, I write comments on each API with the @swagger decorator for Swagger to pick up the documentation. For example:
/**
 * @swagger
 * /user/register:
 *   post:
 *     description: Register a new user
 * ...
 * ...
 */
I also have a common definition in a routes.js file, where I define all the reusables.
Such definitions sit at the top of each API endpoint file. Swagger collects all this documentation, distributed over various files, and creates a single set of docs for all the APIs in the application.
I was wondering if something similar can be done in AsyncAPI and if yes, how do I achieve that.
Would really appreciate your response on this.
Thanks,
Rachit
What I need to do is, rather than creating a single yaml/json file for all the messages in the application, I want to create the AsyncAPI definition for each message in its own file, very much like it's done in Swagger.
Splitting the definition of schema objects across several files is possible in AsyncAPI in exactly the same way as in OpenAPI.
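A minimal sketch of such a split, with hypothetical file names, assuming an AsyncAPI 2.x document and relative $ref resolution:

asyncapi.yaml (the root document):

asyncapi: '2.0.0'
info:
  title: Account Service
  version: '1.0.0'
channels:
  user/signedup:
    subscribe:
      message:
        $ref: './messages/user-signedup.yaml'

messages/user-signedup.yaml (one message per file):

payload:
  type: object
  properties:
    displayName:
      type: string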
I am using Swagger 2.0 on a Node Express server for the REST API definitions. For the definition of the APIs, I write comments on each API with the @swagger decorator for Swagger to pick up the documentation. For example:
As of now, however, there is no code-first tool that generates AsyncAPI from JS source code.

Feathers JS nested Routing or creating alternate services

The project I'm working on uses the Feathers JS framework server side. Many of the services have hooks (or middleware) that make other calls and attach data before sending back to the client. I have a new feature that needs to query a database, but only for a few specific things, so I don't want to use the already built-out "find" method for this query: that "find" method has many other unneeded hooks and calls to other databases to fetch data I don't need for this new feature.
My two solutions so far:
I could use the standard "find" query and write if-statements in all hooks that check for a specific string parameter passed in from the client side, so those hooks are deactivated on this specific call. But that seems tedious, especially if I find this need in several other services that have already been built out.
I initialize a second service below my main service, so if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
and then attach a whole new set of hooks to that service. Basically it's a whole new service whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers I'm not sure whether that is a performant or optimal solution.
Is there a better solution than those options? Or is option 1 or option 2 the more correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');

app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.
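If that flag should come from the client, one option (a sketch only, assuming Feathers 2.x-style hooks and a hypothetical `$populate` query flag) is to translate the query parameter into `params.populate` in a before hook:

app.service('myservice').before({
  find(hook) {
    const query = hook.params.query || {};
    // map a client-side `$populate=false` flag onto params.populate,
    // which the conditional after-hook above checks
    if (query.$populate === 'false' || query.$populate === false) {
      hook.params.populate = false;
      delete query.$populate; // keep it out of the database query
    }
  }
});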

How should I use Swagger with Hapi?

I have a working ordinary Hapi application that I'm planning to migrate to Swagger. I installed swagger-node using the official instructions, and chose Hapi when executing 'swagger project create'. However, I'm now confused because there seem to be several libraries for integrating swagger-node and hapi:
hapi-swagger: the most popular one
hapi-swaggered: somewhat popular
swagger-hapi: unpopular and not that active but used by the official Swagger Node.js library (i.e. swagger-node) as default for Hapi projects
I thought swagger-hapi was the "official" approach, until I tried to find information on how to do various configurations on Hapi routes (e.g. authorization, scoping, etc.). It seems also that the approaches are fundamentally different, swagger-hapi taking a Swagger definition as input and generating the routes automatically, whereas hapi-swagger and hapi-swaggered seem to take a similar approach to each other by only generating Swagger API documentation from plain old Hapi route definitions.
Considering the amount of contributors and the number of downloads, hapi-swagger seems to be the way to go, but I'm unsure on how to proceed. Is there an "official" Swagger way to set up Hapi, and if there is, how do I set up authentication (preferably by using hapi-auth-jwt2, or other similar JWT solution) and authorization?
EDIT: I also found swaggerize-hapi, which seems to be maintained by PayPal's open source kraken.js team, which indicates that it might have some kind of corporate backing (always a good thing). swaggerize-hapi seems to be very similar to hapi-swagger, although the latter seems to provide more out-of-the-box functionality (mainly Swagger Editor).
Edit: Point 3 from your question, and understanding what swagger-hapi actually does, is very important. It does not directly serve the swagger-ui HTML. It is not intended to, but it enables the whole swagger idea (which the other projects in points 1 and 2 are actually somewhat reversing). Please see below.
It turns out that when you are using swagger-node and swagger-hapi you do not need any of the other packages you mentioned, except for using swagger-ui directly (which is used by all the others anyway; they wrap it in their dependencies).
I want to share my understanding so far in this hapi/swagger puzzle, hope that these 8 hours that I spent can help others as well.
Libraries like hapi-swaggered, hapi-swaggered-ui and hapi-swagger all follow the same approach, which can be described as:
You document your API while you are defining your routes.
They sit somewhat apart from the main idea of swagger-node and the boilerplate hello_world project created with swagger-cli, which you mentioned you use.
swagger-node and swagger-hapi (NOTE that it's different from hapi-swagger), on the other hand, say:
You define all your API documentation and routes **in a single centralized place: swagger.yaml**
and then you just focus on writing controller logic. The boilerplate project provided by swagger-cli already exposes this centralized swagger.yaml as JSON through the /swagger endpoint.
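For reference, the controller side of that centralized swagger.yaml is just a plain module whose exported function names match the operationId values in the yaml; a rough sketch in the style of the hello_world sample (names here follow that sample):

// api/controllers/hello_world.js
module.exports = { hello };

function hello(req, res) {
  // swagger-node has already validated the request against swagger.yaml,
  // so the parsed parameters are available on req.swagger
  const name = (req.swagger.params.name && req.swagger.params.name.value) || 'stranger';
  res.json({ message: 'Hello, ' + name + '!' });
}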
Now, because the swagger-ui project, which all the above packages use for showing the UI, is just a bunch of static HTML, you have two options for using it:
1) self-host this static HTML from within your app;
2) host it on a separate web app, or even load the index.html directly from the file system.
In both cases you just need to feed swagger-ui your swagger JSON, which, as said above, is already exposed by the /swagger endpoint.
The only caveat if you choose option 2) is that you need to enable CORS for that endpoint, which happens to be very easy: just change your default.yaml to also make use of the cors bagpipe. Please see this thread for how to do this.
As @Kitanotori said above, I also don't see the point of documenting the code programmatically. The idea of describing everything in one place and making both the code and the documentation engine understand it is great.
We have used Inert, Vision, hapi-swagger.
server.ts
import * as Inert from '@hapi/inert';
import * as Vision from '@hapi/vision';
import Swagger from './plugins/swagger';

// ...
// hapi server setup
// ...

const plugins: any[] = [Inert, Vision, Swagger];
await server.register(plugins);

// ...
// other setup
./plugins/swagger
import * as HapiSwagger from 'hapi-swagger';
import * as Package from '../../package.json';

const swaggerOptions: HapiSwagger.RegisterOptions = {
  info: {
    title: 'Some title',
    version: Package.version
  }
};

export default {
  plugin: HapiSwagger,
  options: swaggerOptions
};
We are using Inert, Vision and hapi-swagger to build and host swagger documentation.
We load those plugins in exactly this order, do not configure Inert or Vision and only set basic properties like title in the hapi-swagger config.

Node.js: Connect-Auth VS. EveryAuth

Can anyone give a good comparison between:
https://github.com/ciaranj/connect-auth
and https://github.com/bnoguchi/everyauth
These seem to be the only options for Express/Connect.
I'm the author of everyauth.
I wrote everyauth because I found connect-auth lacking in terms of:
* Easy and powerful configuration. To get particular, one area where it was lacking in configurability was setting Facebook's "scope" security settings dynamically.
* Good debugging support. I found connect-auth not-so-straightforward to debug when pinpointing which part of the auth process was failing. This is mostly due to the way connect-auth sets up its nested callbacks.
With everyauth, I tried to create a system that solved the issues I ran into with connect-auth.
On configuration - every everyauth auth module is defined as a series of steps and configurable parameters. As a result, you have surgery-like precision over what parameters (e.g., hostname, callback url, etc.) or steps (e.g., getAccessToken, addToSession) you want altered. With connect-auth, if you want to change one thing besides the few built in configurable parameters each auth strategy provides, you have to re-write the entire this.authenticate function that defines the logic for all of that strategy. In other words, it has less precision of configurability than everyauth. In other cases, you cannot use connect-auth, as is -- for example, achieving dynamic facebook "scope" configurability - i.e., asking users for more facebook permissions progressively as they get to portions of your app that require more permissions.
In addition to configurable steps and parameters, you can also take advantage of everyauth's auth module inheritance. All modules inherit prototypically from the everymodule auth module. All OAuth2-based modules inherit from the oauth2 module. Let's say you want all oauth2 modules to behave differently. All you need to do is modify the oauth2 auth module, and all oauth2-based modules will inherit that new behavior from it.
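For example (adapted from everyauth's README, so treat the exact setter names as illustrative rather than authoritative), overriding a step like findOrCreateUser, or making the Facebook scope dynamic, is just another link in the configuration chain:

var everyauth = require('everyauth');

everyauth.facebook
  .appId(conf.fb.appId)            // conf is your own config object
  .appSecret(conf.fb.appSecret)
  .scope(function (req, res) {     // the dynamic "scope" configurability mentioned above
    return req.session.wantsPhotos ? 'email,user_photos' : 'email';
  })
  .findOrCreateUser(function (session, accessToken, accessTokenExtra, fbUserMetadata) {
    // return a user (or a promise for one) however your app stores users
    return usersByFbId[fbUserMetadata.id] ||
      (usersByFbId[fbUserMetadata.id] = addUser('facebook', fbUserMetadata));
  })
  .redirectPath('/');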
On debugging - everyauth, in my opinion, has better debugging visibility because of the very fact that it segments each module explicitly into the steps of which it is composed. By setting
everyauth.debug = true;
you get output of what steps in the authentication process have completed and which ones have failed. You also have granular control over how long each step in each auth strategy has before it times out.
On extensibility - I designed everyauth to maximize code re-use. As mentioned before, all modules inherit prototypically from the everymodule auth module. This means that you can achieve very modular systems while at the same time having fine grained control in terms of over-riding a specific step from an ancestor module. To see how easy it is to extend everyauth with your own auth module, just take a look at any of the specific auth modules in everyauth's source.
On readability - I find everyauth source easier to read in terms of what is going on. With connect-auth, I found myself jumping around the several files to understand under what contexts (i.e., during what steps, in everyauth parlance) each nested callback configured by a strategy was running. With everyauth, you know exactly what piece of logic is associated with which context (aka step). For instance, here is the code describing what happens when an oauth2 provider redirects back to your service:
.get('callbackPath',
     'the callback path that the 3rd party OAuth provider redirects to after an OAuth authorization result - e.g., "/auth/facebook/callback"')
  .step('getCode')
    .description('retrieves a verifier code from the url query')
    .accepts('req res')
    .promises('code')
    .canBreakTo('authCallbackErrorSteps')
  .step('getAccessToken')
    .accepts('code')
    .promises('accessToken extra')
  .step('fetchOAuthUser')
    .accepts('accessToken')
    .promises('oauthUser')
  .step('getSession')
    .accepts('req')
    .promises('session')
  .step('findOrCreateUser')
    .accepts('session accessToken extra oauthUser')
    .promises('user')
  .step('compile')
    .accepts('accessToken extra oauthUser user')
    .promises('auth')
  .step('addToSession')
    .accepts('session auth')
    .promises(null)
  .step('sendResponse')
    .accepts('res')
    .promises(null)
Without needing to explain how this works, it's pretty straightforward to see what an oauth2 strategy does. Want to know what getCode does? The source, again, is very straightforward:
.getCode( function (req, res) {
  var parsedUrl = url.parse(req.url, true);
  if (this._authCallbackDidErr(req)) {
    return this.breakTo('authCallbackErrorSteps', req, res);
  }
  return parsedUrl.query && parsedUrl.query.code;
})
Compare this all to connect-auth's facebook code, which, for me at least, took more time than I thought it should have to figure out what gets executed when. This is mostly because of the way the code is spread across files and because of the use of a single authenticate method and nested callbacks to define authentication logic (everyauth uses promises to break the logic into easily digestible steps):
that.authenticate = function(request, response, callback) {
  //todo: makw the call timeout ....
  var parsedUrl = url.parse(request.url, true);
  var self = this;
  if( parsedUrl.query && parsedUrl.query.code ) {
    my._oAuth.getOAuthAccessToken(parsedUrl.query && parsedUrl.query.code,
      {redirect_uri: my._redirectUri}, function( error, access_token, refresh_token ){
        if( error ) callback(error)
        else {
          request.session["access_token"] = access_token;
          if( refresh_token ) request.session["refresh_token"] = refresh_token;
          my._oAuth.getProtectedResource("https://graph.facebook.com/me", request.session["access_token"], function (error, data, response) {
            if( error ) {
              self.fail(callback);
            } else {
              self.success(JSON.parse(data), callback)
            }
          })
        }
      });
  }
  else {
    request.session['facebook_redirect_url'] = request.url;
    var redirectUrl = my._oAuth.getAuthorizeUrl({redirect_uri: my._redirectUri, scope: my.scope })
    self.redirect(response, redirectUrl, callback);
  }
}
A few other differences:
* everyauth supports traditional password-based authentication. connect-auth, as of this writing, does not.
* The foursquare and google support in connect-auth is based on the older OAuth 1.0a spec. everyauth's foursquare and google modules are based on those providers' implementations of the newer OAuth2 spec.
* OpenID support in everyauth.
* everyauth's documentation is much more comprehensive than connect-auth's.
Finally, to comment on Nathan's answer which was unsure about everyauth support for multiple providers at the same time, everyauth does support multiple, concurrent providers. The README on everyauth's github page provides instructions about how to use everyauth to achieve this.
To conclude, I think the choice of library depends on the developer. I, and others, find everyauth more powerful from our point of view. As Nathan's answer illustrates, others find connect-auth more tuned to their preferences. Whatever your choice, I hope this write-up about why I wrote everyauth helps you in your decision.
Both libraries are pretty close in feature sets, especially in terms of supported providers. connect-auth provides out-of-the-box support for building your own OAuth providers, so that could well help you out if you're going to need that sort of thing.
The main thing I’ve noted between the two is that I find connect-auth much cleaner with the way it creates and accepts the middleware; you only have to look at the amount of pre-configuration required for the middleware in everyauth to see that it's going to get messy.
Another thing that isn't clear is whether everyauth supports multiple providers at the same time; with connect-auth, it seems possible/more straightforward, though I’ve yet to try this myself.
Thought I'd mention that there is now a new player in town called PassportJS, which offers a lot of the same benefits as everyauth; however, authentication providers are shipped as separate npm modules, so you opt in rather than include everything.
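As a rough illustration of that opt-in model (passport and passport-local are the real packages; the user lookup below is a stand-in for your own storage):

// npm install passport passport-local -- you pull in only the strategies you use
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;

passport.use(new LocalStrategy(function (username, password, done) {
  // findUser/checkPassword are placeholders for however your app stores users
  findUser(username, function (err, user) {
    if (err) return done(err);
    if (!user || !user.checkPassword(password)) return done(null, false);
    return done(null, user);
  });
}));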
I spent a morning playing with everyauth and mongoose-auth only to find that the examples were broken and the projects were dead. Here are the commit histories:
* https://github.com/jaredhanson/passport/commits/master - June 5th (it's June 18th)
* https://github.com/ciaranj/connect-auth/commits/master - April 18th (2 months ago)
* https://github.com/bnoguchi/mongoose-auth/commits/master - Feb 2012
Here's a Google Trends comparison of everyauth, connect-auth, and passportjs:
http://www.google.com/trends/explore?q=passportjs%2C+connect-auth%2C+everyauth#q=passportjs%2C%20connect-auth%2C%20everyauth&cmpt=q
My experience with each:
everyauth is far more configurable. For example, I wish to handle my own sessions; with everyauth's modularity and introspection, that is a straightforward task. On the other hand, I have found everyauth full of minor bugs and incomplete and/or incorrect documentation. For me, each authentication strategy has required its own understanding and troubleshooting.
passport might be a good bet if you're doing everything "by the book". But any deviation could make life very difficult very quickly, making it, for me, a non-starter.
