Best way to separate logic in Sails.js (Node.js)

My application consists of a few parts: a public API, an admin API, an admin console (GUI), a private API, an auth API (OAuth2, local, social logins), and so on. They differ from one another quite a bit, but they all use the same models. Some routes will see a high number of requests per second and can't be cached.
I'm interested in best practices for splitting everything up properly. I'm also open to other frameworks, or even io.js.
Right now I see three options:
Create separate apps.
Group controllers by folders, and find a way to group routes.
Create another instance of the Sails app and run it in a separate process. (That way I keep all the controllers and models, but how should I organize the sub-app structure?)

I think most answers will be opinionated, but putting controllers into subfolders is the easiest way to share models (the easiest, though not the only one).
And you can easily run policies based on those subfolders as well.
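As a rough sketch of how that looks, assuming Sails' convention that controllers nested in subfolders get path-prefixed identities (the admin folder and the isAdmin policy are made-up examples):

// api/controllers/admin/UserController.js -> identity 'admin/user'

// config/policies.js
module.exports.policies = {
  '*': true,                // default: allow everything

  // apply the hypothetical isAdmin policy (api/policies/isAdmin.js)
  // to every action of the nested admin controller
  'admin/user': {
    '*': 'isAdmin'
  }
};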
However, you really need to flesh out more aspects of your question and think about whether the parts will share more (like templates or assets) than they differ, and whether the differences would rule out a shared app. Will they all use sessions, and will they even use the same sessions?
In the end, based on your limited question, Sails can do what you want.

Related

Node.js application structure

In my Node.js app, I have a folder for API, Lib, and Utils in addition to Server.js and app.js files.
Is there a structure that works best for a Node application that makes multiple API calls to different endpoints? I'm struggling with how best to organize the code in my application.
Thinking about the folder structure of your application is already a step down the right path!
Use an app layer, controllers, services, and a data access layer in your application, and name your folders so the structure is readable. Check out this article for more details: https://blog.logrocket.com/the-perfect-architecture-flow-for-your-next-node-js-project/
One common setup is to have the entry points/initiators call services to do the heavy-lifting work, so you could have a top-level directory named services. This is definitely common at Directly. The two top-level services that stand out would be directlyService.js and stackOverflowService.js (or forumService.js, or some other non-vendor-specific name). Those two services could call other services (hopefully there are obvious groupings of the other business processes) to subdivide processing further.
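As a rough sketch of that layering (all file, module and function names below are hypothetical):

// services/stackOverflowService.js -- owns business logic, knows nothing about HTTP
const stackOverflowClient = require('../lib/stackOverflowClient'); // hypothetical data-access module

async function fetchRecentQuestions(tag) {
  const questions = await stackOverflowClient.listQuestions(tag);
  return questions.filter(q => !q.closed); // business rules live here, not in the controller
}

module.exports = { fetchRecentQuestions };

// controllers/questionsController.js -- thin HTTP layer that delegates to the service
const { fetchRecentQuestions } = require('../services/stackOverflowService');

module.exports.list = async (req, res) => {
  try {
    res.json(await fetchRecentQuestions(req.query.tag));
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
};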

LoopBack 4 project structure

I come from an Express.js background and am pretty new to the LoopBack framework, especially LoopBack 4, which I am using for my current project. I have gone through the LoopBack 4 documentation a few times and made good progress in setting up the project. The project runs as expected, but I am not convinced by the project structure. Please help me solve the problem below.
As per the docs, database operations should live in repositories and routes in controllers. Now suppose my API contains a lot of business logic along with the database operations, say thousands of lines. That makes the controllers difficult to maintain, and even more so if an API demands a version upgrade.
Is there any way to organise the code in controllers in a more scalable and reusable manner? What if I add one more service layer between controllers and repositories and put the business logic there? How do I implement it correctly? Is there an official way to do this that is recommended by the LoopBack community?
Thanks in advance!!
Is there any way to organise the code in controllers in a more scalable and reusable manner?
Yes, services can be used to abstract complex logic into its own separate class(es). Once defined, the service can be injected into the dependent controller(s), which can then call the respective service functions.
How the service is designed is dependent on the user requirements as LoopBack 4 does not necessarily enforce a strict design requirement.
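A minimal sketch of that pattern (LoopBack 4 projects are TypeScript; the OrderService class, its method and the route below are invented for illustration):

// src/services/order.service.ts -- hypothetical service holding the business logic
import {injectable, BindingScope} from '@loopback/core';

@injectable({scope: BindingScope.TRANSIENT})
export class OrderService {
  // business rules go here; the controller stays thin
  async applyDiscount(total: number): Promise<number> {
    return total > 100 ? total * 0.9 : total;
  }
}

// src/controllers/order.controller.ts -- controller that delegates to the service
import {service} from '@loopback/core';
import {get, param} from '@loopback/rest';
import {OrderService} from '../services/order.service';

export class OrderController {
  constructor(
    @service(OrderService) private orderService: OrderService,
  ) {}

  @get('/orders/{total}/discounted')
  async discounted(@param.path.number('total') total: number): Promise<number> {
    return this.orderService.applyDiscount(total);
  }
}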

Sharing code between microservices

I have a suite I'm working on that has a few microservices working together.
I'm using Docker to setup the environment and it works great.
My project components are as follows:
MongoDB
Node.js worker that does some processing on the DB
Node.js Rest API that serves the user
As you can probably guess, the two Node.js servers are supposed to work with the same DB.
Now I've defined my models in one of the projects, but I'm wondering what the best practice is when it comes to handling the second.
I would really love to avoid copy-pasting my code, because that means I have to keep both copies up to date whenever I change the schema.
Is there a good way to share the code between them?
My project looks like this:
rest-api             // My first Node.js application
  models
    MyFirstModel.js  // This is identical to the one in the worker/models folder
    MySecondModel.js
  index.js
  package.json
  Dockerfile
worker               // My second Node.js application
  models
    MyFirstModel.js
    MySecondModel.js
  index.js
  package.json
  Dockerfile
docker-compose.yml
Any input will be helpful.
Thanks.
Of course you can.
What you have to do is put your common files in a volume and share this volume with both Node containers.
You should set up a data volume in which you put all the files you want to share. There is plenty more about data volumes in the Docker documentation, or anywhere else you find by googling it.
Cheers.
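A minimal docker-compose sketch of that idea (written as the usual compose YAML; the mount path /usr/src/app/models is a made-up convention, and this variant bind-mounts a single shared host folder ./models into both containers):

# docker-compose.yml
version: "3"
services:
  rest-api:
    build: ./rest-api
    volumes:
      - ./models:/usr/src/app/models   # same host folder...
  worker:
    build: ./worker
    volumes:
      - ./models:/usr/src/app/models   # ...mounted into both containers

Both containers then require the model files from that mounted path instead of bundling their own copies.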
The common opinion is the following: two microservices should not share the same data model. There are several articles and questions related to this topic:
How to deal with shared models in micro service architectures
However, I think there are cases when you need it, and when it is acceptable. Trust is a luxury even if everything is internal, so security and conformity must be considered: any incoming object must be normalised, validated and checked before any processing is initiated with it, and the two services should handle the data in the same way.
My solution, which I used for an API service and an Admin service that shared the models:
I created three repositories, one for the API, one for the Admin, and a third one for the models directory. The models have to be present in both services, so I added the models repository to each as a git submodule. Whenever you change something in a schema you have to commit it separately, but I think it is the best way to manage the changes without duplicating the code.
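The setup looks roughly like this (the repository URL is a placeholder):

# inside rest-api/ (and again inside worker/)
git submodule add https://example.com/your-org/models.git models   # placeholder URL
git commit -m "Add shared models as a submodule"

# after cloning either service, pull the submodule content
git submodule update --init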

Express route-handler-like functionality in Sails applications

TL;DR: Is there a way to make Sails applications work similarly to Express route handlers?
I'm working on an application that has a few major components (e.g. ecommerce, blogging and so on). It's all working fine, but the huge number of models, controllers and views makes it difficult for me to visualize them as separate components and makes me feel a little too congested for comfort.
I've been through threads like this and I understand nesting is available for controllers, but I really want to split my project into discrete components.
What I'm doing now is splitting the project into different applications and exposing their REST APIs for the gluing application to work its magic. At least this would give me the much-needed segregation. But how do I make the endpoints accessible to the gluing application? By making HTTP requests? While that would give me the liberty to host the other components on another machine, it would slow things down considerably, right? Is there a way to interact with the other processes "directly", something like route handlers in Express (though those aren't separate processes)? I would ideally like to have them running as separate processes so one component doesn't bring everything down when it fails. At the same time, I want the interaction to be as "local" as possible.
Am I missing something fundamental here? Any suggestions would be appreciated.
Also, is this more of a serverfault question?
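For comparison, here is the Express feature the question alludes to: mounting whole sub-applications in one process (the blog/shop apps below are hypothetical, and this approach trades the desired process isolation for local speed):

// glue.js -- one process mounting two sub-apps (hypothetical names)
const express = require('express');

const blog = express();   // in a real project this would live in its own module
blog.get('/', (req, res) => res.send('blog home'));

const shop = express();
shop.get('/', (req, res) => res.send('shop home'));

const app = express();
app.use('/blog', blog);   // calling into a component is now just an in-process function call
app.use('/shop', shop);
app.listen(3000);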

How do I keep value objects from the server?

I communicate with the server through JSON, which both in Node.js and in ActionScript comes out as plain objects (serialized via strings).
I use those objects in my client by reading and modifying them, and also by creating secondary objects (from classes) based on what came from the server.
I have two options for designing my client, and I am stuck deciding which of them is more flexible/future-proof.
Keep data as it comes, create many methods to modify the objects, keep secondary objects somewhere separate.
Convert the data into instances of classes where each class has its own group of methods instead of piling the methods in the same place.
Usually I go with option 2 because OOP is delicious, but option 1 seems much simpler in terms of sheer quantity of code.
I guess my problem is that I can't figure out whether my client is basically a View (from MVC) with the server as the Controller (also from MVC), or whether my client and server are two independent, separate projects that communicate, in which case I should consider the client an MVC project in itself.
I would appreciate your 2 cents.
From your question it's not clear how options 1 and 2 differ, but it looks like option 1 is tightly coupled while option 2 has better separation of concerns.
It depends on your application. Do you need to create a client-heavy app with rich UI/UX elements, or maybe a mobile app where bandwidth is limited? If the answer is yes, then go with the second approach (2): build your own MVC-like structure or use existing MV* libraries, like Ember, Angular, Backbone, Knockout, etc.
If you need SEO support and don't have much front-end code, then rendering on the server side is still an option. Even with this approach, an ORM like Mongoose can come in handy.
PS: JavaScript doesn't really have classes; objects inherit from other objects. You can use prototypal inheritance patterns for that.
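A minimal sketch of option 2, wrapping raw server JSON in a thin class (the User shape and its fields are invented for illustration; ES2015 class syntax is sugar over the same prototypal mechanism the answer mentions):

// Wraps a raw server object; methods live on the prototype, data stays plain.
class User {
  constructor(raw) {
    this.id = raw.id;        // hypothetical fields from the server JSON
    this.name = raw.name;
    this.credits = raw.credits;
  }

  canAfford(price) {
    return this.credits >= price;
  }

  toJSON() {
    // serialize back to the plain shape the server expects
    return { id: this.id, name: this.name, credits: this.credits };
  }
}

// Usage: hydrate from the wire, work through methods, serialize back.
const user = new User(JSON.parse('{"id":1,"name":"Ada","credits":42}'));
console.log(user.canAfford(30));   // true
console.log(JSON.stringify(user)); // uses toJSON()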
