How do I keep value objects from the server? - node.js

I communicate with the server through JSON, which both in Node.js and in ActionScript is parsed into objects (serialized as strings over the wire).
I use those objects in my client by reading / modifying them, and I also create secondary objects (from classes) based on what came from the server.
I have two options for designing my client and I am stuck deciding which of them is more flexible / future-proof:
1. Keep the data as it comes, create many methods to modify the objects, and keep the secondary objects somewhere separate.
2. Convert the data into instances of classes, where each class has its own group of methods instead of piling the methods in the same place.
Usually I go with 2 because OOP is delicious, but going with 1 seems much simpler in terms of sheer amount of code.
I guess my problem is that I can't figure out whether my client is basically a View (from MVC) with the server as the Controller (also from MVC), or whether my client and server are two independent / separate projects that communicate, in which case I should treat the client as an MVC project in itself.
I would appreciate your 2 cents.

From your question it's not entirely clear how 1. and 2. differ, but it looks like 1. is tightly coupled while 2. has better separation of concerns.
It depends on your application. Do you need to build a client-heavy app with rich UI/UX elements, or maybe a mobile app where bandwidth is limited? If the answer is yes, then go with the second approach (2.): build your own MVC-like structure or use existing MV* libraries like Ember, Angular, Backbone, Knockout, etc.
If you need SEO support and don't have much front-end code, then rendering on the server side is still an option. Even with that approach an ODM like Mongoose can come in handy.
PS: JavaScript doesn't really have classes; objects inherit from other objects. You can use prototypal inheritance patterns for that.
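For example, here is a minimal sketch of option 2 in plain Node.js, using a constructor plus prototype methods to wrap the raw JSON coming from the server. The User type and its fields are made up purely for illustration:

    // Sketch of option 2: wrap the raw server JSON in prototype-based
    // "model" objects. All names here (User, isFriendOf) are hypothetical.
    function User(data) {
      // copy the raw server fields onto the instance
      this.id = data.id;
      this.name = data.name;
      this.friendIds = data.friendIds || [];
    }

    // behaviour lives on the prototype instead of in a pile of free functions
    User.prototype.isFriendOf = function (otherUser) {
      return this.friendIds.indexOf(otherUser.id) !== -1;
    };

    User.prototype.toJSON = function () {
      // back to the plain shape the server expects
      return { id: this.id, name: this.name, friendIds: this.friendIds };
    };

    // usage: turn the parsed server payload into an instance
    var payload = JSON.parse('{"id": 1, "name": "Ada", "friendIds": [2, 3]}');
    var user = new User(payload);
    console.log(user.isFriendOf({ id: 2 })); // true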

Related

API Architecture - Business logic tightly coupled to routes?

To speed up development of my next Node API I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I have always found useful is to completely separate the business logic from route handling by putting it in services. Those services only accept the required information (like a user id or data) and return a promise that resolves with the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, test them, or call them based on schedules or other events, totally independent of endpoint calls. Routing and middleware take care of access control, error handling and responding.
Looking at the documentation of those frameworks (sailsjs, keystonejs, ...) I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought does there sometimes seem to be a way to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard of API design? Is this a best practice for a reason?
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware mixing business-logic with controller code doesn't fly in the long run. Perhaps small projects can get away with defying that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below services, it's good to have a separate layer that handles talking to outside process boundaries. That way, you can test business logic in isolation by providing appropriate test doubles for integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
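As a rough sketch of the pattern described in the question: the service holds the business logic and knows nothing about HTTP, while the route only translates between req/res and service calls. The file names and the userStore dependency below are hypothetical:

    // services/user-service.js
    module.exports = function createUserService(userStore) {
      return {
        // accepts plain data, returns a promise; no req/res in sight
        getProfile: function (userId) {
          return userStore.findById(userId).then(function (user) {
            if (!user) throw Object.assign(new Error('User not found'), { status: 404 });
            return { id: user.id, name: user.name };
          });
        }
      };
    };

    // routes/users.js
    var express = require('express');

    module.exports = function usersRouter(userService) {
      var router = express.Router();

      router.get('/:id', function (req, res, next) {
        userService.getProfile(req.params.id)
          .then(function (profile) { res.json(profile); })
          .catch(next); // error-handling middleware decides how to respond
      });

      return router;
    };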

How to share models across different services/repos using postgres/knex.js/objection.js in node.js?

In a standard microservice architecture, each service is responsible for its own data, with clear boundaries set. The only way to manipulate this data is through RESTful endpoints provided by the service.
I have a unique case where I would like to have a few clustered scraper processes running, populating a table with raw data. These scraper processes can also be configured for specific cases, say one to scrape text, one to scrape images, etc.
The raw data will then be consumed and aggregated into a normalized structure in another table by another process. I'd like to split all of these processes into small, deployable components, but that means I must somehow share the model definitions across multiple repositories/projects, since the aggregation logic must consume all the raw data.
It's possible for the aggregation logic to make requests to each clustered scraper process, but the state management for that would be a lot more complex than just querying a table.
I know it's possible to define the model definitions in an isolated repo and then import as a dependency in other projects, but is this the correct architecture?
The best case for when to use microservices is when you have very distinct bounded contexts in your problem domain. When you have overlapping context boundaries like the scenario you've described, microservices will probably cost you more than you'd gain. Do you feel like you'd gain productivity by deconstructing your application into microservices despite this issue?
Without a better look at your application, it's hard to give definitive answers, but when you're bumping into problems like this at the outset, there's a good chance that this isn't a good case for a microservice architecture. Bear in mind, that's just my two cents.
Sharing physical repositories for configuration sounds pretty onerous, and I'd avoid it if at all possible!
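For reference, if you did go the shared-package route from the question, with Objection.js it would look roughly like the sketch below. The package name, table name and knexfile are hypothetical, and the maintenance cost mentioned above still applies:

    // @myorg/models/raw-scrape.js (published as a private package; hypothetical)
    const { Model } = require('objection');

    class RawScrape extends Model {
      static get tableName() {
        return 'raw_scrapes';
      }
    }

    module.exports = RawScrape;

    // in each consuming service (scraper, aggregator, ...)
    const Knex = require('knex');
    const { Model } = require('objection');
    const RawScrape = require('@myorg/models/raw-scrape');

    const knex = Knex(require('./knexfile')); // each service owns its connection
    Model.knex(knex);                          // bind the shared models to it

    RawScrape.query().where('type', 'image').then(rows => {
      console.log(rows.length);
    });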

How to write a "middleware" in Node.JS

Apologies if this question is too general. This is not an invitation for "opinion-based" answers. Unfortunately, at this stage of the project and with my limited knowledge in this space, I just need some guidance from more experienced people.
In a large company project, I have a web service that is based on NoSQL data models. I have little influence over the design of this service. Due to the data structure, while overseeing the development of a large and complex mobile app (for multiple platforms), I noticed that it was sometimes necessary to make calls to multiple endpoints in sequence to get the required information.
For example, you first have to call an endpoint with certain parameters to get a user ID, then use that user ID to get details about the user. The system cannot deliver the user details in the first call. This leads to complex data parsing and background processing on the clients.
To simplify mobile development, we now want to build a "middleware" layer that simplifies the API for the mobile clients. The app would call the middleware as the single point of entry, the middleware would call the existing endpoints to gather the necessary data and deliver the result back to the client.
For example, the client would ask for finding a certain user and delivering certain attributes of this user (e.g. the first names of all friends of the user) with one API endpoint. The middleware would need to make many calls to the backend: search for the user, use the result (user ID) to get details and friends of the user, use the delivered friends' userIDs to gather data about the friends. Then the middleware would package the information and deliver it back to the client.
Initial recommendations from colleagues indicate that Node.js would be a good platform for developing this type of functionality in a maintainable, scalable way.
OK, I know how to run a simple server and manage routes on a Node system, but how would you organize this project, e.g. the file structure? Which components would you encapsulate? Are there any frameworks on top of Node that would help with a task like that?
I am not looking for "opinions", just for some insightful recommendations based on experience or knowledge. Feel free to down-vote this question after you have stated what you do not like about it and have asked specific questions to clarify (I will comply as soon as possible). Thanks.
how would you organize this project, e.g. the file structure?
I have described my filesystem layout in my express code structure GitHub repository, which is also posted as a Stack Overflow answer here.
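For illustration, a layout along those lines typically looks something like this (a generic sketch, not the exact contents of that repository):

    project/
      app.js              entry point: wires up config, middleware and routes
      routes/             thin express routers, one file per resource
        users.js
      services/           business logic, no knowledge of req/res
        user-service.js
      models/             data access / clients for backing APIs
        user-model.js
      middleware/         auth, error handling, logging
      test/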
Which components would you encapsulate?
I think it's OK to encapsulate the interface to each backing API as a "model" or "service" type module. If you are making database queries, encapsulating those either into models or at least modules of related queries is OK.
Are there any frameworks on top of Node that would help with a task like that?
Yes, many. I prefer express as the basic web app framework, with hapi being the other strong choice. There are other options, both smaller (more single-purpose libraries) and larger (closer to full-featured frameworks), like loopback or sails.js.
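To make the aggregation idea concrete, here is a rough express sketch of a middleware endpoint that fans one client call out into several backend calls. The backend URLs, field names and the node-fetch dependency are assumptions, not part of the existing service:

    // Hypothetical aggregation endpoint for the middleware layer
    const express = require('express');
    const fetch = require('node-fetch'); // any HTTP client works here

    const BACKEND = 'https://backend.example.com'; // placeholder base URL
    const app = express();

    // one client call fans out to several backend calls
    app.get('/users/:name/friend-first-names', async (req, res, next) => {
      try {
        // 1. search for the user to get an id
        const search = await fetch(
          `${BACKEND}/search?name=${encodeURIComponent(req.params.name)}`
        ).then(r => r.json());

        // 2. fetch the user's details, including friend ids
        const user = await fetch(`${BACKEND}/users/${search.userId}`).then(r => r.json());

        // 3. fetch each friend in parallel and keep only the first names
        const friends = await Promise.all(
          user.friendIds.map(id => fetch(`${BACKEND}/users/${id}`).then(r => r.json()))
        );

        res.json({ firstNames: friends.map(f => f.firstName) });
      } catch (err) {
        next(err); // let express error-handling middleware respond
      }
    });

    app.listen(3000);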

Express route handler like functionality in Sails applications

TL;DR Is there a way to have Sails applications work similar to Express route handlers?
I'm working on an application that has a few major components (e.g. ecommerce, blogging and so on). It's all working fine, but the huge number of models, controllers and views is making it difficult for me to see them as separate components, and it's making me feel a little too congested for comfort.
I've been through threads like this and I understand nesting is available for controllers, but I really want to split my project into discrete components.
What I'm doing now is splitting the project into different applications and exposing their REST APIs for the gluing application to show its magic. At least this would do the much-needed segregation. But how do I make the endpoints accessible to the gluing application? Using req? While that would give me the liberty to host the other components on another machine, it would slow things down considerably, right? Is there a way to interact with the other processes "directly", something like route handlers in Express (though they're not separate processes)? I would ideally like to have them running as separate processes so one component doesn't bring everything down on failure. At the same time, I want the interaction to be as "local" as possible.
Am I missing something fundamental here? Any suggestions would be appreciated.
Also, is this more of a serverfault question?

Best way to separate logic in sailsjs (nodejs)

My application structure consists of a few parts: Public API, Admin API, Admin Console (GUI), Private API, Auth API (OAuth2, local, socials), etc. They are all somewhat different from each other, but they use the same models. Some routes are going to have a high number of requests per second and can't be cached.
I'm interested in best practices for splitting everything up properly. I'm also open to other frameworks or even io.js.
Right now I have 3 variants:
Create separate apps.
Group controllers by folders, and find a way to group routes.
Create another instance of the sails app and run it in another process. (So I can have all the controllers and models, but how should I organize the sub-app structure in that case?)
I think most answers will be opinionated, but putting controllers into subfolders is the easiest way to share models. (easiest but not only)
And you can easily run policies based on those subfolders as well.
However, you really need to flesh out more aspects of your question and think about whether there will be more shared (like templates or assets) than different, or whether the differences would prohibit a shared app. Will they all use sessions, and will they even use the same sessions?
In the end, based on your limited question, sails can do what you want.
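For the subfolder approach, policies can be mapped per controller group in config/policies.js. The exact key syntax differs between Sails versions, so treat this as a rough sketch with hypothetical policy names:

    // config/policies.js: rough sketch, policy names are hypothetical
    module.exports.policies = {
      // controllers under api/controllers/admin/ require an admin session
      'admin/*': ['isAuthenticated', 'isAdmin'],

      // the public API stays open
      'public/*': true
    };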
