I have a suite I'm working on that has a few microservices working together. I'm using Docker to set up the environment and it works great.
My project components are as follows:
MongoDB
Node.js worker that does some processing on the DB
Node.js Rest API that serves the user
As you can probably guess, the two Node.js servers are supposed to work with the same DB.
Now I've defined my models in one of the projects, but I'm wondering what the best practice is when it comes to handling the second. I would really love to avoid copy-pasting my code, because that means keeping both copies up to date whenever I change the schema.
Is there a good way to share the code between them?
My project looks like this:
rest-api                   // My first Node.js application
    models
        MyFirstModel.js    // This is identical to the one in the worker/models folder
        MySecondModel.js
    index.js
    package.json
    Dockerfile
worker                     // My second Node.js application
    models
        MyFirstModel.js
        MySecondModel.js
    index.js
    package.json
    Dockerfile
docker-compose.yml
Any input will be helpful.
Thanks.
Of course you can. What you have to do is put your common files in a volume and share that volume with both Node containers.
Set up a data volume containing all the files you want to share; the Docker documentation on volumes covers this in more detail.
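A minimal docker-compose sketch of the idea, using a bind-mounted host directory (a named volume would work too; the ./shared-models path and mount points are illustrative):

    # docker-compose.yml -- both Node services mount the same host folder
    version: "3"
    services:
      mongo:
        image: mongo
      rest-api:
        build: ./rest-api
        volumes:
          - ./shared-models:/app/models   # same host folder...
        depends_on:
          - mongo
      worker:
        build: ./worker
        volumes:
          - ./shared-models:/app/models   # ...mounted into both containers
        depends_on:
          - mongo

Both containers then require the models from /app/models, so a schema change on the host is picked up by both.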
Cheers.
The common opinion is the following: two microservices should not share the same data model. There are several articles about this and some related questions, e.g.:
How to deal with shared models in micro service architectures
However, I think there are cases where sharing is needed and acceptable. Trust is a luxury even when everything is internal, so security and conformity must be considered: any incoming object must be normalised, validated, and checked before any processing starts, and the two services should handle the data in the same way.
The solution I used for an API service and an Admin service that shared models:
I created three repositories: one for the API, one for the Admin, and a third one for the models directory. The models have to be present in both repositories, so I added the models repository to each of them as a git submodule. Whenever you change something in a schema, you have to commit it separately, but I think this is the best way to manage the changes without duplicating the code.
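The submodule workflow looks roughly like this (the repository URL is hypothetical):

    # inside the rest-api repo (repeat in the admin repo)
    git submodule add https://example.com/yourorg/models.git models
    git submodule update --init

    # after changing a schema: commit and push in the models repo first,
    # then bump the submodule pointer in each consuming repo
    git -C models pull origin master
    git add models
    git commit -m "Update models submodule"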
Related
In my Node.js app, I have folders for API, Lib, and Utils, in addition to Server.js and app.js files.
Is there a structure that is best for a Node application that makes multiple API calls to different endpoints? I'm struggling with how best to organize the code in my application.
Thinking about the folder structure of your application is already a step down the correct path!
Use an app layer, a controller layer, a service layer, and a data-access layer, and name your folders so the structure stays readable. Check out this article for more details: https://blog.logrocket.com/the-perfect-architecture-flow-for-your-next-node-js-project/
One common setup is to have the entry points/initiators call services that do the heavy lifting, so you could have a top-level directory called services. This is definitely common at Directly. The two top-level services that stand out would be directlyService.js and stackOverflowService.js (or forumService.js, or some other non-vendor-specific name). Those two services could call other services (hopefully there are obvious groupings of the other business processes) to subdivide the processing further.
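A rough sketch of that layering (file names, the endpoint, and the response shape are illustrative, not a real client):

    // services/stackOverflowService.ts -- the heavy lifting for one external API
    export async function fetchRecentQuestions(tag: string) {
      const res = await fetch(
        `https://api.stackexchange.com/2.3/questions?tagged=${tag}&site=stackoverflow`,
      );
      if (!res.ok) throw new Error(`Upstream API returned ${res.status}`);
      return (await res.json()).items;
    }

    // api/questionsController.ts -- a thin entry point that only delegates
    import type {Request, Response} from 'express';
    import {fetchRecentQuestions} from '../services/stackOverflowService';

    export async function getQuestions(req: Request, res: Response) {
      res.json(await fetchRecentQuestions(String(req.query.tag)));
    }

The controller stays trivial; the fetching and error handling live in the service, so another entry point (a cron worker, a CLI) can reuse it.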
I am developing a server that consists of one front-end and two back-ends. So far I have completed the development of one back-end, and now I want to develop the other. Both are Express servers, and the DB is MongoDB. I am using the mongoose module, and I want the two back-ends to share a collection (i.e. a schema). I have already created the model file on one server; do I need to create the same model file on the server I am developing now? If I modify the model file later, I would have to modify both.
If there is a good way, please let me know with an example.
Thank you.
I have two answers for you: one is direct, and the other introduces the concept of microservices.
Answer 1 - Shared module (NPM or GIT)
You can create an additional project that is an NPM lib (it can be installed via NPM or as a git submodule).
This lib exposes a factory method that accepts the mongoose options and returns the mongoose connection with the shared models registered.
Using a single shared module makes it easier to update each backend after a DB change (though it gets a bit cumbersome if you have many backends).
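A minimal sketch of such a lib (the package name and schema are hypothetical):

    // shared-models/index.ts -- published as a private npm package or a git submodule
    import mongoose, {Schema} from 'mongoose';

    const userSchema = new Schema({
      name: {type: String, required: true},
      email: {type: String, required: true, unique: true},
    });

    // Factory: takes the mongoose connection options,
    // returns a connection with all shared models registered
    export function createConnection(uri: string, options = {}) {
      const conn = mongoose.createConnection(uri, options);
      conn.model('User', userSchema);
      return conn;
    }

Each backend then does something like:

    import {createConnection} from '@myorg/shared-models'; // hypothetical name
    const conn = createConnection(process.env.MONGO_URI!);
    const User = conn.model('User');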
Answer 2 - The microservice approach
In the microservice approach, each service (backend) manages its own DB, and only its own. This means each service needs to expose an internal API for other services to use.
So instead of a shared lib, each service has a well-defined internal API that other services can call.
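As a rough illustration (the route, port, and model are made up for the example), the owning service exposes the data over HTTP and other services call that instead of opening their own DB connection:

    // users-service/server.ts -- the only service that owns the users collection
    import express from 'express';
    import mongoose from 'mongoose';

    const User = mongoose.model(
      'User',
      new mongoose.Schema({name: String, email: String}),
    );

    async function main() {
      await mongoose.connect(process.env.MONGO_URI!);
      const app = express();

      // internal endpoint consumed by the other services
      app.get('/internal/users/:id', async (req, res) => {
        const user = await User.findById(req.params.id);
        if (!user) return res.sendStatus(404);
        res.json(user);
      });

      app.listen(3001);
    }
    main();

The consuming service then calls fetch('http://users-service:3001/internal/users/' + id) (global fetch in Node 18+) and never loads a mongoose model at all.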
I would recommend looking into NestJS (a Node.js microservice framework) to get a better feel for how to approach microservices.
It goes without saying that I prefer Answer 2, but it is more complex and you may need to learn more before giving it a go. Still, I highly recommend it, because microservices (if implemented right) will make your code more future-proof.
I'm trying to build a service architecture that includes two Node.js apps sharing the same database. The overall service architecture looks like the diagram below (simplified version).
I'm planning to use Sequelize as an ORM to access the database. As far as I know, if a service uses Sequelize, it needs models describing the structure of the data tables. In my case, api and service will access the same database, which means they should share the same Sequelize models.
So here is the question: where should I locate the common Sequelize files? It seems I have two choices:
put them in a common upper-level location (assuming the project structure is a monorepo) so that both apps use the same files
maintain a copy of the files in each app's project folder; in this case each app stays independent (say I want to dockerize each app), but whenever the Sequelize files are modified, the same change has to be made in the other copy
I'm not sure whether my understanding is correct. Is my question valid? If so, which is the better choice and practice? I appreciate your answers in advance.
There is no single correct answer; it depends on the specific situation, but sharing a database between multiple microservices is a bad design.
Sharing a database means tight coupling at the data level. The direct consequence is that when one service modifies the database table structure, such as deleting the name field of the user table, it may break the APIs of every other service that uses the Sequelize user model. All services then need to update the model definition and modify the implementation code of their APIs.
If all of your services are maintained by one team, I suggest you choose the first solution, which costs less and is easier to maintain. If your services are maintained by different teams, the two solutions are actually similar, because as soon as the table structure is modified, the application-layer model has to be modified, or at least verified to still work.
Therefore, I recommend following the best practices of microservice architecture: first split the database vertically according to the business model, then build application APIs on top of it.
Core principles of microservices:
loose coupling
high cohesion
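If you do go with the first (monorepo) option, a minimal sketch of the shared model (the layout and names are illustrative):

    // packages/db/user.ts -- the single shared Sequelize model definition
    // Layout: packages/db is imported by both apps/api and apps/service
    import {Sequelize, DataTypes} from 'sequelize';

    export function defineUser(sequelize: Sequelize) {
      return sequelize.define('User', {
        name: {type: DataTypes.STRING, allowNull: false},
        email: {type: DataTypes.STRING, allowNull: false, unique: true},
      });
    }

    // In apps/api and apps/service, each with its own connection:
    //   import {defineUser} from '../../packages/db/user';
    //   const User = defineUser(sequelize);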
I come from an Express.js background and am pretty new to the LoopBack framework, especially LoopBack 4, which I am using for my current project. I have gone through the LoopBack 4 documentation a few times and made good progress in setting up the project. The project runs as expected, but I am not convinced by the project structure, so please help me with the problem below.
As per the docs, database operations should go in repositories and routes in controllers. Now suppose my API contains lots of business logic along with database operations, say thousands of lines. That makes the controller routes difficult to maintain, and even more so if some API demands a version upgrade.
Is there any way to organise the code in controllers in a more scalable and reusable manner? What if I add a service layer between controllers and repositories and put the business logic there? How would I implement that correctly? Is there an official way that the LoopBack community suggests?
Thanks in advance!!
Is there any way to organise the code in controllers in a more scalable and reusable manner?
Yes, services can be used to abstract complex logic into separate class(es). Once defined, a service can be injected into the dependent controller(s), which can then call the respective service functions.
How the service is designed depends on the user requirements, as LoopBack 4 does not enforce a strict design requirement.
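A minimal sketch of that pattern (the model, repository, and route names are illustrative):

    // src/services/report.service.ts -- the business logic lives here
    import {BindingScope, injectable} from '@loopback/core';
    import {repository} from '@loopback/repository';
    import {OrderRepository} from '../repositories'; // hypothetical repository

    @injectable({scope: BindingScope.TRANSIENT})
    export class ReportService {
      constructor(
        @repository(OrderRepository) private orderRepo: OrderRepository,
      ) {}

      async monthlyTotal(month: string): Promise<number> {
        // an Order model with "month" and "amount" fields is assumed
        const orders = await this.orderRepo.find({where: {month}});
        return orders.reduce((sum, o) => sum + o.amount, 0);
      }
    }

    // src/controllers/report.controller.ts -- a thin controller that delegates
    import {service} from '@loopback/core';
    import {get, param} from '@loopback/rest';
    import {ReportService} from '../services/report.service';

    export class ReportController {
      constructor(@service(ReportService) private reports: ReportService) {}

      @get('/reports/{month}')
      async report(@param.path.string('month') month: string) {
        return {total: await this.reports.monthlyTotal(month)};
      }
    }

The lb4 service CLI command can scaffold the service class and its binding for you.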
My application consists of a few parts: Public API, Admin API, Admin Console (GUI), Private API, Auth API (OAuth2, local, socials), etc. They are somewhat different from each other but use the same models. Some routes will receive a high number of requests per second and cannot be cached.
I'm interested in best practices for splitting everything up properly. I'm also open to other frameworks, or even io.js.
Right now I have three variants:
Create separate apps.
Group controllers by folders, and find a way to group routes.
Create another instance of the Sails app and run it in another process (so I keep all controllers and models, but how should I organize the sub-app structure in this case?)
I think most answers will be opinionated, but putting controllers into subfolders is the easiest way to share models (the easiest, but not the only one).
And you can easily apply policies based on those subfolders as well.
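For example, with controllers grouped under api/controllers/admin and api/controllers/private, config/policies.js can target each subfolder (the policy names are illustrative, and the wildcard syntax shown is the Sails 1.x style):

    // config/policies.js -- apply policies per controller subfolder
    module.exports.policies = {
      '*': true,                        // default: allow everything
      'admin/*': 'isAdmin',             // everything under api/controllers/admin
      'private/*': 'isAuthenticated',   // everything under api/controllers/private
    };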
However, you really need to flesh out more aspects of your question and think about whether there will be more shared parts (like templates or assets) than different ones, or whether the differences would prohibit a shared app. Will they all use sessions, and will they even use the same sessions?
In the end, based on your limited question: Sails can do what you want.