I've been reading articles about node.js tips, tricks, and best practices in general. I found one that mentioned it is worthwhile to think about and approach apps in terms of microservices. I've toyed around with that before, but I'm still not quite clear on when that's best or what criteria to use.
For example, now I'm working on an app (not for work, but a hobby of mine) to record quotes from books I read. So I've written a node.js API with two routes: a POST route to record a quote in a MongoDB instance running in the cloud, and a GET route to read quotes.
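For reference, a minimal sketch of what such an API might look like, assuming Express and the official mongodb driver (the connection string, database, and collection names are placeholders):

    const express = require('express');
    const { MongoClient } = require('mongodb');

    const app = express();
    app.use(express.json());

    async function main() {
      // Placeholder connection string for the cloud-hosted MongoDB instance
      const client = new MongoClient(process.env.MONGO_URL || 'mongodb://localhost:27017');
      await client.connect();
      const quotes = client.db('quotesdb').collection('quotes');

      // POST /quotes: record a quote
      app.post('/quotes', async (req, res) => {
        const result = await quotes.insertOne({ text: req.body.text, book: req.body.book });
        res.status(201).json({ id: result.insertedId });
      });

      // GET /quotes: read all quotes
      app.get('/quotes', async (req, res) => {
        res.json(await quotes.find().toArray());
      });

      app.listen(3000);
    }

    main().catch(console.error);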
This is all one single app. Does "thinking in terms of microservices" mean that I should write two different apps, one for posting and one for getting, each running in its own container?
I'm familiar with Kubernetes and OpenShift, so the tech details are not much of a problem. My concern is how to decide when to split an app: the separation-of-concerns part of the architectural design, so to speak.
Thanks in advance.
Typically, microservices are split along logically distinct pieces of data and operations. Since both the POST and the GET in your example deal with the same data and complement each other's operations, breaking them apart into separate microservices makes little sense to me.
As described, your application is small enough that there's no obvious boundary between components that could be further isolated for better flexibility.
Or put another way, this service already fits most definitions of a microservice, and doesn't lend itself to further decomposition.
I am new to microservices. I come from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email, and notification. I have taken many courses, but in them the instructor separates out each entity, gives it its own database, and creates an API for it (like a separate shopping cart entity and product entity). It makes no sense to me. I don't get what the real-world use of microservices is, or how to split out a component into its own microservice.
Can anyone give Real Project example?
Thanks in advance
Read this and this. Also look here and here. I don't think anyone will give a link to a real working project, so you can try this.
I am not getting what is real world use of microservices
Mostly, as you heard in all of those tutorials, the microservices architecture has these advantages:
smaller services are easier to maintain and develop.
you can easily scale specific services rather than the whole project (monolith). For example, you scale service-1 to 4 instances so that request traffic is split across those 4 instances (load balancing), service-2 to 2 instances, and so on. These services may also be distributed across different servers and locations (see the sketch after this list).
if one service fails, it does not take down the whole system, since the services are independent.
services can be reused for other scenarios or features.
a small team can work on each service, and it is easy to manage both the project and the development flow.
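As a minimal local illustration of the scaling point above (a real deployment would put a load balancer or an orchestrator in front of separate containers), Node's built-in cluster module can run several instances of one service behind a shared port:

    const cluster = require('cluster');
    const http = require('http');

    if (cluster.isPrimary) {
      // Scale this one service to 4 instances; incoming
      // connections are distributed among the workers.
      for (let i = 0; i < 4; i++) cluster.fork();
    } else {
      http.createServer((req, res) => {
        res.end(`handled by instance ${process.pid}\n`);
      }).listen(3000);
    }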
It also suffers from these disadvantages:
each service is simple and small, but the system as a whole is complex, so the design phase is critical.
performance can be poor, and it takes extra work to improve it (different types of caching at different levels).
transactions are complex and costly to develop. Imagine that a simple update has to be projected to other services when required; you have to consider failures and a rollback strategy (SAGA, sketched below).
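For the transaction point, here is a very rough sketch of the saga idea; the services and their methods are hypothetical placeholders:

    // Hypothetical saga: each step has a compensating action
    // that undoes it if a later step fails.
    async function updateOrderSaga(order) {
      const completed = [];
      const steps = [
        { run: () => orderService.update(order),   undo: () => orderService.revert(order) },
        { run: () => stockService.reserve(order),  undo: () => stockService.release(order) },
        { run: () => billingService.charge(order), undo: () => billingService.refund(order) },
      ];
      try {
        for (const step of steps) {
          await step.run();
          completed.push(step);
        }
      } catch (err) {
        // Failure: roll back the completed steps in reverse order
        for (const step of completed.reverse()) await step.undo();
        throw err;
      }
    }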
how to make separate component to build it's own microservice
This is the most challenging part of microservices. You need to study Domain-Driven Design (DDD) in depth:
Decompose by subdomain
Decompose by Business Capabilities
Can anyone give Real Project example?
There are many projects that build microservices with different patterns. I think you have to start your own and get your hands dirty.
I have two separate cloud-based APIs that I am working on integrating. Neither piece of software talks directly to the other, so I am creating something in the middle to get them to communicate. I have had trouble finding examples or documentation on how exactly to do this; does anyone know of any resources that could help me out?
My plan going in was to use a MERN stack, running on a local server, to make GET and POST requests to both APIs, then use some mapping and logic to transpose the data into the correct format and send it to the other piece of software. I do not have a client per se (other than myself) on my end, so I will really be skipping the React part of MERN, at least that is what I'm thinking. I'll be using Mongo to keep track of both sets of data for redundancy. I also considered using a LAMP stack, but felt that MERN would be faster in handling the data, and Mongo is more flexible in handling different data formats. If there is another process or technology that could help me that I'm not thinking of, I would be grateful to hear about it.
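As a rough sketch of what that middle layer could look like (the URLs and field names are made up, and this assumes Node 18+ for the built-in fetch):

    // Hypothetical sync job: read from cloud API A, transform, push to cloud API B.
    async function syncOnce() {
      const res = await fetch('https://api-a.example.com/records'); // placeholder URL
      const records = await res.json();

      for (const rec of records) {
        // Transpose API A's format into the shape API B expects (made-up fields)
        const payload = { externalId: rec.id, title: rec.name };

        await fetch('https://api-b.example.com/items', { // placeholder URL
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(payload),
        });
        // Both rec and payload could also be written to Mongo here for redundancy.
      }
    }

    syncOnce().catch(console.error);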
Has anyone encountered something like this before? Thank you.
As with most architecture questions, there's no completely right or wrong answer here. You could certainly design a well-built system for this purpose with either stack, even more so since you mention that the front-end framework is not an important consideration. Instead, ask yourself questions like these:
Which stack do you have more experience with? Is this an appropriate time to learn a new set of technologies, or is it important to do the best work you're capable of right now (how important are time, cost, and quality in this case)?
Another generalization I'll stick my neck out for is a data-first approach: what sort of data are you dealing with from each cloud integration, and what kind of data do you need to support and/or create in order to make your system work? Mongo, being a NoSQL persistence layer, will let you change your data model and handle more varied data more quickly and easily than a SQL solution will. This is a double-edged sword, however, as the lack of validation and of a strongly constrained (typed) data model will make your application harder to work with and debug as it grows. In short: how big might this application grow?
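To make that double-edged sword concrete, Mongo will happily accept differently shaped documents in the same collection, which is great for iterating and risky as the app grows (this assumes an already-connected mongodb driver db handle, and the collection name is a placeholder):

    // Both inserts succeed; nothing stops the second, inconsistent shape.
    await db.collection('contacts').insertOne({ name: 'Ada', email: 'ada@example.com' });
    await db.collection('contacts').insertOne({ fullName: 'Ada L.', mail: ['ada@example.com'] });
    // Every reader now has to handle both name/fullName and email/mail.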
If you have a handy and familiar way to manage the three different data models you're dealing with (cloud service 1, cloud service 2, and your app) via MySQL, then that's a compelling reason to use it. However, if your style is to start dumping data into your database and you're comfortable with a more iterative approach (which may require more, albeit shorter, rounds of refactoring), then Mongo with MERN may be the preferable choice.
Finally, will others ever be working on this application? If so, which language would you prefer to collaborate with them in: PHP or JavaScript?
To speed up development of my next Node API I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I have always found useful is to completely separate the business logic from route handling, in services. Those services accept only the required information (like a user id or data) and return a promise resolving to the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, test them, or call them based on schedules or other events, totally independent of endpoint calls. Routing and middleware take care of access control, error handling, and responding.
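For illustration, the pattern being described might look roughly like this (the names are made up, and Express is assumed):

    // services/quoteService.js: pure business logic, no req/res in sight
    async function getQuotesForUser(userId) {
      if (!userId) throw new Error('userId is required');
      return quoteRepository.findByUser(userId); // hypothetical data-access call
    }

    // routes/quotes.js: thin handler that extracts input, calls the service, responds
    app.get('/users/:id/quotes', async (req, res, next) => {
      try {
        res.json(await getQuotesForUser(req.params.id));
      } catch (err) {
        next(err); // error-handling middleware takes over
      }
    });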
Looking at the documentation of those frameworks (sailsjs, keystonejs, ...), I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought, it seems, do they sometimes offer a way to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard of API design? Is this a best practice for a reason?
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware mixing business-logic with controller code doesn't fly in the long run. Perhaps small projects can get away with defying that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below services, it's good to have a separate layer that handles talking to outside process boundaries. That way, you can test business logic in isolation by providing appropriate test doubles for integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
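A minimal sketch of that lower layer, with all names hypothetical: the integration sits behind a small interface, so tests can inject a double instead of touching the real boundary:

    // Integration layer: the only place that talks across the process boundary
    const realMailer = {
      send: (to, body) => smtpClient.send(to, body), // hypothetical SMTP client
    };

    // Service layer: business logic depends on the interface, not the transport
    function makeWelcomeService({ mailer }) {
      return {
        welcome: (user) => mailer.send(user.email, `Hi ${user.name}!`),
      };
    }

    // In a test, pass a double instead of the real integration
    async function testWelcome() {
      const sent = [];
      const service = makeWelcomeService({
        mailer: { send: async (to, body) => sent.push({ to, body }) },
      });
      await service.welcome({ email: 'ada@example.com', name: 'Ada' });
      // assert on `sent` without ever touching SMTP
    }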
In "Implementing Domain-Driven Design", Vernon give detailed examples for integrating bounded context with a messaging or REST based solution, it also mention database integration, but I understand it is not a very clean solution to share database or at least db tables between BC.
But what if the 2 BCs I want to integrate are hosted locally on the same server, is it really a good idea to use a messaging/rest/rpc solution ? (which seems more suitable for a remotely hosted BC to me)
Otherwise, except with DB integration, what are the other alternatives ? Hosting both BC in the same process and calling it directly (still using adapters and translators for clean seperation) ?
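For example, the in-process option might look something like this sketch (the context names and models are hypothetical):

    // The downstream BC (Billing) calls the upstream BC (CRM) in-process,
    // but only through an adapter that translates models at the boundary.
    class BillingCustomerAdapter {
      constructor(crmContext) {
        this.crm = crmContext; // upstream BC, hosted in the same process
      }
      async getPayer(customerId) {
        const c = await this.crm.findCustomer(customerId);
        // Translator: map the CRM's model into Billing's own terms
        return { payerId: c.id, legalName: c.companyName };
      }
    }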
Thanks
You could look into using something like 0MQ for inter-process communication on the same server. In the past I've also hosted things in the same process, as you suggest, and used interfaces / in-memory messaging to separate out contexts.
Everything is about trade-offs in the end, so you just need to decide what level of isolation you are willing to accept. The simplest solution would be to separate inside a solution via folders and interfaces, the other end of the spectrum being completely separate servers.
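For the in-memory messaging option, a minimal sketch using Node's built-in EventEmitter as the bus (the event names are made up):

    const { EventEmitter } = require('events');
    const bus = new EventEmitter();

    // The Sales context publishes a domain event; it knows nothing about Billing.
    function placeOrder(order) {
      // ...persist the order...
      bus.emit('order.placed', { orderId: order.id, total: order.total });
    }

    // The Billing context subscribes. Swapping the bus for 0MQ or a broker
    // later moves this boundary out of process without touching either context.
    bus.on('order.placed', (event) => {
      console.log('creating invoice for', event.orderId);
    });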
I don't think that location should come into play w.r.t. integration between BCs.
There really are other factors to consider such as guaranteed delivery to the recipient in order to ensure that the processing takes place. This should be required whether or not the two BCs are hosted on the same server.
Another reason to ignore location is that when you need to scale, your architecture should be able to handle it from the get-go.
As tomliversidge mentioned, it is possible to use deployment mechanisms such as non-durable messaging to speed things up, but there will definitely be a trade-off, and that has to be a conscious decision.
We can run more than one node app from a single code base; all we need is to start each one on a different port. But I am not sure whether doing so is good or not.
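Concretely, that usually means something like this sketch: one code base reading its port from the environment, started once per instance:

    // app.js: the same code base for every instance
    const http = require('http');
    const port = process.env.PORT || 3000;

    http.createServer((req, res) => {
      res.end(`served by the instance on port ${port}\n`);
    }).listen(port);

    // Then start one process per (sub)domain, e.g.:
    //   PORT=3001 node app.js   (sub1.domain.com proxied to :3001)
    //   PORT=3002 node app.js   (sub2.domain.com proxied to :3002)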
I can see the following pros & cons to this approach:
Pros:
multiple domains like sub1.domain.com, sub2.domain.com, and so on, sharing the same code base.
code updates happen in a single place.
Any other pros you'd like to mention?
Cons:
Maybe it can cause deadlocks when reading files, or other multi-process issues.
Any other cons you'd like to mention?
Is it a good move to share a code base this way?
Please share your experience.
Thank You
You are essentially spawning several instances of your application, which is not a bad or a good thing in itself; it has to do with what your application does. If the application does not access any resources that are shared with instances of itself, it is not a problem, and you can spawn as many instances as you like, for whatever purpose you see fit.
BUT if your application uses any shared resources, such as a database or flat files, you need to take race conditions and deadlocks into account. This is handled very well by ACID-compliant databases; on document-oriented databases it is not as mature and requires you to have a good grasp of the techniques and languages used.
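To make the race condition concrete: two instances doing a read-modify-write on a shared flat file can silently lose updates, whereas a single atomic database operation avoids it (the file, collection, and field names are placeholders):

    const fs = require('fs/promises');

    // UNSAFE with multiple instances: both may read the same value,
    // then both write back, losing one of the increments.
    async function unsafeIncrement() {
      const data = JSON.parse(await fs.readFile('counter.json', 'utf8'));
      await fs.writeFile('counter.json', JSON.stringify({ count: data.count + 1 }));
    }

    // SAFE: let the database apply the increment atomically
    // (assuming an already-connected mongodb driver collection).
    async function safeIncrement(counters) {
      await counters.updateOne({ _id: 'hits' }, { $inc: { count: 1 } }, { upsert: true });
    }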
If there is no obvious reason to run multiple instances of your application, do not do it.
Once you start going down the route of multiple instances, you have to design around bottlenecks, network traffic, backups, and a lot of other things that give people headaches. Do not do it just because you can.