I need your help with the LoopBack framework.
Specifically, I want to understand how we can achieve microservices-related functionality with the LoopBack framework.
Please share any links/tutorials/knowledge if you have any.
I have gone through the link below:
https://strongloop.com/strongblog/creating-a-multi-tenant-connector-microservice-using-loopback/
I have downloaded the related demos from the links below, but I couldn't get them to work:
https://github.com/strongloop/loopback4-example-microservices
https://github.com/strongloop/loopback-example-facade
Thanks,
Basically, it depends on your budget and the size of your system. You can build robust and complex implementations using tools like Spring Cloud or KrakenD. As a matter of fact, your question is too broad. I have some microservices architecture experience, and I recommend simply splitting your functionality into containerized services, probably orchestrated by Kubernetes. That way you can expose, for example, a User microservice with LoopBack, and a separate Authentication microservice with LoopBack and/or any other language/framework.
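For illustration, exposing such a User microservice over REST with LoopBack 4 could look roughly like the sketch below; the controller name, route and data shape are assumptions, not taken from the linked examples.

```ts
// users.controller.ts - a minimal LoopBack 4 REST controller for a hypothetical User service.
// The route, the User shape and the hard-coded response are illustrative assumptions.
import {get, param} from '@loopback/rest';

interface User {
  id: string;
  name: string;
}

export class UserController {
  // GET /users/{id} - the endpoint this service exposes; other services (or an API
  // gateway) would call it over plain HTTP.
  @get('/users/{id}')
  async findById(@param.path.string('id') id: string): Promise<User> {
    // A real service would read from its own datastore via a repository.
    return {id, name: 'Example User'};
  }
}
```

The Authentication service would be a separate application with its own controllers and datastore, deployed and scaled in its own container.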
You could (but shouldn't) add communication between those microservices with something like gRPC, given that they should already expose some REST functionality.
The biggest cloud providers have ready-made solutions; for example, AWS has ECS and Fargate, and on GCP you have managed Kubernetes (GKE).
We have created an open-source catalog of microservices that can be used in any LB4 microservice project. It also gives you an idea of how to create microservices using LB4: https://github.com/sourcefuse/loopback4-microservice-catalog
So we have decided to build our web app with NestJS, and we have an ongoing argument about whether we should use a Microservice or a Standalone app to implement our queue-interactions module.
A bit of background: we use Amazon SQS as our queue provider, and we use the bbc/sqs-consumer package for handling the connection.
Now one approach is to use a microservice, in a similar fashion to what is done here: https://github.com/algoan/nestjs-components/tree/master/packages/google-pubsub-microservice
I believe the implications are pretty clear, and it seems as if the NestJS documentation really pushes you towards microservices here, if only because all the built-in implementations are for queue/pub-sub services (RabbitMQ, Kafka, Redis...).
On the other hand, you can choose to use a standalone app, which I feel is basically a microservice but without controllers.
Since we opted to use a third-party package to handle the actual transport and all the technical details, this feels somewhat more appropriate. We don't actually need to send messages from the messageHandler to some controller and then process them there if we can process them directly in the messageHandler, no controllers involved.
Personally, it seems to me that if we don't want to get into the details of the transport implementation (i.e. use the sqs-consumer package for it), then the microservice approach, while it works perfectly, is overkill. A standalone app feels like it would give us the benefit of separating the "main" and the "queues" processes while keeping the implementation as simple as possible.
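To make the comparison concrete, here is a rough sketch of the standalone-app variant we have in mind; AppModule, MessagesService and QUEUE_URL are hypothetical names, and the point is only the combination of NestFactory.createApplicationContext with sqs-consumer.

```ts
// main.ts - a standalone Nest application context (no HTTP server, no controllers),
// with bbc/sqs-consumer pushing messages straight into an injected service.
// AppModule, MessagesService and QUEUE_URL are hypothetical names for this sketch.
import { NestFactory } from '@nestjs/core';
import { Consumer } from 'sqs-consumer';
import { AppModule } from './app.module';
import { MessagesService } from './messages.service';

async function bootstrap(): Promise<void> {
  // Boots only the DI container; nothing listens for HTTP requests.
  const app = await NestFactory.createApplicationContext(AppModule);
  const messages = app.get(MessagesService);

  const consumer = Consumer.create({
    queueUrl: process.env.QUEUE_URL!,
    handleMessage: async (message) => {
      // The SQS message is processed right here, with no controller in between.
      await messages.handle(message.Body);
    },
  });

  consumer.start();
}

bootstrap();
```

The microservice variant would instead wrap roughly the same sqs-consumer code in a custom transport strategy and dispatch each message to a controller handler.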
Conversely, using a Microservice feels more natural to others. Their way of thinking about it is that it doesn't matter whether we implement the transport ourselves or use some package; the semantic meaning is the same in that messages are coming into our app from outside, so a custom-transport Microservice really is the most appropriate solution.
What do you guys think about it?
Would you use the Microservice or the standalone approach?
And in general, when would you choose Microservice over a Standalone app and vice-versa?
I'm interested in designing microservices in two different environments, Spring and Node.js.
While in Spring it's easy to find plenty of resources about Netflix Eureka (probably the number one, along with Consul), in Node.js I found many more options for letting microservices communicate with each other, and there is no single obvious way to go.
These are some approaches for letting services find each other in a microservice architecture:
static IP/port map in code or a config file
through DNS
Service Discovery (Eureka/Consul)
P2P (like blockchain?) [also Seneca?]
Regarding Node.js, on YouTube you can find many videos; one that got my attention is a talk by Seneca's creator Richard Rodger titled "NodeJs microservices without a registry".
The problem with Seneca is that to make it work you need a base/main microservice, which to me looks like service discovery, since all the other microservices must know its IP and port.
From the author's website:
At the moment, our implementation still depends on “well-known” entry points. You have to run a few base nodes at predetermined locations, so that microservices know where to look to join the network – Peter is fixing that one for us, and soon the network will be completely self-managing.
Maybe after Peter finishes, it will look like a P2P microservices architecture where knowledge of the microservices is spread with the SWIM protocol; is that right?
The other difference is that Seneca uses pattern matching to route messages between microservices, but messages always travel through the base/main microservice.
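For readers who don't know Seneca, the pattern matching mentioned above looks roughly like this in a single process; it's a minimal sketch with made-up patterns, and the transport/base-node setup the question is about is configured separately (e.g. via the mesh plugin).

```ts
// A minimal sketch of Seneca's pattern matching, in-process only.
// The role/cmd pattern and the payload are made-up examples.
const Seneca = require('seneca');

const seneca = Seneca();

// Register a handler for any message matching this pattern.
seneca.add({ role: 'search', cmd: 'products' }, (msg: any, reply: any) => {
  reply(null, { results: [`matches for "${msg.query}"`] });
});

// Send a message; Seneca routes it to the best-matching pattern, whether the
// handler lives in this process or (with a transport configured) in another service.
seneca.act({ role: 'search', cmd: 'products', query: 'shoes' }, (err: any, out: any) => {
  if (err) throw err;
  console.log(out.results);
});
```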
Seneca without a service registry: isn't the base node a service registry itself (it knows tags and where they are, instead of IPs)?
Sorry that I have no real code of my own to share; I'm still at a theoretical stage at the moment.
We have a set of Node.js microservices, and each of our microservices has individual configuration files for different environments, like the files below (a sketch of how these are typically loaded follows the list):
default.json
dev.json
staging.json
production.json
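For context, this is how such per-service files are typically consumed, assuming the node "config" package convention (default.json merged with the file named after NODE_ENV); the config keys here are made up.

```ts
// config-usage.ts - illustrative only; assumes the "config" npm package and
// hypothetical keys. With NODE_ENV=staging, values come from staging.json,
// with anything missing falling back to default.json.
import config from 'config';

const dbHost = config.get<string>('db.host');
const queueUrl = config.get<string>('queue.url');

console.log(`service starting with db=${dbHost}, queue=${queueUrl}`);
```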
How should I think about this?
Is it feasible to create a centralised configuration for all microservices instead of having individual ones?
Which is preferred: centralised config or individual config?
I also googled it but found no info regarding this. I am mainly looking for suggestions on how this can be achieved.
Do not do it
The idea of splitting your application into microservices is to keep them independent. Centralised configuration breaks this idea; moreover, doing it (for example with some kind of config-proxy microservice) would probably force you to run everything on the same machine.
Is it for local development?
If it is, simply create docker-compose containers to give developers an easy setup of the development environment. This will still require a separate configuration for each container/service.
Do not do microservices
Maybe what you want to achieve is not a microservice architecture. Take a look here; it might be what you wanted instead, and the services should be easy to port into bounded contexts.
Also keep in mind that bounded contexts are not microservices.
In theory I understand how microservices work and why they can be helpful in various cases, but I still don't get how it works in practice.
Let's say there's an online shop based on a CMS, as a monolithic application.
And there is now the need to run the online shop in a microservices architecture.
How would this microservices architecture differ technically from the current, monolithic architecture?
For example, take productsearch.php. If I want to scale this function, normally I would have to set up a new server and copy the whole CMS resources folder to it for load balancing.
With microservices, productsearch.php would be a single microservice, I guess, and I would only have to copy this PHP file to scale it, without needing to copy the other resources?
I have tried to explain it using this diagram of a fictitious CMS. With a microservices architecture, we can independently scale each microservice. Each microservice may be developed by a different team; they may even be developed using different technologies. But with great flexibility comes great maintenance overhead; I believe it is worth it, as most of it can be automated.
Put simply, each module in a monolithic application is a potential candidate for a microservice. However, microservices can be more granular than a traditional module.
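For instance, the productsearch.php from the question could become its own small HTTP service that is deployed and scaled on its own; the sketch below is purely hypothetical (Express, the route and the port are assumptions).

```ts
// product-search service - a hypothetical, minimal extraction of the CMS's
// productsearch.php into an independently deployable and scalable service.
import express from 'express';

const app = express();

app.get('/search', (req, res) => {
  const query = String(req.query.q ?? '');
  // A real service would query its own search index or database here.
  res.json({ query, results: [] });
});

// Run as many instances of this one service as needed behind a load balancer,
// without copying the rest of the CMS.
app.listen(3000, () => console.log('product-search listening on :3000'));
```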
This does a good job of explaining how to decompose your monolithic application: http://microservices.io/patterns/decomposition/decompose-by-business-capability.html
Technically and conceptually, a microservice is independent of other services (where in a monolith you'd have modules with inter-dependencies).
Technically, a microservice built on a modern platform (such as Node.js, Spring Boot or .NET Core) will be able to take advantage of containerization systems (such as Docker) more easily, perhaps supported by service registry and configuration management technologies (such as Kubernetes, ZooKeeper, Eureka and so on).
The advantage of containerization is that it is easier to scale out (add more containers). Going further, the whole microservice/containerization concept, and the related technologies, also help enable things like CI/CD.
I am new to Spring Boot (Cloud) and am going to work on a new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which the other microservices will connect.
A Zuul proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
(I know this is very tough to guess, but still asking.) The Node.js services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need Node.js code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?
I do not have the full context of the problem you are trying to solve, so the answers below are quite general, but they may still be useful:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
Yes, you are going to need a separate API Gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really silly to force your API consumers (in your case, the Angular 2 app) to keep track of all the possible locations of each microservice. It is better to have one API Gateway that knows about all the services under it, so your clients can call your gateway and get access to the underlying services through one single place. Having one in your system also decouples your clients from your services, so they can evolve independently.
Another benefit is that you can have access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more thing in your architecture: an Authorization Server. A common approach to securing microservices is OAuth 2.0.
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
I think you could use Sidecar, but I have never used it. I suppose that the question 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call the NodeJS-1 microservice directly because there may be several instances of NodeJS-1; which one should you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use Discovery Services like Eureka; they handle all of this. When any given service starts, it must register itself with Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever Microservice-1 wants to call NodeJS-1 it asks Eureka for the locations of the live instances of NodeJS-1. The calling service then chooses which one to call.
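To make that concrete, the sketch below shows roughly what "ask Eureka for the live instances and pick one" means, using Eureka's plain REST API from the Node side; in practice a client library (eureka-js-client, or Spring's DiscoveryClient on the Java side) does this for you, and the host, port and response handling here are assumptions.

```ts
// lookup.ts - hedged sketch of client-side discovery against a Eureka server.
// Assumes Node 18+ (global fetch) and a Eureka server reachable at eureka:8761.
async function resolveInstance(appName: string): Promise<string> {
  const res = await fetch(`http://eureka:8761/eureka/apps/${appName}`, {
    headers: { Accept: 'application/json' },
  });
  const body = await res.json();

  // Eureka lists all registered instances of the application
  // (a single instance may come back as an object rather than an array).
  const raw = body.application.instance;
  const instances: any[] = Array.isArray(raw) ? raw : [raw];

  // Keep only instances that are UP, then do naive client-side load balancing.
  const alive = instances.filter((i) => i.status === 'UP');
  const chosen = alive[Math.floor(Math.random() * alive.length)];
  return `http://${chosen.hostName}:${chosen.port.$}`;
}

// e.g. const baseUrl = await resolveInstance('NODEJS-1');
```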
(I know this is very tough to guess, but still asking.) The Node.js services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need Node.js code at all. Why can't we do the same in the microservices written in Java?
I can only assume that Node.js has been chosen because it has outstanding performance for I/O operations, including HTTP requests, which may come in handy when calling third-party services. I do not have any other rational explanation for this.
In general, microservices give you the possibility of writing each service in a different language, which is indeed nice since each language solves some problems better than others. On the other hand, this decision should be made with caution and should answer the question: "do we really need a new language in our stack to solve problem X?"