Spring Cloud: ZUUL + Eureka + NodeJS

I am new to Spring Boot (Cloud) and am going to work on a new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which the other microservices will connect.
A ZUUL proxy server, which will connect to the front end and the microservices.
Now, these are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the same front-end application as the ZUUL server?
How will Microservice-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from the DB.
Now, what I am not getting is why we need the NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?

I do not have the full context of the problem you are trying to solve, so the answers below are quite general, but they may still be useful:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the same front-end application as the ZUUL server?
Yes, you are going to need a separate API Gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really unwise to force your API consumers (in your case, the Angular 2 app) to keep track of all the possible locations of each microservice. It is better to have one API Gateway that knows about all the services behind it, so your clients call the gateway and reach the underlying services through one single place. Having a gateway also decouples your clients from your services, so it is possible to evolve them independently.
Another benefit is that you get access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more thing in your architecture: an Authorization Server. A common approach to securing microservices is OAuth 2.0.
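For illustration, a minimal Zuul-based gateway in Spring Cloud Netflix could look roughly like this (the class name is arbitrary and the route in the comment is only an example; real routes would point at the service ids registered in Eureka):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// @EnableZuulProxy turns this Spring Boot application into an API gateway that
// forwards requests to services registered in Eureka. Routes are declared in
// configuration, e.g. zuul.routes.orders.serviceId=orders maps /orders/** to
// whichever instances of the "orders" service Eureka knows about.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```

With that in place, the Angular client only ever talks to the gateway, which is also the natural single place for the access control, logging, and security mentioned above.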
How will Microservice-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
I think you could use Sidecar, but I have never used it. I suppose the 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call microservice NodeJS-1 directly because there may be several instances of NodeJS-1, and which one would you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use Discovery Services like Eureka: they handle all of this. When any given service starts, it must register itself in Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever Microservice-1 wants to call NodeJS-1 it asks Eureka for the locations of the live instances of NodeJS-1. The calling service then chooses which instance to call.
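As a sketch of what that looks like from the Java side (the service id nodejs-1 is an assumption; it would be whatever name the Node service registers under, for example via a Spring Cloud Netflix Sidecar), a load-balanced RestTemplate lets Eureka resolve live instances instead of hard-coding hosts:

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
class DiscoveryClientConfig {

    // A @LoadBalanced RestTemplate resolves logical service ids ("nodejs-1")
    // against Eureka and picks one live instance for each call.
    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Somewhere in Microservice-1 (the path /api/things is made up):
// String body = restTemplate.getForObject("http://nodejs-1/api/things", String.class);
```

Because the instance is looked up per call, you never need to care which IP or port a particular NodeJS-1 instance happens to be running on.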
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from the DB.
Now, what I am not getting is why we need the NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS was chosen because it has outstanding performance for I/O operations, including HTTP requests, which comes in handy when calling third-party services. I do not have any other rational explanation for this.
In general, microservices give you the possibility of writing each service in a different language, which is genuinely useful, since each language solves some problems better than others. On the other hand, this decision should be made with caution and should answer the question: "Do we really need a new language in our stack to solve problem X?"

Related

Architecture question - The need for a separate backend running Mongo, Express & Node in MERN or MEVN project

Are my assumptions correct here? Is there some other need for a separate backend, or can a monolithic (and hopefully simpler) solution accomplish the same functionality, perhaps at the cost of not scaling as well?
Looking at existing MERN or MEVN solutions, they always seem to involve two Node processes, where the front-end process runs the client framework and the backend process handles DB requests using Node, Express & Mongo. This seems like a good solution performance-wise when balancing across at least two servers. For my solution, where performance is not the issue, I've wondered what the need for a separate backend is. Why not just use try/catch with async/await in the front end to get the DB data, instead of an API call to a separate backend? Then, once I started trying to design my more monolithic solution, I realized there is a problem that the separate backend actually solves. Trying to avoid DB concurrency issues, I realized the separate-backend solution actually lowers the need for logical transactions, since Node is single-threaded and only processes one request/response at a time.
So if I understand correctly, you're asking why you need a backend that talks to the DB (and potentially to other backend services) instead of calling the DB from the front end directly.
The answer is the following:
If you run this application on your local machine only (front end and DB), then I can see why you wouldn't need a backend.
If this application is exposed to the Internet, then your DB may be hijacked in a matter of minutes.
Security is the main concern here: everything that runs on the client side (the JS stuff) can be seen pretty easily by the user, and this includes endpoints, passwords, etc. Not to mention that your business logic is fully exposed to attackers.
For that reason, the backend plays a very important role in protecting access to the DB, rate-limiting, capping resource usage, and more.
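To make this concrete, here is a rough sketch (written in Java/Spring; an Express + Mongoose route would be the MERN equivalent). The Order type, the endpoint path, and the in-memory map standing in for the database are all made up for illustration; the point is that the browser only ever sees a narrow HTTP endpoint, while credentials and query logic stay on the server.

```java
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical resource; a real app would query Mongo/SQL here with
// credentials that never leave the server.
record Order(long id, String status) {}

@RestController
class OrderController {

    // Stand-in for the database; swap for a repository in a real application.
    private final Map<Long, Order> fakeDb = Map.of(1L, new Order(1L, "SHIPPED"));

    // The client can only ask for what this endpoint chooses to expose; it never
    // sees connection strings, passwords, or the underlying queries.
    @GetMapping("/api/orders/{id}")
    Order byId(@PathVariable long id) {
        return fakeDb.get(id);
    }
}
```

This is also where validation, authentication, and rate limiting naturally live, none of which can be enforced reliably in client-side code.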

Questions pertaining to micro-service architecture

I have a couple of questions about microservice architecture. For example, take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all read from one traditional database, such that all data pertaining to the services exists in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that statement be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? Now, I understand this is a massive generality, and it's fundamentally predicated on the needs of the business. But when that discussion has been had, was the rebuild worth it? And what challenges can you expect to face?
I will try to answer all the questions.
With respect to all services using the same database: if you do that, you have two main problems. First, the database becomes a bottleneck, because all requests go to the same point. Second, you will have coupled all your services, so if the database goes down or needs to be updated, all your services are affected. (The database becomes a single point of failure.)
The communication between services can be whatever your services need (synchronous, asynchronous, via message passing through a message broker, etc.); it all depends on the use cases you have to support. The recommended way to avoid temporal coupling is to use a message broker such as Kafka; that way your services don't have to know each other, and if some of them go down the others keep working. When the failed services come back up, they can continue processing the messages they have pending. However, if your services need to respond synchronously, you can define synchronous communication between services and use a circuit breaker to behave properly when the callee service is down.
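As a rough illustration of the message-broker option (this assumes the spring-kafka dependency; the topic name orders.placed and the plain-String payload are made up), the producing service publishes an event and never needs to know who consumes it:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// In the orders service: publish an event instead of calling the other services directly.
@Service
class OrderEvents {

    private final KafkaTemplate<String, String> kafka;

    OrderEvents(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    void orderPlaced(String orderId) {
        // Fire-and-forget: the broker keeps the event even if consumers are down.
        kafka.send("orders.placed", orderId);
    }
}

// In the communication service: react whenever an order event arrives.
@Service
class OrderPlacedListener {

    @KafkaListener(topics = "orders.placed", groupId = "communication")
    void onOrderPlaced(String orderId) {
        // e.g. send a confirmation email for orderId
    }
}
```

If the communication service is down when the event is published, the broker retains the message and the listener catches up once the service is back, which is exactly the temporal decoupling described above.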
A microservices architecture is far more complicated to make work, to monitor, and to debug than a traditional monolithic architecture, so it is only worth it if you have very large scalability and availability requirements, and/or if the system is so large that it requires several teams working on different parts of it and you want to avoid dependencies among them, so that each team can work at its own pace deploying its own services.

Splitting load of an API between multiple servers

I'm planning to build an API for one of my projects. But I'm looking for a good way to manage it, and manage server load.
Would I be better off just creating everything on one server, or should I create multiple?
Thoughts:
If I create one server and that server crashes, the whole system would go down. But if I create multiple servers to handle this, and one of them crashes, only that part would go down.
How I was thinking to accomplish this:
1) Create one API ENDPOINT
2) When a user sends a REQUEST to that API ENDPOINT, the ENDPOINT sends another request to the correct server containing the specific task; when the task is done, it returns the data back to the user.
AKA:
User => ENDPOINT => ENDPOINT 1, ENDPOINT 2, ENDPOINT 3 => ENDPOINT => User
Is this how I should do it?
P.S. I don't know if this is the right terminology, but I'm trying to learn how to scale my ENDPOINTS/API/code.
Regarding the load balancer, you should use a dedicated web server application for that, like nginx or Apache. These web servers already have load-balancing mechanisms implemented; you just need to configure them.
Also, I recommend that you pack your server in Docker images. That way you can use Docker Swarm or Kubernetes to deploy and scale your application up and down. It becomes easier to manage your services, check application state, and deploy new versions.
You could use Docker with nginx, where each Docker container runs an instance of your application and nginx takes care of redirecting/distributing requests between your instances.
What you are basically looking for is a comparison between a microservices-based architecture (or SOA) and a monolith.
In microservices, there are multiple services performing specific tasks, which are in turn composed to perform complex tasks. A monolith, on the other hand, consists of one big server that does everything and is also the single point of failure you pointed out.
Should you move to microservices?
It is widely agreed that a project should be built with a monolithic architecture first and then split into microservices as the complexity grows. Martin Fowler's article explains this concept well.
This is because there are certain disadvantages and trade-offs associated with a microservices architecture, for instance inconsistency and latency.
TL;DR: Stick to one server if you are starting out, and break it into services when it becomes complex.

Design Microservice with and without service registry [seneca/eureka]

I'm interested in designing microservices in two different environments, Spring and NodeJS.
While in Spring it's easy to find plenty of resources about Netflix Eureka (probably the number one option, along with Consul), in NodeJS I found more ways of letting microservices communicate with each other, and there isn't one obvious way to go.
These are some methods of designing a microservice architecture:
static ip/port map in code or config file
through DNS
Service Discovery (Eureka/Consul)
P2P (like blockchain?) [also seneca?]
Regarding NodeJS, you can find many videos on YouTube; one that got my attention is the talk by Seneca's author, Richard Rodger, titled "NodeJs microservices without a registry".
The problem with Seneca is that to make it work you need a base/main microservice, which to me looks like service discovery, since all the other microservices must know its IP and port.
From the author's website:
At the moment, our implementation still depends on "well-known" entry points. You have to run a few base nodes at predetermined locations, so that microservices know where to look to join the network – Peter is fixing that one for us, and soon the network will be completely self-managing.
Maybe after Peter finishes it, it will look like a P2P microservices architecture, where knowledge of the microservices is spread via the SWIM protocol. Is that right?
The other difference is that Seneca uses pattern matching to route messages between microservices, but the messages always travel through the base/main microservice.
Seneca without a service registry: isn't that a service registry itself (it knows tags and where they live, instead of IPs)?
Sorry for the lack of code, but I am still at a theoretical stage at the moment.

Can I run a microservice which keeps a port open in the cloud?

I'm new to microservices. I envision them as a set of processes running on two or more machines (I suppose that for a given process, two instances must run on separate machines for reliability). In that setup, depending on the kind of clients I have, there may be one process working as a TCP server, serving on a specific high port and speaking a non-HTTP protocol.
However, for my low-bandwidth testing purposes, I haven't found a free cloud service that provides that kind of environment (machines to run processes on, say Java on Linux, while keeping a high port open).
Maybe the facilities I'm expecting are only available to paying customers, or maybe implementing a microservice architecture in the cloud goes beyond simply running processes on machines and sharing a database? Could someone clarify? (And, if possible, point me to one such free service.)
Yes, you are right when you say microservices are more about independent services (processes) that can be deployed on one or more cloud machines. Each service can communicate with the others over non-HTTP protocols such as message brokers, Thrift, remote procedure calls (RPC), etc.
From an architecture point of view, services should mostly be decoupled enough to handle the complexity of distributed computing; see the image at the Microservices Architecture link.
There's also the concept of an API Gateway, which can be used for authentication and for service registration and discovery.
Coming back to your question: you can test microservices on a single cloud machine (by running each service on a different port) and use an API Gateway to discover the service paths. For reference, here are some links worth looking at.
For the concept, see Microservices.io and this Stack Overflow question.
For implementation: ZooKeeper and Auth0 (this is what I'm using).
If you are a Java lover, the InfoQ article is well worth a look.
Some of the free services that might help in building and testing microservices are Google App Engine and hook.io.
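To make the "non-HTTP service holding a high port open" part of the question concrete: the process itself is nothing exotic; the hard part is only finding a host that exposes raw TCP ports. Below is a minimal sketch using only the JDK, where the port number 9000 and the echo behaviour are arbitrary choices for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal non-HTTP TCP service: listens on a high port and echoes lines back.
// Whether you can run this "for free" depends on the cloud provider exposing
// raw TCP ports, not on the microservice style itself.
public class EchoService {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println("echo: " + line);
                    }
                }
            }
        }
    }
}
```

Many free PaaS tiers only route HTTP traffic, which is often the real limitation when trying to run a service like this at no cost.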
