Splitting load of an API between multiple servers - node.js

I'm planning to build an API for one of my projects. But I'm looking for a good way to manage it, and manage server load.
Would I be better off just creating everything on one server, or should I create multiple?
Thoughts:
If I create one server and that server crashes, the whole system would go down. But if I create multiple servers to handle this, and one of them crashes, only that part would go down.
How I was thinking to accomplish this:
1) Create one API ENDPOINT
2) When a user sends a REQUEST to that API ENDPOINT, the ENDPOINT would send another request to the correct server containing the specific task; when the task is done, it would return the data back to the user.
AKA:
User => ENDPOINT => ENDPOINT 1, ENDPOINT 2, ENDPOINT 3, => ENDPOINT => User
Is this how I should do it?
P.S. I don't know if this is the right terminology, but I'm trying to learn how to scale my ENDPOINTS/API/code.

For the load balancer, you should use a dedicated web server application such as nginx or Apache. These web servers already have load-balancing mechanisms built in; you just need to configure them.
Also, I recommend packaging your server in Docker images. That way you can use Docker Swarm or Kubernetes to deploy and scale your application up/down, and it becomes easier to manage your services, check application state, and deploy new versions.
You could use Docker together with nginx, where each container runs an instance of your application and nginx takes care of distributing requests between the instances.
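If you just want to see the distribution idea without setting up nginx yet, Node's built-in cluster module does something similar in-process: the primary forks one worker per CPU core and spreads incoming connections across them. A minimal sketch (Node 16+; the port is arbitrary):

    // cluster-app.js - run several workers behind one port (minimal sketch)
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) {
      // Fork one worker per CPU core; the primary distributes connections among them.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      cluster.on('exit', (worker) => {
        console.log(`worker ${worker.process.pid} died, starting a new one`);
        cluster.fork(); // basic self-healing: replace crashed workers
      });
    } else {
      http.createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(3000);
    }

Note that this only scales across the cores of one machine; for multiple machines you still want something like nginx (or another load balancer) in front.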

What you are basically looking for is a comparison between microservices based architecture (or SOA) and a monolith.
In a microservices architecture, there are multiple services, each performing a specific task; together they are composed to perform more complex tasks. A monolith, on the other hand, is one big server which does everything and is also a single point of failure, as you pointed out.
Should you move to microservices?
It is widely agreed that a project should be built in monolithic architecture and then moved to microservices as the complexity grows. Martin Fowler's article explains this concept well.
This is because microservices come with their own disadvantages and tradeoffs - consistency issues and extra latency, for instance.
TL;DR: Stick to one server when starting out; break it into services when the project becomes complex.

Related

Create multiple front-ends hitting same data source

I want to create and host 4-5 websites using the same database. The only difference between the sites will be:
branding (colours and header)
data will be filtered per website (through sql query) and
Each site will be on a separate domain (but can be hosted on same server)
My first thought was to use an API/REST model and provision the five front-ends on their own sub-domains. But since the sites can be hosted on the same server (I'm assuming one hosting account that allows multiple sub-domains), I think I can simply connect all the sites to the same database via a connection string, avoiding the complexities of REST.
Is this possible, and would I run into database conflicts doing this?
If I later wanted to add a mobile app client, would I need to build out a REST interface anyway?
Thanks
The right thing to do here depends a lot on your specific use case, expected load, preferred backend/edge technology, future plans, etc.
Site domains and servers -
The main point here is that you can host your domains/subdomains on the same or different servers. You simply need to update the DNS to point to the correct IP (update the subdomain's A record).
Note: If these sites are all public-facing, then I highly recommend using an edge/proxy server, and even consider a load balancer depending on the expected number of visitors (nginx or Apache Web Server, for example).
Decoupled architecture is almost always preferred -
I would definitely have an API/REST layer to abstract the database from the sites. This ensures that you establish a contract through which any clients can interact with the backend, including your mobile application. You also don't have to duplicate DB-specific code across the various clients. What if you decided to change your schema? Or even your database solution? Then all clients will be broken and your customers would be unhappy. As a guiding principle, think: if I change any one thing in my architecture, how many other things will need to change as a result? In terms of scalability, this architecture will also allow you to easily spin up more instances of whatever it is you need (databases, REST service, etc) should the need arise.
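To make the shared REST layer concrete, here is a minimal, hypothetical Express sketch: the API works out which site a request belongs to (here from the Host header) and filters the data accordingly. The domain names, site ids and in-memory data are placeholders, not a prescription:

    // api.js - hypothetical shared REST layer serving several branded sites
    const express = require('express');
    const app = express();

    // Placeholder mapping from public domain to the site id used for filtering.
    const SITE_BY_HOST = {
      'site-one.example.com': 'site1',
      'site-two.example.com': 'site2',
    };

    // In-memory stand-in for the real database, just to keep the sketch runnable.
    const ARTICLES = [
      { id: 1, site_id: 'site1', title: 'Hello from site 1' },
      { id: 2, site_id: 'site2', title: 'Hello from site 2' },
    ];

    app.get('/api/articles', (req, res) => {
      const siteId = SITE_BY_HOST[req.hostname];
      if (!siteId) return res.status(404).json({ error: 'unknown site' });
      // With a real database this would be a filtered query,
      // e.g. SELECT * FROM articles WHERE site_id = $1
      res.json(ARTICLES.filter((a) => a.site_id === siteId));
    });

    app.listen(3000);

Each branded front end (and later the mobile app) would call this one API instead of talking to the database directly.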
How do I build and deploy a REST API? -
Re: #2, to set up a simple custom REST service running on Node.js (and Express), this is a good tutorial. The example also walks through setting up and integrating with an in-memory MongoDB database.
Database collisions? -
If you follow the above steps, this should be a moot point. Node.js/Express and the databases expose ways to configure connection pools if the defaults do not suffice. Again, this will depend on your needs - how many concurrent users you expect.
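On the pooling point, most drivers let you size the pool when creating the client. For example, with the official MongoDB Node.js driver (which the tutorial above uses) it looks roughly like the following; option names can differ between driver versions, so treat this as a sketch:

    // db.js - connection pool sizing sketch (MongoDB Node.js driver 4.x+)
    const { MongoClient } = require('mongodb');

    // maxPoolSize caps how many concurrent connections this process keeps open;
    // tune it to your expected concurrency rather than accepting the default.
    const client = new MongoClient('mongodb://localhost:27017', {
      maxPoolSize: 20,
    });

    async function getDb() {
      await client.connect();
      return client.db('mysites');
    }

    module.exports = { getDb };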

Sails.js (Node.js) server architecture, scaling and performance

I want to create a Sails.js (Node.js) server app which will provide an API for a single-page app. This server will consist of multiple modules:
user management
forum
chat
admin GUI
content management
payment gateway
...
All these modules will share one database. The server must be able to handle as many requests and web sockets as possible. Clean architecture and performance are my primary goals.
My questions:
Should I create multiple servers running on multiple ports? I mean, one server for the content management module, another server for the forum module, and so on.
Or is it better to create only one big universal server, which consists of multiple separate modules (hooks in Sails.js) and runs on one port? Will performance of the server decrease in this case?
I was also thinking about vertically scaling one big universal server, running on a single port with pm2. Or is it better to scale Node.js horizontally and split the server into multiple smaller servers?
I'm new to Node.js, so I appreciate any advice.
I think it really boils down to the scale of the project.
For very simple things there's no real reason to scale past a single, reliable server, is there?
However, for broader projects with a resource-intensive back end and a lot of users and traffic, you may want to split the back-end and front-end aspects depending on the requirements.
In that case you might have one server (or more) dealing with the specific administrative requests or routines, and then have the client/user API running behind a load balancer, spread across multiple servers in multiple regions, or broken down further into an auto-scaling group to accommodate fluctuations in traffic.
It's also worth noting that this approach really suits higher volumes of traffic or resource usage, since you're dedicating server infrastructure to the purpose. For smaller applications with infrequent usage, breaking things down into micro services from the start and getting billed for runtime rather than dedicated infrastructure might make more sense. You could take a look at the AWS API Gateway and Lambda services for more information on that (I am not affiliated with AWS in any way; I just appreciate what they have managed to put together there).
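On the vertical-scaling option mentioned in the question, pm2 can run the single "universal" Node/Sails app in cluster mode across all CPU cores with a small config file. A minimal sketch (file name and app name are just examples):

    // ecosystem.config.js - pm2 cluster-mode sketch
    module.exports = {
      apps: [
        {
          name: 'api',
          script: 'app.js',        // your Sails/Node entry point
          exec_mode: 'cluster',    // fork several processes and load-balance between them
          instances: 'max',        // one process per CPU core; use a number to pin it
        },
      ],
    };

    // Start with: pm2 start ecosystem.config.js

This only helps one machine use all of its cores; horizontal scaling across machines still needs a load balancer in front.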

Spring Cloud: ZUUL + Eureka + NodeJS

I am new to Spring-Boot(Cloud) and going to work on one new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which the other microservices will connect.
A ZUUL proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the same front-end application as the ZUUL server?
How will Microservice-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from the DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?
I do not have the full context of the problem you are trying to solve, so the answers below are quite general, but they may still be useful:
Do I need to have a separate ZUUL proxy server? I mean, what are the pros and cons of using the same front-end application as the ZUUL server?
Yes, you are going to need a separate API Gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really unwise to force your API consumers (in your case, Angular 2) to memorise all the possible locations of each microservice. Better to have one API Gateway that knows about all the services under it, so your clients can call your gateway and reach the underlying services through one single place. Having a gateway also decouples your clients from your services, so the two can evolve independently.
Another benefit is that you get access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more piece in your architecture - an Authorization Server. A common approach to securing microservices is OAuth 2.0.
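Zuul itself is configured on the Java/Spring side, but to illustrate what a gateway does, here is a tiny Node sketch using the http-proxy-middleware package; the path prefixes, hosts and ports are made up for the example:

    // gateway.js - what an API gateway does, in miniature (illustrative only)
    const express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');

    const app = express();

    // One public entry point; each path prefix forwards to a different service.
    // The targets below are placeholders for wherever your services actually run.
    app.use('/users', createProxyMiddleware({ target: 'http://localhost:4001', changeOrigin: true }));
    app.use('/forum', createProxyMiddleware({ target: 'http://localhost:4002', changeOrigin: true }));

    // Cross-cutting concerns (auth, logging, rate limiting) would also live here.
    app.listen(8080);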
How will Microservice-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar pattern. But again, why? I can directly invoke the REST API of NodeJS-1 from Microservice-1.
I think you could use Sidecar, but I have never used it. I suppose that the question 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call the NodeJS-1 microservice directly because there may be several instances of it - which one would you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use Discovery Services like Eureka - they handle all of this. When any given service starts, it must register itself with Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever Microservice-1 wants to call NodeJS-1 it asks Eureka for the locations of the live instances of NodeJS-1. The calling service then chooses which one to call.
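In code, the "ask the registry, then call an instance" flow looks roughly like the sketch below. This is not the actual Eureka client API - the registry URL and response shape are invented for illustration (and it assumes Node 18+ for the global fetch):

    // call-nodejs1.js - discovery-then-call sketch (registry URL and payload are hypothetical)
    async function callNodeJs1(path) {
      // 1. Ask the discovery service for live instances of NodeJS-1.
      const res = await fetch('http://discovery.local/services/nodejs-1/instances');
      const instances = await res.json(); // e.g. [{ host: '10.0.0.5', port: 3001 }, ...]

      if (instances.length === 0) throw new Error('no live instances of NodeJS-1');

      // 2. Pick one (here: random; real clients round-robin and track failures).
      const { host, port } = instances[Math.floor(Math.random() * instances.length)];

      // 3. Call it.
      const reply = await fetch(`http://${host}:${port}${path}`);
      return reply.json();
    }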
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from the DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS has been chosen because it has outstanding performance for IO operations, including the HTTP requests that come in handy when calling third-party services. I do not have any other rational explanation for this.
In general, microservices give you the possibility of writing each service in a different language, which is genuinely useful since each language solves some problems better than others. On the other hand, this decision should be made with caution, and should answer the question: "do we really need a new language in our stack to solve problem X?"

Can I run a microservice which keeps a port open in the cloud?

I'm new to microservices. I envision them as a set of processes running in two or more machines (I suppose for a given process two instances must be run in separate machines for reliability). In that setup, depending on the kind of clients I have there may be one process working as a TCP server serving on a specific high port and speaking a non-HTTP protocol.
However, for my low-bandwidth, testing purposes, I haven't found a free cloud service which provides that kind of environment (machines to run processes on – say, Java on Linux – while keeping a high port open).
Maybe the facilities I'm expecting are only available to paying customers, or maybe implementing a microservice architecture in the cloud goes beyond simply running processes in machines and sharing a database? Could someone clarify? (and if possible direct me to one such free service)
Yes, you are right: microservices are independent services (processes) that can be deployed on one or more cloud machines. Each service can communicate with the others using non-HTTP protocols as well - message brokers, Thrift, remote procedure calls (RPC), etc.
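For the specific "high port, non-HTTP protocol" case in the question, the service itself is just a plain TCP listener, which in Node is the core net module. A minimal sketch (the port and the echo "protocol" are arbitrary):

    // tcp-service.js - plain TCP microservice sketch using Node's core net module
    const net = require('net');

    const server = net.createServer((socket) => {
      // Speak whatever protocol you like over the socket; here we just echo.
      socket.on('data', (chunk) => {
        socket.write(`echo: ${chunk}`);
      });
    });

    // Any high port works as long as the host/firewall allows inbound traffic on it.
    server.listen(9000, () => console.log('TCP service listening on 9000'));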
From an architecture point of view, services should be decoupled enough to handle the complexity of distributed computing; see the image at the Microservices Architecture link.
There's also the concept of an API Gateway, which can be used for authentication and for service registration and discovery.
Coming back to your question: you can test microservices on a single cloud machine (by running each service on a different port) and use an API Gateway to discover the service paths. Here are some links worth looking at:
For the concepts: Microservices.io and this Stack Overflow question
For implementation: ZooKeeper and Auth0 (this is what I'm using)
If you are a Java lover, the InfoQ article is also worth a look.
Some free services that can help with building and testing microservices: Google App Engine, hook.io

How to organize different Node.js services?

This question does not necessarily pertain to the organization of Node project structure, but more to how to represent separate, logical services. Within our team, we have requirements to create and support several services (i.e., a set of API endpoints). These services aren't directly related, so my initial reaction is that they should be separate projects with separate code bases running in separate Node (or Express) servers. I'm wondering if this approach would complicate deployment and management. The alternative would be to have a single "entry point" (i.e., a single Node server) that delegates to the respective services depending on which context root or URL is seen. I'm curious which approach seems more logical and how people are handling these "microservices" in the wild now.
These services aren't directly related
These services should be separate projects/repos with distinct entry points.
I'm wondering if this approach would complicate deployment and management.
Yes, absolutely. I have several NodeJS JSON APIs in production and for each, I have 2-3 environments (canary, staging, production). When you get to about 3 production services in the wild, things can get unwieldy without some discipline.
You can manage this with documentation (via wiki or in repo) about each service and their environments as well as any other dependencies (services that this service depends on).
This also helps with emergencies where a service is slow or not responding. Sometimes the service itself is fine, but one of its dependencies is down. For example, the GitHub API may be a dependency - and sometimes it goes down.
The alternative would be to have a single "entry point" (i.e., a single Node server) that delegates to the respective services depending on which context root or URL is seen.
In some cases, you may have to also build a "gateway" service which consumes your other single-purpose services. One reason to do this is to support authentication and authorization (i.e. OAuth).
In other words, you may need multiple micro-services and a gateway service.
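If you do go with the single-entry-point alternative quoted above, the in-process version is simply one router per service mounted at its own context root. A minimal hypothetical sketch (service names and routes invented):

    // server.js - one Node server delegating by context root (hypothetical layout)
    const express = require('express');
    const app = express();

    // In a real project each router would live in its own module/folder,
    // which keeps the door open to splitting it into a separate server later.
    // They are defined inline here only to keep the sketch self-contained.
    const users = express.Router();
    users.get('/', (req, res) => res.json([{ id: 1, name: 'demo user' }]));

    const billing = express.Router();
    billing.get('/invoices', (req, res) => res.json([]));

    // Single entry point: delegate by context root.
    app.use('/users', users);
    app.use('/billing', billing);

    app.listen(3000);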
