Apache mod_jk (or HAProxy) with multiple clusters - Liferay

We have two separate clusters (one for the Liferay portal, another for Tableau BI). They are independent of each other. Now I want to use a load balancer in a high-availability manner (2 servers with heartbeat). The question is the following: is it possible to configure Apache mod_jk (or HAProxy) so that it does load balancing separately for both clusters? Or should we have an independent load balancer for each cluster?
Thanks

Both approaches are possible, and the choice ultimately comes down to your own preferences. That said, I can think of only one real advantage to using two load balancers, which is the ability to manage the two load balancers separately using the status worker.
This way, you could stop one of the workers in just one load balancer, so one of your applications would stop receiving requests on that worker node whilst the other application continued handling incoming requests. I see this approach as more production grade.
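To illustrate the single-balancer option: one HAProxy instance can serve both clusters through separate frontend/backend pairs. This is only a minimal sketch; the names, ports and addresses below are placeholder assumptions, not your actual topology.

# haproxy.cfg - minimal sketch; all hosts and ports are hypothetical
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend liferay_front
    bind *:80
    default_backend liferay_back

frontend tableau_front
    bind *:8000
    default_backend tableau_back

backend liferay_back
    balance roundrobin
    server liferay1 10.0.0.11:8080 check
    server liferay2 10.0.0.12:8080 check

backend tableau_back
    balance roundrobin
    server tableau1 10.0.0.21:80 check
    server tableau2 10.0.0.22:80 check

Each cluster gets its own backend, so the balancing decisions and health checks stay completely independent, which is the behaviour the question asks for.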

Related

How to use clusters in Node.js?

I am very new to Node.js and Express. I am currently learning them by building my own services.
I recently read about clusters. I understood what clusters do. What I am not able to understand is how to make use of clusters in a production application.
One way I can think of is to have the master process just sit in front and route incoming requests to the next available child process in round-robin fashion. I am not sure if this is how it is designed to be used. I would like to know how clusters should be used in a typical web application.
Thanks.
The Node.js cluster module is used any time you want to spread request processing across multiple Node.js processes. This is most often done when you wish to handle more requests/second and you have multiple CPU cores in your server. By default, a single instance of Node.js will not fully utilize multiple cores, because the core JavaScript you run in your server is single threaded (uses one core). Node.js itself does use threads for some things internally, but that's still unlikely to fully utilize a multi-core system. Setting up a clustered Node.js process for each CPU core allows you to better maximize the available compute resources.
Clustering also provides you with some additional fault tolerance. If one cluster process goes down, the other live processes can still serve requests while the failed one restarts.
The cluster module for node.js has a couple different scheduling algorithms - the round robin you mention is one. You can read more about that here: Cluster Round-Robin Load Balancing.
Because each cluster is a separate process, there is no automatic shared data among the different cluster processes. As such, clustering is simplest to implement either where there is no shared data or where the shared data is already in a place that it can be accessed by multiple processes (such as in a database).
Keep in mind that a single Node.js process (if written to properly use async I/O and not heavily compute bound) can serve many requests at once by itself. Clustering is for when you want to expand scalability beyond what one instance can deliver.
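As a rough illustration of that pattern, here is a minimal sketch of the standard cluster setup: one worker per core, with a simple restart-on-exit for fault tolerance. The port number is an arbitrary choice.

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
    // Fork one worker per CPU core.
    for (var i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
    // Basic fault tolerance: replace any worker that dies.
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, forking a new one');
        cluster.fork();
    });
} else {
    // Each worker runs its own server; the master distributes connections.
    http.createServer(function (req, res) {
        res.end('handled by pid ' + process.pid + '\n');
    }).listen(8000);
}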
I have created a proof of concept for clustering in Node.js and covered the details in the blog posts below. They may provide some clarity:
https://jksnu.blogspot.com/2022/02/cluster-in-node-js-application.html
https://jksnu.blogspot.com/2022/02/cluster-management-in-node-js.html

Scaling microservices using Docker

I've created a Node.js (Meteor) application and I'm looking at strategies to handle scaling in the future. I've designed my application as a set of microservices, and I'm now considering implementing this in production.
What I'd like to do, however, is have many microservices running on one server instance to maximise resource usage while each service uses only a small amount of resources. I know containers are useful for this, but I'm curious whether there's a way to create a dynamically scaling set of containers where I can:
Write commands such as "provision another app container on this server if the containers running this app reach > 80% CPU/other limiting metrics",
Provision and prepare other servers if needed for extra containers,
Load balance connections between these containers (and does this affect server-level load balancing, e.g., sending fewer connections to servers with fewer containers?)
I've looked into AWS EC2, Docker Compose and nginx, but I'm uncertain if I'm going in the right direction.
Investigate Kubernetes and/or Mesos, and you'll never look back. They're tailor-made for what you're looking to do. The two components you should focus on are:
Service Discovery: This allows inter-dependent services (micro-service "A" calls "B") to "find" each other. It's typically done using DNS, but with registration features on top of it that handle what happens as instances are scaled.
Scheduling: In Docker-land, scheduling isn't about CRON jobs; it means how containers are scaled and "packed" onto servers in various ways to maximize efficient usage of available resources.
There are actually dozens of options here: Docker Swarm, Rancher, etc. are also competing alternatives. Many cloud vendors like Amazon also offer dedicated services (such as ECS) with these features. But Kubernetes and Mesos are emerging as standard choices, so you'd be in good company if you at least start there.
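To make the first requirement concrete: in Kubernetes terms, the "provision another container at >80% CPU" rule from the question maps onto a HorizontalPodAutoscaler. A hypothetical manifest might look like this (the deployment name and replica bounds are placeholders):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical deployment running the service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out above 80% average CPU

Kubernetes then handles both adding containers and (with a cluster autoscaler on a cloud provider) provisioning extra servers, which covers the second and third points as well.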
Metrics can be collected via the Docker API, and it is often used for exactly that.
Tinkering with the Docker API and the Docker stack tools (Compose/Swarm/Machine) can provide a lot of what you need to scale a microservice architecture efficiently.
I'd also advise in favor of Consul for managing discovery in such a resource-aware system.
We are using AWS to host our microservices application, and using ECS (the AWS Docker service) to containerize the different APIs.
In this context, we use the AWS auto scaling feature to manage scale-in and scale-out.
Hope it helps.

Which load balancer would perform better: mod_jk, mod_proxy, or another open source tool?

I am willing to host an application on a single machine, without any fail-over or load balancing at the hardware level, as per my budget.
But, to my knowledge, as the number of hits to Tomcat increases, it has a tendency to go down. To get around that, I want to run multiple instances of the same application. To do so, which load balancer would be better: mod_jk or mod_proxy? You can also suggest any other open source tool that would help me load balance the application's hits.
My application uses Struts (not even Spring) and my OS is RHEL 6.x. Please also advise with good performance in mind.
Thanks in advance.
Running multiple instances of one application on the same machine only leads to the application sharing the resources - minus the overhead of the additional Tomcat instances and the load balancer.
This is pretty much like dividing the cargo across individual tires because a car gets slow when it is overloaded. Staying with the picture: what you want is a turbocharger.
Translated that would be a reverse proxy cache.
I'd suggest using Varnish. You can configure it to serve static resources like images and stylesheets from RAM after they have been delivered the first time, reducing the requests passed to your application drastically.
It may be configured as a classical load balancer, too.
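A minimal VCL sketch of that caching idea might look like the following; the backend address, file extensions and TTL are assumptions to adapt, not a drop-in config.

vcl 4.0;

backend default {
    .host = "127.0.0.1";   # hypothetical Tomcat instance
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies from static assets so Varnish can cache them.
    if (req.url ~ "\.(png|gif|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Keep static assets in cache for an hour.
    if (bereq.url ~ "\.(png|gif|jpg|css|js)$") {
        set beresp.ttl = 1h;
    }
}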

Server architecture for a scalable web application

We're planning to deploy a web application with Amazon OpsWorks, and I just wanted to check with you whether our architecture might have any design flaws.
We have 4 components:
A load balancer (Amazon preferably)
Express based on Node.js
MongoDB
ElasticSearch
Here's how our components communicate:
At the front is a load balancer which distributes http requests to multiple web servers.
The web servers are stateless and can therefore be cloned whenever the load requires it. All web server instances are equal. Session information is saved in MongoDB.
In the "backend" we're planning to use the built-in cluster functionality of MongoDB and ElasticSearch. Each web server instance therefore only connects to a single MongoDB and ElasticSearch master instance, and MongoDB and ElasticSearch scale accordingly. Furthermore, the ElasticSearch master talks to the MongoDB master to retrieve the data for building the index.
As we see it, the most challenging task in setting up such a system is configuring OpsWorks with the MongoDB and ElasticSearch clusters.
Many thanks in advance!
if our architecture might have any design flaws.
Well, keep in mind that we can't tell much from a generic diagram. But here are some notes:
1) MongoDB isn't as easy to scale as other databases such as DynamoDB, Riak or Cassandra. For example, if you ever exceed the capacity of a single master (no matter how many slaves you have, all writes go to the single master), you'll have to shard. But switching to sharding is very disruptive and very tedious to set up.
If you don't expect to exceed the write capacity of one node, then you'll be fine on MongoDB.
2) What will you do for async tasks such as sending emails, creating long reports, etc?
It's possible to do these things in the request loop, and that's probably a fine way to get started. But as you have more boxes, the chances of failure go up. When a box dies, all the async tasks go away and nobody will know what they were. You also can have problems where one box gets heavily loaded with async tasks (using too much CPU or memory), and the problem will get worse and worse as it gets more tasks and completes them more slowly.
Also, a front-end like ELB will have a 60-second limit, which can cause problems if some of your requests could take longer. (Spin them off into async jobs with polling or something.)
3) ELB doesn't support web sockets. Consider that if you think you might want websockets down the road.
There's no such thing as a single master in Elasticsearch. You have master copies of shards and replicas of shards, but they are moved around through your cluster by Elasticsearch. A node might hold the master copy of one shard and a replica of another. So you could simply put a load balancer in front of it.
However, you can specialize nodes to be data nodes or routing nodes as explained here: http://www.elasticsearch.org/guide/reference/modules/node/
The routing nodes effectively become load balancers. You could have a few of those (for redundancy) and distribute load between them. Alternatively, you could run a dedicated router node on each web server. Routing nodes are pretty light, and you save a bit of bandwidth/latency since your web server talks to localhost, and from there it is all Elasticsearch-internal cluster traffic.
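For reference, in the Elasticsearch versions the linked docs describe, a dedicated routing node is just a node with both roles switched off in elasticsearch.yml. A sketch, assuming the pre-5.x settings:

# elasticsearch.yml on the routing node
node.master: false   # never coordinates the cluster
node.data: false     # holds no shards; only routes requests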
I'd recommend replacing MongoDB with Amazon DynamoDB (it has a Node.js SDK).

How to make a distributed node.js application?

Creating a node.js application is simple enough.
var app = require('express')();

app.get('/', function (req, res) {
    res.send("Hello world!");
});

app.listen(3000); // without listen() the server never accepts requests
But suppose people became obsessed with your Hello World! application and exhausted your resources. How could this example be scaled up in practice? I don't understand it, because yes, you could start several Node.js instances on different computers - but when someone accesses http://your_site.com/, it goes directly to one specific machine, one specific port, one specific node process. So how?
There are many many ways to deal with this, but it boils down to 2 things:
being able to use more cores per server
being able to scale beyond a single server.
node-cluster
For the first option, you can use node-cluster or the same solution as for the second option. node-cluster (http://nodejs.org/api/cluster.html) is essentially a built-in way to fork the node process into one master and multiple workers. Typically, you'd want 1 master and n-1 to n workers (n being your number of available cores).
load balancers
The second option is to use a load balancer that distributes the requests amongst multiple workers (on the same server, or across servers).
Here you have multiple options as well. Here are a few:
a node based option: Load balancing with node.js using http-proxy
nginx: Node.js + Nginx - What now? (using more than one upstream server)
apache: (no clearly helpful link I could use, but a valid option)
One more thing: once you have multiple processes serving requests, you can no longer use memory to store state; you need an additional service to store shared state. Redis (http://redis.io) is a popular choice, but by no means the only one.
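As a toy illustration of shared state, here is a sketch of a counter kept in Redis instead of process memory, so every worker on every server sees the same value. It assumes the npm "redis" client with its v4 API; the key name and port are arbitrary.

const express = require('express');
const { createClient } = require('redis');

const app = express();
const redis = createClient(); // defaults to localhost:6379

app.get('/hits', async (req, res) => {
    // INCR is atomic in Redis, so concurrent workers cannot race.
    const hits = await redis.incr('hits');
    res.send('This page has been viewed ' + hits + ' times\n');
});

redis.connect().then(() => app.listen(3000));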
If you use services such as Cloud Foundry, Heroku, and others, they set this up for you, so you only have to worry about your app's logic (and using a service to deal with shared state).
I've been working with Node for quite some time, but recently got the opportunity to try scaling my Node apps. I've been researching this topic for a while and have come across the following prerequisites for scaling:
My app needs to be available on a distributed system each running multiple instances of node
Each system should have a load balancer that helps distribute traffic across the node instances.
There should be a master load balancer that distributes traffic across the node instances on the distributed systems.
The master balancer should always be running OR should have a dependable restart mechanism to keep the app stable.
For the above requisites I've come across the following:
Use modules like cluster to start multiple instances of node in a system.
Use nginx always. It's one of the simplest mechanisms for creating a load balancer I've come across so far (a minimal upstream sketch follows this list).
Use HAProxy to act as the master load balancer, with a few pointers on how to use it and keep it running forever.
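The upstream sketch mentioned above: a hypothetical nginx config balancing three local Node instances (the ports are placeholders).

upstream node_app {
    least_conn;              # prefer the least busy instance
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
    }
}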
Useful resources:
Horizontal scaling node.js and websockets.
Using cluster to take advantages of multiple cores.
I'll keep updating this answer as I progress.
The basic way to use multiple machines is to put them behind a load balancer and point all your traffic at the load balancer. That way, someone going to http://my_domain.com ends up at the load balancer machine. The sole purpose (for this example anyway; in theory more could be done) of the load balancer is to delegate the traffic to a given machine running your application. This means that you can have x number of machines running your application, while an external machine (in this case a browser) can go to the load balancer address and reach one of them. The client doesn't (and doesn't have to) know which machine is actually handling its request. If you are using AWS, it's pretty easy to set up and manage this. Note that Pascal's answer has more detail about your options here.
With Node specifically, you may want to look at the Node Cluster module. I don't really have a lot of experience with this module, but it should allow you to spawn multiple processes of your application on one machine, all sharing the same port. Also note that it's still experimental, and I'm not sure how reliable it will be.
I'd recommend taking a look at http://senecajs.org, a microservices toolkit for Node.js. It is a good starting point for beginners and for starting to think in "services" instead of monolithic applications.
Having said that, building distributed applications is hard. It takes time to learn, and a lot of time to master, and you will usually face many trade-offs between performance, reliability, maintenance, etc.