Using the Node.js Cluster Module on OpenShift - node.js

I am trying to use both CPUs on OpenShift gears with Node.js, but I have no idea how to proceed with the cluster module, as the port is an OpenShift environment variable. Do I assign arbitrary values for the ports, or use the same variable for the whole cluster?

OpenShift will only give you access to one core per application instance (per "gear").
If you want to cluster Node.js on OpenShift, I'd recommend using OpenShift's HAProxy tooling to scale up and down.
I wrote up a few notes on this topic here: https://www.openshift.com/blogs/10-reasons-openshift-is-the-best-place-to-host-your-nodejs-app#scale
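Separately from the answer above, here is a minimal sketch of how the built-in cluster module relates to the single port the question mentions: every worker listens on the same port (the master accepts connections and hands them to workers), so you would reuse the one OpenShift-provided variable rather than inventing extra ports. The OPENSHIFT_NODEJS_* variable names and the local fallbacks are assumptions for illustration.

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

// Port/IP come from OpenShift's environment variables (names assumed here);
// the fallbacks only matter when running locally.
const PORT = process.env.OPENSHIFT_NODEJS_PORT || 8080;
const IP = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';

if (cluster.isMaster) {
  // Fork one worker per core reported by the OS (a single-core gear forks one).
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork()); // replace a worker that dies
} else {
  // Every worker listens on the SAME port; the master distributes connections.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(PORT, IP);
}
```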

Related

Load balancing in node server

I have created a Node server using Express. I am using the following architecture:
-> I am proxying the Node port to the domain using Apache.
-> I am using PM2 to manage the Node processes. I have created two cluster instances and run them individually on different cores. (http://pm2.keymetrics.io/docs/usage/cluster-mode/)
My questions are:
Am I doing this the correct way by production standards?
Do I need load balancing at the Apache level, given that the clusters only come
into the picture after Apache?
Am I correct?
Yes, that's a correct architecture.
But Nginx and PM2 go more hand in hand. Apache is okay too.
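As a minimal sketch of the PM2 cluster mode linked in the question (the file name, app name and entry point here are illustrative, not taken from the question):

```js
// ecosystem.config.js - started with `pm2 start ecosystem.config.js`
module.exports = {
  apps: [{
    name: 'api',              // illustrative app name
    script: './server.js',    // illustrative entry point
    exec_mode: 'cluster',     // PM2 wraps Node's cluster module
    instances: 2              // one process per core; 'max' uses every core
  }]
};
```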

Deploy node.js in production

What are the best practices for deploying a Node.js application in production?
I would like to know how production Node.js APIs are deployed today; currently my application is in Docker and running locally.
I wonder if I should use Nginx inside the container and deploy my server behind it, or just upload the Node image that is already running today.
*I need load balancing
There are a few main types of deployment that are popular today.
Using a platform as a service like Heroku
Using a VPS like AWS, DigitalOcean, etc.
Using a dedicated server
This list is in order of growing difficulty and control. It's easiest with a PaaS, but you get more control with a dedicated server - though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which npm libraries you will need in production, how you handle environment variables, and how you handle clustering across cores.
I would very strongly suggest using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs (and workers and slaves as well, if you need them and write code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
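As a small sketch of the environment-variable and shutdown handling mentioned above (Express and the PORT variable name are assumptions for illustration, not prescribed by either answer):

```js
const express = require('express');
const app = express();

// Read configuration from the environment instead of hard-coding it,
// so the same Docker image works locally and in production.
const PORT = process.env.PORT || 3000;

const server = app.listen(PORT, () => {
  console.log(`listening on ${PORT}`);
});

// Finish in-flight requests before exiting, so a PM2 restart or a
// container stop does not drop connections.
process.on('SIGINT', () => {
  server.close(() => process.exit(0));
});
```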

How to configure the security group on AWS to run a Node app

AWS is new to me. I want to configure three VMs on AWS to run one Node.js app.
I want to set up the three VMs to run MongoDB, Memcached and Node separately.
The question description says that you should also carefully configure the security groups inside of AWS, so that only your Node instance can access your Mongo and Memcached instances, and so that your Node instance is only reachable on port 8080.
When I am setting up the security groups, I feel really confused. Can somebody tell me how to configure this?
PS: I wanted to comment on the OP's question, but I can't as I don't have enough reputation.
You need to go through some of the AWS docs to understand this. If you are building an enterprise-level app, you want to look into these docs, where you can get more info on security groups and how to set up your architecture on AWS securely.
Secondly, security groups are rules applied at the instance level - think of them as a firewall for your system (more info here). In your case you can open the MongoDB (27017/18 TCP) and Memcached (11211 TCP) ports to your Node.js instance only, since only Node needs to connect to MongoDB and Memcached; you can also set up a NAT gateway if you want to keep those instances in a private subnet.
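As a hedged sketch of what such a rule looks like when created programmatically with the AWS SDK for JavaScript (v2); the group IDs and region are placeholders, and the same rules can of course be created in the console instead:

```js
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' }); // region is an assumption

const NODE_SG  = 'sg-node-placeholder';  // security group on the Node instance
const MONGO_SG = 'sg-mongo-placeholder'; // security group on the MongoDB instance

// Allow only the Node instance's group to reach MongoDB on 27017; an analogous
// rule with port 11211 covers Memcached, and the Node group itself would only
// allow inbound TCP 8080 from the outside world.
ec2.authorizeSecurityGroupIngress({
  GroupId: MONGO_SG,
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 27017,
    ToPort: 27017,
    UserIdGroupPairs: [{ GroupId: NODE_SG }]
  }]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('ingress rule added', data);
});
```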

nodejs, docker, nginx and amazon aws deployment

There have been many questions regarding Docker, Node and Amazon AWS, and I have read most of them, but I haven't found my answer.
I have been working on a production Node.js API project for the last few weeks, and now that the APIs are complete I have to deploy them.
There are a total of 2 microservices (this may increase later) and some worker processes. Different components of the system will communicate with each other using SQS and SNS. Each of the microservices uses MongoDB as the NoSQL storage and Mongoose as the ODM. I chose MongoLab as the MongoDB hosting provider. Currently I can connect to the MongoLab DB using the MONGOLAB_URI environment variable (obviously this will not be enough in production; any suggestion on this one is welcome).
I am going ahead with the Amazon AWS platform.
My thought process is:
I will dockerize each of the components. For the worker processes this is straightforward.
For the microservices I will have 2 Docker images, which I will deploy using Amazon EC2 Container Service. I will have a third Nginx Docker image which I will put in front of the Node applications.
I am planning to create a cluster of 2 machines (c2 large) initially and host these 3 Docker images (the microservices and Nginx) on those machines.
Obviously the Node processes will run on some port. Let's assume it to be 3100.
Up to this point everything is perfectly clear; the problem comes when I want to expose these APIs to the outside world.
The microservices expose some endpoints like:
service 1: /users, /login, /me, etc.
service 2: /offers, /gifts, etc.
My question is:
I want to resolve
mydomain.com/api/v1/users to service1:3100/users
and similarly for the other APIs.
I assume this can be done with Nginx, but I am not very familiar with it.
The constraints are:
I don't want to host each microservice on a separate machine (budget constraint).
I don't know which service will run on which machine (I assume this because I read that EC2 Container Service will automatically start Docker processes on arbitrary machines and distribute the load).
How can I do this?
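No answer is recorded here, but as a hedged sketch of the kind of path-based Nginx routing the question describes: only port 3100 comes from the question itself; the upstream name and the server_name are placeholders.

```nginx
upstream service1 { server 127.0.0.1:3100; }   # the users/login/me service

server {
    listen 80;
    server_name mydomain.com;

    # mydomain.com/api/v1/users -> service1:3100/users
    # The URI part of proxy_pass ("/") replaces the matched /api/v1/ prefix,
    # so /api/v1/users is forwarded upstream as /users.
    location /api/v1/ {
        proxy_pass http://service1/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Service 2 would get its own upstream and location block(s) for its
    # paths (/offers, /gifts) in the same way.
}
```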

Best way to avoid a single point of failure with an elasticsearch cluster and a web server cluster

We have a web application running on AWS with the following architecture:
1 elasticsearch cluster with 2 data nodes
1 auto-scaling load-balanced cluster of web servers
As Elasticsearch does some clever internal load balancing, we could just point all the web servers at one of the data nodes. But this would create a single point of failure: if that node goes down, then I'm not going to get any query results.
My solution thus far has been to have Elasticsearch running on each web server as a non-data node. Each web server queries its local Elasticsearch node, which in turn farms the request off to one of the data nodes. This seems to be the suggested approach on the Elasticsearch website.
This is great in that if one of the data nodes fails in some way, we don't lose the ability to serve search queries. However, it does mean Elasticsearch is using resources on each web server, and if we migrate to using Elastic Beanstalk (which I'm keen to do) then we'll need to somehow get Elasticsearch installed on our web instances. EDIT: I've succeeded with this now, but have yet to figure out how to specify a different config for each environment.
Is there another way to avoid a single point of failure without having elasticsearch running on each web server?
I thought about using a load balancer in front of the data nodes to serve queries from the web servers, but that would also mean opening the cluster up to public access without setting up a VPC to restrict access.
Is there a simpler solution I'm missing?
I don't think this directly answers your question, but if you are still ok with running ES on your web server nodes, you can customize the software that is installed using the .ebextensions mechanism, which allows you to run scripts and/or install packages when new Elastic Beanstalk instances are started up. If this isn't sufficient you can start your Elastic Beanstalk instances using a custom AMI.
Also, you may not be aware that you can run Elastic Beanstalk in a VPC.
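As a minimal sketch of the pattern the question describes (each web server talking only to its local non-data node), assuming the legacy 'elasticsearch' npm client; the index name is illustrative:

```js
const elasticsearch = require('elasticsearch');

// The web app talks only to the coordinating (non-data) node on localhost;
// that node forwards the query to whichever data node is available, so the
// loss of a single data node does not break search for this web server.
const client = new elasticsearch.Client({
  host: 'localhost:9200'
});

client.search({
  index: 'products',                    // illustrative index name
  body: { query: { match_all: {} } }
}).then(res => console.log(res.hits.total))
  .catch(err => console.error(err));
```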
