Redis deployment configuration - master slave replication - node.js

Currently I have two servers on which I have deployed a Node.js/Express.js-based web services API. I am using Redis for caching JSON strings.
What is the best option for deploying this setup to production? I see it advised here to go with a dedicated server for Redis. OK, I'll take that and use a dedicated server for running the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application.
What other options do I have?

You can.
It all depends on the load that those other servers have; it's a problem of resource sharing. To be honest, my main issue with your architecture is not dedicated vs. non-dedicated servers, it's the fact that you are placing a Redis server (master or not) on a host that will most likely be facing the internet (the Express.js app), meaning it's quite exposed.
If you can simulate HTTP load against your Node/Express servers, compare benchmark results on your dedicated server vs. the non-dedicated ones:
On a running Redis server, type:
redis-benchmark -q -n 100000
If the app servers are being hammered and using all cores frequently, you should see a substantial difference in the benchmarks.
My suggestion is: go ahead with your first setup, add monitoring for the Redis response times, and only act when you have to, which might be now if the benchmarks show very poor results.
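For the monitoring part, here is a rough sketch of a latency probe you could drop into the Node app; it assumes the node-redis v4 client (npm install redis), a placeholder Redis host and an arbitrary 50 ms warning threshold:

// redis-latency-probe.js - minimal sketch, assuming the node-redis v4 client.
const { createClient } = require('redis');

async function probe() {
  const client = createClient({ url: 'redis://your-redis-host:6379' }); // placeholder host
  await client.connect();

  const start = process.hrtime.bigint();
  await client.ping();                                        // one round trip to Redis
  const ms = Number(process.hrtime.bigint() - start) / 1e6;

  if (ms > 50) console.warn(`Redis PING took ${ms.toFixed(1)} ms`);   // arbitrary threshold
  else console.log(`Redis PING: ${ms.toFixed(1)} ms`);

  await client.quit();
}

probe().catch(console.error);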
As a side note, consider the option of not sharing hosts between services that you expose to the internet and services that perform internal functions for your application.

Related

What is the best architecture for a web-app communicating with a gRPC service?

I have built a website with chess.js and Java chess libraries that communicates with a custom C++ chess engine via gRPC with Python. I am new to web dev and especially gRPC, so I am not sure about the architecture I should be going for when it comes to hosting.
My questions are below:
Do the website and gRPC service need to be hosted on separate server instances and connected via API?
Everything is hosted locally right now and I use two ports (5000 for the website and 8080 for the server). If the site and server aren't separate, is this how they will communicate with each other on a single server (one local port)?
I am using this website just as a showcase of my portfolio for job searching, so I am looking for free/cheap hosting that also provides decent RAM, since the C++ chess engine is fairly computationally intense. Does anyone have any suggestions for what hosting service I should use for this?
I was considering free hosting for the website and then a cheap dedicated server for the service (if the two should be separate). Is this a bad idea?
Taking all tips and tricks that anyone has to offer. Again, totally novice to web dev, hosting, servers, etc.
NOTE: This is an architecture question rather than a programming question, and such questions are discouraged on Stack Overflow.
The website and gRPC service may be hosted on the same server (as you're doing locally). You have the flexibility of running both processes (website and gRPC service) on a single more powerful host, or separately on two hosts.
NOTE Although most often gRPC communicates over TCP sockets, it is possible to use UNIX sockets and even buffered memory too.
If you run both processes on a single host, you will want to consider connecting the website to the gRPC service via localhost (127.0.0.1 or the loopback device). Using localhost, network traffic doesn't leave the host.
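For example, on the website side the gRPC channel can simply target the loopback address, with an environment variable as an escape hatch if the engine ever moves to another host. A rough sketch, assuming @grpc/grpc-js and @grpc/proto-loader, and a hypothetical engine.proto that declares an Engine service with no package:

// grpc-client.js - sketch; engine.proto, Engine and GRPC_TARGET are illustrative names.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const definition = protoLoader.loadSync('engine.proto');
const proto = grpc.loadPackageDefinition(definition);

// Default to loopback so traffic never leaves the host; override when running on two hosts.
const target = process.env.GRPC_TARGET || '127.0.0.1:8080';
const engine = new proto.Engine(target, grpc.credentials.createInsecure());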
If you run both processes on different hosts, traffic must travel across a network. This is slower and will likely incur charges when hosted.
You will want to decide whether the gRPC service should be exposed to any network traffic other than your website. In many cases, a gRPC service is used to provide an API to facilitate integration by 3rd-parties. If you definitely don't want the gRPC service accessed by other things, then you'll want to ensure either that it's bound to localhost (see above; and thereby inaccessible to anything other than other processes e.g. your website on the host) or firewalled such that only the website is permitted to send traffic to it.
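On the service side, the difference between "only reachable by the website on this host" and "reachable from the network" is essentially the bind address. A sketch with @grpc/grpc-js (the actual service implementation is omitted):

// grpc-server.js - sketch of binding the gRPC service to loopback only.
const grpc = require('@grpc/grpc-js');

const server = new grpc.Server();
// server.addService(...) with your generated service definition goes here.

// 127.0.0.1 -> only processes on this host (e.g. the website) can connect.
// 0.0.0.0   -> any host on the network can connect; you'd need a firewall rule instead.
server.bindAsync('127.0.0.1:8080', grpc.ServerCredentials.createInsecure(), (err, port) => {
  if (err) throw err;
  // server.start(); // only needed on older @grpc/grpc-js versions; newer ones start automatically
  console.log(`gRPC service listening on 127.0.0.1:${port}`);
});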
You can find cheap hosting for virtual machines (VMs), and you'll likely want to consider hosting both processes on a single VM, ensuring that you constrain the resources that you pay for and that you secure traffic (as above).
You may wish to consider containerizing the application. In this case, while it's possible to run both processes in a single container, this is considered bad practice. You should thus consider 2 containers (website and gRPC server). Many hosting|cloud platforms provide container hosting, and this is generally easier than managing VMs (since you don't need to patch|update the OS and any dependencies). If you can find a platform that accepts a Docker Compose file or a Kubernetes Deployment in which you describe both of your services and how they interact, such that the gRPC service is only accessible to the website, that could be ideal.

Deploy node.js in production

What are the best practices for deploying a nodejs application in production?
I would like to know how Node.js APIs are deployed to production today; currently my application is in Docker and running locally.
I wonder if I should use an Nginx inside the container and deploy my server behind it, or just upload the Node image that is already running today.
*I need load balancing
There are a few main types of deployment that are popular today:
Using platform as a service like Heroku
Using a VPS like AWS, Digital Ocean etc.
Using a dedicated server
This list is in order of increasing difficulty and control. So it's easiest with a PaaS, but you get more control with a dedicated server, though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you will need for production, how you handle environment variables, and how you cluster across cores.
I would suggest, very strongly, using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs. (Also workers and slaves, if you need them and code for them.)
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
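To make the PM2 suggestion concrete, a minimal ecosystem file might look like the sketch below; the app name, entry script and port are placeholders:

// ecosystem.config.js - minimal PM2 sketch; name, script and PORT are placeholders.
module.exports = {
  apps: [{
    name: 'api',
    script: './server.js',
    instances: 'max',       // one worker per CPU core
    exec_mode: 'cluster',   // PM2 cluster mode: restarts, reloads, log aggregation
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    }
  }]
};

You would then start it with pm2 start ecosystem.config.js --env production and let PM2 keep the workers alive and collect their logs.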

Setup Node server with multiple websites and have each site on its own thread

I have a laptop that I am running Node on, an Ubuntu Server with a quad-core processor.
There is a plan for 2-3 sites on this server. I am not a really good admin and needed help getting this one site going, so I don't want to start from scratch and run a hypervisor. Is there a way to have Node host 3 sites and have each of them run on their own thread of the processor? I understand Node is single-threaded, and while I really don't need to do this for performance (because it's just for development), I do like this as an exercise in doing things in Node, and it would be cool! There is an entire second laptop for the database, so I'm not worried about resources.
So: 3 sites on one instance of Ubuntu Server, all on different threads...
It's not entirely clear what you're trying to accomplish. Here are a few scenarios:
Create three separate node.js servers, each listening to their own port and they will each be running their own node.js process independent of the other. Then have each client connect to the appropriate port.
Create three separate node.js servers, each listening to their own port and each running their own node.js process independent of the others. Use NGINX as a proxy in front of the three web servers, and let NGINX direct requests, all arriving on port 80, from each of the three domains to the appropriate node.js web server. Using NGINX this way, all three web servers can appear to be running on the default port 80 (or 443) and NGINX will separate them out and direct them to the appropriate web server process.
Create your own master node.js process that receives requests for all three domains, looks at the host header to see which domain the request was actually directed at, and then forwards that request to the appropriate child process. This would be similar to the way clustering works in node.js, but each child process would be one of your different web servers. Personally, I'd use the pre-built functionality in NGINX to do this for you (as described in option 2 above), but you could code it yourself if you didn't want to run NGINX; see the sketch after this list.
Instead of NGINX, use some sort of load balancer that your ISP may already have to direct the incoming connections to the right server process.
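If you did want to code option 3 yourself instead of using NGINX, a rough sketch with the http-proxy package could look like the following; the domain names and ports are made up:

// router.js - sketch of a master process that routes by Host header (option 3).
// Domain names and ports below are placeholders.
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});
const targets = {
  'site1.example.com': 'http://127.0.0.1:3001',
  'site2.example.com': 'http://127.0.0.1:3002',
  'site3.example.com': 'http://127.0.0.1:3003'
};

http.createServer((req, res) => {
  const host = (req.headers.host || '').split(':')[0];  // strip any :port suffix
  const target = targets[host];
  if (!target) {
    res.statusCode = 404;
    return res.end('Unknown host');
  }
  proxy.web(req, res, { target });                      // forward to the right site process
}).listen(80);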
If you run 3 different applications, i.e. sites, then they will run as different processes on your server and, assuming they all listen on different ports, there should be no problem running them simultaneously. When you refer to Node being single-threaded, that applies to a single process, so each process has its own event loop running.

how to deploy and run multi language multi process app on heroku

I want to build an app with a potentially large amount of IO and computation-heavy logic.
I learnt that one way to tackle this is to use node.js as the client-facing service, pass client requests on to another backend service for the heavy computation, and then return the result asynchronously.
This means I'll have at least two processes in my app: a node.js server and a backend service written in another language.
I also want to use Heroku, but how do I deploy such an architecture to Heroku? Do I have to confine all web and worker processes to the same box? For example, if I have 10 dynos and I want 2 of them to run the Node web server and the other 8 to run my backend service as workers, how do I deploy such an architecture?
First, if you really need multiple languages, you can use heroku-buildpack-multi.
Actually, both your Web dyno and your worker dynos could be node.js, if that works for you. In that case, you could use the exact setup described here.
At any rate, you can separately configure as many web and worker dynos as you want with heroku ps:scale, as described in Heroku scaling. See also What is a Heroku Dyno?
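Concretely, and assuming a repo where the Node server lives at the root and the backend service has its own start command, the Procfile declares the two process types and heroku ps:scale sizes them independently (the commands below are illustrative placeholders):

# Procfile ("web" and "worker" are Heroku process-type conventions; commands are placeholders)
web: node server.js
worker: ./bin/run-backend-service

# then, from the CLI:
heroku ps:scale web=2 worker=8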
You should check out MEAN stack.
http://mean.io/#!/
Mongo (or any database really; for Heroku I recommend Postgres if you don't want a NoSQL database).
Express: your backend, which handles communication from your front-end to your database.
Angular: a super powerful front-end single-page application framework with a HUGE community and tons of add-ons and goodies.
Node: the server runtime that makes this all possible.
It's all written in JavaScript.
Heroku itself is a platform, similar to an AWS server with Node on it, that allows you to run Express. You will build your app, then use the Heroku command line to deploy it to their servers after you have created your account. Heroku will also host your database via Heroku add-ons.
https://devcenter.heroku.com/articles/getting-started-with-nodejs
Dynos will dynamically handle your processes as needed. Node isn't run as a separate service so much as it is running your backend (Express). Think of Node like Apache: it's what is running your server. Express is the backend running on that server and can run asynchronously, so you can have as many processes running as you want; that's where the dynos will start to load balance for you.

RESTful API 2x nodejs apps on same server, with fallback

Micro services: I would like to have front-end-web and back-end-api Node.js applications running, communicating via RESTful HTTP APIs, on a single machine (read: EC2).
Stateless: I would like to scale these horizontally across EC2 instances in the future, using Redis (ElastiCache) and MySQL (RDS) (read: stateless).
Load balanced: when scaled, I would load balance with an ELB. No problem there.
QUESTION: If a back-end-api instance goes down on a machine, is it possible to somehow fall back to another EC2 server running a back-end-api instance? How would I do this?
Why not separate the API app? Well, I would like to keep them on the same server for latency and maintainability.
Oh, by the way, I use Docker :-)
