Setup Node server with multiple websites and have each site on its own thread - node.js

I have a laptop that I am running Node on: an Ubuntu Server with a quad-core processor.
The plan is for 2-3 sites on this server. I am not a really good admin and needed help getting this one site going, so I don't want to start from scratch and run a hypervisor. Is there a way to have Node host 3 sites and have each of them run on their own thread of the processor? I understand Node is single-threaded, and while I really don't need to do this for performance (because it's just for development), I do like this as an exercise in doing things in Node, and it would be cool! There is an entire second laptop for the database, so I'm not worried about resources.
So: 3 sites on one instance of Ubuntu Server, all on different threads.

It's not entirely clear what you're trying to accomplish. Here are a few scenarios:
1. Create three separate node.js servers, each listening on its own port, with each running as its own node.js process, independent of the others. Then have each client connect to the appropriate port.
2. Create three separate node.js servers as in option 1, but use NGINX as a proxy in front of the three web servers. NGINX can accept all requests on port 80 and direct those from each of the three domains to the appropriate node.js web server. Used this way, all three web servers appear to be running on the default port 80 (or 443), and NGINX separates them out and routes each request to the appropriate web server process.
3. Create your own master node.js process that receives requests for all three domains, looks at the Host header to see which domain the request was actually directed at, and then forwards that request to the appropriate child process. This is similar to the way clustering works in node.js, but each child process would be one of your different web servers. Personally, I'd use the pre-built functionality in NGINX to do this for you (as described in option 2 above), but you could code it yourself if you didn't want to run NGINX; a sketch follows this list.
4. Instead of NGINX, use a load balancer that your ISP or hosting provider may already offer to direct the incoming connections to the right server process.
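Here is a minimal sketch of option 3 using only Node core modules. The domain names and backend ports are placeholders, and a production version would need more care with headers and error handling:

```js
const http = require('http');

// Map each incoming Host header to the local port its site listens on.
const backends = {
  'site-a.example.com': 3001,
  'site-b.example.com': 3002,
  'site-c.example.com': 3003,
};

http.createServer((req, res) => {
  // Strip any :port suffix before looking up the host.
  const host = (req.headers.host || '').split(':')[0];
  const port = backends[host];
  if (!port) {
    res.writeHead(404);
    return res.end('Unknown host');
  }
  // Forward the request to the matching backend and pipe its response back.
  const upstream = http.request(
    { host: '127.0.0.1', port, path: req.url, method: req.method, headers: req.headers },
    (backendRes) => {
      res.writeHead(backendRes.statusCode, backendRes.headers);
      backendRes.pipe(res);
    }
  );
  upstream.on('error', () => {
    res.writeHead(502);
    res.end('Bad gateway');
  });
  req.pipe(upstream);
}).listen(80);
```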

If you run 3 different applications (i.e. sites), they will run as 3 different processes on your server. Assuming they all listen on different ports, there should be no problem running them simultaneously. When you hear that Node is single-threaded, that applies to a single process, so each process has its own event loop running.
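As a minimal illustration (the file name and ports are placeholders), the same script can be started three times on different ports, or you can do the same with three entirely different apps:

```js
// app.js - one site per process. Start three copies on different ports, e.g.
//   PORT=3001 node app.js   (then 3002 and 3003 for the other two sites)
const http = require('http');

const port = process.env.PORT || 3000;

http.createServer((req, res) => {
  res.end(`Hello from the site on port ${port}\n`);
}).listen(port, () => {
  console.log(`Site listening on port ${port}, pid ${process.pid}`);
});
```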

Related

What is the best architecture for a web-app communicating with a gRPC service?

I have built a website with chess.js and Java chess libraries that communicates with a custom C++ chess engine via gRPC with Python. I am new to web dev, and especially to gRPC, so I am not sure about the architecture I should be going for when it comes to hosting.
My questions are below:
Do the website and gRPC service need to be hosted on separate server instances and connected via API?
Everything right now is hosted locally, and I use two ports as it is right now (5000 for the website and 8080 for the server). If the site and server aren't separate, is this how they will communicate with each other on a single server (one local port)?
I am using this website just as a showcase of my portfolio for job searching, so I am looking for free/cheap hosting that also provides decent RAM, since the C++ chess engine is fairly computationally intensive. Does anyone have suggestions for which hosting service I should use?
I was considering free hosting for the website and then a cheap dedicated server for the service (if the two should be separate). Is this a bad idea?
Taking all the tips and tricks anyone has to offer. Again, I'm a total novice at web dev, hosting, servers, etc.
NOTE: This is an architecture question rather than a programming question, and such questions are discouraged on Stack Overflow.
The website and gRPC service may be hosted on the same server (as you're doing locally). You have the flexibility of running both processes (website and gRPC service) on a single, more powerful host, or separately on two hosts.
NOTE: Although gRPC most often communicates over TCP sockets, it is possible to use UNIX sockets and even in-memory buffers too.
If you run both processes on a single host, you will want to consider connecting the website to the gRPC service via localhost (127.0.0.1 or the loopback device). Using localhost, network traffic doesn't leave the host.
If you run both processes on different hosts, traffic must travel across a network. This is slower and will likely incur charges when hosted.
You will want to decide whether the gRPC service should be exposed to any network traffic other than your website's. In many cases, a gRPC service is used to provide an API that facilitates integration by 3rd parties. If you definitely don't want the gRPC service accessed by anything else, then you'll want to ensure either that it's bound to localhost (see above, which makes it inaccessible to anything other than processes on the same host, e.g. your website) or that it's firewalled such that only the website is permitted to send traffic to it.
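To make the localhost binding concrete, here is a hedged sketch using Node's @grpc/grpc-js on the service side (the same idea applies whatever language the service is written in). The chess.proto file and the package/service names are hypothetical stand-ins for whatever your real .proto defines:

```js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Hypothetical proto; substitute your real definition and RPC handlers.
const def = protoLoader.loadSync('chess.proto');
const proto = grpc.loadPackageDefinition(def);

const server = new grpc.Server();
// server.addService(proto.chess.Engine.service, { /* RPC handlers */ });

// Binding to 127.0.0.1 (not 0.0.0.0) means only processes on this host,
// such as the website, can reach the service.
server.bindAsync(
  '127.0.0.1:8080',
  grpc.ServerCredentials.createInsecure(),
  (err) => { if (err) throw err; }
);
```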
You can find cheap hosting of virtual machines (VMs) and you'll likely want to consider hosting both processes on a single VM, ensure that you constrain the resources that you pay for and that you secure traffic (as above).
You may wish to consider containerizing the application. In this case, while it's possible to run both processes in a single container, this is considered bad practice. You should thus consider 2 containers (website and gRPC server). Many hosting/cloud platforms provide container hosting, and this is generally easier than managing VMs (since you don't need to patch/update the OS and any dependencies). If you can find a platform that accepts a Docker Compose file or a Kubernetes Deployment in which you describe both of your services and how they interact, such that the gRPC service is only accessible to the website, that could be ideal.
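As an illustration, here is a hedged Docker Compose sketch (the image names, ports, and environment variable are assumptions). Only the website publishes a port, so the engine's gRPC port is reachable by the website over the Compose network but not from outside:

```yaml
services:
  website:
    image: my-chess-website        # hypothetical image name
    ports:
      - "80:5000"                  # published: reachable from outside
    environment:
      ENGINE_ADDR: engine:8080     # reach the engine by its service name
  engine:
    image: my-chess-engine         # hypothetical image name
    expose:
      - "8080"                     # visible to other containers only
```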

Hosting multiple .NET Core applications on Ubuntu using Nginx

I created a sample web API in .NET Core, registered it in the default file in Nginx, and was able to access it from outside.
The API looked like https://<>/api/values.
Now I want to add more configuration to host more web APIs with different port numbers. The problem is how the default file will differentiate between multiple APIs, since the base URL, i.e. localhost\<>, is the same for all of them.
You need to create server blocks. Each server block will handle/listen/respond to a different app. You can host as many apps as you want on a single Ubuntu machine using nginx this way.
This will be very helpful and describes the entire process of creating server blocks for your nginx server.
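A minimal sketch of two such server blocks (the domain names and upstream ports are placeholders for wherever your apps actually listen):

```nginx
server {
    listen 80;
    server_name api-one.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name api-two.example.com;

    location / {
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
    }
}
```

If all the APIs must share one hostname, you can instead use different location prefixes (e.g. location /api-one/) inside a single server block, each with its own proxy_pass target.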

How to use Redis / Node.Js on a production server

I get what Redis and Node.js are, but I don't understand how to run them on a live server. Locally it's an install, and you use the command line to get them running, but I don't know how to install them on a live server.
I've already browsed around a bit, but I'm still confused. Also, isn't Node.js a server itself? So it's like running a server on a server? Wouldn't that have effects on performance and whatnot?
I'm just confused about how it would work; any explanation would be great. Thanks.
Redis & Node.js = Software
You install those on a physical machine, a computer. A node.js server is not a physical server but an application that can handle HTTP requests. Normally, a node.js server runs on a port on a physical machine, so any HTTP requests sent to that port are handled by the node.js application. You can use a webserver (another piece of software that handles HTTP requests, like Nginx or Apache) to manage multiple domains on a physical machine (the server). Redis also runs on a physical machine and listens on a specified port.
For example, I have a VPS with 4 websites on it, managed by Nginx. Two of those websites are Laravel projects that connect to a MySQL server (on another machine) and to a Redis server on the same machine. The other two are node.js applications which don't need a database or Redis, so they just listen on their own ports, and Nginx proxies all connections for their domain names to those ports.
So you're not actually running a server on a server, but you're running software that handles certain things on a server.
There are different ways to run a Node service. I strongly recommend Docker to run everything, but here is a short list of the most popular options (a minimal Docker sketch follows the list):
https://www.docker.com/ https://hub.docker.com/_/node/
http://pm2.keymetrics.io/
https://www.npmjs.com/package/forever (seems a bit outdated)
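For the Docker route, a hedged sketch of a minimal Dockerfile (the Node version, paths, and start command are assumptions about a typical project layout):

```dockerfile
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000
CMD ["node", "app.js"]
```

With pm2 instead, something like pm2 start app.js --name my-site keeps the process alive and restarts it after crashes.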

Redis deployment configuration - master slave replication

Currently I have two servers on which I have deployed node.js/Express.js-based web service APIs. I am using Redis for caching JSON strings.
What will be the best option for deploying this setup to production? I see here that it advises going with a dedicated server for Redis. OK, I'll take that and use a dedicated server for running the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application.
What other options do I have?
You can.
It all depends on the load those other servers have; it's a problem of resource sharing. To be honest, my main issue with your architecture is not the dedicated vs. non-dedicated servers; it's the fact that you are placing a Redis server (master or not) on a host that will most likely be facing the internet (the Express.js app), meaning it's quite exposed.
If you can simulate HTTP load on your Node/Express.js servers, see the difference between running some benchmark tests on your dedicated server vs. the non-dedicated ones. On a running Redis server, type:

```
redis-benchmark -q -n 100000
```
If the app servers are being hammered and using all cores frequently, you should see a substantial difference in the benchmarks.
My suggestion is: go ahead with your first setup, add monitoring for the Redis response times, and only act when you have to, which might be right away if the benchmarks show very poor results.
As a side note, consider the option of not sharing hosts between services that you expose to the internet and services that perform internal functions for your application.
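Since the question asks about master-slave replication specifically, here is a hedged sketch of the relevant redis.conf lines on an app-server replica. The addresses and password are placeholders, and older Redis versions spell replicaof as slaveof:

```conf
# Replicate from the dedicated master.
replicaof 10.0.0.5 6379
masterauth s3cret

# Limit exposure on a host that also faces the internet: listen on loopback
# and the private interface only, and require a password.
bind 127.0.0.1 10.0.0.6
requirepass s3cret
```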

Separate service on NodeJS server

I want to know how to structure my NodeJS server.
I want to separate the services offered on my website so that I can build a cluster in the future and have many servers (each dedicated to one specific task).
Example:
The 'main' server, which has one project: ExpressJS and the database
The 'communication' server, which has one project: chat + forum
Other projects: for complex computing (generating charts/stats, emailing)
Could you explain the different approaches for this type of complex website?
As Benjamin Gruenbaym said, the architecture question belongs somewhere else.
If you are wondering about how to setup the applications on an individual server, there are a few things to keep in mind.
NodeJS runs in a single process, so one application should ideally take up 1 core of the CPU. If you run a database on the same server, that is another core. So it may be fine to host all the Node applications on the same server, if it has a sufficient number of cores.
To run two different Node processes on the same machine, you simply start them one after another, but make sure that they listen on different ports.
To make sure that you can scale out your application later, it is important that you use domain names instead of IP addresses when your services identify each other. So the NodeJS app should know about the database as mydatabase.mycompany.com, not as 192.168.1.10 or any other IP address. This will allow you to later move the database to another network address or put it behind a load balancer.
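A small sketch of that idea in Node (the env-var names and hostnames are placeholders):

```js
// Resolve peer services by DNS name (overridable via the environment),
// never by hard-coded IP address.
const dbHost = process.env.DB_HOST || 'mydatabase.mycompany.com';
const chatHost = process.env.CHAT_HOST || 'chat.mycompany.com';

// When the database later moves or sits behind a load balancer, only the
// DNS record (or the env var) changes; this code stays the same.
console.log(`database: ${dbHost}, chat service: ${chatHost}`);
```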
