Load Balancing of a Process on One Server - Linux

I have one process that receives incoming connections on port 1000 on a single Linux server. However, one process is not fast enough to handle all the incoming requests.
I want to run multiple processes on the server but expose a single endpoint, so that clients only ever see one endpoint/process, not several.
I have checked LVS and other load-balancing solutions, but they seem geared towards balancing across multiple servers.
Is there any other solution that would help in my case?
I am looking at something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.

You may also want to go with a web server like nginx. It can load balance across multiple ports on the same machine, each running a copy of your app, and it is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.

The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a Linux network daemon.
The two key system calls you'll want to look at are fork and exec.
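As a minimal sketch of that accept-then-fork pattern (my own illustration, in Python rather than C; exec would only come into play if each worker ran a separate program):

```python
import os
import signal
import socket

# Accept-then-fork sketch: the parent only accepts connections; each request
# is handled in a freshly forked child, so slow work never blocks accept().
# Port 1000 comes from the question (binding below 1024 needs root); the
# echo reply is just a placeholder for the real work.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # let the kernel reap children

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 1000))
listener.listen(128)

while True:
    conn, addr = listener.accept()
    if os.fork() == 0:          # child: handle this one connection, then exit
        listener.close()
        data = conn.recv(4096)
        conn.sendall(data)      # placeholder for the actual processing
        conn.close()
        os._exit(0)
    conn.close()                # parent: hand off and go back to accept()
```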

It sounds like you just need to integrate your server with xinetd.
xinetd is a daemon that listens on predefined ports (which you control through its config) and forks off a process to handle the actual communication on each port.
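To illustrate (with a hypothetical line-echo protocol, not your actual one), the program xinetd launches can be this simple, because xinetd hands it the accepted connection on stdin/stdout and does all the listening and forking; the matching entry under /etc/xinetd.d would point its server = line at this script:

```python
#!/usr/bin/env python3
import sys

# Under xinetd the accepted socket is connected to stdin/stdout, so the
# handler never touches sockets itself: it just reads the request and
# writes the reply. The line-oriented echo protocol here is illustrative.
for line in sys.stdin:
    sys.stdout.write("echo: " + line)
    sys.stdout.flush()
```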

You need multi-processing or multi-threading. You aren't specific on the details of the server, so I can't give you advice on what to do exactly. fork and exec as Matt suggested can be a solution, but really: what kind of protocol/server are we talking about?

I am thinking of running multiple applications, similar to ypops.

nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.

Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?
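A quick sketch of that client-side round-robin (the host name is made up, and it assumes ten copies of the process are listening on ports 1000-1009):

```python
import itertools
import socket

# Client-side round-robin sketch: cycles through ports 1000-1009, assuming
# one copy of the server process listens on each. Host name is hypothetical.
PORTS = itertools.cycle(range(1000, 1010))

def send_request(payload: bytes) -> bytes:
    with socket.create_connection(("server.example.com", next(PORTS))) as conn:
        conn.sendall(payload)
        return conn.recv(4096)

print(send_request(b"ping"))
```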
Alternatively there must be some way of internally refactoring it.
It's possible for several processes to listen on the same socket at once by opening it before calling fork(), but (if it's a TCP socket) once accept() is called, the resulting connected socket belongs to whichever process successfully accepted the connection.
So essentially you could use:
Prefork, where you open the socket and fork a specified number of children, which then share the load (see the sketch below)
Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
Threads - you can share the sockets in whatever way you like among threads, since the file descriptors are not cloned; they're simply available to any thread.
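A minimal prefork sketch (my own, in Python; the port and worker count are just for illustration): the listening socket is opened once, each forked child calls accept() on it, and the kernel hands every incoming connection to exactly one worker.

```python
import os
import socket

# Prefork sketch: open the listening socket once, fork N children, and let
# each child call accept() on the shared socket. Port and worker count are
# illustrative.
NUM_WORKERS = 4

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 1000))
listener.listen(128)

for _ in range(NUM_WORKERS):
    if os.fork() == 0:                      # child: serve connections forever
        while True:
            conn, _addr = listener.accept()
            conn.sendall(b"handled by worker pid %d\n" % os.getpid())
            conn.close()

for _ in range(NUM_WORKERS):                # parent: just wait on the workers
    os.wait()
```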

Related

What happens if I don't use NGINX with uWSGI or Gunicorn?

Can someone brief me on what happens if I don't use any web server (NGINX) in front of my application server (uWSGI or Gunicorn)?
My requirement is exposing a simple Python script as a web service. I don't have any static content to render. In that scenario, can I go without NGINX?
What issues will I face if I go with a plain app server? Max requests per second would be around 50 to 80 (this is the upper limit).
Thanks, Vijay
If your script acts like a web server, then it is a web server and you don't need any layer on top of it.
You have to make sure, though, that it acts like one:
listens for connections
handles them concurrently
wakes up upon server restart, etc…
Also:
handles internal connections correctly (eg. to the database)
doesn't leak memory
doesn't die upon an exception
Having an HTTP server in front of a script has one great benefit: the script executes and simply dies, so there is no problem with memory handling and so on. Imagine your script becomes unresponsive, and ask yourself what happens then.
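As a rough sketch (my own, using only the standard library; port 8000 is arbitrary) of a plain Python script that covers the first two points in the checklist above - it listens for connections and handles them concurrently - while leaving restarts, supervision and crash recovery to you; at the stated 50-80 requests per second this may well be enough:

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Sketch of a script that "acts like a web server": it listens and handles
# requests concurrently (one thread per request). It does NOT restart
# itself, supervise memory, or survive crashes; that is what uWSGI/Gunicorn
# (and NGINX in front) buy you.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a plain Python script\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```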

Why can't Node set up a named pipe server in a worker in Windows?

I'm enabling cluster support in a project I'm working on. This question comes directly from a statement in the Node.js docs on the cluster module:
from: https://nodejs.org/api/cluster.html#cluster_cluster
Please note that, on Windows, it is not yet possible to set up a named pipe server in a worker.
What exactly does this mean?
What are the implications of this?
From the docs, and other research I've done, the actual practical consequences to this limitation are not clear to me.
A process can expose a named pipe as a way to communicate with other interested parties - e.g. an nginx server could expose a named pipe to which all incoming requests would be sent (just an idea - I am not sure whether nginx can even do that).
From a Node.js process (not a cluster worker, though), you could then start an http server (or even a plain TCP server, for that matter) which listens for messages sent to this named pipe:
http.createServer().listen('\\\\.\\pipe\\nginx')
Docs for the .listen() method's signature are here, specifically this part is of interest:
Start a server listening for connections on a given handle that has already been bound to a port, a UNIX domain socket, or a Windows named pipe
However, as per the warning, this functionality is not available from a cluster worker, for reasons beyond my understanding.
Here is a relevant commit in Node.js which hints at this limitation. You can find it by opening the Markdown document for cluster, looking at git blame, and going a bit further back into history until you arrive at the commit which introduces this note.
Normal interprocess communication is not affected by this limitation, so a cluster works just the same on Win32 as it does on Unix systems.
Note: Upon further thought, that nginx example is a bit misleading, since a named pipe, to my understanding, cannot be used for stateful bidirectional communication. It's just one-way, i.e. source -> listener. But I do hope I conveyed the general idea behind the limitation.

multiple child_process with node.js / socket.io

This is more of a design question than an implementation one, but I am wondering whether I can design something like this. I have an interactive app (similar to the Python shell). I want to host a server (using either the node.js http server or socket.io, since I am not sure which one would be better) which would spawn a new child_process for every client that connects to it and maintain a separate context for that particular client. I am a complete noob in terms of node.js and socket.io. The most I have managed is to have one child process on a socket.io server and connect the client to it.
So the question is, would this work? If not, is there any other way in Node to get it to work, or am I better off with a local server?
Thanks
Node.js is a single-process web platform. Using clustering (child_process), you create independent executions of the same application, each as a separate process.
Each process costs memory, and this is generally why most traditional systems do not scale well: they require a thread (or process) per client. For Node, a child process per client would be extremely inefficient from a hardware-resources point of view.
Node is event based, and you don't need to worry much about scope as long as your application logic does not exploit it.
The recommended number of workers is equal to the number of CPU cores on the hardware.
There is always a master process that creates the workers. Each worker creates http + socket.io listeners, which technically are bound to the master's socket and routed from there.
HTTP requests are routed to different workers, while sockets are routed at connection time; from then on, the same worker handles that socket until it disconnects.

SUN RPC: Does the server satisfy requests one by one?

I am new to SunRPC. I would like to know what the server will do if multiple clients send requests to it concurrently. Will the server queue the requests and reply one by one, or will it respond in parallel? I remember reading somewhere that it can respond in parallel.
Btw, I am talking about a simple single-threaded server.
Thanks
It really depends upon the server in question. It's possible to write servers to work in both fashions. At least the stereotypical Sun RPC server, NFSd, is usually written with the intention of supporting hundreds or thousands of clients simultaneously -- a file server that serves files to one client at a time is pretty useless. But the server is simplified because the NFS protocol is (mostly) stateless -- each request stands on its own. (Newer NFS protocol versions are less stateless and complicate both servers and clients significantly.)
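For the simple single-threaded case you describe, the behaviour is the serialized one; as a generic illustration (a plain TCP sketch of my own, not Sun RPC code), this is what it boils down to:

```python
import socket

# While one request is being processed, other clients simply wait in the
# kernel's listen backlog, so replies go out one by one. Port is arbitrary.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 9000))
listener.listen(16)

while True:
    conn, addr = listener.accept()       # the next client is not accepted
    request = conn.recv(4096)            # until this one has been answered
    conn.sendall(b"reply to: " + request)
    conn.close()
```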
If the server is very simple, you can probably start it with inetd(8), the Internet super-server, which will run configurable servers when connections arrive at configurable ports. inetd(8) was much more common back in the days when even swap space was at a premium and it made sense to execute new programs on every client connect. The bonus is each server is independent of the other servers -- each one is spawned in its own fork(2)ed process -- and only use of shared data would require extra programming effort.

Node.js high-level servers' communication API

Folks, I wonder whether there is any high-level API for communication between servers in the Node.js framework. For example, I have several servers where my application runs, and I want to control the load on these servers. Sometimes, if some server is overloaded, I want to redirect some connection requests to another (a more free one). Are there any functions that could help me, or do I have to implement my own functionality?
Try looking at cluster. This allows you to control multiple node processes and scale nicely.
Alternatively, just set up TCP sockets and pass messages around over TCP, or pass messages around through a database like Redis.
You should be able to pipe HTTP connections down streams: you have one HTTP server acting as a load balancer, and this server just passes requests on to your other servers and passes the responses back.
You're looking for what's called a load balancer. There are many off-the-shelf solutions, nginx being one of the standards today (and VERY quick/easy to set up).
I don't know of a node-native solution, but it's not that hard to write one. In general, however, load balancers don't actually monitor server load; they monitor whether a server is live or not and distribute traffic relatively equally.
As for your communications question, no -- there's no standardized API for communicating to/from node.js servers. Again, however, it's not hard to set up: assuming you're already hosting HTTP (using express, or native), just listen for specific requests, perhaps to /comm/ or whatever you deem appropriate, and pass JSON back and forth.
Not too sure for Node.js, but I've heard of people using Capistrano with Node.js.
