SUN RPC: Does the server satisfy requests one by one?

I am new to Sun RPC. I would like to know what the server will do if multiple clients send requests to it concurrently. Will the server queue the requests and reply one by one, or will it respond in parallel? I remember reading somewhere that it can respond in parallel.
By the way, I am talking about a simple single-threaded server.
Thanks

It really depends upon the server in question. It's possible to write servers to work in both fashions. At least the stereotypical Sun RPC server, NFSd, is usually written with the intention of supporting hundreds or thousands of clients simultaneously -- a file server that serves files to one client at a time is pretty useless. But the server is simplified because the NFS protocol is (mostly) stateless -- each request stands on its own. (Newer NFS protocol versions are less stateless and complicate both servers and clients significantly.)
If the server is very simple, you can probably start it with inetd(8), the Internet super-server, which will run configurable servers when connections arrive at configurable ports. inetd(8) was much more common back in the days when even swap space was at a premium and it made sense to execute new programs on every client connect. The bonus is that each server is independent of the others -- each one is spawned in its own fork(2)ed process -- so only shared data would require extra programming effort.
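For illustration, a minimal inetd.conf entry for such a setup might look like this (the service name and binary path here are hypothetical):

```
# Illustrative /etc/inetd.conf entry -- "myrpc" and the server path are
# made-up names. "nowait" tells inetd to fork a fresh server process for
# every incoming connection; "wait" would hand connections to one
# long-lived process at a time.
myrpc  stream  tcp  nowait  nobody  /usr/local/sbin/myrpcd  myrpcd
```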

Related

Node.js design approach: clients polling the server periodically

I'm trying to learn Node.js and appropriate design approaches.
I've implemented a little API server (using Express) that fetches a set of data from several remote sites, according to client requests that use the API.
This process can take some time (several fetch / await calls), so I want the user to know how his request is doing. I've read about socket.io / WebSockets, but maybe that's an overkill solution for this case.
So what I did is:
For each client request, a requestID is generated and returned to the client.
With that ID, the client can query the API (via another endpoint) to know his request status at any time.
Using setTimeout() on the client page and some DOM manipulation, I can update and display the current request status every X seconds, like a polling approach (sketched below).
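A minimal sketch of that client-side loop; the /status/:id endpoint, the element id, and the 2-second interval are assumptions:

```js
// Poll the status endpoint until the request is done, updating the DOM
// each time. Endpoint and element names are illustrative.
function pollStatus(requestId) {
  fetch(`/status/${requestId}`)
    .then((res) => res.json())
    .then((data) => {
      document.getElementById("status").textContent = data.status;
      if (data.status !== "done") {
        setTimeout(() => pollStatus(requestId), 2000); // try again in 2s
      }
    })
    .catch(console.error);
}
```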
Although the solution works fine, even with several clients connecting concurrently, maybe there's a better solution? Are there any caveats I'm not considering?
TL;DR The approach you're using is just fine, although it may not scale very well. Websockets are a different approach to solve the same problem, but again, may not scale very well.
You've identified what are basically the only two options for real-time (or close to it) updates on a web site:
polling the server - the client requests information periodically
using Websockets - the server can push updates to the client when something happens
There are a couple of things to consider.
How important are "real time" updates? If the user can wait several seconds (or longer), then go with polling.
What sort of load can the server handle? If load is a concern, then Websockets might be the way to go.
That last question is really the crux of the issue. If you're expecting a few or a few dozen clients to use this functionality, then either solution will work just fine.
If you're expecting thousands or more to be connecting, then polling starts to become a concern, because now we're talking about many repeated requests to the server. Of course, if the interval is longer, the load will be lower.
It is my understanding that the overhead for Websockets is lower, but it can still be a concern when you're talking about large numbers of clients. Again, a lot of clients means the server is managing a lot of open connections.
The way large services handle this is to design their applications so that they can be distributed over many identical servers, with a load balancer deciding which server each client connects to. This is true for either polling or Websockets.
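For comparison, a minimal server-side sketch of the Websockets option using socket.io. The event names and port are assumptions, and notifyWhenDone is a hypothetical stand-in for whatever completes the request:

```js
const { Server } = require("socket.io");
const io = new Server(3000); // port is illustrative

io.on("connection", (socket) => {
  // The client asks to watch a request; the server pushes the status
  // when the work finishes, so no polling is needed.
  socket.on("watch", (requestId) => {
    notifyWhenDone(requestId, (status) => {
      socket.emit("status", { requestId, status });
    });
  });
});

// Hypothetical helper standing in for the real job-completion hook.
function notifyWhenDone(requestId, cb) {
  setTimeout(() => cb("done"), 5000);
}
```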

Simple message passing Nodejs server accepting only 4 requests at a time

We have a simple Express node server deployed on Windows Server 2012 that receives GET requests with just 3 parameters. It does some minor processing on these parameters, has a very simple in-memory node-cache for caching some of these parameter combinations, and interfaces with an external license server to fetch a license for the requesting user and set it in a cookie. After that, it interfaces with some workers via a load balancer (running with zmq) to download some large files (in chunks, which it unzips, extracts and writes to some directories) and displays them to the user. On deploying these files, some other calls to the workers are initiated as well.
The node server does not talk to any database or disk. It simply waits for responses from the load balancer running on some other machines (these are long operations, typically taking 2-3 minutes to send a response). So, essentially, the computation and the database interactions happen on other machines. The node server is only a simple message-passing/handshaking server that waits for responses in event handlers, initiates other requests and renders the response.
We are not using the 'cluster' module or nginx at the moment. With a bare-bones node server, is it possible to accept and process at least 16 requests simultaneously? Pages such as this one http://adrianmejia.com/blog/2016/03/23/how-to-scale-a-nodejs-app-based-on-number-of-users/ mention that a simple node server can handle only 2-9 requests at a time. But even with our bare-bones implementation, no more than 4 requests are accepted at a time.
Is using the cluster module or nginx necessary even in this case? How would we scale this application for a few hundred users to begin with?
An Express server can handle many more than 9 requests at a time, especially if it isn't talking to a database.
The article you're referring to assumes database access on each request and static assets served by node itself rather than a CDN, all on a single core with 1GB of RAM -- a database and a web server squeezed onto minimal hardware.
There really are no hard numbers on this sort of thing; you build it and see how it performs. If it doesn't perform well enough, put a reverse proxy like nginx or haproxy in front of it to do load balancing.
However, if you really are hitting a bottleneck where only 4 connections are possible at a time, it sounds like you're keeping those connections open far too long and blocking others. Better to have node kick off those long-running processes, close the connections, and then have those servers call back somehow when they're done. A sketch of that pattern follows.
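A minimal Express sketch of that idea: reply immediately with a job id instead of holding the connection open for the 2-3 minutes the workers need. The /download routes and the startJob helper are hypothetical stand-ins for the real zmq round-trip:

```js
const express = require("express");
const crypto = require("crypto");
const app = express();

const jobs = {}; // in-memory job table; a real app might use a cache/store

app.get("/download", (req, res) => {
  const id = crypto.randomUUID();
  jobs[id] = { status: "running" };

  // Kick off the long-running worker round-trip without blocking the
  // HTTP connection.
  startJob(req.query)
    .then(() => { jobs[id].status = "done"; })
    .catch(() => { jobs[id].status = "failed"; });

  res.status(202).json({ id }); // the connection is freed immediately
});

// Clients can check back later (or be called back some other way).
app.get("/download/:id", (req, res) => {
  res.json(jobs[req.params.id] || { status: "unknown" });
});

function startJob(params) {
  // Stand-in for the real worker round-trip.
  return new Promise((resolve) => setTimeout(resolve, 2000));
}

app.listen(3000);
```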

What is the best way to communicate between two servers?

I am building a web app which has two parts. One part uses a real-time connection between the server and the client, and the other does some CPU-intensive work to provide relevant data.
I'm implementing the real-time communication in Node.js and the CPU-intensive part in Python/Java. What is the best way for the Node.js server to participate in duplex communication with the other server?
For a basic solution you can use Socket.IO, if you are already using it and know how it works. It will get the job done, since it allows communication between a client and server where the client can be a different server in a different language.
If you want a more robust solution with additional options and controls, or one which can handle higher traffic throughput (though this shouldn't be an issue if you are ultimately just sending it over the relatively slow internet), you can look at something like ØMQ (ZeroMQ). It is a messaging queue which gives you more control and lots of different communication patterns beyond just request-response.
Whichever you set up, I would recommend using your CPU-intensive server as the stable end (the server) and your web server(s) as the client, assuming that you are using a single machine for your CPU-intensive tasks and running several Node.js instances to take advantage of multiple cores on the web side. This simplifies your communication, since you want a single point to connect to.
If you foresee needing multiple CPU servers, you will want to set up a routing server that can route between multiple web servers and multiple CPU servers, and in that case I would recommend the extra work of learning ØMQ.
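A minimal sketch of that arrangement using the zeromq npm package (v6 API), with the Node web server as the requesting client; the endpoint and message format are assumptions:

```js
const zmq = require("zeromq");

// Send one job to the CPU-intensive server and await the result.
// "cpu-server:5555" is a hypothetical host/port.
async function requestWork(payload) {
  const sock = new zmq.Request();
  sock.connect("tcp://cpu-server:5555");

  await sock.send(JSON.stringify(payload));
  const [reply] = await sock.receive(); // waits for the CPU server's answer
  return JSON.parse(reply.toString());
}

requestWork({ task: "crunch", data: [1, 2, 3] })
  .then((result) => console.log("result:", result))
  .catch(console.error);
```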
You can use Node's http.request method to make curl-style requests from within your code.
The http.request method is also commonly used for implementing authentication APIs.
You can put your callback in the request's response handler, and when the response data arrives in node, send it back to the user.
Meanwhile, in the background, the Java/Python server can carry out the CPU-intensive task that node asked for.
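A minimal sketch of that flow with http.request; the host, port and path of the Java/Python service are assumptions:

```js
const http = require("http");

// Ask the backend to run the CPU-intensive task, then handle the
// result in the response callback (e.g. forward it to the end user).
const req = http.request(
  { host: "cpu-server", port: 8080, path: "/task", method: "POST" },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      console.log("result from backend:", body); // send this back to the user
    });
  }
);

req.on("error", console.error);
req.end(JSON.stringify({ task: "crunch" }));
```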
I maintain a node.js application that intercommunicates among 34 tasks spread across 2 servers.
In your case, for communication between the web server and the app server, you might consider MQTT.
I use MQTT for this kind of communication. There are MQTT clients for most languages, including node/javascript, python and java. In my case I publish JSON messages to MQTT 'topics', and any task that has subscribed to a 'topic' receives its data when it is published. If you google "pub sub", "mqtt" and "mosquitto" you'll find lots of references and examples. Mosquitto (now an Eclipse project) is only one of a number of MQTT brokers available. Another very good broker, written in Java, is called HiveMQ.
This is a very simple, reliable solution that scales well. In my case, literally millions of messages reliably pass through MQTT every day.
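A minimal pub/sub sketch with the mqtt npm package; the broker URL and topic names are assumptions:

```js
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://broker.local:1883"); // e.g. a mosquitto broker

client.on("connect", () => {
  // Subscribe to results, then publish a JSON job request.
  client.subscribe("jobs/results");
  client.publish("jobs/requests", JSON.stringify({ id: 1, task: "crunch" }));
});

client.on("message", (topic, payload) => {
  // Any task subscribed to this topic receives the published message.
  console.log(topic, JSON.parse(payload.toString()));
});
```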
You must be looking for Socket.IO.
Socket.IO enables real-time bidirectional event-based communication.
It works on every platform, browser or device, focusing equally on reliability and speed.
Sockets have traditionally been the solution around which most realtime systems are architected, providing a bi-directional communication channel between a client and a server.

Node.js high-level servers' communication API

Folks, I wonder whether there is any high-level API for server-to-server communication in the Node.js framework? For example, I have several servers where my application runs, and I want to control the load on these servers. Sometimes, if some server is overloaded, I want to redirect some connection requests to another (a more lightly loaded one). Are there any functions which could help me, or do I have to implement my own functionality?
Try looking at cluster. This allows you to control multiple node processes and scale nicely.
Alternatively, just set up TCP sockets and pass messages around over TCP, or pass messages around over a database like redis.
You should also be able to pipe HTTP connections downstream: you have one HTTP server acting as a load balancer, and this server just passes messages on to your other servers and passes the replies back.
You're looking for what's called a load balancer. There are many off-the-shelf solutions, nginx being one of the standards today (and VERY quick/easy to set up).
I don't know of a node-native solution, but it's not that hard to write one. In general, however, load balancers don't actually monitor server load, they monitor whether a server is live or not and distribute traffic relatively equally.
As for your communications question, no -- there's no standardized API for communicating between node.js servers. Again, however, it's not hard to set up -- assuming you're already hosting HTTP (using express, or the native module), just listen for specific requests, perhaps to /comm/ or whatever you deem appropriate, and pass JSON back and forth (see the sketch below).
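A minimal sketch of such an endpoint with Express; the /comm path and the payload shape are assumptions:

```js
const express = require("express");
const app = express();
app.use(express.json());

// Peer servers POST JSON here; reply with whatever state you want to share.
app.post("/comm", (req, res) => {
  console.log("peer says:", req.body);
  res.json({ ok: true, pid: process.pid });
});

app.listen(3001); // port is illustrative
```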
I'm not too sure about Node.js, but I've heard of people using Capistrano with Node.js.

Load Balancing of Processes on 1 Server

I have 1 process that receives incoming connections on port 1000 on 1 Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes on the server, but with 1 end-point. That way, the client will only see 1 end-point/process, not multiple.
I have checked LVS and other load-balancing solutions. Those seem geared towards load balancing across multiple servers.
Any other solution to help in my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
You may also want to go with a web server like nginx. It can load balance your app across multiple ports on the same machine, and it is commonly used to load balance Ruby on Rails apps (which are single-threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
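An illustrative nginx configuration for that setup -- one public port proxying to several copies of the app on local ports (all the port numbers are made up):

```
# These directives go in nginx's http context.
upstream app_pool {
    server 127.0.0.1:1001;
    server 127.0.0.1:1002;
    server 127.0.0.1:1003;
}

server {
    listen 1000;                      # the single end-point clients see
    location / {
        proxy_pass http://app_pool;   # round-robin across the copies by default
    }
}
```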
The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers -- what you want to look for is information about how to write a Linux network daemon.
The two key system calls you'll want to look at are called fork and exec.
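A sketch of that daemon shape, written here in Node for brevity (the C version would use fork/accept directly): the parent accepts each connection and hands the socket to a forked worker, so accepting new requests never blocks on the work itself.

```js
const net = require("net");
const { fork } = require("child_process");

if (process.argv[2] === "worker") {
  // Worker: receive the socket over IPC, do the work, reply, exit.
  process.on("message", (msg, socket) => {
    socket.end(`handled by worker ${process.pid}\n`, () => process.exit(0));
  });
} else {
  // Parent: accept connections and delegate each one to a fresh
  // forked copy of this same script.
  net.createServer((socket) => {
    const worker = fork(__filename, ["worker"]);
    worker.send("connection", socket); // pass the socket handle to the child
  }).listen(1000); // port from the question
}
```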
It sounds like you just need to integrate your server with xinetd.
This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
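An illustrative xinetd service definition for the setup in the question; the service name and server path are made up, and "wait = no" makes xinetd fork a new server process per connection:

```
service myapp
{
    type        = UNLISTED
    port        = 1000
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    server      = /usr/local/bin/myapp
}
```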
You need multi-processing or multi-threading. You aren't specific about the details of the server, so I can't tell you exactly what to do. fork and exec, as Matt suggested, can be a solution, but really: what kind of protocol/server are we talking about?
I am thinking of running multiple applications, similar to ypops.
nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.
Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?
Alternatively there must be some way of internally refactoring it.
It's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.
So essentially you could use:
Prefork, where you open the socket and fork a specified number of children, which then share the load
Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
Threads -- you can share the sockets in whatever way you like with those, as the file descriptors are not cloned; they're just available to any thread.
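For a concrete flavour of the fork-based options, Node's cluster module gives you a master that forks workers which all serve the same port. A minimal sketch (the port is illustrative):

```js
const cluster = require("cluster");
const http = require("http");
const os = require("os");

if (cluster.isMaster) {
  // Master: fork one worker per CPU; the workers share the incoming
  // connection load on the single listening port.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  http
    .createServer((req, res) => res.end(`handled by ${process.pid}\n`))
    .listen(1000); // all workers listen on the same port
}
```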
