What's the difference between RPC and Browser/Server?

It seems Browser/Server is the same as RPC in that the Browser sends a request to the Server,
and the Server returns the data after calling related routines.
So what's the difference?

Those are loosely related concepts. "Browser/Server" (usually called client/server) describes an architecture where one process listens for requests (a server) and other processes make requests (clients). The client may or may not be calling the server using an RPC mechanism; HTTP, for example, is a client/server protocol that is not considered RPC.
RPC means Remote Procedure Call: the client calls a method on a proxy object, and the proxy object sends a request to the server. The server then translates the request into a method (procedure) call on its target object. To the client it therefore looks like it is simply calling a method on a server object, but client/server code is what enables this.
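To make that concrete, here is a minimal sketch of the stub pattern in Java. Every name here (WeatherService, Connection, the wire types) is illustrative, not taken from any particular RPC framework:

```java
// What the client programs against: an ordinary, local-looking interface.
interface WeatherService {
    int getTemperature(String city);
}

// Minimal stand-ins for the wire format; a real framework would
// serialize these over a socket, HTTP channel, etc.
record Request(String method, Object[] args) {}
record Response(Object returnValue) {}

// Hypothetical transport abstraction.
interface Connection {
    Response send(Request request);
}

// The client-side proxy ("stub"): each method call is marshalled into
// a request and shipped to the server. The server-side counterpart
// unmarshals it and invokes the real procedure on the target object,
// so the caller never deals with networking directly.
class WeatherServiceStub implements WeatherService {
    private final Connection connection;

    WeatherServiceStub(Connection connection) {
        this.connection = connection;
    }

    @Override
    public int getTemperature(String city) {
        Response reply = connection.send(
                new Request("getTemperature", new Object[] { city }));
        return (int) reply.returnValue();
    }
}
```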

There are also a few differences to consider:
RPC works with stubs. The client calls the client-stub, which in turn calls the server-stub, which performs the actual procedure call. In browser/server setups, RPC-style technology (e.g. RMI) is also sometimes used to achieve the same effect.
One disadvantage to mention: an RPC call is not inherently connection-oriented. The client does not necessarily know whether the procedure was actually called, so calls can fail silently under unpredictable network problems.
Browser technology, by contrast, can be made reliable, since the client can confirm (if implemented, e.g. using AJAX) that the server executed the request.

RPC causes a procedure to execute on a remote host.
In client/server, the procedure may execute on the local host or at a remote location.

Related

grpc complete async java service request/response mapping

A Java service (let's call it portal) is both a gRPC client and a server. It serves millions of gRPC clients (acting as server), each client requesting some task/resource. Based on the incoming request, portal figures out the backend services and talks to one or more of them, then sends the returned response(s) to the originating client. Hence the requirements are:
Original millions of clients will have their own timeouts
The portal should not have a thread blocking for the millions of clients (async). It should also not have a thread blocking for each client's call to the backend services (async). We can use the same thread which received a client call for invoking the backend services.
If the original client times out, portal should be able to communicate it to the backend services or terminate the specific call to the backend services.
On error from backend services, portal should be able to communicate it back to the specific client whose call failed.
So the questions here are:
We have to use async unary calls here, correct?
How does the intermediate server (portal) match the original requests to the responses from the backend services?
In case of errors on backend services, how does the intermediate server propagate the error?
How does the intermediate server propagate the deadlines?
How does the intermediate server cancel the requests on the backend services, if the originating client terminates?
gRPC Java can make a proxy relatively easily. Using async stubs for such a proxy would be common. When the proxy creates its outgoing RPCs, it can save a reference to the original RPC in the callback of the outgoing RPC. When the outgoing RPC's callback fires, simply issue the same call to the original RPC. That solves both messages and errors.
Deadline and cancellation propagation are automatically handled by io.grpc.Context.
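A minimal sketch of that relay, using the async StreamObserver API. The generated classes here (PortalServiceGrpc, BackendServiceGrpc, TaskRequest, TaskResponse) are hypothetical stand-ins for your own protos:

```java
import io.grpc.stub.StreamObserver;

// Unary relay: the callback of the outgoing RPC holds a reference to
// the original RPC, so responses and errors flow straight back.
public class PortalService extends PortalServiceGrpc.PortalServiceImplBase {
    private final BackendServiceGrpc.BackendServiceStub backend;

    public PortalService(BackendServiceGrpc.BackendServiceStub backend) {
        this.backend = backend;
    }

    @Override
    public void handleTask(TaskRequest request,
                           StreamObserver<TaskResponse> originalCall) {
        // Issued on the thread handling the incoming RPC, so the incoming
        // call's io.grpc.Context (deadline, cancellation) is propagated
        // to the backend automatically.
        backend.handleTask(request, new StreamObserver<TaskResponse>() {
            @Override
            public void onNext(TaskResponse response) {
                originalCall.onNext(response);
            }

            @Override
            public void onError(Throwable t) {
                // A backend failure is relayed to the originating client.
                originalCall.onError(t);
            }

            @Override
            public void onCompleted() {
                originalCall.onCompleted();
            }
        });
    }
}
```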
You may want to reference this grpc-level proxy example (which has not been merged to grpc/grpc-java). It uses ClientCall/ServerCall because that was convenient and avoids parsing the messages. It is possible to do the same thing using the StreamObserver API.
The main difficulty in such a proxy would be to observe flow control. The example I referenced does this. If using StreamObserver API you should cast the StreamObserver passed to the server to ServerCallStreamObserver and get a ClientCallStreamObserver by passing a ClientResponseObserver to the client stub.

Long running tasks in Thrift

I plan on using Apache Thrift but some calls will be long running/blocking but still require a return value, which would traditionally be returned via callback.
I understand that Thrift does not support callbacks (has this changed?) so I am thinking about making the function just block until a response is returned. Would this be ok? Will Thrift complain (timeout) if an RPC request takes too long?
They say Thrift wasn't intended for bi-directional communication but it should be easy enough to do with a socket.
Context: I am using Thrift for IPC between two processes locally, so there won't be a huge load on the server, alleviating any concern that long-running requests would overload it.
Am I missing a solution provided by something else?
I understand that Thrift does not support callbacks (has this changed?)
No (not supported), and no (not changed).
some calls will be long running/blocking but still require a return value, which would traditionally be returned via callback.
Yes, if you want to stick with the RPC style of doing things, or are technically limited in that regard.
so I am thinking about making the function just block until a response is returned. Would this be ok?
Long running calls are a perfectly legal solution. Even polling could be an option, of course unless you are flooding the server with calls. Depends on what "long" means exactly.
Will Thrift complain (timeout) if an RPC request takes too long?
Timeouts depend on how the client transport is configured; if the connection has dropped, you can always initiate a new request.
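If the read timeout is the concern, it can simply be raised on the client transport. A minimal sketch, assuming a hypothetical generated LongTaskService:

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class BlockingCallExample {
    public static void main(String[] args) throws Exception {
        // Raise the socket read timeout so a long-running call does not
        // drop the connection while blocking on the result.
        TSocket transport = new TSocket("localhost", 9090);
        transport.setTimeout(10 * 60 * 1000); // ten minutes, in milliseconds
        transport.open();

        LongTaskService.Client client =
                new LongTaskService.Client(new TBinaryProtocol(transport));

        // Blocks until the server returns -- perfectly legal, as noted above.
        String result = client.runLongTask("job-42");
        System.out.println(result);
        transport.close();
    }
}
```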
They say Thrift wasn't intended for bi-directional communication
but it should be easy enough to do with a socket.
In a local setup having both ends acting as client and server is indeed possible, and maybe the better option in your case.
In contrast, it's much harder to do that across the interblag. Therefore, if you have strong plans to extend your solution into such a scenario later, a bidi design may create additional headaches when you have to rewrite it into long-running calls. If that is not the case, you can safely ignore this paragraph.

Use Apache Thrift for two-way communication?

Is it possible to implement two-way communication between client and server with Apache Thrift, i.e. not only make RPCs from client to server but also the other way round? In my project I have the requirement that the server must also push some data to the client without the client asking for it first.
There are two ways how to achieve this with Thrift.
If both ends are more or less peers and you connect them through sockets or pipes, you simply set up a server and a client on both ends and you're pretty much done. This does not work in all cases, however, especially with HTTP.
If you connect server and client through HTTP or a similar channel, there is a technique called "long polling". It basically requires the client to call the server as usual, but the call only returns when the server wants to send some data back to the client. After receiving the data, the client starts another call if it is still interested in more data.
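A minimal client-side sketch of that loop, assuming a hypothetical generated NotificationService whose poll() blocks server-side until there is data:

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class LongPollLoop {
    public static void main(String[] args) throws Exception {
        TSocket transport = new TSocket("localhost", 9090);
        transport.setTimeout(0); // no read timeout; calls may block indefinitely
        transport.open();

        NotificationService.Client client =
                new NotificationService.Client(new TBinaryProtocol(transport));

        while (true) {
            // Returns only when the server has something to "push".
            Notification n = client.poll("client-42");
            System.out.println("pushed: " + n);
            // The loop immediately re-issues the call to keep listening.
        }
    }
}
```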
As Denis pointed out, depending on your exact use case, you might want to consider using a MQ system. Note that it is still possible to use Thrift to de/serialize the messages into and from the queues. The contrib folder has some examples that show how to use Thrift with ZMQ, Rebus and some others.
You would be better off using queues then, e.g. ZeroMQ.

Can an RTSP TEARDOWN request be received before a SETUP request?

I was under the impression that since a TEARDOWN request releases resources that are normally allocated when a SETUP is made, a TEARDOWN request was only necessary after the SETUP request.
However, I just had an Android device that sent a TEARDOWN immediately after receiving the response to a DESCRIBE request (before the SETUP request, the Session: parameter of the request was empty).
This was unexpected, and I was not able to confirm, even by re-reading the RFC, whether this is legit or not.
Can anybody provide information on this? Ideally with an official reference...
Servers should typically be prepared to talk to a wide variety of clients, and it is a good idea to design them defensively: clients might send weird commands, and servers should respond reasonably. TEARDOWN stops streaming, so it makes no sense to issue it before SETUP; however, it is still legitimate to send the command without a prior SETUP, and a server receiving it simply has nothing to do and no resources to free. It is up to the server to decide whether to respond with 200 OK or with another status indicating that the command makes no sense in this context (e.g. that the provided session identifier is not valid).
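As a server-side sketch of that defensive handling (RtspRequest, RtspResponse and MediaSession are hypothetical types), note that RFC 2326 defines status 454 "Session Not Found", which fits a TEARDOWN with no matching session:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TeardownHandler {
    private final Map<String, MediaSession> sessions = new ConcurrentHashMap<>();

    void handleTeardown(RtspRequest request, RtspResponse response) {
        String sessionId = request.getHeader("Session");
        MediaSession session =
                (sessionId == null) ? null : sessions.remove(sessionId);
        if (session == null) {
            // Nothing was ever set up, so there is nothing to free.
            // Responding 200 OK instead would also be defensible.
            response.setStatus(454, "Session Not Found");
            return;
        }
        session.releaseResources();
        response.setStatus(200, "OK");
    }
}
```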

How can a connection be represented only by a small space in a HTTP server running on node.js?

I have read that an HTTP server created in node.js does not create a new thread for each incoming connection (request). Instead it executes a function that has been registered as a callback corresponding to the event of receiving a request.
It is said that each connection is represented by some small space in the heap. I cannot figure this out. Are connections not represented by sockets? Shouldn't a socket be opened for every connection made to the node.js server, and wouldn't that mean each connection cannot be represented by just a space allocation in the JavaScript heap?
It is described on the nodejs.org website that instead of spawning a thread per connection (roughly 2 MB of overhead per thread!), the server uses select(), epoll, kqueue or /dev/poll to wait until a socket is ready to read or write. It is this mechanism that allows node to avoid spawning a thread per connection; the per-connection overhead is just the heap allocation associated with the socket descriptor. This implementation detail is largely hidden from developers, and the net.Socket API exposed by the runtime provides everything you need to take advantage of it without even thinking about it.
Node also exposes its own event API through events.EventEmitter. Many node objects implement events to provide asynchronous (non-blocking) event notification, which is perfect for I/O operations that in other languages, such as PHP, are synchronous (blocking) by default. In the node net.Socket API, events are triggered by the API methods dealing with socket I/O, and the callbacks passed as parameters to these methods are invoked when an event occurs. Events can have callback functions bound to them in a variety of ways; accepting a callback function as a parameter is only a convenience for the developer.
Finally, do not confuse OS events with node.js events. In the case of the net API, OS events are passed to the node.js runtime, but node.js events themselves are JavaScript constructs.
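To make the mechanism concrete, here is the same idea expressed with Java NIO: one thread, one selector backed by epoll/kqueue/select, many sockets, and only a small per-connection object instead of a thread. This is a sketch of the pattern node's event loop uses internally, not node's actual implementation:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select(); // blocks until some socket is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // New connection: a small per-connection object on the
                    // heap, no new thread.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {
                        client.close(); // peer disconnected
                    }
                    // ...dispatch the bytes to a callback, node-style.
                }
            }
        }
    }
}
```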
I hope this helps.
