When we speak of RPC, do we refer to a specific technology on Windows?

My colleague and I hold different views on the term RPC.
We both know RPC stands for remote procedure call. He says RPC refers to a specific Windows technology; I thought it was a more general, abstract concept. I think SOAP, DCOM, CORBA, XML-RPC, and WCF are each a form of RPC, since they span many languages and platforms and may run over HTTP or TCP; a call counts as RPC as long as it is made remotely. Though I'm not very sure or clear about it either.
Could anyone please clarify this acronym's scope for me?

It's both a generic term, and refers to specific (older) technologies in both Windows and Unix. You have to infer from the context.
The "remote" part is also somewhat ambiguous. It can refer to calling from one process to another on the same host, as well as calling to an entirely different host.

Related

How do Transfer Protocols work?

Hypothetically, let's say that I wanted to study or create a transfer protocol such as HTTP, FTP, or PTP. How would I go about doing so? What do I need to know about the internet and servers, and what do I need to build to be able to send and receive data through my own homemade transfer protocol?
That's a little backwards.
First you have a problem you need to solve that involves multiple machines.
Then you write software to solve it, which requires communication between those machines.
The details of that communication are called a 'protocol'.
Since the protocol is the interface between machines, it's beneficial if it is generic enough to let you swap out the software on one side or the other.
In this way, HTTP was invented to serve web pages to browsers, FTP was invented to let users transfer files, etc. The details of the protocol indicate the elements of communication required to solve the problem in the desired way.
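To make "protocol" concrete: it can be as small as an agreed line format over a TCP connection. Here is a toy client in C, assuming a hypothetical server on localhost that answers a TIME request with a single line (the port, command, and reply shape are all made up for illustration):

    /* Toy client for a hypothetical one-line protocol: send "TIME\r\n",
     * read back a single line. The server address and port are assumptions. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7777);                 /* hypothetical port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect"); return 1;
        }

        /* The entire "protocol": one request line, one response line. */
        const char *req = "TIME\r\n";
        write(fd, req, strlen(req));

        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("server said: %s", buf);
        }
        close(fd);
        return 0;
    }

Everything beyond that (framing, error codes, authentication) is just more agreed-upon detail layered onto the same idea.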

RPCGEN over Unix domain sockets

My requirement is to make RPC calls between different processes. By nature these calls are 1-1, meaning a single sender and a single receiver. I am architecturally restricted to using only Unix domain sockets for this purpose.
I wanted to use 'rpcgen' toward this end. But the problem is that rpcgen uses TCP/UDP as its transport mechanisms, and what I want is to run the calls over domain sockets. Since rpcgen doesn't support domain sockets, I figured I could stub out the transport routines with my own code after generation to accomplish what I need. But that does not look easy at all.
I explored an option where the generated XDR stream is written to a local buffer, which can then be transported the way I want it, i.e. over domain sockets. Maybe I can retrieve it at the remote end to make it work. This might involve another copy of the data, but performance is not my concern at this point in time.
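To illustrate, here is roughly the sending side I have in mind - a sketch using the standard xdrmem_create memory-stream API, with the socket path and the int payload as placeholders:

    /* Sketch of "encode XDR into a buffer, ship it over an AF_UNIX
     * socket". The socket path and the payload are placeholders. */
    #include <rpc/xdr.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1024];
        XDR xdrs;

        /* Encode into a memory stream instead of a TCP/UDP transport. */
        xdrmem_create(&xdrs, buf, sizeof buf, XDR_ENCODE);

        int value = 42;                         /* placeholder payload */
        if (!xdr_int(&xdrs, &value)) {
            fprintf(stderr, "xdr_int failed\n");
            return 1;
        }
        unsigned int len = xdr_getpos(&xdrs);   /* bytes actually encoded */

        /* Ship the encoded bytes over a Unix domain socket. */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/myrpc.sock", sizeof addr.sun_path - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        write(fd, buf, len);
        close(fd);

        /* The receiver does the reverse: read into a buffer, then
         * xdrmem_create(..., XDR_DECODE) and xdr_int() to unpack. */
        return 0;
    }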
Is there a readymade solution for this kind of problem? What are my best options here?
Thanks
Sudarshan

SCORM - RTE reporting mechanism security

I am investigating SCORM compliance as an option for a software project I am involved in. If this is too esoteric for SO, I am sorry - not sure where else to turn.
I am a little confused as to how the SCO (Sharable Content Object) reports a quiz score, for example, to the LMS. From what I can gather from the official documentation, this is to be done using the LMSSetValue function in the RTE API object, which is just a bunch of JavaScript.
This seems wildly insecure to me, as it takes nothing to rewrite the values passed to the LMS this way.
My question is therefore, am I missing something? Are SCOs meant simply to not report such values to the LMS? It is my impression it is the only permitted mode of communication between SCOs and the LMS.
The JavaScript API is the way data is passed from the SCO to the LMS. Are there more secure ways to pass data? Sure. But the implementation is not brand-spanking new, remember. In addition, because of portability constraints, many of the most highly secure ways of passing data are not available to SCORM developers. Portability was the main priority of the standard, not security.
There is a community of experts talking about what should replace SCORM. It's called Project Tin Can. And different ways of exchanging data, including cross-domain and server-side, are being discussed there.

What came before web services and SOA?

I'm interested in the history of distributed, collaborative, cross-organisational programming paradigms - web services and SOA are the de facto standard now, but what came before? What models have been superseded by SOA?
Thanks
Well, I suppose there was RPC - which is really what SOAP is, except that the older technologies didn't piggy-back the data payload on top of a standard protocol (HTTP in SOAP's case). So CORBA and DCE-RPC and ONC RPC all did the same thing, but only over internal networks, not over the internet.
There was also EDI as a 'standard' for exchanging data between disparate entities. This was effectively a way of defining what the data payload would look like (similar to the XML part of SOAP).
But these are still not SOAs really; they provide the same functionality, but the big difference was how people thought of using them. Once you could write a machine-to-machine 'website' and have different machines talk to each other through it, the idea took off. You could do it before using CORBA, say, but it wasn't as easy or as widely known about. You can tell this has happened by the fact that we have several terms for effectively the same thing - SOA, SaaS, Web Services... all the same thing (but lots of money to be made 'consulting' on the difference ;) )
Maybe Silos?
...where services are just not shared across an enterprise, at least in a standard way. This is why products like BizTalk are used: to get silos to talk to each other via standard interfaces.
I don't really think you'll find anything that's been superseded by SOA. You will find that there's been progress in organizing computer programs to take advantage of SOA-type principles. As for programming models that have been in reasonably common use, well, let's see... CORBA, RPC, more generic client-server applications. Of course, computer-to-computer communications were preceded by process-to-process communication using a wide variety of conventions.
SOA as a philosophy of breaking large problems into smaller ones and then composing the results has been known and applied since humans started making bricks instead of building complete walls. Of course, that was mostly implicit. Explicit statements of SOA really started to come about with CORBA, and, while SOA is independent of Web Services, the advent of HTTP and XML, and then SOAP, made the development of non-specialized "services" easier, more worthwhile and thus more common.
The PDF A Note on Distributed Computing should be an interesting read. It is pre-SOA and gives an idea of the history up to that point (1994).
I would say distributed object technology, and before that, remote procedure calls.
RPC is one of the earlier approaches and gained popularity from the Sun implementation. One of the famous uses is NFS (network file system).
As object oriented programming became more popular, distributed objects followed. Most important was Microsoft DCOM (and later COM+) and, more industry wide, CORBA.
SOA is a divide-and-conquer approach that is critically dependent on the concept of services, which are different from objects as used by CORBA et al., as well as from resources as in REST.
Objects are created and their lifetime is typically controlled by the client. On the other hand, services are assumed to be always there provided by the server. This is one reason why SOA is not equivalent to distributed objects.
Services are also stateless, which means that the server, when considering the response to a service request, need not look at the history of its interaction with the client. This was not a consideration when the RPC concept was originally devised, as scalability wasn't such an important issue then. Interestingly, large-scale users of RPC did notice the relationship between scalability and statelessness: the NFS RFC explicitly mentions stateless servers, though with reliability as the main concern. Anyway, statelessness is one of the main differences between services and plain old RPC.
In short, no. I don't believe in the revisionist history of SOA existing since the dawn of time, any more than the universe being written in Lisp (or Perl, for that matter). Nor is it equivalent to divide and conquer or the division of labour.
SOA started as a concept at some point in the nineties, overlapping with the development of CORBA. It is much harder to pinpoint an actual date or event, and there are more than a few claims to its conceptualisation.

What protocol should I use for fast command/response interactions?

I need to set up a protocol for fast command/response interactions. My instinct tells me to just knock together a simple protocol with CRLF-separated ASCII strings, the way SMTP or POP3 works, and tunnel it through SSH/SSL if I need it to be secured.
While I could just do this, I'd prefer to build on an existing technology so people could use a friendly library rather than the socket library interface the OS gives them.
I need...
Commands and responses passing structured data back and forth (XML, S-expressions, I don't care).
The ability for the server to make unscheduled notifications to the client without being polled.
Any ideas please?
If you just want request/reply, HTTP is very simple. It's already a request/response protocol. The client and server side are widely implemented in most languages. Scaling it up is well understood.
The easiest way to use it is to send commands to the server as POST requests and for the server to send back the reply in the body of the response. You could also extend HTTP with your own verbs, but that would make it more work to take advantage of caching proxies and other infrastructure that understands HTTP.
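As a sketch of that POST-as-command approach in C, using libcurl (the endpoint URL and the command body here are made-up placeholders):

    /* Minimal "command as HTTP POST" client using libcurl. The URL and
     * the command payload are hypothetical placeholders. */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* The command goes in the POST body; the reply comes back in the
         * response body (written to stdout by libcurl's default handler). */
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/command");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                         "<command name=\"status\"/>");

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

This buys you framing, status codes, and proxy/TLS support for free, which is most of what an ad hoc protocol would have to reinvent.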
If you want async notifications, then look at pub/sub protocols (Spread, XMPP, AMQP, JMS implementations, or commercial pub/sub message brokers like Tibco RV, Tibco EMS or WebSphere MQ). The protocol or implementation to pick depends on the reliability, latency and throughput needs of the system you're building. For example, is it OK for notifications to be dropped when the network is congested? What happens to notifications when a client is offline - do they get discarded or queued up for when the client reconnects?
AMQP sounds promising. Alternatively, I think XMPP supports much of what you want, though with quite a bit of overhead.
That said, depending on what you're trying to accomplish, a simple ad hoc protocol might be easier.
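For the ad hoc route, the unscheduled-notification requirement mostly comes down to keeping one connection open and letting the server write to it whenever it likes. A bare-bones client read loop in C, with the address and port as placeholders:

    /* Bare-bones notification client: hold one TCP connection open and
     * print whatever the server pushes. Address and port are placeholders. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);              /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        /* No polling: read() blocks until the server decides to send. */
        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }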
How about something like SNMP? I'm not sure if it fits exactly with the model your app uses, but it supports both async notify and pull (i.e., TRAP and GET).
That's a great question with a huge number of variables to consider, and the question only mentioned a few of them: packet format, asynchronous vs. synchronous messaging, and security. There are many, many others one could think about. I suggest going through a description of the 7-layer protocol stack (OSI/ISO) and asking yourself what you need at those layers, and whether you want to build that layer or get it from somewhere else. (You seem mostly interested in layers 6 and 7, but you also mentioned bits of the lower layers.)
Think also about whether this is a safety-critical application or part of a system with formal V&V. Really good, trustworthy communication systems are not easy to design; also, an "underpowered" protocol can put a lot of coding burden on the application to do error recovery.
Finally, I would suggest looking at how other applications similar to yours do the job (check open source, read books, etc.). The U.S. Patent Office database is also useful; one can get great ideas just from reading descriptions of the communication problems others were trying to solve.
