I'm struggling to understand what exactly RPC (Remote Procedure Call) is: a way of thinking, or an actual protocol?
Is it a concept that promotes a layer of indirection for distributed communication, or is it an actual protocol that ships with the OS (Unix/Windows) and is used ONLY between some specific services?
Can RPC be seen as analogous to the OSI model, in that it answers the "WHAT" but not the "HOW", and is therefore used just for reference?
To my understanding, ANY (unidirectional or bidirectional) communication between two processes (local or remote) seems to be what we call "RPC material". Is that correct?
Thanks
Related
I've been reading about packets a lot today. I was confused for some time because SMTP, HTTP, and FTP, for example, are all called protocols, yet they also somehow utilize transport protocols like TCP. I couldn't locate them in the four-layer packet model, until I discovered they're simply part of the application layer.
I want to know what exactly these "protocols" offer. I'm guessing a specific format for the data, which applications on the client side know how to handle? If so, does this mean that, realistically, I might have to create my own "protocol" if I built an application with unique functionality?
A protocol, in this case, is just a structured way of communicating between two or more parties.
If you write, for example, a PHP app and offer an API, you have created a protocol for interacting with your program. It defines how others interact with it and what response they can expect while doing so. Your self-created protocol depends on others, like HTTP and TCP.
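To make that concrete, here is a toy self-defined application protocol (all message fields and the "greet" action are made up for this example): every request is a JSON object with an `action` and `args`, and every response has a `status` and `data`. That agreed-upon shape *is* the protocol; HTTP or TCP would merely carry the bytes.

```python
import json

def encode_request(action, **args):
    """Client side: marshal a request into bytes for the wire."""
    return json.dumps({"action": action, "args": args}).encode()

def handle(raw):
    """Server side: parse a request and produce a response dict."""
    req = json.loads(raw)
    if req["action"] == "greet":
        return {"status": "ok", "data": f"Hello, {req['args']['name']}!"}
    return {"status": "error", "data": "unknown action"}
```

A client that knows this format can talk to the server; one that doesn't cannot, which is exactly what "speaking the same protocol" means.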
I suggest watching the following video by LiveOverflow, which explains exactly this:
https://www.youtube.com/watch?v=d-zn-wv4Di8&ab_channel=LiveOverflow
I want to know what exactly these "protocols" offer.
You can read the definition of each protocol, if you really want to.
I have been developing microservices with Spring Boot for a while, using Feign clients, RestTemplate, and AMQP brokers to establish communication between microservices.
Now I am learning NestJS and its microservice approach. I've noticed that NestJS uses TCP as the default transport layer, which is different from the way it is done with Spring Boot.
Why does NestJS prefer those transport layers (TCP, AMQP) instead of HTTP? Isn't HTTP the transport protocol for REST microservices?
From NestJs documentation:
"a microservice is fundamentally an application that uses a different transport layer than HTTP"
The main reason is that it is slow. The problem with the HTTP approach is that, with HTTP, JSON can add unwanted processing time for sending and translating the information.
One problem with HTTP+JSON is the serialization time of the JSON being sent. Serialization is an expensive process; imagine it for a large payload.
In addition to the JSON body, there are a number of HTTP headers that must be parsed and are then often discarded. The only concern should be maintaining a single layer for sending and receiving messages. Therefore, the HTTP protocol with JSON is a slow way to communicate between microservices. There are some optimization techniques, but they are complex and do not add significant performance benefits.
Also, HTTP spends more time waiting than it does transferring data.
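The serialization cost mentioned above can be seen in a rough stdlib-only micro-benchmark (timings are machine-dependent; the fixed binary layout via `struct` merely stands in for the kind of compact encoding a non-HTTP transport might use):

```python
import json
import struct
import timeit

record = {"id": 123, "temp": 21.5}

def via_json():
    # Full JSON round trip: serialize to text, parse back.
    return json.loads(json.dumps(record))

def via_struct():
    # Fixed binary layout: 4-byte int + 8-byte double, no field names.
    packed = struct.pack("!id", record["id"], record["temp"])
    i, t = struct.unpack("!id", packed)
    return {"id": i, "temp": t}

if __name__ == "__main__":
    n = 100_000
    print(f"json:   {timeit.timeit(via_json, number=n):.3f}s")
    print(f"struct: {timeit.timeit(via_struct, number=n):.3f}s")
```

The binary round trip is typically noticeably faster, though for small payloads the difference rarely dominates overall request latency.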
If you look at the OSI model, HTTP is part of Layer 7 (Application), while TCP is Layer 4 (Transport).
At Layer 4 there is no determining characteristic that makes traffic HTTP, AMQP, gRPC, or RTSP; Layer 4 only specifies how data is transmitted to and received from the remote device.
Now, this is where the networking and software development worlds collide. Networking people use "transport" to mean Layer 4, while programming people use "transport" to mean the way a packet of data is transmitted to another component.
"Transport" (or "transporter", as the docs call it) is used here as an abstraction over how messages are exchanged in this architecture.
Looking at the documentation, if you want something like AMQP for your microservice, you can use NATS or Redis (both implementations are provided by NestJS).
https://docs.nestjs.com/microservices/basics#getting-started
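One concrete decision such a "transporter" abstraction makes is how to mark message boundaries on a raw TCP byte stream. NestJS's built-in TCP transporter uses its own JSON-based framing; the 4-byte length prefix below is just a common, minimal illustration of the idea, not Nest's actual wire format:

```python
import struct

def frame(payload: bytes) -> bytes:
    # Prefix the payload with its length as a 4-byte big-endian integer,
    # so the receiver knows where one message ends and the next begins.
    return struct.pack("!I", len(payload)) + payload

def unframe(data: bytes) -> tuple[bytes, bytes]:
    # Return (first message, remaining bytes of the stream).
    (length,) = struct.unpack("!I", data[:4])
    return data[4:4 + length], data[4 + length:]
```

HTTP solves the same problem with `Content-Length` headers and chunked encoding, which is part of the per-request overhead the answer above refers to.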
I am having a hard time understanding RPC in terms of its implementation. In several articles that I have read on RPC, I have seen the following examples:
Example: an RPC API
GET /readStudent?studentid=123
Example: an RPC call
POST /student HTTP/1.1
HOST: api.demo.com
Content-Type: application/json
{"name": "John Doe"}
As far as I have read and understood, RPC allows a client application to directly call methods on a server application on a different machine as if it were a local object.
So what are the above examples all about? Why are we making API calls instead of invoking methods?
I am assuming that in the RPC examples above, the URLs point to public methods, and the method arguments are passed in the query string or body.
And if that is the case, why can't I simply use REST? Why make the effort of exposing public methods (whose actual implementation must be elsewhere, according to RPC principles) through an HTTP API?
I am also confused about what the actual RPC way is and which way should be preferred.
Your examples could indicate how one RPC implementation transports requests to a distinct process. But there should be a translation layer that lets a client simply call methods/functions/procedures, like readStudent(123) and createStudent("John Doe"). Often there is also a corresponding server-side layer that lets the application code implement only those methods/functions/procedures (and not the details of the JSON/HTTP or other transport). These translation (or "marshalling") layers are often machine-generated from an application-specific interface specification, to avoid tedious manual coding of the translation boilerplate. Such interface specification is written in an Interface Definition Language (IDL).
REST imposes some conventional semantics that method calls may not honor. And it does not necessarily offer a translation layer to give the illusion of application-specific method calls.
I'm new in AUTOSAR, I'm working on a project and my only concern is modeling (Software Components layer), without Basic Software implementation. I'm looking for a way to specify crypto information in the model (a way to specify that a specific communication has to be treated by the Crypto Service Manager). Does someone know a way to do so? Any tips or advice would be accepted.
The principle is the same as with other services: model a SwcServiceDependency that aggregates a CryptoServiceNeeds, and create RoleBasedPortAssignments to indicate which PortPrototypes shall be used to interact with the Csm.
The Software Component Template defines a way to specify the Crypto Service Needs of an SWC; see standard/AUTOSAR_TPS_SoftwareComponentTemplate.pdf.
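As a rough sketch, the modeling described above could look something like this in ARXML (all short-names and the referenced port path are illustrative; check the schema in the Software Component Template for the exact element structure, required roles, and attributes):

```xml
<SWC-SERVICE-DEPENDENCY>
  <SHORT-NAME>SecureComCryptoNeeds</SHORT-NAME>
  <ASSIGNED-PORTS>
    <ROLE-BASED-PORT-ASSIGNMENT>
      <!-- Port the SWC uses to interact with the Csm service -->
      <PORT-PROTOTYPE-REF DEST="R-PORT-PROTOTYPE">/Swc/MySwc/CsmEncrypt</PORT-PROTOTYPE-REF>
    </ROLE-BASED-PORT-ASSIGNMENT>
  </ASSIGNED-PORTS>
  <SERVICE-NEEDS>
    <CRYPTO-SERVICE-NEEDS>
      <SHORT-NAME>EncryptNeeds</SHORT-NAME>
    </CRYPTO-SERVICE-NEEDS>
  </SERVICE-NEEDS>
</SWC-SERVICE-DEPENDENCY>
```

The RTE/BSW configuration tooling then uses this dependency to hook the SWC's ports up to the Csm service component.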
But the actual signing and authentication is done in the BSW: incoming SecuredIPdus are first routed by the PduR to the SecOC, which forwards authentication to the crypto stack (Csm, Cry, CryIf, CAL / CrySHE). They return an (authenticated) IPdu back to the PduR, which routes it up to Com, which provides you the ISignalGroups and ISignals. Transmission is just the opposite way: the SecOC gets an IPdu and delivers back a SecuredIPdu, which is routed by the PduR down through the interface to the driver to transmit.
On the receiving side, failed authentication will, like other failures, usually cause the IPdu to be discarded instead of being passed to higher layers, so it looks as if the message was never received.
These Basic Software parts are defined in the System Description; see standard/AUTOSAR_TPS_SystemTemplate.pdf.
I'm new to ZeroMQ. I'm using it for local IPC on a Linux-based OS (the socket is of AF_UNIX type).
But I could not find a way to get the caller's (client's) process ID. Is there any way to find it using ZeroMQ? (Finding the PID of the caller is a must for my access-control requirement; if ZeroMQ does not provide it, I will have to switch to D-Bus.)
Please help me.
Forget most of the low-level socket designs and worries; think higher in the sky. ZeroMQ is a rather high-level messaging concept, so you will have zero worries about most of the socket-I/O problems.
For more on these ZeroMQ principles, read Pieter Hintjens' design maxims and his resource-rich book "Code Connected, Vol. 1".
That said, the solution is fully in your control.
Solution
Create a problem-specific multi-zmq-socket / multi-zmq-pattern (multiple ZeroMQ primitives used and orchestrated by your application-level logic) as a problem-specific formal communication handshake.
Ensure the <sender> adds its own PID into the message.
Re-authorise the pre-registered sender from the receiver side via another register/auth socket pattern, so as to avoid a spoofing attack under a fake/stolen PID identity.
Adapt your access-control policy to your problem domain; use and implement whatever level of formal cryptographic handshaking protocol for identity validation or key exchange is needed to raise your access-control policy to adequate strength (including MIL-STD grades).
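The self-reported-PID idea from the steps above can be sketched with a pair of stdlib-only helpers (the ZeroMQ socket wiring is omitted, and the message fields are made up for illustration; as the steps stress, a self-reported PID is spoofable and must be backed by the register/auth handshake):

```python
import json
import os

def make_request(payload):
    # Sender side: voluntarily embed our own PID in every message.
    # NOTE: self-reported and therefore spoofable on its own.
    return json.dumps({"pid": os.getpid(), "payload": payload}).encode()

def check_request(raw, registered_pids):
    # Receiver side: reject messages from PIDs that never registered.
    msg = json.loads(raw)
    if msg.get("pid") not in registered_pids:
        raise PermissionError(f"unregistered sender pid {msg.get('pid')}")
    return msg["payload"]
```

These helpers would sit on either end of whatever zmq socket pattern (REQ/REP, ROUTER/DEALER, ...) your handshake design uses.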