AUTOSAR: expressing crypto services during modelling

I'm new to AUTOSAR. I'm working on a project and my only concern is modelling (the Software Components layer), without Basic Software implementation. I'm looking for a way to specify crypto information in the model (a way to specify that a specific communication has to be handled by the Crypto Service Manager). Does someone know a way to do so? Any tips or advice would be appreciated.

The principle is the same as with other services: model a SwcServiceDependency that aggregates a CryptoServiceNeeds. Create RoleBasedPortAssignments to indicate which PortPrototypes shall be used to interact with the Csm.
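For illustration, here is a rough ARXML sketch of that pattern. It is only a sketch: the exact element names and nesting should be verified against the SoftwareComponentTemplate schema of your AUTOSAR release, and all short-names and the port reference are placeholders.

    <SWC-INTERNAL-BEHAVIOR>
      <SHORT-NAME>MySwcInternalBehavior</SHORT-NAME>
      <SERVICE-DEPENDENCYS>
        <SWC-SERVICE-DEPENDENCY>
          <SHORT-NAME>CryptoDependency</SHORT-NAME>
          <!-- which PortPrototypes of the SWC are used to talk to the Csm -->
          <ASSIGNED-PORTS>
            <ROLE-BASED-PORT-ASSIGNMENT>
              <PORT-PROTOTYPE-REF DEST="R-PORT-PROTOTYPE">/Swc/MySwc/CsmMacGenerate</PORT-PROTOTYPE-REF>
            </ROLE-BASED-PORT-ASSIGNMENT>
          </ASSIGNED-PORTS>
          <!-- the crypto service needs themselves -->
          <SERVICE-NEEDS>
            <CRYPTO-SERVICE-NEEDS>
              <SHORT-NAME>CryptoNeeds</SHORT-NAME>
            </CRYPTO-SERVICE-NEEDS>
          </SERVICE-NEEDS>
        </SWC-SERVICE-DEPENDENCY>
      </SERVICE-DEPENDENCYS>
    </SWC-INTERNAL-BEHAVIOR>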

The Software Component Template defines a way to specify the CryptoServiceNeeds of an SWC. This is defined in standard/AUTOSAR_TPS_SoftwareComponentTemplate.pdf.
But the actual signing and authentication is done in the BSW: incoming SecuredIPdus are first routed by the PduR to the SecOC, which forwards authentication to the crypto stack (Csm, Cry, CryIf, CAL / CrySHE). They return an (authenticated) IPdu back to the PduR, which routes it up to Com, which provides you the ISignalGroups and ISignals. Transmission is just the opposite way: the SecOC gets an IPdu and delivers back a SecuredIPdu, which is routed by the PduR down to the bus interface (If) and driver to transmit.
On the receiving side, failed authentication will, like other failures, usually cause the IPdu to be discarded instead of being passed to the higher layers, which looks as if the message was never received.
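Conceptually, the SecOC builds the SecuredIPdu from the authentic IPdu plus a freshness value and a (typically truncated) MAC, and verifies the same on reception. Below is a rough conceptual sketch of that idea in Node.js, not AUTOSAR code; HMAC-SHA256, the key, and the truncation length are only illustrative (real SecOC profiles commonly use CMAC and profile-specific lengths).

    // conceptual sketch of SecOC-style authentication, not AUTOSAR code
    const crypto = require("crypto");
    
    const key = Buffer.from("000102030405060708090a0b0c0d0e0f", "hex"); // example key
    
    function buildSecuredPdu(authenticPdu, freshnessCounter) {
      const freshness = Buffer.alloc(4);
      freshness.writeUInt32BE(freshnessCounter);
      // MAC over the authentic payload plus the freshness value
      const mac = crypto.createHmac("sha256", key)
        .update(Buffer.concat([authenticPdu, freshness]))
        .digest()
        .subarray(0, 8); // truncated MAC, length is just an example
      return Buffer.concat([authenticPdu, freshness, mac]);
    }
    
    function verifySecuredPdu(securedPdu) {
      const mac = securedPdu.subarray(-8);
      const freshness = securedPdu.subarray(-12, -8);
      const authenticPdu = securedPdu.subarray(0, -12);
      const expected = crypto.createHmac("sha256", key)
        .update(Buffer.concat([authenticPdu, freshness]))
        .digest()
        .subarray(0, 8);
      // on mismatch the PDU would simply be discarded, as described above
      return crypto.timingSafeEqual(mac, expected) ? authenticPdu : null;
    }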
These Basic Software parts are defined in the System Description, which is specified in standard/AUTOSAR_TPS_SystemTemplate.pdf.

Related

Why is the application data of a packet called a protocol?

I've been reading about packets a lot today. I was confused for some time because SMTP, HTTP, and FTP, for example, are all called protocols, yet they also somehow utilize transport protocols like TCP. I couldn't place them among the four packet layers, until I just discovered they're simply part of the application layer.
I want to know what exactly these "protocols" offer. I'm guessing a specific format for the data, which applications on the client side know how to handle? If so, does this mean that, realistically, I might have to create my own "protocols" if I created an application with unique functionality?
A protocol, in this case, is just a structured way of communicating between two or more parties.
If you write, for example, a PHP app and offer an API, you have created a protocol to interact with your program. It defines how others interact with it and what response they can expect while doing so. Your self-created protocol depends on others, like HTTP and TCP.
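As a tiny, hypothetical Node.js illustration (the /greet route and the JSON fields are made up), the handler below defines its own little "protocol" on top of HTTP and TCP: it fixes the URL, the expected request fields, and the shape of the response, and any client has to follow those rules to talk to it.

    // minimal-protocol-server.js - a made-up "protocol" layered on HTTP/TCP
    const http = require("http");
    
    const server = http.createServer((req, res) => {
      // our "protocol": POST /greet with a JSON body {"name": "..."}
      if (req.method === "POST" && req.url === "/greet") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          const { name } = JSON.parse(body);
          // the response format is also part of the protocol
          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify({ greeting: `Hello, ${name}!` }));
        });
      } else {
        res.writeHead(404);
        res.end();
      }
    });
    
    server.listen(3000);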
I suggest watching the following video by LiveOverflow, which explains exactly this:
https://www.youtube.com/watch?v=d-zn-wv4Di8&ab_channel=LiveOverflow
I want to know what exactly these "protocols" offer.
You can read the definition of each protocol, if you really want to.

Separate REST API and App Server

We want to figure out whether it is good practice to split the REST API and the App Server. We code both of them with NodeJS and host them on AWS; also note that we want to connect other clients (Android/iOS) to the API and have a separate database server.
Our main questions are:
Is it more secure?
Does it give better performance?
Are there special aspects of development we must consider?
What if the REST server is down? Do we have to cache the data on the App Server?
We also have some simple logic on the client side like "forgot password"; which server handles this (the App server or the REST server)?
Which of them handles authentication?
Is it more secure?
No. It is not. In fact, it is less secure, as the attack surface is larger, and you need to individually authenticate and authorize each service.
Does it give better performance?
Nope. A function call within the same application is much faster than serializing -> network latency (HTTP(S) overhead) -> deserializing -> processing -> serializing -> network latency (HTTP(S) overhead) -> deserializing.
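A quick way to see the difference is to time the same operation once as an in-process call and once as an HTTP round trip to localhost. A rough Node.js sketch (assumes Node 18+ for the built-in fetch; the add "service" is made up):

    // in-process call vs. HTTP round trip to the same machine
    const http = require("http");
    
    function add(a, b) { return a + b; } // the "service" as a plain function
    
    const server = http.createServer((req, res) => {
      let body = "";
      req.on("data", (c) => (body += c));
      req.on("end", () => {
        const { a, b } = JSON.parse(body);           // deserialize
        res.end(JSON.stringify({ sum: add(a, b) })); // process + serialize
      });
    });
    
    server.listen(3000, async () => {
      console.time("in-process");
      add(1, 2);
      console.timeEnd("in-process");
    
      console.time("http-localhost");
      const res = await fetch("http://localhost:3000", {
        method: "POST",
        body: JSON.stringify({ a: 1, b: 2 }),
      });
      await res.json();
      console.timeEnd("http-localhost");
      server.close();
    });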
Are there special aspects of development we must consider?
Yes: deployment strategy, service discovery, and graceful degradation in case of upstream service unavailability.
What if the REST server is down? Do we have to cache the data on the App Server?
This depends on the situation; there is no universal answer to this question. Trust me, nobody other than your team/product owner can answer it. Mostly this decision will be driven by the contract that you establish with your consumers. I would suggest reading about circuit breaking, graceful degradation, HTTP response codes for partial responses, etc.
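For reference, the basic shape of a circuit breaker is quite small. A minimal Node.js sketch (the thresholds and the wrapped remote call are hypothetical):

    // minimal circuit breaker sketch; thresholds are arbitrary examples
    class CircuitBreaker {
      constructor(fn, { failureThreshold = 3, resetTimeoutMs = 10000 } = {}) {
        this.fn = fn;
        this.failureThreshold = failureThreshold;
        this.resetTimeoutMs = resetTimeoutMs;
        this.failures = 0;
        this.openedAt = 0;
      }
    
      async call(...args) {
        const open = this.failures >= this.failureThreshold;
        const coolingDown = Date.now() - this.openedAt < this.resetTimeoutMs;
        if (open && coolingDown) {
          // fail fast / degrade gracefully instead of hammering a dead service
          throw new Error("circuit open - returning degraded response");
        }
        try {
          const result = await this.fn(...args);
          this.failures = 0; // success closes the circuit again
          return result;
        } catch (err) {
          this.failures += 1;
          this.openedAt = Date.now();
          throw err;
        }
      }
    }
    
    // usage with a hypothetical remote call:
    // const breaker = new CircuitBreaker(() => fetch("http://rest-server/users"));
    // breaker.call().catch(() => serveCachedUsers());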
We also have some simple logic on the client side like "forgot password"; which server handles this (the App server or the REST server)? Which of them handles authentication?
From this question, I am assuming that you don't have a clear separation of the individual responsibilities of each service yet. That begs the question: why split this into two services at this point in time? Why can't all the functionality reside in one application to begin with? As you evolve, as you start feeling the pain of a monolithic application, you can revisit your architecture and break it into small pieces as you see fit. In my opinion, for smaller applications, a monolithic architecture is much more manageable than microservices.
Just one humble suggestion: I wouldn't name one service the REST server and the other the App server; that may give the impression that you are looking at this from the wrong angle.
In my opinion, if I reached a point where my monolithic app was not manageable anymore, I would split it based on functionality (look at unrelated entities and take them out as separate services).

Two-way security (multi-protocol)

I'm implementing a one-to-many multi-protocol server (+ clients) and I'd like to add two-way security. Here's what I'd like to accomplish:
Both client and server authenticate to each other in a secure way; there is no human interaction involved on the client side.
The client's code checksum is validated on the server.
The client's code may be written in an interpreted language (such as Python or JavaScript), so I'd like to prevent the possibility of compromising the network after someone gains access to the client (this may be overkill though, because my clients won't be executing anything on the server, just reporting the results of their actions).
How should I design the authentication flow? What techniques should I use/google for, or - on a lower level - what existing solutions could I try? (my prototype is written using node.js)
SSL/TLS can do authentication both ways out of the box; nothing special is needed. One can even get the certificates for free (self-signed or from recognized CAs).
Client certificates can be used to distinguish clients if that's a need; similarly, they can be used to prevent copies of a client from logging in simultaneously.
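In Node.js, for example, mutual TLS is mostly configuration. A minimal sketch with the built-in tls module (the file names are placeholders, and in practice you would also check the client certificate's subject or fingerprint against a whitelist):

    // mutual TLS sketch with Node's tls module; paths and CNs are placeholders
    const tls = require("tls");
    const fs = require("fs");
    
    const server = tls.createServer(
      {
        key: fs.readFileSync("server-key.pem"),
        cert: fs.readFileSync("server-cert.pem"),
        ca: [fs.readFileSync("client-ca.pem")], // CA that signed the client certs
        requestCert: true,          // ask the client for a certificate
        rejectUnauthorized: true,   // drop clients with no/invalid certificate
      },
      (socket) => {
        // the client certificate can be used to tell clients apart
        const peer = socket.getPeerCertificate();
        console.log("authenticated client:", peer.subject && peer.subject.CN);
        socket.write("hello, authenticated client\n");
      }
    );
    
    server.listen(8443);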
What you fundamentally cannot do is prevent a smart, malicious user from controlling a client in such a manner that they reverse-engineer how it interacts with the server and, instead of running your intended client, run their own that still acts as if it were the real client but isn't.
The solution to the impossibility of trusting the client is to not let it do things for which you have to trust that it is running your code unaltered. That often means moving from a 2-tier model (heavy client - server) to a 3-tier model, where the code that you want to run is kept on hardware you control, and only an (untrusted) user interface is pushed to the user-controlled hardware.

Communication between RESTful APIs on same server with NodeJS

I am building two sets of services on a website (all written in NodeJS on the server), both are using a RESTful approach. For the sake of modularity I decided to make both services separate entities. The first service deals with the products of the site and the second specifically deals with user related functions. So the first might have functions like getProducts, deleteProduct etc... The second would have functions like isLoggedIn, register, hasAccessTo etc... The product module will make several calls to the user module to make sure that the person making the calls has the privilege to do so.
Now, the reason I separated them like this was because in the near future I foresee a separate product range opening up, which will need to use the same user system as the first (even sharing the same database). The user system will use a database that spans the entire site and all subsequent products.
My question is about communication between these projects and the users project. What is the most effective way of keeping the users module separate without suffering any significant speed hits? If the product API made a call to the user API on the same server (localhost), is there a significant cost to this, versus building the user API into each of the subsequent projects? Is there a better way to do this through interprocess communication maybe? Is simply having the users API run as its own service an effective solution?
If you have two Node.js services on the same server (machine), performance in terms of network latency is not bad, because both are on localhost.
The services will communicate using a REST API, so under the hood you will use Node.js sockets. You could use Unix domain sockets instead of HTTP over TCP because they are faster, BUT they are worse to debug, so I recommend you don't do that (but it's good to know the alternatives).
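For completeness, Node's built-in http module can serve and call the same REST API over a Unix domain socket instead of a TCP port just by swapping the address. A small sketch (the socket path and route are arbitrary):

    // same REST call over a Unix domain socket instead of TCP on localhost
    const http = require("http");
    
    const server = http.createServer((req, res) => {
      res.end(JSON.stringify({ ok: true }));
    });
    
    // listen on a filesystem socket instead of a port
    // (remove a stale /tmp/users-api.sock from a previous run if needed)
    server.listen("/tmp/users-api.sock", () => {
      const req = http.request(
        { socketPath: "/tmp/users-api.sock", path: "/users/42", method: "GET" },
        (res) => {
          let body = "";
          res.on("data", (c) => (body += c));
          res.on("end", () => { console.log(body); server.close(); });
        }
      );
      req.end();
    });
    
    // the TCP equivalent would be server.listen(3000) and
    // http.request({ host: "localhost", port: 3000, path: "/users/42" }, ...)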
And finally, your system looks like the "actor design pattern". At first glance this design pattern is a little difficult to understand, but you can have a look at the following if you want more info about the actor model pattern:
Actor model for NodeJS https://github.com/benlau/nactor
Actor model explanation http://en.wikipedia.org/wiki/Actor_model

How to get caller pid in zmq (local socket)

I'm new to ZeroMQ. I'm using it for local IPC on a Linux-based OS (the socket is of AF_UNIX type).
But I could not find a way to get the caller's (client's) process ID. Is there any way to find it using ZeroMQ? (Finding the PID of the caller is a must for my access-control requirement, and if ZeroMQ does not provide it, I should switch to D-Bus.)
Please help me.
Forget most of the low-level socket designs and worries; think higher up. ZeroMQ is a rather high-level messaging concept, so you will have zero worries about most of the socket I/O problems.
For more on these ZeroMQ principles, read Pieter Hintjens' design maxims and his resource-rich book "Code Connected, Vol. 1".
That said, the solution is fully in your control.
Solution
Create a problem-specific multi-socket / multi-pattern design (multiple ZeroMQ primitives used and orchestrated by your application-level logic) as a formal communication handshake.
Ensure the <sender> adds its own PID to the message (see the sketch after this list).
Re-authorise the pre-registered sender from the receiver side via another register/auth socket pattern, so as to avoid a spoofing attack under a fake/stolen PID identity.
Adapt your access-control policy to your problem domain; use and implement any level of formal crypto-security handshaking protocols for identity validation or key exchange, to raise your access-control policy's security to adequate strength (including MIL-STD grades).
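A minimal sketch of the PID-in-message step, assuming the zeromq npm package (v6 API) and an arbitrary ipc:// endpoint; remember the PID is only a claim until the receiver has verified it through the registration/auth handshake described above:

    // sketch using the "zeromq" npm package (v6 API assumed); endpoint is arbitrary
    const zmq = require("zeromq");
    
    async function server() {
      const sock = new zmq.Reply();
      await sock.bind("ipc:///tmp/access-controlled.ipc");
      for await (const [msg] of sock) {
        const { pid, payload } = JSON.parse(msg.toString());
        // the PID is only a claim made by the sender - verify it against a
        // pre-registered list (or a proper auth handshake) before trusting it
        console.log(`request from claimed pid ${pid}:`, payload);
        await sock.send(JSON.stringify({ ok: true }));
      }
    }
    
    async function client() {
      const sock = new zmq.Request();
      sock.connect("ipc:///tmp/access-controlled.ipc");
      // the sender includes its own PID in the application-level message
      await sock.send(JSON.stringify({ pid: process.pid, payload: "report" }));
      const [reply] = await sock.receive();
      console.log(reply.toString());
    }
    
    server();
    client();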
