RPCGEN over Unix domain sockets - Linux

My requirement is to make RPC calls between different processes. By nature these calls are one-to-one: a single sender and a single receiver. I am architecturally restricted to using only Unix domain sockets for this purpose.
I wanted to use 'rpcgen' towards this end. The problem is that rpcgen works over TCP/UDP as its transport mechanisms, while I want to run the calls over domain sockets. Since rpcgen doesn't support domain sockets, I figured I could stub out the transport routines with my own code after generation to accomplish what I need, but that does not look easy at all.
I explored an option where the generated XDR stream is written to a local buffer, which I can then transport the way I want, i.e. over domain sockets, and retrieve at the remote end to make it all work. This might involve another copy of the data, but performance is not my concern at this point in time.
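Roughly, what I have in mind is the following untested sketch; the socket path, buffer size and the arguments being encoded are just placeholders:

/* Encode arguments with XDR into a local buffer, then ship the buffer
 * over an AF_UNIX stream socket instead of TCP/UDP. */
#include <rpc/rpc.h>          /* XDR, xdrmem_create, xdr_int, xdr_string */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    char buf[1024];
    XDR xdrs;

    /* Serialize the call arguments into buf instead of onto a network stream. */
    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
    int arg = 42;
    char *name = "hello";
    if (!xdr_int(&xdrs, &arg) || !xdr_string(&xdrs, &name, 255)) {
        fprintf(stderr, "xdr encode failed\n");
        return 1;
    }
    unsigned int len = xdr_getpos(&xdrs);   /* number of encoded bytes */

    /* Transport the encoded bytes over a Unix domain socket. */
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/tmp/myrpc.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("socket/connect");
        return 1;
    }
    write(fd, buf, len);
    /* The receiver would run the same xdr_* filters over the bytes it
     * read, using xdrmem_create() with XDR_DECODE. */
    close(fd);
    return 0;
}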
Is there a ready-made solution for this kind of problem? What are my best options here?
Thanks
Sudarshan

Related

Verify identity of another process running on the same physical machine

I have a process A, which receives HTTP requests from a process B and accordingly performs some action. These actions are sensitive, so I have to ensure that A rejects any requests that come from processes other than B.
Is there any way at all to do this? One way I can think of is to use auth tokens the same way they're used for typical secure server-client communication. The problem is that traffic on the loopback interface isn't secure, and someone could read the token.
I don't necessarily have to use HTTP for passing messages, so perhaps there is some OS-specific function I could use?
Designing secure IPC is an interesting area of thought - I'm curious why this is important to you. I would hope that if process A is handling any sensitive information, you have control over the machine on which it's running. If you don't, you might owe it to whoever that sensitive data is for or about to take a second to rethink your environment.
I think this answer and the associated post in general on the infosec StackExchange gives a lot of good food for thought which I bet is also applicable to your situation.
I would also recommend using UNIX sockets as your base communication layer if you're dedicated to securing your same-machine IPC. But again, if you're in a situation where you really have to worry about this, you might want to focus on getting out of that situation instead.
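If you do go the Unix-socket route, one Linux-specific building block worth knowing about is SO_PEERCRED, which lets the receiving process ask the kernel for the pid/uid/gid of whoever connected. A rough sketch, where conn_fd is assumed to be an already-accepted AF_UNIX connection and expected_uid is whatever identity process B is supposed to run as:

#define _GNU_SOURCE           /* for struct ucred */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Return non-zero if the process on the other end of conn_fd runs as
 * the uid we expect.  conn_fd must be a connected AF_UNIX socket. */
int peer_is_trusted(int conn_fd, uid_t expected_uid)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);

    if (getsockopt(conn_fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) < 0) {
        perror("getsockopt(SO_PEERCRED)");
        return 0;
    }

    /* cred.pid, cred.uid and cred.gid describe the connecting process. */
    fprintf(stderr, "peer pid=%d uid=%d gid=%d\n",
            (int)cred.pid, (int)cred.uid, (int)cred.gid);

    return cred.uid == expected_uid;
}

Note that this only tells you which user and process connected, not which binary it is, so it works best when process B runs under a dedicated account.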

Transfer protocol for sending user-uploaded files to a remote server?

I'm used to handling user-uploaded files on the same server, and to transferring my own files to a remote server, but not to transferring user-uploaded files to a remote server.
I'm looking for the best (industry) practice for selecting a transfer protocol in this regard.
My application runs Django on a Linux server and the files live on a Windows server.
Does it not matter which protocol I choose as long as it's secure (FTPS, SFTP, HTTPS)? Or is one better than the other in terms of performance/security specifically in regards to user-uploaded files?
Please do not link to questions that explain the differences of protocols, I am asking specifically in the context of user-uploaded files.
As long as you choose a standard protocol that provides (mutual) authentication, encryption and message authentication, there is not much difference security-wise. If all of this is provided by a layer of TLS in your chosen protocol (like in all of your examples), you can't make a big mistake on a design level (but implementation is key, many security bugs are bugs of implementation, and not design flaws). Such protocols might differ in the supported list of algorithms for different purposes though.
Performance-wise there can be quite significant differences; it depends on what you want to optimize for. If you choose HTTPS, you won't usually keep a connection open for a long time, and would most probably have to bear the overhead of the whole connection setup, authentication and all, for every transmitted file. (You can actually keep an HTTPS connection open, but that would be quite a custom implementation for such file uploads.) Choosing FTPS/SFTP, you will be able to keep a connection open and transmit as many files as you want, but would probably need more complex error-handling logic (sometimes connections terminate without the underlying sockets knowing about it for a while, and so on). So in short, I think HTTPS would be more resilient, but secure FTP would be more performant for many small files.
It's also an architecture question, by using HTTPS, you would be able to implement all of this in your application code, while something like FTP would mean dependence on external components, which might be important from an operational point of view (think about how this will actually be deployed and whether there is already a devops function to manage proper operations).
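To make the "in your application code" option concrete, here is a rough sketch of pushing one file over HTTPS with libcurl; the URL is a placeholder and error handling is minimal, and the same call would also work with an sftp:// or ftps:// URL if your libcurl build supports those protocols:

/* Push one user-uploaded file to a remote server over HTTPS with libcurl. */
#include <stdio.h>
#include <curl/curl.h>

int push_file(const char *local_path, const char *remote_url)
{
    FILE *fp = fopen(local_path, "rb");
    if (!fp)
        return -1;

    CURL *curl = curl_easy_init();
    if (!curl) {
        fclose(fp);
        return -1;
    }

    /* e.g. remote_url = "https://files.example.com/uploads/report.pdf" */
    curl_easy_setopt(curl, CURLOPT_URL, remote_url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);    /* send a request body (PUT) */
    curl_easy_setopt(curl, CURLOPT_READDATA, fp);  /* body is read from this FILE* */

    CURLcode res = curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    fclose(fp);
    return (res == CURLE_OK) ? 0 : -1;
}

Reusing the same handle across files lets libcurl keep the connection open, which is essentially the connection-setup trade-off described above.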
Ultimately it's just a design decision you have to make, the above is just a few things that came to mind without knowing all the circumstances, and not at all a comprehensive list of things to consider.

How do Transfer Protocols work?

Hypothetically, let's say that I wanted to study or create a transfer protocol such as HTTP, FTP or PTP. How would I go about doing so? What do I need to know about the internet and servers, and what do I need to build to be able to send and receive data through my own homemade transfer protocol?
That's a little backwards.
First you have a problem you need to solve that involves multiple machines.
Then you write software to solve it, which requires communication between those machines.
The details of that communication are called a 'protocol'.
Since the protocol is the interface between machines, it's beneficial if it is generic enough to let you swap out the software on one side or the other.
In this way, HTTP was invented to serve web pages to browsers, FTP was invented to let users transfer files, etc. The details of the protocol indicate the elements of communication required to solve the problem in the desired way.
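As a toy illustration of what a "homemade" protocol boils down to, here is an untested client for a made-up, line-based file-fetch protocol; the port, command verb and reply format are inventions of this example:

/* Client for a made-up, line-based "fetch a file" protocol:
 * it sends one ASCII command line and expects one ASCII status line back. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { 0 };
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                  /* made-up port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    /* The entire "protocol": request line out, status line back. */
    const char *req = "FETCH notes.txt\r\n";
    write(fd, req, strlen(req));

    char reply[512];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("server said: %s", reply);        /* e.g. "200 OK 1834 bytes\r\n" */
    }
    close(fd);
    return 0;
}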

Allowing or blocking access to a port in Linux

For the project that I am currently working on, the task is to read a file from disk that has the following format:
port number [in/out/both]
So, if a port number is followed by in, only inbound connections are allowed on it; if it is followed by out, only outbound connections are allowed; and both directions are allowed if it is followed by both. All other ports should be blocked.
One way to do this is to read the file at boot time, store each port and its type in a data structure kept in memory, and, whenever a process tries to use a port, grant or deny access based on that in-memory data structure.
The problem is, I don't know how to actually implement this; I just need a push in the right direction. I know this can be done using iptables, but that is not allowed.
As a start on Linux kernel coding and for some parts of your problem, you might find this useful:
Storing struct array in kernel space, Linux
EDIT:
For your specific problem of packet filtering, I would suggest that you use the netfilter framework from within the kernel to set up the proper rules that will do what you want. Creating your own packet-filtering framework is probably way too complex - plus it's generally not a good idea to reinvent the wheel.
The netfilter subsystem is quite modular, so you might want to consider the possibility of just creating yet another module with your intended functionality for it.
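As a very rough, untested sketch, such a hook module might look like the following; the single hard-coded port and the accept/drop decision are placeholders for a lookup into the table you build from your config file, and a second hook at NF_INET_LOCAL_OUT would cover the "out" direction:

/* Untested sketch of a netfilter hook module for recent kernels. */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/net_namespace.h>

static unsigned int port_filter_in(void *priv, struct sk_buff *skb,
                                   const struct nf_hook_state *state)
{
    struct iphdr *iph = ip_hdr(skb);
    struct tcphdr *tcph;

    if (!iph || iph->protocol != IPPROTO_TCP)
        return NF_ACCEPT;                /* only filtering TCP here */

    tcph = (struct tcphdr *)((unsigned char *)iph + iph->ihl * 4);

    /* Placeholder policy: port 8080 is listed as "in" or "both",
     * everything else is blocked for inbound connections. */
    if (ntohs(tcph->dest) == 8080)
        return NF_ACCEPT;
    return NF_DROP;
}

static struct nf_hook_ops port_filter_ops = {
    .hook     = port_filter_in,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_LOCAL_IN,        /* traffic addressed to this host */
    .priority = NF_IP_PRI_FIRST,
};

static int __init port_filter_init(void)
{
    return nf_register_net_hook(&init_net, &port_filter_ops);
}

static void __exit port_filter_exit(void)
{
    nf_unregister_net_hook(&init_net, &port_filter_ops);
}

module_init(port_filter_init);
module_exit(port_filter_exit);
MODULE_LICENSE("GPL");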

What protocol should I use for fast command/response interactions?

I need to set up a protocol for fast command/response interactions. My instinct tells me to just knock together a simple protocol with CRLF-separated ASCII strings, the way SMTP or POP3 work, and tunnel it through SSH/SSL if I need it to be secured.
While I could just do this, I'd prefer to build on an existing technology so people could use a friendly library rather than the socket library interface the OS gives them.
I need...
Commands and responses passing structured data back and forth (XML, S-expressions, don't care).
The ability for the server to make unscheduled notifications to the client without being polled.
Any ideas please?
If you just want request/reply, HTTP is very simple. It's already a request/response protocol. The client and server side are widely implemented in most languages. Scaling it up is well understood.
The easiest way to use it is to send commands to the server as POST requests and for the server to send back the reply in the body of the response. You could also extend HTTP with your own verbs, but that would make it more work to take advantage of caching proxies and other infrastructure that understands HTTP.
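As a rough sketch of that pattern with libcurl, assuming a made-up endpoint and command payload, the client POSTs the command and reads the reply out of the response body:

/* Send a command as an HTTP POST and read the reply from the response body. */
#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* Append the response body into a fixed-size buffer (truncating if full). */
static size_t collect(char *data, size_t size, size_t nmemb, void *userp)
{
    char *buf = userp;
    size_t have = strlen(buf);
    size_t room = 4096 - 1 - have;
    size_t n = size * nmemb;

    memcpy(buf + have, data, n < room ? n : room);
    buf[have + (n < room ? n : room)] = '\0';
    return size * nmemb;                 /* report everything as consumed */
}

int main(void)
{
    char reply[4096] = "";
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://server.example/command");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "{\"cmd\": \"status\", \"target\": \"pump-3\"}");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, reply);

    if (curl_easy_perform(curl) == CURLE_OK)
        printf("reply: %s\n", reply);

    curl_easy_cleanup(curl);
    return 0;
}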
If you want async notifications, then look at pub/sub protocols (Spread, XMPP, AMQP, JMS implementations, or commercial pub/sub message brokers like TibcoRV, Tibco EMS or Websphere MQ). The protocol or implementation to pick depends on the reliability, latency and throughput needs of the system you're building. For example, is it OK for notifications to be dropped when the network is congested? What happens to notifications when a client is offline -- do they get discarded or queued up for when the client reconnects?
AMQP sounds promising. Alternatively, I think XMPP supports much of what you want, though with quite a bit of overhead.
That said, depending on what you're trying to accomplish, a simple ad hoc protocol might be easier.
How about something like SNMP? I'm not sure if it fits exactly with the model your app uses, but it supports both async notify and pull (i.e., TRAP and GET).
That's a great question with a huge number of variables to consider, and the question only mentions a few of them: packet format, asynchronous vs. synchronous messaging, and security. There are many, many others one could think about. I suggest going through a description of the 7-layer protocol stack (OSI/ISO) and asking yourself what you need at each layer, and whether you want to build that layer yourself or get it from somewhere else. (You seem mostly interested in layers 6 and 7, but you also mentioned bits of the lower layers.)
Think also about whether this is a safety-critical application or part of a system with formal V&V. Really good, trustworthy communication systems are not easy to design, and an "underpowered" protocol can put a lot of error-recovery burden on the application code.
Finally, I would suggest looking at how other applications similar to yours do the job (check open source, read books, etc.). The U.S. Patent Office database can also be useful; one can get great ideas just from reading the descriptions of the communication problems others were trying to solve.
