I need to set up a protocol for fast command/response interactions. My instinct tells me to just knock together a simple protocol with CRLF-separated ASCII strings, the way SMTP or POP3 work, and tunnel it through SSH/SSL if I need it secured.
While I could just do this, I'd prefer to build on an existing technology so people could use a friendly library rather than the socket library interface the OS gives them.
I need...
Commands and responses passing structured data back and forth. (XML, S expressions, don't care.)
The ability for the server to make unscheduled notifications to the client without being polled.
Any ideas please?
If you just want request/reply, HTTP is very simple: it's already a request/response protocol. The client and server sides are widely implemented in most languages, and scaling it up is well understood.
The easiest way to use it is to send commands to the server as POST requests and have the server send back the reply in the body of the response. You could also extend HTTP with your own verbs, but that would make it harder to take advantage of caching proxies and other infrastructure that understands HTTP.
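Here's a minimal sketch of that POST-as-command pattern using only Python's standard library. The /command dispatching, the "cmd" field, and the JSON shapes are illustrative assumptions, not part of any particular design:

```python
# Minimal command-over-POST sketch using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the structured command from the request body.
        length = int(self.headers.get("Content-Length", 0))
        command = json.loads(self.rfile.read(length))
        # Dispatch on a "cmd" field; a real server would validate input.
        reply = {"status": "ok", "echo": command.get("cmd")}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CommandHandler).serve_forever()
```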
If you want async notifications, then look at pub/sub protocols (Spread, XMPP, AMQP, JMS implementations, or commercial pub/sub message brokers like Tibco RV, Tibco EMS or WebSphere MQ). The protocol or implementation to pick depends on the reliability, latency and throughput needs of the system you're building. For example, is it OK for notifications to be dropped when the network is congested? What happens to notifications when a client is offline -- do they get discarded or queued up for when the client reconnects?
AMQP sounds promising. Alternatively, I think XMPP supports much of what you want, though with quite a bit of overhead.
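To make the AMQP option concrete, here is a small sketch using the pika client against a local broker. The broker address, queue name, and message contents are assumptions; both halves are shown in one script purely for illustration:

```python
# AMQP notification sketch using the pika client (pip install pika).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="notifications")

# Server side: push an unsolicited notification.
channel.basic_publish(exchange="", routing_key="notifications",
                      body=b"something happened")

# Client side: receive notifications without polling.
def on_message(ch, method, properties, body):
    print("notification:", body)

channel.basic_consume(queue="notifications",
                      on_message_callback=on_message, auto_ack=True)
channel.start_consuming()
```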
That said, depending on what you're trying to accomplish, a simple ad hoc protocol might be easier.
How about something like SNMP? I'm not sure if it fits exactly with the model your app uses, but it supports both async notify and pull (i.e., TRAP and GET).
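For a feel of the GET side, here is a sketch using the pysnmp library's high-level API (pysnmp 4.x style). The target host, community string, and OID (sysDescr.0) are assumptions:

```python
# SNMP GET sketch with pysnmp's high-level API (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),
    UdpTransportTarget(("192.0.2.1", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))  # sysDescr.0

if errorIndication:
    print(errorIndication)
else:
    for name, value in varBinds:
        print(name, "=", value)
```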
That's a great question with a huge number of variables to consider, and the question only mentions a few of them: packet format, asynchronous vs. synchronous messaging, and security. There are many, many others one could think about. I suggest going through a description of the 7-layer (OSI/ISO) protocol stack and asking yourself what you need at each layer, and whether you want to build that layer or get it from somewhere else. (You seem mostly interested in layers 6 and 7, but also mentioned bits of lower layers.)
Think also about whether this is a safety-critical application or part of a system with formal V&V. Really good, trustworthy communication systems are not easy to design, and an "underpowered" protocol can put a lot of coding burden on the application to do error recovery.
Finally, I would suggest looking at how other applications similar to yours do the job (check open source, read books, etc.). The U.S. Patent Office database is also useful; one can get great ideas just from reading descriptions of the communication problems others were trying to solve.
I have a process A, which receives HTTP requests from a process B and accordingly performs some action. These actions are sensitive, so I have to ensure that A rejects any requests that come from processes other than B.
Is there any way at all to do this? One way I can think of is to use auth tokens the same way they're used for typical secure server-client communication. The problem is that traffic on the loopback interface isn't secure, and someone could read the token.
I don't necessarily have to use HTTP for passing messages, so perhaps there is some OS-specific function I could use?
Designing secure IPC is an interesting area of thought - I'm curious about why this is important to you. I would hope that if process A is handling any sensitive information, you have control over the machine on which it's running. If it's not, you might owe it to whoever that sensitive data is for/about to take a second to rethink your environment.
I think this answer and the associated post on the infosec StackExchange give a lot of good food for thought, which I bet is also applicable to your situation.
I would also recommend using UNIX sockets as your base communication layer if you're dedicated to securing your same-machine IPC. But again, if you're in a situation where you really have to worry about this, you might want to focus on getting out of that situation instead.
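If you do go the UNIX-socket route, the kernel can tell you who is on the other end. Here is a Linux-specific sketch using SO_PEERCRED; the socket path and allowed UID are assumptions:

```python
# Unix-domain-socket server that checks the connecting process's
# credentials via SO_PEERCRED (Linux-specific).
import os
import socket
import struct

SOCK_PATH = "/run/myapp/control.sock"  # hypothetical path
ALLOWED_UID = 1000                     # e.g. the UID process B runs as

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
server.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o600)  # filesystem permissions add a second gate
server.listen(1)

while True:
    conn, _ = server.accept()
    # struct ucred on Linux is three ints: pid, uid, gid.
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    if uid != ALLOWED_UID:
        conn.close()
        continue
    conn.sendall(b"authenticated\n")
    conn.close()
```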
I'm used to handling user-uploaded files on the same server, and to transferring my own files to a remote server, but not to transferring user-uploaded files to a remote server.
I'm looking for the best (industry) practice for selecting a transfer protocol in this regard.
My application runs Django on a Linux server and the files live on a Windows server.
Does it not matter which protocol I choose as long as it's secure (FTPS, SFTP, HTTPS)? Or is one better than the other in terms of performance/security specifically in regards to user-uploaded files?
Please do not link to questions that explain the differences of protocols, I am asking specifically in the context of user-uploaded files.
As long as you choose a standard protocol that provides (mutual) authentication, encryption and message authentication, there is not much difference security-wise. If all of this is provided by a TLS layer in your chosen protocol (as in all of your examples), you can't make a big mistake at the design level (but implementation is key: many security bugs are implementation bugs, not design flaws). Such protocols might differ in the algorithms they support for different purposes, though.
Performance-wise there can be quite significant differences; it depends on what you want to optimize for. With HTTPS you would most probably bear the overhead of the whole connection setup, authentication and all, for every transmitted file. (You can actually keep an HTTPS connection open, but that would be quite a custom implementation for such file uploads.) With FTPS/SFTP you can keep a connection open and transmit as many files as you want, but you would probably need more complex error-handling logic (sometimes connections terminate without the underlying sockets knowing about it for a while, and so on). So in short, I think HTTPS would be more resilient, but secure FTP would be more performant for many small files.
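As a sketch of the "one connection, many files" point, here is how that might look with the paramiko SFTP library. The host, credentials, and paths are assumptions:

```python
# Reusing one SFTP connection for many user-uploaded files,
# using paramiko (pip install paramiko).
import paramiko

transport = paramiko.Transport(("files.example.com", 22))
transport.connect(username="uploader", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

try:
    # One connection, many transfers -- this is where SFTP beats
    # per-request HTTPS for lots of small files.
    for name in ["a.pdf", "b.jpg", "c.csv"]:
        sftp.put(f"/srv/uploads/{name}", f"/inbound/{name}")
finally:
    sftp.close()
    transport.close()
```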
It's also an architecture question: with HTTPS you can implement all of this in your application code, while something like FTP means depending on external components, which might be important from an operational point of view (think about how this will actually be deployed, and whether there is already a devops function to manage proper operations).
Ultimately it's just a design decision you have to make. The above are a few things that came to mind without knowing all the circumstances, and not at all a comprehensive list of things to consider.
I'm looking to write a unified email and messaging program. Supporting IMAP, POP, and SMTP are all pretty easy - the protocols are well documented and easy to come by.
Exchange has a SOAP API documented here, whereby you can write an Exchange client which talks with Exchange servers.
I'm looking to find out what protocol IBM (Lotus) Notes uses and how I can go about writing a standalone application which can send and receive mail. (Standalone is a key part of this - I've seen various things about automating the existing client, but I'm looking to write a new client, so I need to know what protocols it uses.)
Language is unimportant to me at this time. I'm leaning towards Python for the project, but I'm still at an exploratory stage where I'm trying to determine what frameworks exist in any language to help me write this.
That's a pretty interesting topic! There are two ways I can think of that provide mail-oriented abstractions, and two that allow you to access mail files as databases directly.
To start out with, and this is very likely the expedient route to take, Domino supports IMAP. It's far from perfect and it's not likely to improve, but it does more or less work for mail access. Not every server has it enabled by default, but it's not terribly difficult or unusual for an administrator to do so.
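If IMAP is enabled on the Domino server, plain Python gets you surprisingly far. A minimal sketch with the standard imaplib module, where the host and credentials are assumptions:

```python
# Reading mail from a Domino server over IMAP with the stdlib.
import imaplib

conn = imaplib.IMAP4_SSL("domino.example.com")
conn.login("user", "password")
conn.select("INBOX")

# Fetch the headers of every message in the inbox.
status, data = conn.search(None, "ALL")
for num in data[0].split():
    status, msg = conn.fetch(num, "(BODY.PEEK[HEADER])")
    print(msg[0][1].decode(errors="replace"))

conn.logout()
```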
Recently, the Extension Library has added a JSON-based mail service that purports to provide a pretty friendly API for many operations, but is not complete - for example, it doesn't seem to cover a user's custom views or folders.
Depending on the depth of the project, there are also routes for accessing the server using Domino's database API, which would be the most flexible but would involve far more hurdles.
The core protocol is NRPC, which, to my knowledge, is only implemented in the core Notes library. As Stan said, it's heavily tied to the presence of an ID file (server or user) and uses that for its encryption. With some setup, you could have that library and ID present and then use the C functions and structs on a platform it supports. This route would give you the most functionality (there are a number of C-level functions to assist with converting between Notes's document representation and MIME).
Alternatively, there is a remote-access protocol called DIIOP that can be used to access a remote Domino server with username/password credentials via Java objects. This is not enabled for every server, but it's not terribly uncommon, and isn't that hard to enable. You wouldn't have access to all of the C API's functionality for edge cases, but this would cover a lot of ground.
If you want to work in Python, and you are willing to limit yourself to just the most recent versions of the Lotus Domino server, then I think you should consider using the REST API known as the Domino Data Service. Here's some on-line documentation.
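As a rough illustration, accessing that REST API from Python could look like the following. The host, database path, and credentials are assumptions, and the /api/data/documents endpoint pattern should be checked against the documentation for your server version:

```python
# Sketch of listing documents via the Domino Data Service REST API,
# using the requests library (pip install requests).
import requests

BASE = "https://domino.example.com/mail/user.nsf"  # hypothetical database
resp = requests.get(f"{BASE}/api/data/documents",
                    auth=("user", "password"))
resp.raise_for_status()

for doc in resp.json():
    # Each entry carries metadata plus an @href to the full document.
    print(doc.get("@unid"), doc.get("@href"))
```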
How can a server, i.e. a remote host acting as a central service for multiple clients, detect malicious or invalid clients, akin to Blizzard's Warden? In some way, these kinds of software ask a client for specific information every once in a while, which cannot be easily faked from a non-official client.
What I'm wondering is, how can such a mechanism be implemented so that it's hard or impossible to reverse engineer from the client side? Is there any such technique for open source client software (closed source server)?
Short answer: you can't. The client is fundamentally untrustworthy. Blizzard (and other purveyors of anti-cheat software) are engaged in a constant arms race with the cheaters. You can't just implement detection once and be done with it; you have to constantly monitor your product (either heuristically or via player reports) for cheating, then figure out how to programmatically evaluate whether someone is cheating.
The longer answer is that you keep your "secret sauce" detection off the client; the client instead just collects information, which it forwards to a trusted machine for analysis. This can make it harder for cheaters to avoid detection, since they only know what information is being collected, not what is being done with it. Eventually though, they'll figure out how to spoof that information, and your anti-cheat mechanism will need to then deal with that problem.
What you can do is implement heuristics in your server code to detect players who are sending inputs that should not otherwise be possible, and then flag those accounts for review or ban. This does nothing to detect malicious software on a client, but it can detect the effects of that malicious software. So while you may not be able to pinpoint what is sending those invalid inputs, you can still act on the account.
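A toy example of what such a heuristic can look like, in Python. The speed limit, the data shapes, and the flag-for-review policy are all assumptions; real checks are entirely domain-specific:

```python
# Illustrative server-side heuristic: flag accounts whose reported
# movement exceeds what the game rules allow.
import math

MAX_SPEED = 10.0  # game units per second, an assumed rule

def plausible_move(prev_pos, new_pos, dt):
    """Return True if the client could legally have moved this far."""
    distance = math.dist(prev_pos, new_pos)
    return distance <= MAX_SPEED * dt

def check_input(account, prev_pos, new_pos, dt, flagged):
    if not plausible_move(prev_pos, new_pos, dt):
        # Don't auto-ban on one sample; queue the account for review.
        flagged.add(account)
```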
More specifically to your question, though, it's impossible to give you examples tailored to your application, because you have to define what constitutes "cheating" in its context, and then devise methods for detecting it. This is a very domain-specific problem, and to make it more complex, you're unlikely to find open-source implementations of such systems, because they necessarily rely on obscurity to detect cheaters.
My requirement is to make RPC calls between different processes. By nature these calls are 1:1, meaning a single sender and a single receiver. I am architecturally restricted to using only Unix domain sockets for this purpose.
I wanted to use 'rpcgen' towards this end. But the problem is that rpcgen works over TCP/UDP as transport mechanisms, and what I want is to run the calls over domain sockets. Since rpcgen doesn't support domain sockets, I figured I could stub out the transport routines with my own code after generation, but that does not look easy at all.
I explored an option where the generated XDR stream is written to a local buffer, which can then be transported the way I want, i.e. over domain sockets, and retrieved at the remote end. This might involve another copy of the data, but performance is not my concern at this point.
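For what it's worth, that buffer approach is easy to prototype in Python: pack the call with XDR, then move the bytes over an AF_UNIX socket yourself. This sketch uses the standard xdrlib module (deprecated and removed in Python 3.13, so it's shown only to illustrate the idea); the socket path and message contents are assumptions:

```python
# Pack arguments with XDR, then carry the buffer over a domain socket.
import socket
import xdrlib

# Sender: serialize the call into a local buffer.
packer = xdrlib.Packer()
packer.pack_uint(42)              # e.g. a procedure number
packer.pack_string(b"payload")    # e.g. an argument
payload = packer.get_buffer()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect("/tmp/rpc.sock")   # hypothetical path
client.sendall(payload)
client.close()

# Receiver: retrieve the buffer and unpack it at the remote end.
# unpacker = xdrlib.Unpacker(received_bytes)
# proc = unpacker.unpack_uint(); arg = unpacker.unpack_string()
```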
Is there a ready-made solution for this kind of problem? What are my best options here?
Thanks
Sudarshan