I'm wondering whether we can build some sort of SSL server based on the following scheme in a Linux environment.
(1) The initial request arrives at the parent server process. After the SSL connection is established and the initial parsing of the request is done, the request (socket) is forwarded to a request-handling process for further processing.
(2) The request-handling process is started beforehand; in that sense, we won't use any fork-exec-pipe based scheme here.
(3) For communication between the parent server process and the request-handling process, an IPC channel is established so that the open socket descriptor can be copied from the parent server process to the request-handling process using the sendmsg() / SCM_RIGHTS technique.
(4) For SSL functionality, we will use OpenSSL (libssl).
(5) In the request-handling process, we will create a new SSL socket on top of the socket descriptor shared by the parent server process.
The point is that I don't want to waste any performance transferring data between the server and the request-handling process, and I don't want to spawn a request-handling process per request either. That's why I would like to spawn the request-handling processes in advance.
I'm not sure whether what I'm trying to build here makes sense, but I would appreciate any hints on whether the above approach is feasible.
It is not clear what exactly you are looking for, especially where you want to do the SSL encryption/decryption.
Do you want to do the encryption/decryption inside the request handler processes?
That seems the more likely interpretation. However, you talk about doing some request parsing in the master process. Is the data parsed in the master process already part of the SSL session? If so, you would have to do an SSL handshake (initialization and key exchange) in the master process in order to access the encrypted data. If you then passed the original socket to another process, that process would not have access to the SSL state of the parent, so it would not be able to continue decrypting where the parent left off. If it tried to reinitialize SSL on the socket as if it were a clean connection, the client would probably (correctly) treat an unsolicited handshake in the middle of a connection as a protocol error and terminate the connection. And if it didn't, that would be a security hole: the client could not tell whether it is your request-handling process forcing the re-initialization or an attacker who maliciously redirected its network traffic. It is generally not possible to hand an initialized SSL session to a different process without also transferring the complete internal state of OpenSSL (exchanged keys, sequence numbers, etc.), which would be hard if not impossible.
If you don't need to touch the SSL session in the parent process and you only parse some unencrypted data that comes before the actual SSL session starts (analogous to e.g. the STARTTLS command in IMAP), your idea will work without problems. Just read what you need to, up to the point where the SSL exchange should start, then pass the socket to the backend process using SCM_RIGHTS (see e.g. the example in cmsg(3)). There are also libraries that do the work for you, namely libancillary.
Or do you expect the master process to do SSL encryption/decryption for the request handler processes?
In that case it makes no sense to pass the original socket to the request-handler processes, as the only thing they would get from it is encrypted data. In that scenario you have to open a new connection to the backend process, as it will carry different (decrypted) data. The master process then reads encrypted data from the network socket, decrypts it and writes the result to the new socket for the request handler, and analogously in the other direction.
NB: If you just want your request-handling processes not to worry about SSL at all, I'd recommend letting them listen on the loopback interface and using something like stud to do the SSL/TLS dirty work.
In short, you have to choose one of the above. It's not possible to do both at the same time.
Related
I have a web server with an SSL certificate, and an unsecured device on a GSM/GPRS network (an Arduino MKR GSM 1400). The MKR GSM 1400 library does not provide an SSL server, only an SSL client. I would prefer to use a library if possible, as I don't want to write an SSL server class. I am considering writing my own protocol, but I'm familiar with HTTPS, and using it would make writing the interface on the web-server side easier.
The GSM Server only has an SSL Client
I am in control of both devices
Commands are delivered by a text string
Only the webserver has SSL
My C skills are decent at best
I need the SSL server to be able to send commands to the Arduino device, but I want these commands to be secured (the Arduino device opens and closes valves in a building).
The other option would maybe be some sort of PSK, but I wouldn't know where to start with that. Is there an easy function to encrypt and decrypt a "command string"? I also don't want "attackers" to be able to replay commands that I've sent before.
My basic question is: does this method provide a reasonable level of security? Or is there some way to do this that I'm not thinking of?
While in a perfect world there would be a better approach, you are currently working within the limits of what your tiny system provides.
In this situation I find your approach reasonable: the server simply tells the client, over an insecure transport, that there is some message awaiting (i.e. it sends a trigger message whose actual payload does not matter), and the client then retrieves the message over a transport which both protects it against sniffing and modification and also ensures that it actually came from the server (i.e. authentication).
Since the trigger message from the server contains no actual payload (the arrival of the message itself is the payload), an attacker cannot modify or fake the message to create insecure behavior in the client. The worst that could happen is that an attacker blocks the client from receiving the trigger messages, or fakes trigger messages even though no actual command is waiting on the server.
If the latter case is seen as a problem, it can be dealt with by a rate limit: if the server did not return any command even though the client received a trigger message, the client waits some minimum time before contacting the server again, whether or not another trigger message arrives. The former case, an attacker blocking messages from the server, is harder to deal with, since such an attacker can likely block any further communication between client and server as well - but that is a problem for any kind of client-server communication.
I'm writing a Node HTTP server that essentially only exists for NAT punchthrough. Its job is to facilitate a client sending a file, and another client receiving that file.
Edit: The clients are other Node processes, not browsers. We're using Websockets because some client locations won't allow non-HTTP/S port connections.
The overall process works like this:
All clients keep an open websocket connection.
The receiving client (Alice) tells the server via Websocket that it wants a file from another client (Bob).
The server generates a unique token for this transaction.
The server notifies Alice that it should download the file from /downloads?token=xxxx. Alice connects, and the connection is left open.
The server notifies Bob that it should upload the file to /uploads?token=xxxx. Bob connects and begins uploading the file, since Alice is already listening on the other side.
Once the transfer is complete, both connections are closed.
This is all accomplished by storing references to the HTTP req and res objects inside of a transfers object, indexed by the token. It all works great... as long as I'm not clustering the server.
In the past, when I've used clustering, I've converted the server to be stateless. However, that's not going to work here: I need the req and res objects kept in state so that I can pipe the data.
I know that I could just buffer the transferring data to disk, but I would rather avoid that if at all possible: part of the requirements is that if I buffer anything to disk I must encrypt it, and I'd rather avoid putting the additional load of encrypting/decrypting transient data on the server.
Does anyone have any suggestion on how I could implement this in a way that supports clustering, or a pointer to further research sources I could look at?
Thanks so much!
How do I find out, from a socket client program, that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. However, in my case I cannot pick a reliable timeout value to get around this: the recv would time out even when the server is up but the response simply takes longer than the timeout I set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands the other side doesn't have, you know didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it can now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay". The other side responds, "yes". You can use long polling where the other side delays responding for a second or so while the command is in process. This allows it to return the results immediately rather than having to wait a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
If the remote host goes down without sending you a TCP FIN packet, then you have no direct way to detect that. You can test this behaviour by firewalling a port after a connection has been established on it: your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keep-alives, which is meant to detect and close a dead TCP connection after a given timeout. If you can't specify a timeout for your application, then keep-alives aren't a reliable option. A last resort might be to use features of the application protocol (can you name it?); if that protocol does not support connection-handling features, you may have to invent something of your own on top of it.
I'm new to socket programming and I need to implement a UDP based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets to that client (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP). The client keeps collecting packets and tries to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends a Stop message back to the server, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking about the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, as described in the UNP book). One is for receiving control messages - in my context, the Request and Stop messages. The server maintains a flag (initially 1) for each request; when it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1; when it turns to 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads instead? (2) if I have to use multiple processes, how can the flag bit be made known to the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether any packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that were lost, and in the end you would have written code that does what TCP sockets already do. So I suggest starting with TCP.
The typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread handles communication with that particular client and then ends. You should keep a limit on the number of clients (threads) served simultaneously. Do not spawn a new process for each client - that is inefficient and unnecessary, as it gains you nothing you can't achieve with threads.
Thread programming requires carefulness, so do not cut corners. Otherwise you will have a hard time finding and diagnosing problems.
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and request retransmission of, any missing blocks at the end of the transfer.
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it is missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CAN bus network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.
Context: Linux (Ubuntu), C, ZeroMQ
I have a server which listens on an ipc:// SUB ZeroMQ socket (which is physically a Unix domain socket).
I have a client which should connect to the socket, publish its message and disconnect.
The problem: if the server is killed (or otherwise dies unnaturally), the socket file stays in place. If the client then attempts to connect to this stale socket, it blocks in zmq_term().
I need to prevent the client from blocking if the server is not there, but guarantee delivery if the server is alive but busy.
Assume that I cannot track the server's lifetime by some external magic (e.g. by checking a PID file).
Any hints?
A non-portable solution seems to be to read /proc/net/unix and search there for the socket name.
Without seeing your code, all of this is guesswork... that said...
If you have a PUB/SUB pair, the PUB side will hang around to make sure its message gets through. Perhaps you're not using the right type of ZeroMQ socket pair; it sounds more like you want a REQ/REP pair instead.
That way, once you connect from the client (the REQ side), you can use zmq_poll to determine whether the socket is available for writing. If it is, go ahead with your write; otherwise, shut down the client and handle the error condition (if it is an error in your system).
You could also try connecting to the socket first with your own native socket. If the connection succeeds, there's a good chance your publisher will work fine.
There is another solution: don't use ipc:// sockets. Instead use something like tcp://127.0.0.101:10001. On most UNIXes that will be almost as fast as IPC, because the OS recognizes that it is a local connection and short-circuits the full IP stack processing.