Long running Asyncio server, reload certificate - python-3.x

I have a custom TCP server implemented with Python 3.5's asyncio library. I also use Let's Encrypt certificates to encrypt communication with the server over SSL. Let's Encrypt only issues certificates valid for 90 days, and there is a good chance my server will not be restarted within that period.
I am generating the SSLContext like this:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ssl_context.load_cert_chain(certfile='path/to/cert', keyfile='path/to/key')
And the server like this:
server = asyncio.streams.start_server(self._accept_client, ip, port, loop=self.loop, ssl=ssl_context)
When the certificate expires, existing connections continue to function but new connections will (correctly) fail. Apparently the SSLContext or the server itself keeps the certificate and key files loaded in memory because updating the files on disk does not solve the problem.
I've read the Python documentation quite a bit and found no solution mentioned. One idea I've had is to call ssl_context.load_cert_chain() on a regular interval, in the hope that this will update the cert and key. I can't look at the source of load_cert_chain to determine its behavior, as it is apparently written in C, and the documentation doesn't specify what happens when this function is called after the context has been passed to the server.
Is there a way to update the certificate and keyfile loaded in the SSLContext at runtime without completely stopping and restarting the server?

I am fairly sure that keeping a copy of the context you use and calling the method to reload the cert chain will work fine. At least at the C OpenSSL API level that's a reasonable thing to do, and it doesn't look like Python does anything to break it.
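A minimal sketch of that approach (the file paths are the placeholders from the question): keep a reference to the SSLContext you passed to the server and re-run load_cert_chain() on it from a periodic task.

```python
import asyncio
import ssl

CERT_FILE = "path/to/cert"  # placeholder paths from the question
KEY_FILE = "path/to/key"

def make_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    return ctx

async def reload_certs(ctx, interval=24 * 60 * 60):
    """Re-read the certificate and key into the same context once a day.

    New handshakes pick up the renewed certificate; connections that are
    already established are unaffected.
    """
    while True:
        await asyncio.sleep(interval)
        ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
```

You would start it alongside the server with something like `loop.create_task(reload_certs(ssl_context))`.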

As far as I know, there is no API in asyncio.Server to update the SSL certificate while the server is live, so you've got two solutions:
You can use the SNI callback to update the certificate during the SSL handshake, but it will only work with clients supporting SNI.
This means that for each client supporting this feature, a callback of your choice will be called and may return an SSLContext object that will be used to establish the connection with that client. You can make this callback read the certificate/key files on each call, or reload them when needed via a trigger of your choice.
To set up the callback, see: https://docs.python.org/3/library/ssl.html#ssl.SSLContext.set_servername_callback
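A rough sketch of what such a callback could look like (the file paths are placeholders; building a brand-new context on every handshake is the simplest variant, and you may want to cache it instead):

```python
import ssl

CERT_FILE = "path/to/cert"  # placeholder paths; adjust to your deployment
KEY_FILE = "path/to/key"

def fresh_context():
    """Build a context that re-reads the (possibly renewed) files from disk."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    return ctx

def sni_callback(ssl_socket, server_name, initial_context):
    # Runs during the handshake for every client that sends SNI; assigning
    # a new context here makes this handshake use the renewed certificate.
    ssl_socket.context = fresh_context()

def build_server_context():
    """Context to pass to start_server(); clients without SNI get the
    certificate that was on disk when this was called."""
    ctx = fresh_context()
    ctx.set_servername_callback(sni_callback)
    return ctx
```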
The other solution is more complicated to get right with the current state of the asyncio API: it might break if the internal implementation of asyncio changes, and I haven't checked how it would work with a proactor loop. However, it will work with any kind of client.
You can create a new server with the new context, but using the socket of the first server:
# This will create a new server on the current socket, and set up
# what's required to use the new SSL context.
new_server = await asyncio.start_server(callback, sock=old_server.sockets[0],
                                        ssl=new_context, loop=loop)
# Ensure that when cleaning the old_server, it doesn't try to close the socket we kept
old_server.sockets = []
old_server.close()
await old_server.wait_closed()
At one point during the creation of the new server, asyncio will start answering new connections with the new SSL context. Note that once a connection has been accepted and passed to your callback (_accept_client), it is in practice independent of the server that accepted it, so this will leave any existing connection untouched.

Related

How to build a fully secure login: do I need SSL?

context
I'm building a project to better understand login/security, and SSL is really tripping me up.
So far I have a front end (Vue.js), an API (Node/Express), and a DB (PostgreSQL).
This is the general auth flow:
When a user logs in they send an email and password (via Axios) to the API.
The API queries the database for a matching username and then uses bcrypt.compare to check the password against the hash stored in the database.
If this is successful, the API signs a JWT and sends it to the client.
The client then saves the JWT in local storage and uses it for future queries.
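For illustration only, the flow above can be sketched with Python's standard library (pbkdf2_hmac standing in for bcrypt, and a hand-rolled HMAC-signed token standing in for a real JWT; in production you would use the actual bcrypt and JWT libraries, not this):

```python
import base64
import hashlib
import hmac
import json
import os

SECRET = os.urandom(32)  # stand-in for the server's JWT signing secret

def hash_password(password, salt):
    # Stand-in for bcrypt: a slow, salted key-derivation function.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_password(password, salt, stored):
    # Constant-time comparison, like bcrypt.compare.
    return hmac.compare_digest(hash_password(password, salt), stored)

def sign_token(payload):
    # Minimal JWT-like token: base64(payload) + "." + base64(signature).
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def verify_token(token):
    # Returns the payload if the signature checks out, else None.
    body, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```

The important properties are the same as in the real flow: the password is only ever compared against a salted slow hash, and the token is useless to tamper with because only the server knows the signing secret.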
question(s)
So far I think that all of the above is best practice EXCEPT for the first step. From the reading I've done so far, the client needs SSL to securely send a password to the API. Is this the case? Does my server also need SSL, or just the client/host?
I'm ultimately going to try to use Firebase hosting (which is automatically SSL) for the frontend, and Heroku for the API and database. If there are more secure options, I'm open to suggestions.
Also, in general, I'm new to all of this security stuff - If I'm missing anything or if something else isn't secure, I would love the advice!
SSL creates a secure connection between two points. In our scenario between the client, and the server. After some initial negotiation, the client encrypts its messages in a way that only the server can decrypt. And the server does the same with its answers, or its own questions. By using SSL between these two end points, nobody but the client and server can read the messages.
This is important, since a message sent between client and server is actually seen by many more machines/processes in between. Dozens of other processes can thus see the message, and if the message is not encrypted that means all those processes can know exactly what's in the message. When the client and server communicate over SSL, the other processes still see the messages, but they can't decrypt them.
To your concrete questions: the client opens a secure connection to the server. Both the client and the server need to support this. If you write a custom server, you'll need to ensure it has an SSL certificate. A very common place to get these for free these days is letsencrypt.org.

Node TLS Ticket Reestablish Connection

I am experimenting in Node.js, trying to use a TLS ticket to resume a TLS session. I will make the client save off the TLS ticket after a successful connection; after shutting down, I would like it to use the same TLS ticket to re-establish the TLS connection.
I have found the Node tls method tlsSocket.getTLSTicket(), however I am not sure how to use it to re-establish a connection, because the docs say it is "Useful only for debugging".
What I want is the ability to get the TLS ticket from the client and manually validate it against a TLS ticket key on a server in Node.js.
Thanks
After spending way more time than I should have, these are what I found:
There doesn't seem to be any API or exposed JavaScript function that would allow you to validate TLS tickets.
Reusing sessions via session tickets works out of the box; the implementation is completely transparent to your Node.js application:
https://strongloop.com/strongblog/improve-the-performance-of-the-node-js-https-server/
However, as you will see in the link, it is possible to manually handle sessions with a session store (which obviously defeats the purpose of TLS tickets).
Node uses the following function provided by OpenSSL to do Ticket processing.
SSL_CTX_set_tlsext_ticket_key_cb - set a callback for session ticket processing
Full Details: https://github.com/joyent/node/blob/d13d7f74d794340ac5e126cfb4ce507fe0f803d5/deps/openssl/openssl/doc/ssl/SSL_CTX_set_tlsext_ticket_key_cb.pod
It's done from here: https://github.com/joyent/node/blob/master/src/node_crypto.cc
Node does not emit resumeSession when it receives a valid TLS ticket.
The following GitHub issue describes why and is an easy reference to Node's TLS ticket implementation details.
https://github.com/joyent/node/issues/5872
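For comparison, the same transparent resumption behaviour is visible from Python's ssl module, which wraps the same OpenSSL machinery Node uses; a hypothetical client-side sketch (function name is mine, and whether a ticket or a session ID is used is decided inside OpenSSL):

```python
import socket
import ssl

def fetch_twice(host, port=443):
    """Connect twice, reusing the first connection's TLS session.

    The second handshake is abbreviated if the server accepts the
    saved session; the application never touches the ticket bytes.
    """
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host) as first:
        session = first.session  # opaque SSLSession object

    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host,
                         session=session) as second:
        return second.session_reused  # True if the server honoured resumption
```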

Is it worth creating a new Node app to handle socket.io?

I want to add some sockets with Node.js and Socket.io to an existing project.
I already have 2 servers:
A RESTful API web service, to store and manage my data.
A public web service that returns HTML and assets (JS, CSS, images, ...)
On the first try, I created my socket server on the public one. But I think it would be better to create another one to handle only socket queries.
What do you think? Is it a good idea, or just useless and likely to add more problems than it solves (maybe duplicated internal libs, ...)?
Also, I'm using a token to communicate between the Public and API servers; do I have to create another one for communication between the socket server and the API, or can I use the same one?
------[EDIT]------
As nobody understood me well, I have created a schema of the infrastructure I was thinking about.
Is it a good way to proceed?
Do the Public server and Socket server have to be the same, or can they be separate?
Do I have to create a socket connection between the API and the Socket server for each connected client?
Thank you !
Thanks for explaining better.
First of all, while this seems reasonable, this way of using Socket.io is not the most common one. The biggest advantage of using Socket.io is that it keeps a channel open for 2-way communication. The main advantage of this is that the server itself can send messages to the client without the latter having to poll periodically.
Think, for example, of a mail client. Without sockets, the browser would have to poll periodically to check for new mail. With an open socket connection, instead, as soon as a new mail comes the server notifies the client immediately.
In your case, the benefits could be limited, and I'm not sure the additional complexity of a Socket.io server (and cost!) would really be worth the modest speed improvement on REST requests. However, at the end it's up to you.
In answer to your points
See above
If the "public server" is not written in Node.js, they can't be the same application. Whether they reside on the same server is up to you and your budget. Ideally they should be separate for bigger workloads.
If you just want the socket server to act as a real-time proxy, then yes, you'll have to create a socket connection for each request. How that will work is:
The client requests a resource from the Socket.io server.
The Socket.io server makes a normal HTTP request to the API server (e.g. using request)
The response is returned to the client over the socket connection
The three-step workflow above is the reason why you should expect only a moderate performance improvement. Indeed, you'll get somewhat better latency, but most of the overhead of making an HTTP request is still there!
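The three-step proxy loop above can be sketched as follows (Python asyncio is used here as a neutral illustration of the pattern, and fetch_from_api is a hypothetical stand-in for the real HTTP call to the API server):

```python
import asyncio

async def fetch_from_api(path):
    # Stand-in for the plain-HTTP call the socket server would make
    # to the API server (step 2); replace with a real HTTP client.
    return b"response for " + path

async def handle_client(reader, writer):
    # Step 1: the client sends the resource it wants, one request per
    # line, over the already-open socket connection.
    while True:
        line = await reader.readline()
        if not line:
            break
        # Step 2: forward the request to the API server.
        body = await fetch_from_api(line.strip())
        # Step 3: return the response to the client over the same socket.
        writer.write(body + b"\n")
        await writer.drain()
    writer.close()

async def main(port=8888):
    server = await asyncio.start_server(handle_client, "127.0.0.1", port)
    async with server:
        await server.serve_forever()
```

The socket stays open between requests, which is where the latency win comes from; the HTTP hop to the API in step 2 is unchanged.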

Implementing SSL server using libssl and sendmsg() SCM_RIGHTS

I'm now wondering whether we can make some sort of SSL server based on the following policies/scheme under Linux environment.
(1) The initial request should come in to the parent server process. After establishing the SSL connection and handling initial parsing of the request, the request (socket) will be forwarded to a request-handling process for further processing.
(2) The request-handling process will be something which should be running beforehand. We won't use any fork-exec-pipe based scheme here in this sense.
(3) As for the communication between the parent server process and the request-handling process, some IPC has been established in order to pass the open socket descriptor from the parent server process to the request-handling process, using the sendmsg() SCM_RIGHTS technique.
(4) In terms of SSL functionality, we are supposed to use OpenSSL (libssl).
(5) In the request-handling process, we are supposed to create new SSL socket by making use of the shared socket descriptor from the parent server process.
The point is that I don't want to waste any performance transferring data between the server and the request-handling process. I don't want to spawn a request-handling process on a per-request basis, either, so I would like to spawn the request-handling processes in advance.
Although I'm not really sure whether what I'm trying to build here makes sense to you, I would appreciate any hint on whether the above approach is feasible.
It is not clear what exactly you are looking for, especially where you want to do the SSL encryption/decryption.
Do you want to do the encryption/decryption inside the request handler processes?
That seems the more likely interpretation. However, you talk about doing some request parsing in the master process. Is the data parsed in the master process already part of the SSL session? If so, you would have to do an SSL handshake (initialization and key exchange) in the master process in order to access the encrypted data.
If you then passed the original socket to another process, it wouldn't have access to the SSL state of the parent process, so it wouldn't be able to continue decrypting where the parent left off. If it tried to reinitialize SSL on the socket as if it were a clean connection, the client would probably (correctly) treat an unsolicited handshake in the middle of a connection as a protocol error and terminate the connection. If it didn't, that would present a security hole, since it could be an attacker who maliciously redirected the client's network traffic, rather than your request-handling process, forcing the re-initialization.
It's generally not possible to pass initialized SSL sessions to different processes without also informing them of the complete internal state of OpenSSL (exchanged keys, sequence numbers, etc.), which would be hard if not impossible.
If you don't need to touch the SSL session in the parent process and you parse just some unencrypted data that come before the actual SSL session starts (analogous to e.g. the STARTTLS command in IMAP), your idea will work without problems. Just read what you need to, up to the point where the SSL exchange should start, then pass the socket to the backend process using SCM_RIGHTS (see e.g. the example in cmsg(3) or this site). There are also libraries that do the work for you, namely libancillary.
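For reference, the SCM_RIGHTS passing itself is short; here is a Python sketch following the recipe in the socket.recvmsg() documentation (the function names are mine):

```python
import array
import socket

def send_fd(channel, fd, payload=b"fd"):
    """Pass an open descriptor to another process over a Unix-domain socket."""
    channel.sendmsg([payload],
                    [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                      array.array("i", [fd]))])

def recv_fd(channel):
    """Receive one descriptor; the kernel installs a duplicate of it
    into the receiving process's descriptor table."""
    fds = array.array("i")
    msg, ancdata, flags, addr = channel.recvmsg(1024,
                                                socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            # Truncate any partial trailing int, per the stdlib docs recipe.
            fds.frombytes(data[:len(data) - (len(data) % fds.itemsize)])
    return msg, (fds[0] if fds else None)
```

The master would call send_fd over the pre-established IPC socket right after the unencrypted pre-TLS parsing, and the backend would call recv_fd and start the SSL handshake on the received descriptor.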
Or do you expect the master process to do SSL encryption/decryption for the request handler processes?
In that case it makes no sense to pass the original socket to the request-handler processes, as the only thing they would get from it is encrypted data. In this scenario you have to open a new connection to the backend process, as it will carry different (decrypted) data. The master process will then read encrypted data from the network socket, decrypt it, and write the result to the new socket for the request handler; and analogously in the other direction.
NB: If you just want your request-handling processes not to worry about SSL at all, I'd recommend letting them listen on the loopback interface and using something like stud to do the SSL/TLS dirty work.
In short, you have to choose one of the above. It's not possible to do both at the same time.

TCP secured connection - only via my client

So I have this TCP connection between my server and client, and anyone can connect to my server. But I want to make sure that the client is really using MY client application and not just faking messages with a fake TCP client. What would be the ways to do that, i.e. check that the connection really comes from my game client?
Thanks!
EDIT
If I'm gonna use TLS, can I solve that problem?
There will probably not be a complete solution to your problem, since whatever you do, the other party might always take your program, run it in a monitored environment, manipulate the runtime data and let it use its "secure" network protocol. Since the client application is in uncontrollable hands, you can never be sure that it is your own program.
Baby example: My application runs your application and plays back the data to your server, and forwards your response back to the application. How can you tell?
That said, it might be a very promising "99%" approach to use SSL and hardcode the client's private key into the application -- with some trickery you can try and make it hard to find (e.g. see how Skype does it). If you then also build integrity checks into your program that figure out whether anyone is manipulating the memory or debugging into your program, you can try and make it a bit harder for a potential adversary. (But note that you will always have to ship the private key with your application, so it isn't really safe from discovery.)
Others have suggested useful answers to your question, but I'm going to suggest another approach. Re-examine your requirements.
Ask yourself why you want to know the identity of the client program. Is it so that you can trust your client program more than you trust 3rd-party client programs?
If you need to trust the identity or integrity of software that you have already shipped to your customers, I claim your security model is broken. Once the software runs on a client's PC, you should assume it is evil, even if you originally wrote it.
Any status, any command, any data whatsoever that comes from the network must be checked before it is relied upon.
My default response is to use a challenge/response authentication.
After connection, the server sends a random number (a challenge) to the client
The client then computes a response message, using a hash/key/..., and returns it to the server
If the response matches the server's own computation, your chances of authenticity are better. Note, though, that a reverse engineer of your client will leave this method open to fraud.
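A sketch of that challenge/response exchange using HMAC-SHA256 (the key value is hypothetical; as noted, the key has to ship inside the client and is therefore extractable by a determined reverse engineer):

```python
import hashlib
import hmac
import os

# Hypothetical key baked into the client binary; extracting it is
# exactly the reverse-engineering attack the caveat above describes.
SHARED_KEY = b"embedded-client-key"

def make_challenge():
    # Server side: a fresh random nonce per connection, so a captured
    # response can't simply be replayed later.
    return os.urandom(16)

def client_response(challenge, key=SHARED_KEY):
    # Client side: prove knowledge of the key without transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_response(challenge, response, key=SHARED_KEY):
    # Server side: recompute and compare in constant time.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```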
You could use a public/private key pair in order to verify that you are who you say you are.
http://en.wikipedia.org/wiki/RSA#Signing_messages
