We have a web service that acts as a gateway between our clients and another service. The clients send messages to, and receive unsolicited messages from, the third-party service. The client's server opens a channel to our web server via a secure socket in order to receive those incoming messages (rather than having to poll our server every few minutes).
My question is: is it safe to leave this channel open indefinitely, or should we periodically close and re-open it to obtain new credentials (session keys)? If the latter, how often (hourly, daily, weekly) would be considered "best practice"? I've found a lot of information on secure communications, but nothing to answer this specific question.
Thanks
SSL/TLS (which I'm going to assume you're talking about here) does NOT automatically refresh or renegotiate the session keys in use. There is a renegotiation procedure built into the protocol that allows the session keys to be changed within an active session, but that procedure was found to have a significant vulnerability a few years back (CVE-2009-3555), and the renegotiation process was changed in RFC 5746 to resolve the problem. If you do want to renegotiate the session keys for SSL/TLS, make sure you're doing it in the manner described in that RFC.
That does not, however, answer your original question of whether the session keys should be changed. The answer is: it depends on your security requirements. A reasonable guideline is that the more traffic an attacker can observe under a single set of keys, the more material they have to work with (how practical an actual attack is varies wildly), so changing your keys every so often is a good thing to do. If you're passing a small amount of not-very-sensitive data over the secured connection, you can get away with rotating keys on a not-so-regular basis; indeed, your SSL/TLS session will probably get broken and re-established due to timeouts on one of the two parties fairly regularly anyway. If you've got a very sensitive dataset and you're sending a lot of data, then I'd suggest rotating the keys every day or so to mitigate this risk (just do it in a secure manner).
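If you go the "close and reopen periodically" route, here is a minimal sketch of the idea, assuming a plain TLS socket client in Python; the host, port, rotation interval and the `handle_message` handler are placeholders, not anything from your setup:

```python
import socket
import ssl
import time

HOST, PORT = "gateway.example.com", 8443   # hypothetical endpoint
ROTATE_AFTER = 24 * 60 * 60                # e.g. rotate keys daily

context = ssl.create_default_context()     # verifies the server certificate

def handle_message(data: bytes) -> None:
    # Hypothetical application handler.
    print("received", len(data), "bytes")

def open_channel() -> ssl.SSLSocket:
    """Open a fresh TLS connection; the handshake negotiates new session keys."""
    raw = socket.create_connection((HOST, PORT))
    return context.wrap_socket(raw, server_hostname=HOST)

def receive_forever() -> None:
    while True:
        conn = open_channel()
        conn.settimeout(60)                # wake up regularly to check the deadline
        deadline = time.monotonic() + ROTATE_AFTER
        try:
            while time.monotonic() < deadline:
                try:
                    data = conn.recv(4096)
                except socket.timeout:
                    continue               # no message yet; keep waiting
                if not data:
                    break                  # server closed the channel; reconnect
                handle_message(data)
        finally:
            conn.close()                   # reopening gives a fresh key exchange
```

The reconnect itself is cheap relative to a daily interval, and it sidesteps the renegotiation pitfalls mentioned above because every cycle is just a normal, full handshake.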
Related
Why am I using websockets?
I'm working on routing all my HTTPS requests via a WebSocket. My app has a chat feature, so I need to keep the WebSocket open while the app is running anyway, so why not just route all the requests through it?
My problem.
This turned out to be easier said than done. Should I use the same access token and refresh token to verify client authentication? Or should I just verify it when the connection opens and then trust it for as long as it's open? So here are my questions:
Is wss (WebSocket Secure) enough to stop man-in-the-middle attacks?
Should I generate a ticket-style mechanism for every WebSocket connection that lasts 2-10 minutes, then disconnect and ask the client to reconnect?
Or should I have an access token with every request from the client?
How do I make sure that when the server sends the data, it is going to the right client?
Should I just end-to-end encrypt all the payloads to avoid a lot of problems?
Or should I just verify it when the connection opens and then trust it for as long as it's open?
That is fine as long as the connection is over a trusted channel, e.g. ssl/tls.
Is wss (WebSocket Secure) enough to stop man-in-the-middle attacks?
Yes, provided certificates are validated properly; wss is simply ws over ssl/tls.
Should I generate a ticket-style mechanism for every WebSocket connection that lasts 2-10 minutes, then disconnect and ask the client to reconnect?
I'm not sure why you would do that. On the contrary, with a chat-like app you want to keep the connection open as long as possible. I do advise implementing ping calls on the client side and timeouts on the server side; with such an approach you can require action from the client every, say, 30 seconds.
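To illustrate the server-side timeout half of that advice, here is a minimal sketch assuming the third-party `websockets` asyncio package; the port, the 30-second window and the `process` handler are placeholders (recent versions of the library also offer built-in `ping_interval`/`ping_timeout` keepalives):

```python
import asyncio
import websockets  # third-party "websockets" package, assumed here

IDLE_TIMEOUT = 30  # seconds; the client must send something at least this often

async def process(message) -> None:
    # Hypothetical application logic.
    print("got:", message)

async def handler(websocket):
    try:
        while True:
            try:
                message = await asyncio.wait_for(websocket.recv(),
                                                 timeout=IDLE_TIMEOUT)
            except asyncio.TimeoutError:
                # No activity within the window: drop the connection and let
                # the client reconnect when it has something to say.
                await websocket.close(code=1001, reason="idle timeout")
                return
            await process(message)
    except websockets.ConnectionClosed:
        return

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()             # run forever

if __name__ == "__main__":
    asyncio.run(main())
```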
Or should I have an access token with every request from the client?
Not necessary. With ssl/tls you can authenticate the entire connection once and simply remember on the server side that it is authenticated. Tokens are used with classical HTTP because they make it easier to scale such an app horizontally: it doesn't matter which server a request goes to, you can even switch servers between calls, and that won't affect auth. But with a chat-like app (or any app that requires bidirectional communication) the connection has to be persistent to begin with, and so per-request tokens introduce unnecessary overhead.
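A sketch of that authenticate-once pattern, again assuming the `websockets` package; the token table, close code and echo logic are purely illustrative:

```python
import asyncio
import websockets  # third-party "websockets" package, assumed here

# Illustrative token table; a real server would check a session store or JWT.
VALID_TOKENS = {"example-token": "alice"}

async def handler(websocket):
    # Authenticate exactly once: the first frame is expected to carry the token.
    token = await websocket.recv()
    user = VALID_TOKENS.get(token)
    if user is None:
        await websocket.close(code=4401, reason="authentication failed")
        return
    # From here on, the connection itself identifies the user; no per-message
    # token is needed because TLS (wss) protects the whole channel.
    async for message in websocket:
        await websocket.send(f"{user}: {message}")

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()

if __name__ == "__main__":
    asyncio.run(main())
```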
How do I make sure that when the server sends the data, it is going to the right client?
I'm not sure what you mean by that. That's pretty much what TCP plus ssl/tls guarantees anyway; it is the same for any other protocol over a secure TCP connection. Or do you mean at the app level? Then you have to match a user with the corresponding connection(s) once authenticated, and the server has to track this.
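One way the server might track that mapping, sketched as a plain in-memory registry; the connection objects are whatever your WebSocket library hands you, and a real deployment spanning several server processes would need something shared instead:

```python
from collections import defaultdict

class ConnectionRegistry:
    """Maps authenticated user ids to their open connections."""

    def __init__(self):
        self._by_user = defaultdict(set)   # user id -> set of connection objects

    def add(self, user_id, connection) -> None:
        self._by_user[user_id].add(connection)

    def remove(self, user_id, connection) -> None:
        self._by_user[user_id].discard(connection)
        if not self._by_user[user_id]:
            del self._by_user[user_id]

    def connections_for(self, user_id):
        """All live connections for a user (one user may have several devices)."""
        return set(self._by_user.get(user_id, ()))
```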
Should I just end-to-end encrypt all the payloads to avoid a lot of problems?
What problems? E2E encryption serves a very different purpose: it guarantees that you, i.e. the server, are unable to read messages. It guarantees a high level of privacy, so that only the peers can read the messages, not even the server. And so this is a business decision, not a technical or security decision. Do you want to have full control over conversations? Then obviously you can't go with E2E. If, on the other hand, you want to give the highest level of privacy to your users, then it is a good (if not mandatory) approach. Note that full-featured E2E is inherently more difficult to implement than non-E2E.
I need to keep the WebSocket open while the app is running, so why not just route all the requests through it?
That is an interesting approach. I'm thinking about doing the same myself (and most likely will try it out). The advantage is that the entire communication goes through a single protocol, which is easier to debug, and with a proper subprotocol you can achieve higher performance. The disadvantage is that classical HTTP is well understood: there are lots of tools and conventions (e.g. REST) covering it, and security, binary streaming (e.g. file serving), etc. are often handled out of the box. So it feels a bit like reinventing the wheel. Either way, I wish you good luck with it; hopefully you can come back and tell us how it went.
I'm trying to figure out the most data-use efficient way to secure our CoAP API. DTLS seems to be the right way to do it, but looking at how much data the handshake requires (and making some uninformed assumptions about how often that needs to happen) it seems that DTLS with X.509 certificates dwarfs the actual data use of CoAP itself.
The most obvious solution would be to just use symmetric keys that are programmed in at the factory, but I don't think I like the security risks that imposes. As far as I understand there is no way to recover from a server-side intrusion other than manually installing new keys on all the devices.
The solution I'm thinking of proposing is basically a hybrid of the two: ship the devices with a trusted CA certificate that lets them do a standard handshake and establish a "temporary" symmetric key. Then, to save bandwidth to the device, I'd store that key (session?) in a database so the device can keep using it for months or years at a time, while still retaining the ability to expire keys if we discover any have gotten out.
I know I could just use the standard session resumption handshake to resume a session, but I'm not sure that is required since DTLS is connection-less and I can pretend the "connection" is always open. And if I can avoid having to repeat the handshake that would lower data consumption and probably lower server load somewhat too.
The things that I don't know are: Does DTLS define a limit on how long a session can remain open? Or is there a timeout where a session must be removed after some period of inactivity? If not, do the implementations of DTLS define one themselves?
Is there anything else that I may be overlooking as to why this wouldn't work? Or is there something more straightforward that I'm not thinking of?
Timeouts are application specific, and you can set them as high as you need, or just keep the connections around as long as you can (e.g. with a fixed number of usable connections, timing out the least recently used when a new one is opened).
Session resumption data can be held for as long as both parties agree the resumption data is still good (e.g. no underlying certificates have expired). Session resumption should be at least as cheap as manually installed symmetric keys.
So a sensible approach seems to be to just try continuing the session if the sending party still has it open, fall back to session resumption on error, and if that doesn't work go through the full handshake again. There don't necessarily need to be agreed-on times for any of that.
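In code, that fallback order might look roughly like the sketch below. `resume_session`, `full_handshake` and the session object are hypothetical stand-ins for whatever DTLS library you use; only the control flow is the point:

```python
class SessionError(Exception):
    pass

class ResumptionError(Exception):
    pass

def send_with_fallback(peer, payload):
    # 1. If we still have a cached session, just keep using it.
    if peer.cached_session is not None:
        try:
            return peer.cached_session.send(payload)
        except SessionError:
            peer.cached_session = None
    # 2. Otherwise try the abbreviated resumption handshake.
    try:
        session = resume_session(peer)      # hypothetical library call
    except ResumptionError:
        # 3. Last resort: pay for a full handshake.
        session = full_handshake(peer)      # hypothetical library call
    peer.cached_session = session
    return session.send(payload)
```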
(and making some uninformed assumptions about how often that needs to happen)
My feeling is that an IP address change after a quiet period should be assumed. If that is assumed, the effective "DTLS session timeout" is the timeout of any NAT(-like) devices on your IP route, and that NAT timeout is still (too frequently) around 30 seconds.
If there are no NAT(-like) devices on your route, so that the peers can exchange IP messages using static (unchanging) IP addresses and ports, then there is no such DTLS timeout, except where your application requires one, as already answered. There have also been discussions in the IETF about when to exchange the connection keys for security reasons, but the numbers are rather high (unless you want to use AES128_CCM8).
since DTLS is connection-less
DTLS requires a context with a master key and the "association keys" (TLS "connection keys"). The master key is tied to the DTLS session ID, and the "association keys" are usually tied to the IP address and port. DTLS session resumption is then used in scenarios where the addresses may change (e.g. because of NAT(-like) devices, or because a peer enters sleep mode and gets a new IP address on wake-up). With such IP address changes, session resumption refreshes the assignment of the "association keys" to the new address. A resumption does more than that (it also establishes new "association keys"), but it's mainly done to overcome the address change.
The most obvious solution would be to just use symmetric keys
Between PSK and X.509 there is also RPK (raw public keys), which offers similar security to X.509 with less data on the wire. PSK_ECDHE cipher suites may also be a choice.
And hopefully DTLS Connection ID (CID) will make CoAP/DTLS more efficient. At least in my tests, experiments and usage over the last two years, that was the technique that brought CoAP back into the "must be considered" category!
I have a game server where clients can connect and communicate with it via TCP. Any device can connect to the server if it knows the IP and port.
I am wondering if I need to add some security to the server. For example,
(1) Add some encryption for the messages sent and received (to prevent the protocol contents from being revealed).
(2) Add some key to each message so that if the server cannot recognize the key after decryption, the message is dropped (to prevent unknown connections/messages from flooding in).
Do you think these things are necessary, and is there anything else I should add for such a game server?
I would rather have posted this to the gamedev site, but the mods there are apparently faster than here. Before you quote me, I'd like to point out that the following isn't based on 100% book knowledge, nor do I have a degree in any of these topics. Please improve this answer if you know better, rather than comment and/or compete.
This is a pretty comprehensive list of client/server/security issues that I've gathered from research and/or experience:
Data
The "back-end" server contains everyone's username, password, credit card details, etc., and should be a fortress. This server is for authentication only and should be on a private subnet; it will communicate only with the login server, only when a well-formed login request is received, and will only reply with "allow" or "deny". If you take people's personal information, you are obligated to protect it, and it would be wise to off-load the liability of everything security-related to a professional or hosting company. There is no non-critical attack to this server; if it is breached, you are finished. Many/most/all companies now draw their pretty login screen on top of another companies' back-end credit card/billing system.
Login
Connections to the login server should be secure. The login server is just a message pump between the public login mechanism, the private data store, and the client/server connection state. For security purposes, any HTTP access to the login system should be hosted on a separate HTTP server; the WWW server crashing should not shut down your online game (my opinion).
World/UDP
Upon successful login and authentication, the server informs the client to begin listening for "bulk data" or to initiate an in-bound connection on a specific UDP port (could be random and per-connection-attempt). Either way, the server should remain silent and wait for the client to IDENT with some type of handshake to verify that the "alleged client" is actually your code. It is easier to guess when the server asks for input sequentially; instead rely on the client knowing the proper handshake when connecting to the world and drop those that don't. The correct handshake to use can be a function of the CPU clock-ticks or whatever. The TCP will be minimally used and/or disconnected from that point on. The initial bulk data is a good place to advertise the current server-side software revision so clients that are out-of-date can update. A common pool of UDP ports can be handed out among multiple servers and the clients can be load-balanced into the correct port/server. Within the game, "zone transfers" can mean a literal disconnect from one server/port and reconnection to a different server/port. In MMO's, this usually appears as a <2 second loading screen; enough time to disconnect, reconnect, start getting data, and synchronize to the new server clock, not to mention the actual content loading.
"World server" describes a single, multiple-client, state-pumping thread running on a single core of a single processor of a single blade. One, physical, server-of-worlds can have many worlds running on it at once. Worlds can be dynamically split/merged (in a quad-tree fashion), dividing the clients between them, again, for load-balancing; synchronization between the servers occurs at LAN speeds or better. The world server will probably only serve UDP connections and should have nothing to do except process state-changes to/from the UDP connections. UDP is "blind, deaf, and dumb", so-to-speak. Messages are sent with no flow control, no error checking, etc; they are basically assumed to be received as soon as they are sent and may actually arrive late, in the wrong order, or just never arrive. Using UDP, neither the server nor the client are ever stalled, hand-shaking, error-correcting, or waiting for data. Messages need time-stamps because they may arrive late and/or out-of-order. If a UDP channel gets clogged, switch valid clients dynamically to another (potentially random) port. The world server only initiates UDP connections with successfully authenticated clients and ignores all other traffic (world servers hosted separately from HTTP and everything else).
Overly simplified, and using only position data as an example: each client tells the server "Time:Client###:(X, Y)" over and over. If the server doesn't hear it, oh well. The server says "Time:listOfClients(X, Y)" over and over, to everyone at once. If one or more of the clients doesn't hear it, oh well.
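For illustration only, that "Time:Client###:(X, Y)" idea could be packed into a small fixed-size UDP datagram like this; the field layout and port are made up, not any standard:

```python
import socket
import struct
import time

# One fixed-size datagram per update: timestamp, client id, x, y.
FMT = "!dIff"  # network byte order: double, uint32, float, float

def send_position(sock, server_addr, client_id, x, y):
    packet = struct.pack(FMT, time.time(), client_id, x, y)
    sock.sendto(packet, server_addr)           # fire and forget: no ACK, no retry

def recv_position(sock):
    data, addr = sock.recvfrom(struct.calcsize(FMT))
    timestamp, client_id, x, y = struct.unpack(FMT, data)
    return timestamp, client_id, x, y, addr

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_position(sock, ("127.0.0.1", 9999), 42, 10.5, -3.25)
```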
This implies using prediction/extrapolation on the client; the clients will need to "guess" what should be happening and then correct themselves to agree with the server when they start getting data again. Any time you get a packet with a "future" time, even if the packet doesn't make sense or isn't useful, you can at least advance the client clock to that point and discard any now-late packets, helping a lagging client to catch up.
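A minimal sketch of that client-side prediction and "advance the clock, drop late packets" rule; the velocity fields and the clock dict are illustrative only:

```python
def extrapolate(last_x, last_y, vel_x, vel_y, last_time, now):
    """Guess where an entity should be while no fresh packet has arrived."""
    dt = now - last_time
    return last_x + vel_x * dt, last_y + vel_y * dt

def accept_packet(packet_time, clock):
    """Advance the local clock on 'future' packets; reject packets that are now late."""
    if packet_time > clock["now"]:
        clock["now"] = packet_time    # jump forward to the newest known server time
        return True
    return False                      # older than what we've already processed

if __name__ == "__main__":
    clock = {"now": 100.0}
    print(accept_packet(100.5, clock))            # True: clock advances to 100.5
    print(accept_packet(100.2, clock))            # False: now-late packet is dropped
    print(extrapolate(0.0, 0.0, 1.0, 2.0, 100.5, 101.0))  # (0.5, 1.0)
```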
Un-verified supposition:
Besides the existing security concerns, I don't see a reason why two or more clients could not maintain independent, but server-managed, UDP channels between each other. By notifying other clients within close game-proximity in addition to the server, the clients themselves can help to load-balance. The server should always verify that what the clients say happened could/should/would happen, and it should have the ability to undo all of it and reset both clients to its own known-good state. The information that the clients are able to share directly should be extremely restricted: basically just the most time-critical positional and/or state data. Clients should probably not be allowed to request specific information and, again, should rely only on "dumb" broadcasts. This begins to approach distributed/cloud computing, where the clients are actually doing a lot of the server's work, while the server just watches and "referees", calling foul when appropriate.
Client1 - "I fought Client2 and won"
Client2 - "I fought Client1 and won"
Server - "I watched and Client2 cheated. Client1 wins. (Client2 is forced to agree)"
The server doesn't necessarily even need to watch; if Client2 damages Client1 in an unusual/impossible way, Client1 can request arbitration from the server.
Side-effects
If the player moves around, but the data isn't getting to the server, the player experiences "rubber-banding", where the player appears to be moving on the client but, server-side, they are not. When the client gets the next server state, the client snaps the player back to where they were when the server stopped getting updates, creating the rubber-band effect.
This often manifests another way, too. If the server sees a player moving, then fails to receive the "stopped moving" message, the server will predict their continued path for all of the other clients. In MMO-RPG's, for example, you can see "lagging" players running directly into/at walls.
Holes
The last thing I can think of is just basic code security. This is especially important if your game is moddable. Mods are, by definition, a way for users to insert their own code into yours. If you are careless about the amount of "API" access you give away, inevitably, someone WILL feel the need to be malicious. Pay particular attention to string termination/handling if the language you are using requires it. Do not build your game from plain-text ASCII content files. If your game has even one "text box," someone WILL be trying to feed HTML/Lua/etc. code into it.
Lastly, paths should use appropriate system variables whenever possible to avoid platform shenanigans and/or access violations (x86/x64, no savegames in ProgramFiles, etc.)
I am working on a game where the world simulation is performed on clients. These clients submit updated world state to a central server. That server then redistributes those changes to the rest of those clients. Simple.
The issue is, I want to protect against modified clients. That is, I want to prevent cheaters that modify variables or whatnot in the executable.
At first I thought I would use a public key/private key encryption scheme. All commands sent from the client would be encrypted and sent to the server. But I quickly realized that this doesn't offer any real protection against cheaters since they can still modify variables.
The only other solution I can think of is to store all variables in a file and to record a hash of it. Then the client can only update the server after the server verifies these hashes.
But then I realized that a cheater could just rewrite the network request to patch those hashes.
I don't know where to go from here.
What protocols can be put in place so that the server only accepts commands from trusted (i.e. known) code bases?
You don't have to be so pessimistic. Do your best to keep cheaters from modifying your data. For example, one of the common approaches is to inject a DLL into the game process and modify data in memory; most game trainers work this way. You can periodically check your process's loaded modules and exit if there is an unknown module. You might also think about the game's internet connection. Well, do your best: add encryption and some custom handshake algorithms to make it complex and hard for cheaters.
In general, there are a lot of things you can do to make your game hard to cheat.
So I have this TCP connection between my server and client, and anyone can connect to my server. But I want to make sure that the client is really using MY client application and not just faking messages with a fake TCP client. What would be the ways to do that, i.e. to check that the connection really comes from my game client?
Thanks!
EDIT
If I'm gonna use TLS, can I solve that problem?
There will probably not be a complete solution to your problem, since whatever you do, the other party might always take your program, run it in a monitored environment, manipulate the runtime data and let it use its "secure" network protocol. Since the client application is in uncontrollable hands, you can never be sure that it is your own program.
Baby example: My application runs your application and plays back the data to your server, and forwards your response back to the application. How can you tell?
That said, it might be a very promising "99%" approach to use SSL and hardcode the client's private key into the application; with some trickery you can try to make it hard to find (e.g. see how Skype does it). If you then also build integrity checks into your program that figure out whether anyone is manipulating its memory or debugging it, you can make things a bit harder for a potential adversary. (But note that you will always have to ship the private key with your application, so it isn't really safe from discovery.)
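As a concrete (and still only "99%") variant of that idea, TLS with a client certificate baked into the application can be set up with Python's ssl module; all file paths below are placeholders, and the same caveat applies, whatever key ships with the client can eventually be extracted:

```python
import socket
import ssl

# All file paths are placeholders: the client certificate/key pair would be
# bundled (obfuscated) with the game client, the CA certs provisioned out of band.
SERVER_CA   = "server_ca.pem"
CLIENT_CERT = "client_cert.pem"
CLIENT_KEY  = "client_key.pem"

def client_connect(host="game.example.com", port=4433):
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=SERVER_CA)
    # Present the credential that was shipped inside the application.
    context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

def server_context(client_ca="client_ca.pem",
                   cert="server_cert.pem", key="server_key.pem"):
    context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=client_ca)
    context.load_cert_chain(certfile=cert, keyfile=key)
    context.verify_mode = ssl.CERT_REQUIRED   # drop clients without a valid cert
    return context
```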
Others have suggested useful answers to your question, but I'm going to suggest another approach. Re-examine your requirements.
Ask yourself why you want to know the identity of the client program. Is it so that you can trust your client program more than you trust 3rd-party client programs?
If you need to trust the identity or integrity of software that you have already shipped to your customers, I claim your security model is broken. Once the software runs on a client's PC, you should assume it is evil, even if you originally wrote it.
Any status, any command, any data whatsoever that comes from the network must be checked before it is relied upon.
My default response is to use challenge/response authentication:
After connection, send a random number from the server to the client
The client then computes a response message, using a hash/key/..., and returns that to the server
If the response matches the server's own computation, your chances of authenticity are better. Note though that someone who reverse engineers your client can still defraud this method. A minimal sketch of the scheme follows below.
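Here is a minimal sketch of such a scheme using an HMAC over a random challenge; the shared secret is a placeholder and, as noted above, anything shipped with the client can eventually be extracted:

```python
import hashlib
import hmac
import secrets

# The shared secret would be baked into the client, with all the caveats above.
SHARED_SECRET = b"not-a-real-secret"

def make_challenge() -> bytes:
    """Server side: issue a fresh random challenge for this connection."""
    return secrets.token_bytes(32)

def respond(challenge: bytes) -> bytes:
    """Client side: prove knowledge of the secret without sending it."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """Server side: constant-time comparison against its own computation."""
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    c = make_challenge()
    assert verify(c, respond(c))
```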
You could use a public/private key pair in order to verify that you are who you say you are.
http://en.wikipedia.org/wiki/RSA#Signing_messages