Ensuring Client / Server Authentication without credentials

I am looking into ways of securing the channel between my client apps and the server.
I have a rich desktop client (Windows) and a mobile client connecting to a web service and exchanging data.
Using SSL certificates, the server and clients can establish mutual trust. Over the secured connection I can exchange a username and password and thereby authenticate the user.
However, there are certain circumstances where a user must connect to the server via either of the two clients without his credentials, using only a literal such as, say, a license plate number.
In this case I really want to make sure that I ONLY allow connections from devices I am sure I know, since there are no further authentication checks and a license plate number is a fairly common, guessable literal.
How can I ensure that only "devices" known to my server can interact with it?

If you want to authenticate the device, you'll need to find a way for the device to prove what it is, without disclosing its secret.
A system similar to a number plate would be quite easy to spoof for anyone in a position to see that number. Depending on how much control you have over this device, you might not be able to hide it, even if the connection to your server is secured with SSL/TLS.
A potential way to do this would be to use a cryptographic hardware token (or smart card). Some of these tokens can be configured to hold a certificate and private key, with the ability to use the private key without being able to export that private key. The cryptographic operations (signing and decryption) happen on the token itself.
You can use these to perform client-certificate authentication to your server. In this case, you would know that the client holds that token. This works on the condition that you know the CA issued certificates only for key pairs generated on such tokens: there will be a cost in administering the CA to handle this.
This would at least allow you to tie the authentication to a particular token. Whether you can integrate this with your overall device depends on the kind of device you have.
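As a rough illustration of the server side of that setup, here is a minimal sketch using Python's standard ssl module; the file names and port are placeholders, and in a real deployment the client's private key would live on the token itself (typically reached through a PKCS#11 interface rather than a key file):

```python
# Minimal sketch: a TLS server that refuses any client unable to present a
# certificate issued by your own device CA. File names and port are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="device-ca.pem")   # trust only your device CA
context.verify_mode = ssl.CERT_REQUIRED                 # handshake fails without a valid client cert

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()              # raises ssl.SSLError if client auth fails
        print("device certificate subject:", conn.getpeercert().get("subject"))
        conn.close()
```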

Please check if TLS Pre-Shared Keys (RFC 4279) can be used for your scenario.
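For reference, a hedged sketch of what the client side of a TLS-PSK connection could look like, assuming Python 3.13 or later (whose ssl module exposes OpenSSL's PSK callbacks); the identity, key, and host below are placeholders that would be provisioned per device out of band:

```python
# Hedged sketch: TLS-PSK (RFC 4279) client. Assumes Python 3.13+; identity and
# key are placeholders provisioned per device out of band.
import socket
import ssl

PSK_IDENTITY = "device-0042"
PSK_KEY = bytes.fromhex("6d7950726553686172656453656372")  # example only

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE                   # authentication comes from the PSK itself
context.maximum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("PSK")                            # restrict to PSK cipher suites
context.set_psk_client_callback(lambda hint: (PSK_IDENTITY, PSK_KEY))

with socket.create_connection(("server.example", 8443)) as sock:
    with context.wrap_socket(sock) as tls:
        tls.sendall(b"hello")
```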

Related

What are some approaches to exchange data without using SSL/TLS

When creating any kind of application (web, API, etc.), best practice these days is to secure endpoints with TLS, but what we can learn from the Cloudbleed incident is that this may not be enough.
Therefore I would like to know what could be done to keep a certain level of security even when TLS is compromised.
For web applications what I currently use is jsencrypt, which basically encrypts all data on the client (browser) side before it is sent; but to do this I first need to exchange a shared secret (token/cookie) between the server and the client. When dealing with APIs that don't support JavaScript, what could be used?
Regarding the exchange of tokens, the instinctive answer may be to use OAuth, OpenID Connect, or JSON tokens, but all of them require or delegate trust to TLS, and again, once TLS is compromised they become useless.
If I am right, OpenID could be used without SSL to share a "common secret" via a Diffie–Hellman key exchange. Is there something similar that could be implemented, keeping in mind that if TLS gets compromised, easy measures could be taken such as revoking tokens or changing "salts"?
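For what it's worth, a minimal sketch of such a Diffie–Hellman exchange using X25519, assuming the third-party cryptography package; note that an unauthenticated exchange like this only defeats passive eavesdropping, not an active man-in-the-middle, which is exactly the bootstrap problem raised below:

```python
# Hedged sketch: ephemeral X25519 key exchange (assumes the "cryptography"
# package). Without authenticating the public values, an active MITM can still
# sit in the middle; this only derives a key safe from passive listeners.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Each side generates an ephemeral key pair and sends its 32-byte public part.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
client_pub_bytes = client_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
server_pub_bytes = server_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Each side combines its private key with the peer's public key...
client_shared = client_priv.exchange(X25519PublicKey.from_public_bytes(server_pub_bytes))
server_shared = server_priv.exchange(X25519PublicKey.from_public_bytes(client_pub_bytes))

# ...and derives the same symmetric key from the shared secret.
def derive(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"app-layer key").derive(shared)

assert derive(client_shared) == derive(server_shared)
```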
For now I think following the GPG or RSA (private/public key) approach is the way to go, in the sense that everyone could have access to the public keys but would not be able to see the content of data tied to a specific user.
But the question remains: how do you exchange that very first "known secret" between client and server while avoiding a possible man-in-the-middle attack, considering TLS can't be trusted?
The problem of exchanging the first "known secret" is the same for all protocols, SSL or not. SSL is a public key infrastructure where the basic information that needs to be distributed is the public key of the root certificate of the certificate issuer. The public keys for all SSL certificate issuers are distributed with the browser installation.
Any protocol will depend on some information that is communicated between the server and client in a different channel from the channel where the communication is established. If you don't trust the SSL infrastructure, you will have to send this information by email, postal mail, sms, or by some other means.
However, your problem does not start with the keys necessary for the encryption libraries you are using in your web application. Your web application itself (the JavaScript files) is also sent from the server to the web browser over SSL. If your SSL communication is compromised by a man-in-the-middle, this man-in-the-middle is probably also able to change the web pages and JavaScript code that you send to the browser. He could simply rewrite your application and remove all encryption code, add new fields and messages for the user, send the user to a different site, and so on.
The SSL infrastructure is really a cornerstone of web security, and a necessity for web applications. Without it, you would have to build a custom protocol for sending encrypted web pages and write a custom browser that understood this protocol.
With all that said, it is of course possible to add a thin layer of extra security on top of SSL. You could, for example, create a private/public key pair for each user, send the public key to the user and encrypt all messages from your server to the user with the private key. This could protect against a scenario where a man-in-the-middle is able to listen to the communication but not able to change your messages.
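As a concrete, hedged sketch of such an extra layer, here the per-user key pair is used to sign each message with Ed25519 (a deliberate swap from the encryption described above), assuming the third-party cryptography package; a signature lets the client detect any modification of the payload even if TLS itself is broken:

```python
# Hedged sketch: application-layer signature on top of TLS, using Ed25519
# (assumes the "cryptography" package). The server holds the private key; the
# client received the matching public key out of band.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Server side: generate the per-user key pair once, sign each response body.
server_key = Ed25519PrivateKey.generate()
client_copy_of_public_key = server_key.public_key()    # delivered out of band

body = b'{"balance": 42}'
signature = server_key.sign(body)                       # sent alongside the body

# Client side: verify before trusting the payload.
try:
    client_copy_of_public_key.verify(signature, body)
    print("payload intact")
except InvalidSignature:
    print("payload was tampered with")
```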

Server to server authentication basics

While I understand the various options available for server to server authentication between REST services, I could use some clarification on the security implications of each approach.
I want a service to verify that a request it receives really originates from a legitimate calling remote service. No interactive users are involved; assume the request happens as the calling service starts up. The three approaches usually mentioned are:
1. Use a fake user account and authenticate the client against the existing auth system.
2. Use a shared secret / API key and sign the request (a minimal signing sketch follows this question).
3. Use a client certificate (verifying the server is not a priority).
The part I am missing is that all three methods seem to depend entirely on the calling service's host (the client in the call) not being compromised. In the first approach a compromise would give away the fake user's password, but in the other two approaches an attacker could obtain the shared secret or the client certificate and impersonate the calling service just as easily as with approach 1... so in what respect are 2 and 3 considered more secure?
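To make approach 2 concrete, here is a minimal, hedged sketch of shared-secret request signing with Python's standard library; the header names and canonical string are illustrative, not any particular standard:

```python
# Hedged sketch of approach 2: the calling service signs each request with a
# shared secret, and the receiving service recomputes and compares the HMAC.
import hashlib
import hmac
import time

SHARED_SECRET = b"provisioned-out-of-band"   # placeholder

def sign_request(method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    canonical = "\n".join([method, path, timestamp,
                           hashlib.sha256(body).hexdigest()])
    mac = hmac.new(SHARED_SECRET, canonical.encode(), hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": mac}

def verify_request(method: str, path: str, body: bytes, headers: dict,
                   max_skew: int = 300) -> bool:
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False                          # reject replays of old requests
    canonical = "\n".join([method, path, headers["X-Timestamp"],
                           hashlib.sha256(body).hexdigest()])
    expected = hmac.new(SHARED_SECRET, canonical.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("POST", "/api/v1/jobs", b'{"job": "sync"}')
assert verify_request("POST", "/api/v1/jobs", b'{"job": "sync"}', headers)
```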
If the host is compromised, the game is already over. You cannot hope to use network security techniques to provide guarantees about the end systems; that is not what they're meant for. Consider passwords, for example. When a user types in a password, the guarantee you have is that the entity that entered the password knows the password - that's all. Designing to be secure against compromised hosts is like trying to build a password scheme that only authorizes you if you're the real person - you're expecting a guarantee that the mechanism is not built to provide.
If you want to check that the calling server is not compromised, you might want to use TPM-based verification of the calling server, in case the machines have TPMs on them. Once it has been verified as uncompromised, any of the above three methods would be secure. (Ref: http://en.wikipedia.org/wiki/Trusted_Platform_Module)

Data encryption safe even with developer access

Suppose you have a server-client application.
The server keeps sensitive information that belongs to a client.
The server will search for some parameters inside the client's sensitive information.
Thus the server needs to decrypt the sensitive information temporarily, under the client's control.
But the server should not be able to reveal the keys, even by hacking itself.
I mean a developer should not be able to change server-side code and extract the client's keys.
Is there really a way to do that?
Somehow the client permits the server to decrypt the sensitive information, but the keys instantly disappear and the developer has no trick to reveal this password?
The answer, if it exists, would also apply to an ideal, secure cloud application: neither the developer nor the cloud hosting company should have access to the decrypted information.
I am not optimistic, but it is worth asking.
So, in a word, no. This does not grant any security whatsoever, as you cannot trust the client. You even call out that the server will be controlled by the client temporarily, which is generally not a wise approach. Also, do not underestimate a bored developer; it is completely feasible to write some code and rip out the keys. The key here is remembering that if someone has access to the box, it is no longer your box.
In general, any client access to sensitive info on the server must require authentication of the client, so that you can verify that the client is exactly who he claims he is. The authentication typically involves sending a password, or some kind of authentication token (e.g., an encrypted shared secret) that was given to the client by the server through a secure channel.
As has been said many times, many ways, allowing client access to server data without proper and sufficient authentication means that you give up control of the server.

Are OAuth2 and SSL enough to secure an API

I'm trying to figure out the best way to secure an API. I only allow SSL and I'm using OAuth2 for authentication, but that doesn't seem like enough.
The major concern I have is that anyone could inspect the requests being made by a legitimate client to the API and steal the OAuth client_id. At that point they would be able to construct any request they want to impersonate the legitimate client.
Is there any way to prevent this? I've seen people use an HMAC hash of the parameters with a secret key known only to the client and server, but I see two problems with that:
It's very difficult (impossible?) to prevent a malicious user from decompiling your client and figuring out the secret key.
Some parameters seem odd to include in an HMAC hash. For example, if a parameter is the bytes of a file, do you include the whole thing in your HMAC hash?
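On the second point, an HMAC can be computed incrementally, so the whole file can be covered without holding it in memory at once; a small, hedged sketch with Python's standard library (the secret and file path are placeholders):

```python
# Incrementally HMAC a large request body (e.g. an uploaded file) so the
# signature covers every byte without loading the file into memory at once.
import hashlib
import hmac

SECRET = b"client-specific secret"   # placeholder

def hmac_of_file(path: str, chunk_size: int = 64 * 1024) -> str:
    mac = hmac.new(SECRET, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            mac.update(chunk)
    return mac.hexdigest()

# print(hmac_of_file("upload.bin"))  # placeholder path
```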
You can deploy mutually-authenticated SSL between your legitimate clients and your API. Generate a self-signed SSL client certificate and store that within your client. Configure your server to require client-side authentication and only accept the certificate(s) you've deployed to your clients. If someone/something attempting to connect does not have that client certificate, it will be unable to establish an SSL session and the connection will not be made. Assuming you control the legitimate clients and the servers, you don't need a CA-issued certificate here; just use self-signed certificates since you control both the client-side and server-side certificate trust.
Now, you do call out that it's really hard to prevent someone from reverse engineering your client and recovering your credential (the private key belonging to the client certificate, in this case). And you're right. You'll normally store that key (and the certificate) in a keystore of some type (a KeyStore if you're using Android) and that keystore will be encrypted. That encryption is based on a password, so you'll either need to (1) store that password in your client somewhere, or (2) ask the user for the password when they start your client app. What you need to do depends on your use case. If (2) is acceptable, then you've protected your credential against reverse engineering, since it will be encrypted and the password will not be stored anywhere (but the user will need to type it in every time). If you do (1), then someone will be able to reverse engineer your client, get the password, get the keystore, decrypt the private key and certificate, and create another client that will be able to connect to the server.
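A hedged sketch of what the client side of this looks like with Python's standard library; the file names and host are placeholders, and the private key file is encrypted so the passphrase is prompted for at startup (option 2) rather than stored in the client (option 1):

```python
# Client side of mutually-authenticated SSL: present the deployed client
# certificate, trust only the server's (self-signed) certificate, and prompt
# for the passphrase protecting the client's private key.
import getpass
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations(cafile="server-cert.pem")   # pin the self-signed server cert
context.load_cert_chain(certfile="client-cert.pem",
                        keyfile="client-key.pem",
                        password=getpass.getpass("key passphrase: "))

with socket.create_connection(("api.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="api.example.com") as tls:
        tls.sendall(b"GET /status HTTP/1.1\r\nHost: api.example.com\r\n\r\n")
        print(tls.recv(4096))
```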
There is nothing you can do to prevent this; you can make reverse engineering your code harder (by obfuscation, etc) but you cannot make it impossible. You need to determine what the risk you are trying to mitigate with these approaches is and how much work is worth doing to mitigate it.
Are you running the OAuth authentication step over SSL itself? That prevents all kinds of snooping though it does mean you'll have to be careful to keep your OAuth server's certificate up to date. (Note, the OAuth server can have a public SSL identity; it's still impossible to forge with even vaguely-reasonable amounts of effort. It's only the private key that needs to be kept secret.)
That said, you need to be more careful about what you are protecting against. Why do people have to use your client code at all? Why does it have to be “secret”? Easier to give that away and put the smarts (including verification of login identity) on your server. If someone wants to write their own client, let them. If someone wants to wave their account in public in a silly way, charge them the costs they incur from their foolishness…

Reliable ways to register a user's computer with a server

As part of strengthening session authentication security for a site that I am building, I am trying to compile a list of the best ways to register a user's computer as a second tier of validation - in addition to the standard username/password login, of course. Typical ways of registering a user's computer are setting a cookie and/or validating the IP address. With mobile computing as prevalent as it is, an IP address is less and less reliable as an identifier, and security settings and internet security / system optimization software can make it difficult to keep a cookie in place for very long.
Are there any other methods that can be used to establish a more reliable computer registration that don't require the user to add exceptions to the various cookie-deleting tools?
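One variant of the cookie approach, sketched below with Python's standard library (names are illustrative), is to store an HMAC-signed device token rather than a bare identifier, so a forged or tampered cookie is detectable; a copied cookie, however, is not, which is the inherent limit of cookie-based registration:

```python
# Hedged sketch: an HMAC-signed device-registration token to store in a
# long-lived cookie. Tampering is detectable; copying the cookie to another
# machine is not.
import hashlib
import hmac
import secrets

SERVER_KEY = b"server-side signing key"   # placeholder, kept only on the server

def issue_device_token() -> str:
    device_id = secrets.token_hex(16)
    tag = hmac.new(SERVER_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{tag}"            # value of the registration cookie

def verify_device_token(token: str) -> bool:
    try:
        device_id, tag = token.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

cookie = issue_device_token()
assert verify_device_token(cookie)
```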
If you're looking to do device authentication, you may want to consider mutually authenticated SSL. Here, you'd deploy a client identity certificate to each endpoint you'd want to authenticate. Then, you set the server up to require client authentication, so that a client would need to present a valid identity certificate in order to form the SSL tunnel.
This, of course, is not a perfect solution. In reality, it presents many of the same weaknesses as other solutions, to varying degrees. Once your client identity certificates go to your clients, they are out of your control; should a client give their certificate to anyone else, you lose the device authentication that you have based on it. SSL identity certificates are generally stored in a keystore on the client which is encrypted with a password or other credential needed to unlock them. While a client certificate could still be compromised, it's somewhat stronger than just a cookie or something like that (assuming you don't have a client that is trying to give away its credential). In addition, you'd want to come up with some validation routine that a client would need to go through in order to get a credential in the first place (how do I know that this is a client device that I want to remember/register?).
Remember, these types of approaches only do device authentication, not user authentication. There are more in-depth schemes already developed for device authentication than what I've mentioned; for example, 802.1X is a network protocol where an endpoint needs to present a client-side certificate to the network switch to get on a LAN. That is out of scope for a web application scenario like the one you've described, but the idea is the same (put a cryptographic credential on the client and validate it to establish the connection).
This, like all other security matters really, is a risk decision. What are you trying to accomplish with such a countermeasure? What are the threats you're trying to prevent and what are the consequences if someone does log in on an unregistered device? Only your situation can answer those questions and let you see the real risk, if you need/should mitigate it, and, if so, how strong of a solution do you need to get the risk level down to an acceptable level?
"the best ways to register a user's computer as a second tier of validation"
From my point of view, this approach does not offer much in terms of authentication.
You are not authenticating a user and have no idea who is using the PC that you would accept as being registered.
The way you describe it, this step should be a configuration rule in the firewall to accept connections from specific IPs only.
IMO, filtering the PCs is the responsibility of a firewall, and it would be handled much better by the firewall than by any application-level filtering.
Just consider the overhead your application would incur examining each request and deciding whether to accept it or not.
Better leave this preprocessing overhead to the firewall. That's why it is there.
