I wrote a small chat application where users can write each other messages:
On first login, a user generates a public/private keypair, derived from the user's password.
The public key is sent to the server (database).
If a user (A) wants to write user (B) a message, user A encrypts the message with the public key of user B and sends it to the server (and the server will then send it to user B).
But what if somebody with database access changes the public key of user B in the database? Then the attacker can read all messages.
Is it somehow possible to authenticate the public key in the database and make sure it was not changed and that it 100% belongs to user B?
So you're trying to protect against the scenario where an attacker has control over the server and the server cannot be trusted. Since you can't trust any information from the server, you cannot use it directly in any form of verification either. The server can only be relegated to being a dumb transport, and the verification needs to happen directly against the other peer.
Being able to exchange the key out-of-band would help a lot here, meaning you can somehow facilitate a direct peer-to-peer exchange of the key. Since it is difficult to trust the identity of a random remote peer over the general internet, you'd need to employ a strategy like Threema's: you can get any remote peer's public key anonymously, but your relationship to this peer is then unverified. Only if you're able to meet in person and exchange/verify keys by physically scanning each other's QR codes is the key trustworthy.
To facilitate any sort of key exchange with a remote peer via an untrustworthy server, you'd basically need to implement a Diffie-Hellman key exchange; the server can facilitate the communication, but will have no visibility into what data is being exchanged. This has to happen with both peers being online at the same time (or it becomes a very slow offline back-and-forth), so it may be somewhat problematic in practice depending on your use case.
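As a rough illustration of that last point, here is a minimal C# sketch (the structure and names are mine, not from any particular library) in which two peers each generate an ephemeral ECDH key pair, pass only the public halves through the untrusted server, and independently derive the same shared secret:

```csharp
using System;
using System.Security.Cryptography;

class DiffieHellmanSketch
{
    static void Main()
    {
        // Each peer generates its own ephemeral ECDH key pair locally.
        using var alice = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);
        using var bob = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);

        // Only the public keys travel through the untrusted server.
        byte[] alicePublic = alice.ExportSubjectPublicKeyInfo();
        byte[] bobPublic = bob.ExportSubjectPublicKeyInfo();

        // Each side imports the other's public key and derives the shared secret.
        using var bobPeer = ECDiffieHellman.Create();
        bobPeer.ImportSubjectPublicKeyInfo(bobPublic, out _);
        byte[] aliceSecret = alice.DeriveKeyMaterial(bobPeer.PublicKey);

        using var alicePeer = ECDiffieHellman.Create();
        alicePeer.ImportSubjectPublicKeyInfo(alicePublic, out _);
        byte[] bobSecret = bob.DeriveKeyMaterial(alicePeer.PublicKey);

        // Both sides now hold the same secret; the server never saw it.
        Console.WriteLine(Convert.ToBase64String(aliceSecret) == Convert.ToBase64String(bobSecret)); // True
    }
}
```

Note that this alone does not stop a malicious server from running two separate exchanges, one with each peer; that is exactly why the out-of-band verification of keys described above still matters.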
When creating any kind of application (web, API, etc.), best practice these days is to secure endpoints using TLS, but what we can learn from the Cloudbleed issue is that it may not be enough.
Therefore I would like to know what could be done to keep a certain level of security even when TLS is compromised.
For web applications, what I currently use is jsencrypt, which basically encrypts all data on the client (browser) side before it is sent; but in order to do this I first need to exchange a shared secret (token/cookie) between the server and client. When dealing with APIs that don't support JavaScript, what could be used?
Regarding the exchange of tokens, instinct says to use OAuth, OpenID Connect or JSON tokens, but all of them require or delegate trust to TLS, and again, when TLS is compromised they become useless.
If I am right, OpenID could be used without SSL to share a "common secret" by doing a Diffie–Hellman key exchange. Is there something similar that could be implemented, keeping in mind that if TLS gets compromised, easy measures could be taken, like revoking tokens or changing "salts"?
For now I think following the GPG or RSA (private/public) key approach is the way to go, so that everyone could have access to the public keys but would not be able to see the content of data signed/encrypted for a specific user.
But the question remains how to exchange that very first "known secret" between client and server while avoiding a possible man-in-the-middle attack, considering TLS can't be trusted.
The problem of exchanging the first "known secret" is the same for all protocols, SSL or not. SSL is a public key infrastructure where the basic information that needs to be distributed is the public key of the root certificate of the certificate issuer. The public keys for all SSL certificate issuers are distributed with the browser installation.
Any protocol will depend on some information that is communicated between the server and client in a different channel from the one where the communication is established. If you don't trust the SSL infrastructure, you will have to send this information by email, postal mail, SMS, or some other means.
However, your problem does not start with the keys necessary for the encryption libraries you are using in your web application. Your very web application (the JavaScript files) is also sent from the server to the web browser over SSL. If your SSL communication is compromised by a man-in-the-middle, this man-in-the-middle is probably also able to change the web pages and JavaScript code that you send to the browser. He could just rewrite your application and remove all encryption code, add new fields and messages for the user, send the user to a different site, and so on.
The SSL infrastructure is really a cornerstone of web security, and a necessity for web applications. Without it, you would have to build a custom protocol for sending encrypted web pages and write a custom browser that would understand this protocol.
With all that said, it is of course possible to add a thin layer of extra security on top of SSL. You may, for example, create a private/public keypair for each user, send the public key to the user and encrypt (in effect, sign) all messages from your server to the user with the private key. This could protect against a scenario where a man-in-the-middle is able to tamper with your messages, since such tampering would then be detectable.
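As a hedged sketch of that extra layer, a server could sign each response with an RSA private key and the client could verify the signature against a public key shipped with the application; the key size, payload and padding choice here are purely illustrative:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SignedResponseSketch
{
    static void Main()
    {
        // Hypothetical server-side key pair; in practice the private key stays on the server
        // and only the public key is distributed with the client application.
        using var serverKey = RSA.Create(2048);

        byte[] message = Encoding.UTF8.GetBytes("{\"balance\": 42}");

        // Server: sign the response body.
        byte[] signature = serverKey.SignData(message, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        // Client: verify the signature with the pre-distributed public key.
        using var clientSideKey = RSA.Create();
        clientSideKey.ImportSubjectPublicKeyInfo(serverKey.ExportSubjectPublicKeyInfo(), out _);
        bool genuine = clientSideKey.VerifyData(message, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        Console.WriteLine(genuine); // True if the message was not tampered with in transit
    }
}
```

This gives the client a way to detect tampering with the server's responses; it does not add confidentiality, which still has to come from the transport layer.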
I have started studying wireless security, and in WEP security there is something called a fake-auth attack. I know it sends an authentication request, then associates with the AP, and then we can proceed to an ARP replay attack. I need to know how exactly the fake-auth attack works, because if we do not have the WEP key, how can we authenticate and then associate with the AP to replay ARP packets?
The explanation is pretty simple: an access point must authenticate a station before the station can associate with the access point or communicate with the network. The IEEE 802.11 standard defines two types of WEP authentication:
Open System Authentication (OSA): allows any device to join the network, assuming that the device's SSID matches the access point's SSID. Alternatively, the device can use the "ANY" SSID option to associate with any available access point within range, regardless of its SSID.
Shared Key Authentication: requires that the station and the access point have the same WEP key to authenticate.
A detailed tutorial on how to perform a fake-auth using shared key authentication is here.
UPDATE: How can we associate with the AP without the key?
The fake authentication attack on the WEP protocol allows an attacker to join a WEP-protected network, even if the attacker does not have the secret root key. IEEE 802.11 defines two ways a client can authenticate itself in a WEP-protected environment.
The first method is called Open System authentication: a client just sends a message to an access point, telling it that he wants to join the network using Open System authentication. The access point will answer the request with successful if it allows Open System authentication.
As you can see, the secret root key is never used during this handshake, allowing an attacker to perform this handshake too and to join a WEP-protected network without knowledge of the secret root key.
The second method is called Shared Key authentication. Shared Key authentication uses the secret root key and a challenge-response authentication mechanism, which should make it more secure (at least in theory) than Open System authentication, which provides no kind of security.
First, a client sends a frame to an access point telling it that he wants to join the network using Shared Key authentication. The access point answers with a frame containing a challenge, a random byte string. The client now answers with a frame containing this challenge, which must be WEP-encrypted. The access point decrypts the frame, and if the decrypted challenge matches the challenge it sent, it answers with successful and the client is authenticated.
An attacker who is able to sniff a Shared Key authentication handshake can join the network himself. First note that, besides the AP's challenge, all bytes in the third frame are constant and therefore known by an attacker. The challenge itself was transmitted in cleartext in frame number 2 and is therefore known by the attacker too. The attacker can now recover the key stream which WEP used to encrypt frame number 3. The attacker now knows a key stream (as long as frame number 3) and the corresponding IV.
The attacker can now initiate a Shared Key authentication handshake with the AP. After having received frame number 2, he can construct a valid frame number 3 using his recovered key stream. The AP will be able to successfully decrypt and verify the frame and respond with successful. The attacker is now authenticated.
Reference here.
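To make the keystream-recovery step above concrete, here is a toy C# sketch (made-up byte values, no real 802.11 framing, and it ignores the per-frame CRC/ICV and IV handling that a real forged frame also has to cover) showing why XORing the cleartext challenge with the sniffed ciphertext yields a keystream that can be replayed against a fresh challenge:

```csharp
using System;
using System.Linq;

class WepFakeAuthSketch
{
    static byte[] Xor(byte[] a, byte[] b) =>
        a.Zip(b, (x, y) => (byte)(x ^ y)).ToArray();

    static void Main()
    {
        // Toy stand-ins: in a real attack these come from sniffed frames 2 and 3.
        byte[] keystream = { 0x1F, 0x2E, 0x3D, 0x4C, 0x5B, 0x6A, 0x79, 0x88 }; // unknown to the attacker
        byte[] challenge1 = { 0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80 }; // seen in cleartext (frame 2)
        byte[] encrypted1 = Xor(challenge1, keystream);                          // seen as ciphertext (frame 3)

        // Step 1: recover the keystream for this IV by XORing known plaintext and ciphertext.
        byte[] recovered = Xor(challenge1, encrypted1);

        // Step 2: when the AP later sends a fresh challenge, encrypt it with the recovered
        // keystream (reusing the sniffed IV) to produce a valid frame 3 without knowing the key.
        byte[] challenge2 = { 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x01, 0x02 };
        byte[] forged = Xor(challenge2, recovered);

        // The AP decrypts the forged frame with the real keystream and gets challenge2 back.
        Console.WriteLine(Xor(forged, keystream).SequenceEqual(challenge2)); // True
    }
}
```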
We are currently designing a smartphone application that needs an authentication protocol.
We will use HTTPS for all the messages. The idea is the following:
The client contacts the server and authenticates himself with his user/password combination.
The server replies with a randomly generated token that is stored in the database.
To contact the server the client now uses his/her user/token combination.
For each message the client sends, the server has a certain probability of generating a new token, which it includes in the message it sends back.
The question is: will we have security issues using this protocol?
Note : passwords and tokens are stored hashed in the database.
The security rests on the certificate you use for encryption. In general this is enough, but you may also check that it is the expected certificate. If you check the fingerprint of the certificate yourself, you can be sure (if you use SHA-1 or better) that the certificate is yours and not part of a successful man-in-the-middle attack. E.g. the NSA could simply create valid certificates for your domain, but AFAIK it is not feasible to generate a second certificate with the same fingerprint.
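If you want to pin the certificate by its fingerprint in a .NET client, a hedged sketch (using SHA-256 rather than SHA-1; the expected value is a placeholder for your own certificate's hash, and the APIs assume .NET 5 or later) could look like this:

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography;

class PinnedCertificateSketch
{
    // Placeholder: the Base64-encoded SHA-256 hash of your server certificate's DER bytes.
    const string ExpectedFingerprint = "REPLACE_WITH_YOUR_CERT_SHA256_BASE64";

    static HttpClient CreatePinnedClient()
    {
        var handler = new HttpClientHandler
        {
            // Reject the connection unless the presented certificate matches the pinned fingerprint,
            // regardless of whether a CA in the trust store would have vouched for it.
            ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
            {
                if (cert == null) return false;
                string actual = Convert.ToBase64String(SHA256.HashData(cert.RawData));
                return actual == ExpectedFingerprint;
            }
        };
        return new HttpClient(handler);
    }

    static void Main()
    {
        using HttpClient client = CreatePinnedClient();
        // client.GetAsync("https://example.com/") would now fail unless the fingerprint matches.
    }
}
```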
By the way, I hope that the passwords and tokens are also salted. That is important so that it is impossible to see that two customers use the same password, and it also increases the complexity of the hash, which means it will take much more time to crack such a password with a rainbow table.
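For the salting point, a hedged sketch of storing a salted, slow hash (PBKDF2 here; the iteration count is an arbitrary example and the static Pbkdf2 helper assumes .NET 6 or later):

```csharp
using System;
using System.Security.Cryptography;

class SaltedPasswordSketch
{
    // Example parameters; tune the iteration count to your own hardware and latency budget.
    const int SaltSize = 16, HashSize = 32, Iterations = 100_000;

    static (byte[] Salt, byte[] Hash) HashPassword(string password)
    {
        byte[] salt = RandomNumberGenerator.GetBytes(SaltSize); // unique per user
        byte[] hash = Rfc2898DeriveBytes.Pbkdf2(password, salt, Iterations, HashAlgorithmName.SHA256, HashSize);
        return (salt, hash); // store both next to the user record
    }

    static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        byte[] hash = Rfc2898DeriveBytes.Pbkdf2(password, salt, Iterations, HashAlgorithmName.SHA256, HashSize);
        return CryptographicOperations.FixedTimeEquals(hash, expectedHash);
    }

    static void Main()
    {
        var (salt, hash) = HashPassword("correct horse battery staple");
        Console.WriteLine(Verify("correct horse battery staple", salt, hash)); // True
        Console.WriteLine(Verify("wrong password", salt, hash));               // False
    }
}
```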
I am looking into ways of securing the channel between my client apps and the server.
I have a rich desktop client (win) and mobile client connecting to a webservice, exchanging data.
Using SSL certificates, server and clients may trust each other. On the secured connection I can exchange username and password and thereby authenticate the user.
However, I have certain circumstances where a user must connect to the server via either of the two clients without his credentials, using only a literal, say a license plate number.
I really want to make sure that in this case I ONLY allow connections from devices I am sure I know, since there are no further checks on the authentication and a license plate number would be a pretty common literal.
How can I ensure that only "devices" which are known to my server can interact with my server?
If you want to authenticate the device, you'll need to find a way for the device to prove what it is, without disclosing its secret.
A system similar to a number plate would be quite easy to spoof, for anyone in a position to see that number. Depending on how much control you have on this device, you might not be able to hide it, even if the connection to your server is secured with SSL/TLS.
A potential way to do this would be to use a cryptographic hardware token (or smart card). Some of these tokens can be configured to hold a certificate and private key, with the ability to use the private key without being able to export that private key. The cryptographic operations (signing and decryption) happen on the token itself.
You can use these to perform client-certificate authentication to your server. In this case, you would know that the client has that token. This could work on the condition that you know the CA issued its certificates only for key pairs held in such tokens: there will be a cost in administering the CA to handle this.
This would at least allow you to tie the authentication to a particular token. Whether you can integrate this with your overall device depends on the kind of device you have.
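As a hedged illustration of what client-certificate authentication looks like from the device side: in this sketch the certificate is loaded from a placeholder PFX file for simplicity, whereas with a hardware token the non-exportable private key would stay on the token and the certificate would typically be picked up from the certificate store instead.

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

class ClientCertificateSketch
{
    static HttpClient CreateClientWithDeviceCertificate()
    {
        // Placeholder: a certificate whose private key identifies this particular device.
        var deviceCert = new X509Certificate2("device.pfx", "pfx-password");

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(deviceCert);

        // During the TLS handshake the server can now require and verify this certificate,
        // rejecting connections from devices it has not issued a certificate to.
        return new HttpClient(handler);
    }

    static void Main()
    {
        using HttpClient client = CreateClientWithDeviceCertificate();
        // client.GetAsync("https://example.com/") would present the device certificate.
    }
}
```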
Please check if TLS Pre-Shared Keys (RFC 4279) can be used for your scenario.
My company is going to be storing sensitive data for our customers, and will be encrypting data using one of the managed .NET encryption algorithm classes. Most of the work is done, but we haven't figured out how/where to store the key. I've done some light searching and reading, and it seems like a hardware solution might be the most secure. Does anyone have any recommendations on a key storage solution or method?
Thanks for your replies, everyone.
spoulson, the issue is actually both of the "scopes" that you mentioned. I suppose I should have been clearer.
The data itself, as well as the logic that encrypts it and decrypts it is abstracted away into an ASP.NET profile provider. This profile provider allows both encrypted profile properties as well as plain text ones. The encrypted property values are stored in exactly the same way the plain text ones are - with the obvious exception that they've been encrypted.
That said, the key will need to be retrievable for one of three reasons:
The authorized web application, running on an authorized server, needs to encrypt data.
Same as #1, but for decrypting the data.
Authorized members of our business team need to view the encrypted data.
The way I'm imagining it is that nobody would ever actually know the key - there would be a piece of software controlling the actual encrypting and decrypting of data. That said, the key still needs to come from somewhere.
Full disclosure - if you couldn't already tell, I've never done anything like this before, so if I'm completely off base in my perception of how this should work, by all means, let me know.
There are only two real solutions for (the technical aspect of) this problem.
Assuming it's only the application itself that needs access to the key...
Hardware Security Module (HSM) - usually pretty expensive, and not simple to implement. Can be a dedicated appliance (e.g. nCipher) or a specific token (e.g. Aladdin eToken). And then you still have to define how to handle that hardware...
DPAPI (Windows Data Protection API). There are classes for this in System.Security.Cryptography (ProtectedMemory, ProtectedData, etc). This hands off key management to the OS - and it handles it well. Used in user scope (DataProtectionScope.CurrentUser), DPAPI will lock decryption of the key to the single user that encrypted it.
(Without getting too detailed, the user's password is part of the encryption/decryption scheme - and no, changing the password does not foul it up.)
ADDED: Best to use DPAPI for protecting your master key, and not encrypting your application's data directly. And don't forget to set strong ACLs on your encrypted key...
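A hedged sketch of that master-key approach (Windows only; on modern .NET the ProtectedData class comes from the System.Security.Cryptography.ProtectedData package, and the extra entropy value here is an illustrative, optional app-specific secret):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class DpapiMasterKeySketch
{
    // Optional extra entropy mixed into the DPAPI protection; treat it like a second, app-specific secret.
    static readonly byte[] Entropy = { 9, 8, 7, 6, 5 };

    static void Main()
    {
        // Generate the master key that will actually encrypt your application data.
        byte[] masterKey = RandomNumberGenerator.GetBytes(32);

        // Protect it with DPAPI under the current user's credentials and persist only the protected blob.
        byte[] protectedBlob = ProtectedData.Protect(masterKey, Entropy, DataProtectionScope.CurrentUser);
        File.WriteAllBytes("masterkey.bin", protectedBlob);

        // Later, only code running as the same user can unprotect it.
        byte[] recovered = ProtectedData.Unprotect(File.ReadAllBytes("masterkey.bin"), Entropy, DataProtectionScope.CurrentUser);

        Console.WriteLine(Convert.ToBase64String(recovered) == Convert.ToBase64String(masterKey)); // True
    }
}
```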
In response to #3 of this answer from the OP
One way for authorized members to be able to view the encrypted data, but without them actually knowing the key would be to use key escrow (rsa labs) (wikipedia)
In summary, the key is broken up into separate parts and given to 'trustees'. Due to the nature of private keys, each segment is useless by itself. Yet if data needs to be decrypted, the 'trustees' can assemble their segments into the whole key.
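The simplest form of this idea can be sketched with XOR shares, where every trustee's share is required to reconstruct the key. To be clear, this is not Shamir's threshold scheme or a real escrow product, just the minimal all-or-nothing variant:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

class KeySplittingSketch
{
    // Split a key into n shares; all n are needed to rebuild it, and any n-1 reveal nothing.
    static byte[][] Split(byte[] key, int shares)
    {
        var parts = new byte[shares][];
        for (int i = 0; i < shares - 1; i++)
            parts[i] = RandomNumberGenerator.GetBytes(key.Length); // random shares

        // The last share is the XOR of the key with all the random shares.
        byte[] last = (byte[])key.Clone();
        for (int i = 0; i < shares - 1; i++)
            for (int j = 0; j < key.Length; j++)
                last[j] ^= parts[i][j];
        parts[shares - 1] = last;
        return parts;
    }

    static byte[] Combine(byte[][] parts)
    {
        byte[] key = new byte[parts[0].Length];
        foreach (var part in parts)
            for (int j = 0; j < key.Length; j++)
                key[j] ^= part[j];
        return key;
    }

    static void Main()
    {
        byte[] key = RandomNumberGenerator.GetBytes(32);
        byte[][] trusteeShares = Split(key, 3);
        Console.WriteLine(Combine(trusteeShares).SequenceEqual(key)); // True
    }
}
```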
We have the same problem, and have been through the same process.
We need to have a process start up on one computer (client) which then logs in to a second computer (database server).
We currently believe that the best practice would be:
Operator manually starts the process on client PC.
Client PC prompts operator for his personal login credentials.
Operator enters his credentials.
Client PC uses these to login to the database server.
Client PC requests its own login credentials from database server.
Database server checks that operator's login credentials are authorised to get the client process' credentials and returns them to the client PC.
Client PC logs out of the database server.
Client PC logs back into database server using its own credentials.
Effectively, the operator's login password is the key, but it isn't stored anywhere.
Microsoft Rights Management Server (RMS) has a similar problem. It just solves it by encrypting its configuration with a master password. ...A password on a password, if you will.
Your best bet is to physically secure the hardware the key is on. Also, don't ever write it to disk - find some way to prevent that section of memory from being paged to disk. When encrypting/decrypting, the key needs to be loaded into memory, and with unsecured hardware there's always this avenue of attack.
There are, like you said, hardware encryption devices but they don't scale - all encryption/decryption passes through the chip.
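One partial mitigation on Windows/.NET Framework is ProtectedMemory, which keeps the in-memory copy of the key encrypted except during the brief moments you actually use it (it does not, by itself, stop the OS from paging memory); a hedged sketch:

```csharp
using System;
using System.Security.Cryptography;

class InMemoryKeySketch
{
    static void Main()
    {
        // Buffer length must be a multiple of 16 bytes for ProtectedMemory.
        byte[] key = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(key);
        }

        // Encrypt the key in place; only this process can unprotect it again.
        ProtectedMemory.Protect(key, MemoryProtectionScope.SameProcess);

        // ... later, just before an encrypt/decrypt operation:
        ProtectedMemory.Unprotect(key, MemoryProtectionScope.SameProcess);
        // use the key, then protect it again as soon as possible
        ProtectedMemory.Protect(key, MemoryProtectionScope.SameProcess);

        Array.Clear(key, 0, key.Length); // wipe when finished
    }
}
```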
I think I misunderstood your question. What you're asking for is not in scope of how the application handles its key storage, but rather how your company will store it.
In that case, you have two obvious choices:
Physical: Write to USB drive, burn to CD, etc. Store in physically secure location. But you run into the recursive problem: where do you store the key to the vault? Typically, you delegate 2 or more people (or a team) to hold the keys.
Software: Cyber-Ark Private Ark is what my company uses to store its secret digital information. We store all our admin passwords, license keys, private keys, etc. It works by running a Windows "vault" server that is not joined to a domain, firewalls all ports except its own, and stores all its data encrypted on disk. Users access through a web interface that first authenticates the user, then securely communicates with the vault server via explorer-like interface. All changes and versions are logged. But, this also has the same recursive problem... a master admin access CD. This is stored in our physical vault with limited access.
Use a hard-coded key to encrypt the generated key before writing it out. Then you can write it anywhere.
Yes you can find the hard-coded key, but so long as you're assuming it's OK to store a symmetric key anywhere, it's not less secure.
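A hedged sketch of that wrapping step (the hard-coded key is obviously a placeholder, EncryptCbc assumes .NET 6 or later, and as noted above anyone who extracts the hard-coded key can unwrap the real one):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class HardCodedKeyWrapSketch
{
    // Placeholder "hard-coded" 256-bit key baked into the application.
    static readonly byte[] WrappingKey = Convert.FromBase64String("q83vEjRWeJCrze8SNFZ4kKvN7xI0VniQq83vEjRWeJA=");

    static void Main()
    {
        byte[] dataKey = RandomNumberGenerator.GetBytes(32); // the key that actually encrypts your data

        using var aes = Aes.Create();
        aes.Key = WrappingKey;
        aes.GenerateIV();

        // Wrap the generated key and persist IV + ciphertext; it can now be written anywhere.
        byte[] wrapped = aes.EncryptCbc(dataKey, aes.IV);
        File.WriteAllBytes("datakey.wrapped", Combine(aes.IV, wrapped));
    }

    static byte[] Combine(byte[] a, byte[] b)
    {
        byte[] result = new byte[a.Length + b.Length];
        Buffer.BlockCopy(a, 0, result, 0, a.Length);
        Buffer.BlockCopy(b, 0, result, a.Length, b.Length);
        return result;
    }
}
```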
Depending on your application you could use the Diffie-Hellman method for two parties to securely agree on a symmetric key.
After an initial, secure exchange, the key is agreed upon and the rest of the session (or a new session) can use this new symmetric key.
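Assuming the two parties have already agreed on a 256-bit secret via Diffie-Hellman (as sketched earlier on this page), a hedged example of using it for the rest of the session with AES-GCM might look like this (the tag-size constructor argument assumes .NET 8; earlier versions use the single-argument constructor):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SessionEncryptionSketch
{
    // Stand-in for the 32-byte secret both parties derived from the Diffie-Hellman exchange.
    static readonly byte[] SessionKey = RandomNumberGenerator.GetBytes(32);

    static (byte[] Nonce, byte[] Ciphertext, byte[] Tag) EncryptMessage(string plaintext)
    {
        byte[] nonce = RandomNumberGenerator.GetBytes(12); // must be unique per message
        byte[] data = Encoding.UTF8.GetBytes(plaintext);
        byte[] ciphertext = new byte[data.Length];
        byte[] tag = new byte[16];

        using var gcm = new AesGcm(SessionKey, 16);
        gcm.Encrypt(nonce, data, ciphertext, tag);
        return (nonce, ciphertext, tag);
    }

    static string DecryptMessage(byte[] nonce, byte[] ciphertext, byte[] tag)
    {
        byte[] plaintext = new byte[ciphertext.Length];
        using var gcm = new AesGcm(SessionKey, 16);
        gcm.Decrypt(nonce, ciphertext, tag, plaintext); // throws if the message was tampered with
        return Encoding.UTF8.GetString(plaintext);
    }

    static void Main()
    {
        var (nonce, ct, tag) = EncryptMessage("hello over the agreed session key");
        Console.WriteLine(DecryptMessage(nonce, ct, tag));
    }
}
```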
You can encrypt the symmetric key using another symmetric key that is derived from a password using something like PBKDF2.
Have the user present a password; generate a new key that is used to encrypt the data; derive another key from the password; then encrypt and store the data-encryption key with the derived key.
It isn't as secure as using a hardware token, but it might still be good enough and is pretty easy to use.
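Combining the PBKDF2 and key-wrapping ideas shown earlier, a compact, hedged sketch of this password-based approach (.NET 6+ APIs, illustrative parameter choices):

```csharp
using System;
using System.Security.Cryptography;

class PasswordWrappedKeySketch
{
    static void Main()
    {
        // 1. Generate the key that actually encrypts the data.
        byte[] dataKey = RandomNumberGenerator.GetBytes(32);

        // 2. Derive a key-encryption key from the user's password with PBKDF2.
        byte[] salt = RandomNumberGenerator.GetBytes(16);
        byte[] kek = Rfc2898DeriveBytes.Pbkdf2("user password", salt, 100_000, HashAlgorithmName.SHA256, 32);

        // 3. Encrypt (wrap) the data key with the derived key; store salt + IV + wrapped key.
        using var aes = Aes.Create();
        aes.Key = kek;
        aes.GenerateIV();
        byte[] wrappedDataKey = aes.EncryptCbc(dataKey, aes.IV);

        // On the next run, re-derive the KEK from the password and the stored salt,
        // then unwrap the data-encryption key.
        byte[] recovered = aes.DecryptCbc(wrappedDataKey, aes.IV);
        Console.WriteLine(Convert.ToBase64String(recovered) == Convert.ToBase64String(dataKey)); // True
    }
}
```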