I've used the Wikipedia article among other sources to understand how this protocol works, and have two questions:
1) After Alice makes her initial query to the server, she receives as part of the response:
(the new session key + Alice's ID) - encrypted with Bob's secure key.
Since the secure keys in this particular protocol are symmetric and Alice knows the full contents of this "packet", won't she now be able to figure out Bob's secure key?
2) The wiki article explains that the Achilles' heel of this protocol is a replay attack: "If an attacker uses an older, compromised value" for the session key, they can repack it as:
(the session key + Alice's ID) - encrypted with Bob's secure key
and use that to initiate a session with Bob.
Maybe I'm just missing something, but just because the attacker has the session key does not mean that they have Bob's secure key. How are they then to generate the above packet?
What you are referring to is a known-plaintext attack: http://en.wikipedia.org/wiki/Known-plaintext_attack. In Kerberos 5, the encryption algorithm used is AES, and it is generally considered to be resistant to known-plaintext attacks. See https://crypto.stackexchange.com/questions/1512/why-is-aes-resistant-to-known-plaintext-attacks for a discussion of how/why AES is considered resistant.
The attacker (M) does not need to repack anything using K_B; it only needs to record all previous communication from A to B. Later, it re-sends {K_AB, A}K_B to B. Without a nonce, B does not know that this session key is not fresh, so it starts a new session with A. Since A is not expecting any packets from B, all the packets coming from B in this session are dropped. But M keeps replying to B, making B think that A is replying.
Note that the rest of the communication in this session does not need K_B, only K_AB. The new nonce B sends is encrypted with K_AB (which M already knows and can use to decrypt it). M can decrypt older messages from A to B, update the nonce values (according to the new nonce for this session), and repack them using K_AB. This way M can make B redo the sequence of actions it performed in the previous session.
How is that helpful? Say B is a bank and A sent $10 to M in a previous session using B. M can later replay the previous communication over and over to repeat that transfer from A to M.
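To make the point concrete, here is a minimal Python sketch of the replay, assuming a toy model of B; the encrypt/decrypt helpers and the key names are illustrative stand-ins for real symmetric encryption, not part of the protocol:

```python
import json, os, hmac, hashlib

def encrypt(key, obj):
    # Toy stand-in for symmetric encryption: only the holder of `key` can open the blob.
    blob = json.dumps(obj).encode()
    tag = hmac.new(key, blob, hashlib.sha256).digest()
    return tag + blob

def decrypt(key, packet):
    tag, blob = packet[:32], packet[32:]
    if not hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest()):
        raise ValueError("wrong key")
    return json.loads(blob)

K_B = os.urandom(32)          # Bob's long-term key (shared with the server)
K_AB = os.urandom(32).hex()   # an old session key that M has already compromised

ticket = encrypt(K_B, {"session_key": K_AB, "client": "A"})   # recorded by M earlier

def bob_accepts(packet):
    # Bob has no nonce/timestamp check, so he cannot tell fresh from replayed.
    msg = decrypt(K_B, packet)
    return msg["session_key"]          # Bob now talks using a key that M knows

assert bob_accepts(ticket) == K_AB     # M simply replays the old ticket and "wins"
```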
I'd take the following steps to send/receive data between client and server, but I'm not sure if all the steps are secure enough and impossible to intercept. Can you please let me know how to patch up the security holes, if any?
Please note that:
This is all for Symmetric cryptography not the Public/Private Key method. So 'salt' and 'entropy' are the secrets between client and server.
For various reasons, I cannot/don't use the SSL or TLS protocols. So there is no Certificate Authority etc.
Only standard cryptography functions and methods must be used (no inventions).
The data are not sent over a secure (HTTPS) connection.
I cannot use sessions here because they are two different applications.
First time - Sign up (user enters new username and new password)
Client side
1.1 Create a CSPRNG salt for the user
1.2 Save salt on user's machine
1.3 Create an entropy value in memory (base64 of a temporal value, e.g. [hour of the day]+[day of the year])
1.4 Post Password (base64 of the plaintext), salt, and entropy to the server
Server side
2.1 For the received message from the client:
2.1.1 Check if entropy matches (e.g. with base64 of [hour of the day]+[day of the year]).
2.1.2 Compute hash (SHA1) of salt and Password (base64 of the plaintext) - because hashing must always be done on the server.
2.2 Save salt and the computed hash to the database
Next time - Log in (user enters his username and his password)
Client side
1.1 Read salt from user's machine
1.2 Compute hash (SHA1) of salt and the entered password (base64 of the plaintext)
1.3 Post password (base64 of the plaintext), salt, and the computed hash to the server
Server side
2.1 Retrieve salt and 'stored hash' from the database
2.2 From the received message from the client:
2.2.1 Split the message into 3 parts: password (base64 of the plaintext); salt; 'received hash'
2.2.2 Compare 'received hash' with 'stored hash'; if they match, the user is genuine, not a hacker
Sending a tamper-proof queryString from user to server
Client side
1.1 Read salt from user's machine
1.2 Create an entropy value in memory (base64 of a temporal value, e.g. [hour of the day]+[day of the year])
1.3 Compute hash (SHA1) of the queryString (base64 of the plaintext), salt, and entropy
1.4 Post queryString (base64 of the plaintext), salt, entropy, and the computed hash to the server
Server side
2.1 Retrieve salt from the database
2.2 From the received message from the client:
2.2.1 Split the message into 4 parts: queryString (base64 of the plaintext); salt; entropy; 'received hash'
2.2.2 Check if entropy matches (e.g. with base64 of [hour of the day]+[day of the year]).
2.2.3 Compute hash (SHA1) of queryString (base64 of the plaintext), salt, and entropy.
2.2.4 Compare the computed hash with the 4th part of the split message ('received hash'); if they match, the queryString is genuine
Sending the answer back to the user from the server
Server side
1.1 Compute the answer using database queries
1.2 Retrieve salt from the database
1.3 Create an entropy value in memory (base64 of a temporal value, e.g. [hour of the day]+[day of the year])
1.4 Compute hash (SHA1) of the answer (base64 of the plaintext), salt, and entropy
1.5 Post answer (base64 of the plaintext), salt, entropy, and the computed hash to the client
Client side
2.1 Read salt from user's machine
2.2 Create an entropy value in memory (base64 of a temporal value, e.g. [hour of the day]+[day of the year])
2.3 Split the received message into 4 parts: answer (base64 of the plaintext); salt; entropy; 'received hash'
2.4 Check if entropy matches (e.g. with base64 of [hour of the day]+[day of the year]).
2.5 Compute hash (SHA1) of answer (base64 of the plaintext), salt, and entropy.
2.6 Compare the computed hash with the 4th part of the split message ('received hash'); if they match, the answer is genuine
The following areas are the weaknesses, I think. Can you please advise how these can be fixed, and also point out any other possible holes?
A) First time - Sign up: A hacker can post rubbish to fill up the database if he figures out the entropy value from step 1.3.
B) Next time - Log in: I cannot think of a way to add entropy to the hashed password in step 1.2, because then I could not compare it with the one in the server database in step 2.2.2.
Thanks
if you care about security at all, STOP now ...
what you need to do:
1) forget about designing your own crypto protocol ... relying on well known crypto ALSO means that you DO NOT design that kind of thing
2) think in layers ... you have the need to keep things secret while transporting them from A to B ... that means you have a transport layer ... if you want that secured, there is a name for that...
Transport Layer Security -> https://en.wikipedia.org/wiki/Transport_Layer_Security
3) when you make assumptions here like "i have 2 applications so i can not have sessions", please explain WHY you think it is that way... when you think of things like single sign-on, you can have a lot of applications sharing one authentication method and even session data across a bunch of different platforms ... maybe it's just that you don't know that you actually can have sessions...
4) read up on the terms you use ... you misunderstood entropy ... there is no way of "checking if entropy matches" ... entropy in crypto-related cases means randomness, in the sense of unpredictable input to a function ... if you have something like a date and the time, and even if you hash that, it might look random ... but it is very predictable by someone with a clock... if the communication can be related to the creation time of the value, then your value does not contain a large amount of entropy if it is based on the clock... again, do not design your own stuff here, and go for entropy sources that provide reliable, cryptographically secure randomness (a CSPRNG ... not just a PRNG)
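as an illustration, a short Python sketch of the difference, assuming the [hour of the day]+[day of the year] idea from the question stands in for the time-based value:

```python
import secrets, datetime

now = datetime.datetime.now()
# Predictable "entropy" in the style of the question: anyone with a clock can reproduce it.
weak = f"{now.hour}+{now.timetuple().tm_yday}"

# Cryptographically secure randomness from the operating system's CSPRNG:
strong = secrets.token_bytes(16)   # 128 unpredictable bits
```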
5) ... you misunderstood/misused salt...
a salt is not something to be generated on the client machine
a salt is not something that needs to be kept on the client machine
6) password hashing
again ... DO NOT come up with your own stuff here ...
create a sufficiently long random salt (at least hash length) for every password. use a slow hash function like PBKDF2 with a high iteration count parameter. the reason is that it becomes slow to test for passwords... your server has to calculate this once for every login attempt ... an attacker has to calculate this for every password-testing attempt ... you can afford the test to take like 200ms ... for an attacker that means a lot more hardware will be needed to crack your password storage...
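a minimal sketch of that advice in Python, using the standard library's PBKDF2; the iteration count is an illustrative value that you would tune until one check costs roughly the 200ms mentioned above:

```python
import hashlib, hmac, os

def hash_password(password, iterations=600_000):
    salt = os.urandom(32)                        # at least hash length, per the advice above
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify_password(password, salt, iterations, expected):
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)     # constant-time comparison

# Store (salt, iterations, dk) server-side; recompute on every login attempt.
salt, iters, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, iters, stored)
```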
upd:
you want it, you get it ...
proof-of-concept attack on your scheme by a man-in-the-middle:
client: alice
server: bob
attacker/eavesdropper: eve
alice uses your sign up service and creates an account
1.1 alice creates a CSPRNG salt and stores that in a secure manner
1.2 alice gathers an arbitrary amount of entropy and encodes it with base64
1.3 alice sends Password(base64 of the plaintext), and salt, and entropy to bob
------------intercepted-------------
eve intercepts the communication between alice and bob and gains knowledge of...
2.1 ...the base 64 encoded password -> base64 decode -> the plaintext password
2.2 ...the salt
2.3 ...the entropy value
2.4 eve forwards the intercepted communication without changes to bob
--- protocol broken ---
now, bob is by no means able to distinguish between alice and eve in all further communication
upd2:
a look at your transferred (cleartext) messages:
Login:
Post password(base64 of the plaintext), and salt, and the computed hash to the server
Sending queryString from user to server:
Post querystring(base64 of the plaintext), and salt, and entropy, and the computed hash to the server
answer:
Post answer(base64 of the plaintext), and salt, and entropy, and the computed hash to the client
now for any of those messages, let's look at what information someone with malicious intent would learn from those:
all the information that is cleartext, which means, all of it ...
for the login, we gain the cleartext password, which means from now on an attacker can identify as a valid user
for the querystring and answer thing, you want to provide a way to see whether the request/answer has been tampered with.
so if an attacker now intercepts your communication, how can he change the querystring without being noticed?
the attacker splits your message, and changes whatever he/she wants
then he/she computes forged_hash as sha1(tampered_querystring, salt_from_the_original_message, entropy_from_the_original_message) and sends base64(tampered_querystring), salt_from_the_original_message, entropy_from_the_original_message, forged_hash to the server ...
for the answer it's the same deal:
the attacker intercepts the original answer, changes whatever he/she wants in it, and recomputes the hash based on known information (the changes, the original salt, and the entropy)
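a short Python sketch of that forgery, with made-up values: because the hash is unkeyed and every one of its inputs travels in the clear, the attacker can recompute it for any tampered query:

```python
import hashlib

def sha1_hex(*parts):
    return hashlib.sha1(b"".join(parts)).hexdigest()

# What eve captured from the original request (values made up for the demo):
salt = b"captured-salt"
entropy = b"captured-entropy"
original_query = b"transfer?to=alice&amount=10"
original_hash = sha1_hex(original_query, salt, entropy)   # the scheme's "integrity check"

# Eve tampers with the query and recomputes the hash herself, since nothing in it is secret:
tampered_query = b"transfer?to=eve&amount=10000"
forged_hash = sha1_hex(tampered_query, salt, entropy)

# The server's verification step from the scheme, applied to eve's forged message:
def server_check(query, salt, entropy, received_hash):
    return sha1_hex(query, salt, entropy) == received_hash

print(server_check(tampered_query, salt, entropy, forged_hash))   # True: forgery accepted
```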
The solution is to use HTTPS with TLS 1.2 and pin the certificate; there are certificate solutions that are free.
Using a hash (SHA1 in this case) to protect a password has not been good security practice for some time. For my reference, see DRAFT NIST Special Publication 800-63B Digital Authentication Guideline.
Passwords must be protected with an iterated HMAC, not a single hash. For more information on passwords see Toward Better Password Requirements by Jim Fenton, slide 23 in particular. This seems to be at odds with the Pluralsight training; best practices have changed over time.
I'm developing my own protocol for secure message exchange.
Each message contains the following fields: HMAC, time, salt, and the message itself. The HMAC is computed over all other fields using a known secret key.
The protocol should protect against replay attacks. Over large time intervals, the "time" field protects against replays (both sides should have synchronized clocks). But for protection against replay attacks over short time intervals (clocks are not too accurate), I'm planning to replace the "salt" field with a counter that increases every time a new message is sent. The receiving party will throw away messages with a counter value less than or equal to the previous message's counter.
What am I doing wrong?
The initial counter value can be different (I can use the party identifier as the initial value), but it will be known to the attacker (the party identifier is transmitted in unencrypted form).
(https://security.stackexchange.com/questions/8246/what-is-a-good-enough-salt-for-a-saltedhash)
But can't an attacker precompute rainbow tables for counter+1, counter+2, counter+3... if I don't use a really random salt?
I'm not certain of your design and requirements, so some of this may be off base; hopefully some of it is also useful.
First, I'm having a little trouble understanding the attack; I'm probably just missing something. Alice sends a message to Bob that includes a counter, a payload, and an HMAC of (counter||payload). Eve intercepts and replays the message. Bob has seen that one, so he throws it away. Eve tries to compute a new message with counter+1, but she is unable to compute the HMAC for this message (since the counter is different), so Bob throws it away. As long as there is a secret available, Eve should never be able to forge a message, and replaying a message does nothing.
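Here is a small Python sketch of that reasoning, assuming a shared secret key and illustrative message framing: the counter blocks replays, and the HMAC blocks forging a message with a bumped counter:

```python
import hmac, hashlib, os, struct

KEY = os.urandom(32)   # shared secret; illustrative stand-in for the "known secret key"

def make_message(counter, payload):
    body = struct.pack(">Q", counter) + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

class Receiver:
    def __init__(self):
        self.last_counter = -1
    def accept(self, msg):
        body, tag = msg[:-32], msg[-32:]
        if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
            return None                      # forged: HMAC does not verify
        counter = struct.unpack(">Q", body[:8])[0]
        if counter <= self.last_counter:
            return None                      # replayed or stale
        self.last_counter = counter
        return body[8:]

bob = Receiver()
m = make_message(1, b"hello")
assert bob.accept(m) == b"hello"
assert bob.accept(m) is None                 # straight replay is rejected
# Eve cannot produce a valid tag for counter=2 without KEY, so forgery fails too.
```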
So what is the "known secret key?" Is this key known to the attacker? (And if it is, then he can trivially forge messages, so the HMAC isn't helpful.) Since you note that you have DH, are you using that to negotiate a key?
Assuming I'm missing the attack, thinking through the rest of your question: If you have a shared secret, why not use that to encrypt the message, or at least the time+counter? By encrypting the time and counter together, a rainbow table should be impractical.
If there is some shared secret, but you don't have the processor available to encrypt, you could still do something like MD5(secret+counter) to prevent an attacker guessing ahead (you must already have MD5 available for your HMAC-MD5).
I have attacked this problem before with no shared secret and no DH. In that case, the embedded device needed a per-device public/private keypair (ideally installed during manufacturing, but it can be computed during first power-on and stored in nonvolatile memory; randomness is hard, and one option is to let the server provide a random number service; if you have any piece of unique non-public information on the chip, like a serial number, that can be used to seed your key, too. Worst case, you can use your MAC address plus the time plus as much entropy as you can scrounge from the network.)
With a public/private key in place, rather than using HMAC, the device just signs its messages, sending its public key to the server in its first message. The public key becomes the identifier of the device. The nice thing about this approach is that there is no negotiation phase. The device can just start talking, and if the server has never heard of this public key, it creates a new record.
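A sketch of that approach in Python using the pyca/cryptography package and Ed25519 (my choice of library and algorithm, not something prescribed by the answer):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# On the device: generate once (at manufacturing or first boot) and store the key.
device_key = Ed25519PrivateKey.generate()
device_id = device_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)  # the public key is the identity

def device_send(payload):
    return device_id, payload, device_key.sign(payload)

# On the server: the first message from an unknown public key creates a new record.
known_devices = {}

def server_receive(device_id, payload, signature):
    pub = known_devices.setdefault(device_id, Ed25519PublicKey.from_public_bytes(device_id))
    try:
        pub.verify(signature, payload)
        return True          # message genuinely came from the holder of device_id
    except InvalidSignature:
        return False

assert server_receive(*device_send(b"temperature=21.5"))
```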
There's a small denial-of-service problem here, because attackers could fill your database with junk. The best solution to that is to generate the keys during manufacturing, and immediately insert the public keys into your database. That's impractical for some contract manufacturers. So you can resort to including a shared secret that the device can use to authenticate itself to the server the first time. That's weak, but probably sufficient for the vast majority of cases.
(I'm doing this on my own network, just for science.) I was using airodump-ng to capture the handshake. After that, I was able to open the capture file in Wireshark and find the part with the 4 handshake messages of the EAPOL protocol. I know about the millions of years needed for brute force, and I know that I can use aircrack-ng for a dictionary attack.
I would like to extract just the password from those 4 messages. I assume it is transferred as some sort of salted hash value. What I don't know is in which message the password resides (the wireless password used for the connection) and how exactly it is sent. For example, SHA1 of "password"+"ssid"... I would like to be able to compute the exact same hash in my program (of course, that would only be possible for my network because I know my password). I'm also going to need this for a demonstration at university.
Thanks!
The 802.11i "4 way handshake" that you have captured is where both parties agree on shared Group (read: broadcast) and Pairwise (read: unicast) transient keys. I.e. the keys generated here only exist for the duration of the 802.11 Association, or until the next rekey is issued from the AP.
Before you can even begin to decrypt the 4 way handshake messages you need the pairwise master key (PMK), which is what gets derived from the user-entered passphrase using a key derivation function (PBKDF2), or is the result of a WPS exchange which is based on Diffie-Hellman.
The point here is the ASCII passphrase you are seeking to extract is not exchanged in any of the 4 messages, as it has already been shared amongst all parties involved in the transaction (client and AP in this case) and used to generate a 256 bit PMK. And unless you have this PMK, the contents of the 4 way handshake messages are as good as random data.
The best that you can do, if you already know the PMK, is extract the GTK and PTK from M2 and M3 of the 4-way handshake, and from those pull the temporal key, which can be XORed with the payload in subsequent frames to get the plaintext data - which Wireshark will also do for you if you enter the PMK or passphrase into the IEEE 802.11 settings and enable decryption.
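So the closest thing to a hash you can recompute yourself is the PMK derivation itself. A short Python sketch, with a placeholder SSID and passphrase:

```python
import hashlib

# WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
# Substitute your own SSID and passphrase; the values below are placeholders.
ssid = b"MyHomeNetwork"
passphrase = b"correct horse battery staple"

pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)
print(pmk.hex())   # the 256-bit PMK, the same value Wireshark derives from the passphrase
```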
I am making a protocol that uses packets (i.e., not a stream) encrypted with AES. I've decided on using GCM (based on CTR) because it provides integrated authentication and is part of the NSA's Suite B. The AES keys are negotiated using ECDH, where the public keys are signed by trusted contacts as part of a web of trust using something like ECDSA. I believe that I need a 128-bit nonce / initialization vector for GCM because even though I'm using a 256-bit key for AES, it's always a 128-bit block cipher (right?) I'll be using a 96-bit IV after reading the BC code.
I'm definitely not implementing my own algorithms (just the protocol -- my crypto provider is BouncyCastle), but I still need to know how to use this nonce without shooting myself in the foot. The AES key used in between two people with the same DH keys will remain constant, so I know that the same nonce should not be used for more than one packet.
Could I simply prepend a 96-bit pseudo-random number to the packet and have the recipient use this as a nonce? This is peer-to-peer software and packets can be sent by either side at any time (e.g., an instant message, file transfer request, etc.), and speed is a big issue, so it would be good not to have to use a secure random number source. The nonce doesn't have to be secret at all, right? Or necessarily as random as a "cryptographically secure" PRNG? Wikipedia says that it should be random, or else it is susceptible to a chosen-plaintext attack -- but there's a "citation needed" next to both claims and I'm not sure if that's true for block ciphers. Could I instead use a counter that counts the number of packets sent (separate from the counter of the number of 128-bit blocks) with a given AES key, starting at 1? Obviously this would make the nonce predictable. Considering that GCM authenticates as well as encrypts, would this compromise its authentication functionality?
GCM is a block cipher counter mode with authentication. Counter mode effectively turns a block cipher into a stream cipher, and therefore many of the rules for stream ciphers still apply. It's important to note that the same Key+IV will always produce the same keystream, and reusing this keystream can lead to an attacker obtaining plaintext with a simple XOR. In a protocol, the same Key+IV can be used for the life of the session, so long as the mode's counter doesn't wrap (integer overflow). For example, a protocol could have two parties with a pre-shared secret key; they could then negotiate a new cryptographic nonce to be used as the IV for each session (remember, nonce means use ONLY ONCE).
If you want to authenticate using AES as a block cipher, you should look into CMAC, or perhaps the OMAC1 variant. With CMAC, all of the rules for CBC still apply. In this case you would have to make sure that each packet used a unique IV that is also random. However, it's important to note that reusing an IV doesn't have nearly as dire consequences as reusing a keystream.
I'd suggest against making your own security protocol. There are several things you need to consider, and even a qualified cryptographer can get them wrong. I'd refer you to the TLS protocol (RFC 5246) and the datagram TLS protocol (RFC 4347). Pick a library and use it.
Concerning your question about the IV in GCM mode: I'll tell you how DTLS and TLS do it. They use an explicit nonce, i.e. the 64-bit message sequence number that is included in every packet, combined with a secret part that is not transmitted (the upper 32 bits) and is derived from the initial key exchange (check RFC 5288 for more information).
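A sketch of that layout in Python using pyca/cryptography's AESGCM (my choice of library; the question uses BouncyCastle, but the nonce construction is the same idea): a 32-bit fixed part from the key exchange plus a 64-bit per-record sequence number.

```python
import os, struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
fixed_part = os.urandom(4)   # in TLS this 32-bit "salt" comes from the key exchange (RFC 5288)

def seal(seq_no, plaintext, aad=b""):
    nonce = fixed_part + struct.pack(">Q", seq_no)   # 96-bit nonce, never reused for this key
    return aesgcm.encrypt(nonce, plaintext, aad)

def open_record(seq_no, ciphertext, aad=b""):
    nonce = fixed_part + struct.pack(">Q", seq_no)   # receiver rebuilds the nonce from the
    return aesgcm.decrypt(nonce, ciphertext, aad)    # sequence number; raises InvalidTag if forged

ct = seal(1, b"hello")
assert open_record(1, ct) == b"hello"
```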
I'm looking to authenticate that a particular message is coming from a particular place.
Example: A repeatedly sends the same message to B. Let's say this message is "helloworld", which is encrypted to "asdfqwerty".
How can I ensure that a third party C doesn't learn that B always receives this same encrypted string, and C starts sending "asdfqwerty" to B?
How can I ensure that when B decrypts "asdfqwerty" to "helloworld", it is always receiving this "helloworld" from A?
Thanks for any help.
For the former, you want to use a Mode of Operation for your symmetric cipher that uses an Initialization Vector. The IV ensures that every encrypted message is different, even if it contains the same plaintext.
For the latter, you want to sign your message using the private key of A(lice). If B(ob) has the public key of Alice, he can then verify she really created the message.
Finally, beware of replay attacks, where C(harlie) records a valid message from Alice, and later replays it to Bob. To avoid this, add a nonce and/or a timestamp to your encrypted message (yes, you could make the IV play double-duty as a nonce).
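A small Python sketch of those points using an AEAD cipher with a symmetric key (my simplification; the signing suggestion above would use Alice's private key instead of a shared key): a fresh random nonce per message hides repeats, and a timestamp inside the plaintext lets Bob reject replays.

```python
import os, time, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Shared symmetric key between A and B (a simplification; a signature from A's
# private key would give stronger sender authentication, as noted above).
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def alice_send(msg):
    nonce = os.urandom(12)                                  # fresh IV/nonce for every message
    body = json.dumps({"msg": msg, "ts": time.time()}).encode()
    return nonce + aesgcm.encrypt(nonce, body, None)        # the GCM tag authenticates the contents

c1, c2 = alice_send("helloworld"), alice_send("helloworld")
assert c1 != c2   # same plaintext, different ciphertexts: C learns nothing from repeats

def bob_receive(packet, max_age=30.0):
    nonce, ct = packet[:12], packet[12:]
    body = json.loads(aesgcm.decrypt(nonce, ct, None))      # raises InvalidTag if forged
    if time.time() - body["ts"] > max_age:
        return None                                         # too old: treat as a replay
    return body["msg"]

assert bob_receive(c1) == "helloworld"
```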
Add a random value to the data being encrypted, and whenever it's decrypted, strip it off to recover the original unencrypted data.
You need a decent random number generator. I'm sure Google will help you with that.
C noticing that B receives the same encrypted message twice is an issue called traffic analysis, and it has historically been a heavy concern (but this was in times which predated public key encryption).
Any decent public-key encryption system includes some random padding. For instance, for RSA as described in PKCS#1, the encrypted message (of length at most 117 bytes for a 1024-bit RSA key) gets a header with at least eight random (non-zero) bytes, plus a few extra bytes which allow the receiver to unambiguously locate the padding and see where the "real" data begins. The random bytes are generated anew every time; hence, if A sends the same message to B twice, the encrypted messages will be different, but B will recover the original message both times.
Random padding is required for public key encryption precisely because the public key is public: if encryption was deterministic, then an attacker could "try" potential messages and look for a match (this is exhaustive search on possible messages).
Public key encryption algorithms often have heavy limitations on data size or performance (e.g. with RSA, you have a strict maximum message length, depending on the key size). Thus, it is customary to use a hybrid system: the public key encryption is used to encrypt a symmetric key K (i.e. a bunch of random bytes), and K is used to symmetrically encrypt the data (symmetric encryption is fast and does not have constraints on input message size). In a hybrid system, you generate a new K for every message, so this also gives you the randomness you need to avoid the issue of encrypting the same message several times with a given public key: at the public encryption level, you are actually never encrypting the same message (the same key K) twice, even if the data which is symmetrically encrypted with K is the same as in a previous message. This would protect you from traffic analysis even if the public key encryption itself did not include random padding.
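For concreteness, a minimal hybrid-encryption sketch in Python with pyca/cryptography (my choice of primitives; as noted below, in practice you should use an existing format such as OpenPGP rather than rolling your own):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_key.public_key()

def hybrid_encrypt(pub, data):
    k = AESGCM.generate_key(bit_length=256)          # fresh symmetric key K per message
    nonce = os.urandom(12)
    ct = AESGCM(k).encrypt(nonce, data, None)        # fast symmetric bulk encryption
    wrapped = pub.encrypt(k, padding.OAEP(           # K encrypted under the public key
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(), label=None))
    return wrapped, nonce, ct

def hybrid_decrypt(priv, wrapped, nonce, ct):
    k = priv.decrypt(wrapped, padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(), label=None))
    return AESGCM(k).decrypt(nonce, ct, None)

blob = hybrid_encrypt(recipient_pub, b"the same message, sent twice")
assert hybrid_decrypt(recipient_key, *blob) == b"the same message, sent twice"
```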
When symmetrically encrypting data with a key K, the symmetric encryption should use an initialization vector (IV) which is randomly and uniformly generated; this is integrated into the encryption mode (some modes only need a non-repeating IV without requiring random uniform generation, but CBC needs random uniform generation). This is a third level of randomness, protecting you against traffic analysis.
When using asymmetric key agreement (static Diffie-Hellman), things are a bit more complex, because a key agreement results in a key K which you do not choose, and which could stay the same forever (between a given sender and receiver). In that situation, protection against traffic analysis relies on the randomness of the symmetric encryption IV.
Asymmetric encryption protocols, such as OpenPGP, describe how the symmetric encryption, public key encryption and randomness should all be linked together, ironing out the tricky details. You are warmly encouraged not to reinvent your own protocol: it is difficult to design a secure protocol, mostly because one cannot easily test for the presence or absence of any weakness.
You may want to study block cipher modes of operation. However, those modes are designed to work on a data stream that is sent over a reliable channel. If your messages are sent out of order over an unreliable transport (e.g. UDP packets), I don't think you can use them.