I'm considering the following: I have a data stream which I'd like to protect as securely as possible. Does it make any sense to apply, say, AES with some IV, then Blowfish with some IV, and finally AES again with some IV?
The encryption/decryption process will be hidden (even protected against debugging), so it won't be easy to guess which crypto methods and which IVs were used (however, I'm aware that the strength of this crypto chain can't depend on that, since every protection against debugging is breakable given enough time).
I have the computing power for this (the amount of data isn't that big), so the question is only whether it's worth implementing. For example, TripleDES works very similarly, using three keys and an encrypt/decrypt/encrypt scheme, so it probably isn't total nonsense. Another question is how much I decrease the security if I use the same IV for the 1st and 3rd passes, or even the same IV for all three?
I welcome any hints on this subject
I'm not sure about this specific combination, but it's generally a bad idea to mix things like this unless that specific combination has been extensively researched. It's possible the mathematical transformations would actually counteract one another and the end result would be easier to hack. A single pass of either AES or Blowfish should be more than sufficient.
UPDATE: From my comment below…
Using TripleDES as an example: think of how much time and effort from the world's best cryptographers went into creating that combination (note that DoubleDES had a vulnerability), and the best they could do is 112 bits of security despite 168 bits of key material.
UPDATE 2: I have to agree with Diomidis that AES is extremely unlikely to be the weak link in your system. Virtually every other aspect of your system is more likely to be compromised than AES.
UPDATE 3: Depending on what you're doing with the stream, you may want to just use TLS (the successor to SSL). I recommend Practical Cryptography for more details—it does a pretty good job of addressing a lot of the concerns you'll need to address. Among other things, it discusses stream ciphers, which may or may not be more appropriate than AES (since AES is a block cipher and you specifically mentioned that you had a data stream to encrypt).
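If the stream ends up crossing a network, here's a minimal sketch of the "just use TLS" route in PHP; the host name and the plain HTTP request are placeholders, and a real deployment would also decide how certificates and CA bundles are handled:
<?php
// Open a TLS-protected connection instead of hand-rolling a cipher cascade.
$ctx = stream_context_create(['ssl' => ['verify_peer' => true, 'verify_peer_name' => true]]);
$fp = stream_socket_client('tls://example.com:443', $errno, $errstr, 30, STREAM_CLIENT_CONNECT, $ctx);
if ($fp === false) {
    die("TLS connection failed: $errstr ($errno)");
}
fwrite($fp, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
echo stream_get_contents($fp);  // everything on the wire was encrypted and authenticated by TLS
fclose($fp);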
I don't think you have anything to lose by applying one encryption algorithm on top of another that is very different from the first one. I would, however, be wary of running a second round of the same algorithm on top of the first, even if you've run another one in between. The interaction between the two runs may open a vulnerability.
Having said that, I think you're agonizing too much over the encryption part. Most exposures of data do not happen by breaking an industry-standard encryption algorithm like AES, but through other weaknesses in the system. I would suggest spending more time on key management, the handling of unencrypted data, weaknesses in the algorithm's implementation (the possibility of leaking data or keys), and wider system issues, for instance what you are doing with data backups.
A hacker will always attack the weakest element in a chain, so it helps little to make a strong element even stronger. Cracking AES with a 128-bit key is already infeasible, and the same goes for Blowfish. Choosing even bigger key lengths makes it harder still, but a 128-bit key has never been cracked to date (and probably won't be within the next 10 or 20 years). So this encryption is probably not the weakest element; why make it stronger? It is already strong.
Think about what else might be the weakest element. The IV? Actually I wouldn't waste too much time on selecting a great IV or hiding it. The weakest element is usually the encryption key itself. E.g. if you are encrypting data stored to disk, but this data needs to be read by your application, your application needs to know the IV and the encryption key, hence both of them need to be in the binary. This is actually the weakest element. Even if you take 20 encryption methods and chain them over your data, the IVs and encryption keys of all 20 need to be in the binary, and if a hacker can extract them, the fact that you used 20 encryption methods instead of 1 provides zero additional security.
Since I still don't know what the whole process is (who encrypts the data, who decrypts the data, where is the data stored, how is it transported, who needs to know the encryption keys, and so on), it's very hard to say what the weakest element really is, but I doubt that AES or Blowfish encryption itself is your weakest element.
Who are you trying to protect your data from? Your brother, your competitor, your government, or the aliens?
Each of these implies a different level at which you could consider the data to be "as secure as possible" within a meaningful budget (of time/cash).
I wouldn't rely on obscuring the algorithms you're using. This kind of "security by obscurity" doesn't work for long. Decompiling the code is one way of revealing the crypto you're using but usually people don't keep secrets like this for long. That's why we have private/public key crypto in the first place.
Also, don't waste time obfuscating the algorithm. Apply Kerckhoffs's principle, and remember that AES, in and of itself, is used (and acknowledged to be used) in a large number of places where the data needs to be "secure".
Damien: you're right, I should write it more clearly. I'm talking about competitor, it's for commercial use. So there's meaningful budget available but I don't want to implement it without being sure I know why I'm doing it :)
Hank: yes, this is what I'm scared of, too. The most supportive argument for this idea was the TripleDES example mentioned above. On the other hand, when I use one algorithm to encrypt some data and then apply another one, it would be very strange if the 'power' of the whole encryption were less than that of a standalone algorithm. But that doesn't mean it can't be merely equal... This is the reason why I'm asking for hints; this isn't my area of expertise...
Diomidis: this is basically my point of view, but my colleague is trying to convince me it really 'boosts' security. My proposal would be to use a stronger encryption key instead of chaining one algorithm after another without any deep knowledge of what we're doing.
#Miro Kropacek - your colleague is trying to add security through Voodoo. Instead, try to build something simple that you can analyse for flaws - such as just using AES.
I'm guessing it was he (she?) who suggested enhancing the security through protection from debugging too...
You can't actually make things less secure if you encrypt more than once with distinct IVs and keys, but the gain in security may be much less than you anticipate: In the example of 2DES, the meet-in-the-middle attack means it's only twice as hard to break, rather than squaring the difficulty.
In general, though, it's much safer to stick with a single well-known algorithm and increase the key length if you need more security. Leave composing cryptosystems to the experts (and I don't number myself one of them).
Encrypting twice is more secure than encrypting once, even though this may not be clear at first.
Intuitively, it appears that encrypting twice with the same algorithm gives no extra protection because an attacker might find a key which decrypts all the way from the final cyphertext back to the plaintext. ... But this is not the case.
E.g. I start with plaintext A and encrypt it with key K1 to get B. Then I encrypt B with key K2 to get C.
Intuitively, it seems reasonable to assume that there may well be a key, K3, which I could use to encrypt A and get C directly. If this is the case, then an attacker using brute force would eventually stumble upon K3 and be able to decrypt C, with the result that the extra encryption step has not added any security.
However, it is highly unlikely that such a key exists (for any modern encryption scheme). (When I say "highly unlikely" here, I mean what a normal person would express using the word "impossible").
Why?
Consider the keys as functions which provide a mapping from plaintext to cyphertext.
If our keys are all KL bits in length, then there are 2^KL such mappings.
However, if I use 2 keys of KL bits each, this gives me (2^KL)^2 mappings.
Not all of these can be equivalent to a single-stage encryption.
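To put rough numbers on that counting argument (just a sketch): a cipher on B-bit blocks is a permutation of 2^B values, there are (2^B)! such permutations in total, and a single cipher with KL-bit keys can only ever select 2^KL of them. In LaTeX terms:
\[
2^{KL} \;\ll\; 2^{KL}\cdot 2^{KL} = 2^{2KL} \;\ll\; (2^{B})!
\]
For the composition E_{K2}(E_{K1}(x)) to collapse to E_{K3}(x) for some single key K3, two essentially independent permutations drawn from that enormous space would have to coincide, which is why "highly unlikely" behaves like "impossible" in practice.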
Another advantage of encrypting twice, if 2 different algorithms are used, is that if a vulnerability is found in one of the algorithms, the other algorithm still provides some security.
As others have noted, brute forcing the key is typically a last resort. An attacker will often try to break the process at some other point (e.g. using social engineering to discover the passphrase).
Another way of increasing security is to simply use a longer key with one encryption algorithm.
...Feel free to correct my maths!
Yes, it can be beneficial, but it's probably overkill in most situations. Also, as Hank mentions, certain combinations can actually weaken your encryption.
TrueCrypt provides a number of combination encryption algorithms like AES-Twofish-Serpent. Of course, there's a performance penalty when using them.
Changing the algorithm does not improve the quality (unless you expect one of the algorithms to be broken); it's mostly a matter of key/block length plus some advantage in obfuscation. Doing it several times is interesting, since even if the first key has leaked, the resulting data is still indistinguishable from random data. Some block sizes are also processed better on a given platform (e.g. matching the register size).
Attacking quality encryption algorithms only works by brute force, and thus depends on the computing power you can spend. This means that ultimately you can only increase the probable average time somebody needs to decrypt the data.
If the data is of real value, they'd better not attack the data but the key holder...
I agree with what has been said above. Multiple stages of encryption won't buy you much. If you are using a 'secure' algorithm, then it is practically impossible to break. Use AES in a standard mode of operation; see http://csrc.nist.gov/groups/ST/toolkit/index.html for accepted ciphers and modes. Anything recommended on that site should be sufficiently secure when used properly. If you want to be extra secure, use AES-256, although 128 bits should still be sufficient anyway. The greatest risks are not attacks against the algorithm itself, but rather attacks against key management, or side-channel attacks (which may or may not be a risk depending on the application and usage). If your application is vulnerable to key-management attacks or to side-channel attacks, then it really doesn't matter how many levels of encryption you apply. This is where I would focus your efforts.
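As a minimal sketch of that advice, here is a single pass of AES-256-GCM (one of the NIST-approved authenticated modes) via PHP's OpenSSL bindings; the inline key generation is only for illustration, since real key management is the hard part:
<?php
// One well-reviewed pass instead of a cascade: AES-256-GCM via OpenSSL.
$key = random_bytes(32);                     // AES-256 key (illustrative; manage real keys properly)
$iv  = random_bytes(12);                     // 96-bit nonce, must be unique per message under this key
$tag = '';
$plaintextChunk = 'one chunk of the data stream';
$ciphertext = openssl_encrypt($plaintextChunk, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);

// Store or send iv || tag || ciphertext; decryption returns false if anything was tampered with.
$recovered = openssl_decrypt($ciphertext, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);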
I have a large dataset (say 1 GB) comprised of many blocks, some with a size of ~100 bytes, some around a megabyte. Each block is encrypted by AES-GCM with the same 128-bit key (and a different IV, naturally). I have a structure that keeps the offset and length of each encrypted block, along with its IV and GCM tag.
Question: if I encrypt the structure (thus hiding the beginning, length and IV/tag of each encrypted block), will it make my data safer? Or is it OK to leave all thousand(s) of encrypted blocks in the open, for anybody to see where each starts and ends and what its IV/tag is? The block size is fairly standard and doesn't reveal much about the data. My concern is with direct attacks on the key and data (with thousands of encrypted samples available) - or other indirect attacks.
I believe in the comments you've answered most of your own question. If the question is "do I need to encrypt the structure?" then the next question (as YAHsaves notes) is "is the structure itself sensitive information?" If the answer is no, then that's your answer. To the extent that the structure itself is sensitive, it should be protected.
If there are attacks on your key due to repeated use with unique IVs, then this indicates incorrect use of GCM and should be resolved. GCM is designed to support key reuse if used correctly. NIST provides good and explicit guidance on how to design GCM systems in NIST SP 800-38D. In particular, you want to read section 8, and especially 8.2.1 on the recommended construction of IVs (and 8.3 if you do not use the recommended IV construction).
Most of NIST's guidance can be summed up as "make sure that Key+IV is never reused, ever, and if you can't 100% guarantee it, then keep the probability of a repeat at or below 2^-32; no seriously, we aren't kidding, don't reuse Key+IV, not even once."
Looks like I found an additional answer here. It addresses a different question, but applied to mine, it means: yes, it's OK to leave thousands of blocks encrypted with the same key in open view. Actually, up to about a billion should be OK, in both the random and the deterministic IV modes of AES-GCM.
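For reference, here is a tiny sketch of the deterministic IV construction from SP 800-38D section 8.2.1 (a fixed field naming the encrypting context plus a strictly increasing invocation counter); the 32/64-bit split and the example values are just illustrative:
<?php
// 96-bit GCM IV = 32-bit fixed field (device/writer id) || 64-bit invocation counter.
// The counter must be persisted and must never repeat for the same key.
function makeGcmIv(int $writerId, int $invocationCounter): string {
    return pack('N', $writerId) . pack('J', $invocationCounter);   // 4 + 8 = 12 bytes
}

$iv = makeGcmIv(3, 1000);   // e.g. the 1000th block encrypted by writer #3 under the shared key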
I came across a discussion in which I learned that what I'd been doing wasn't in fact salting passwords but peppering them, and I've since begun doing both with a function like:
hash_function($salt.hash_function($pepper.$password)) [multiple iterations]
Ignoring the chosen hash algorithm (I want this to be a discussion of salts & peppers and not specific algorithms but I'm using a secure one), is this a secure option or should I be doing something different? For those unfamiliar with the terms:
A salt is a randomly generated value, usually stored with the string in the database, designed to make it impossible to use precomputed hash tables (e.g. rainbow tables) to crack passwords. As each password has its own salt, they must all be brute-forced individually in order to crack them; however, as the salt is stored in the database with the password hash, a database compromise means losing both.
A pepper is a site-wide static value stored separately from the database (usually hard-coded in the application's source code) which is intended to be secret. It is used so that a compromise of the database would not cause the entire application's password table to be brute-forceable.
Is there anything I'm missing, and is salting & peppering my passwords the best option to protect my users' security? Is there any potential security flaw in doing it this way?
Note: Assume for the purpose of the discussion that the application & database are stored on separate machines, do not share passwords etc. so a breach of the database server does not automatically mean a breach of the application server.
Ok. Seeing as I need to write about this over and over, I'll do one last canonical answer on pepper alone.
The Apparent Upside Of Peppers
It seems quite obvious that peppers should make hash functions more secure. I mean, if the attacker only gets your database, then your users' passwords should be secure, right? Seems logical, right?
That's why so many people believe that peppers are a good idea. It "makes sense".
The Reality Of Peppers
In the security and cryptography realms, "make sense" isn't enough. Something has to be provable and make sense in order for it to be considered secure. Additionally, it has to be implementable in a maintainable way. The most secure system that can't be maintained is considered insecure (because if any part of that security breaks down, the entire system falls apart).
And peppers fit neither the provable nor the maintainable model...
Theoretical Problems With Peppers
Now that we've set the stage, let's look at what's wrong with peppers.
Feeding one hash into another can be dangerous.
In your example, you do hash_function($salt . hash_function($pepper . $password)).
We know from past experience that "just feeding" one hash result into another hash function can decrease the overall security. The reason is that both hash functions can become a target of attack.
That's why algorithms like PBKDF2 use special operations to combine them (HMAC in that case).
The point is that while it's not a big deal, it is also not a trivial thing to just throw around. Crypto systems are designed to avoid "should work" cases, and instead focus on "designed to work" cases.
While this may seem purely theoretical, it's in fact not. For example, Bcrypt cannot accept arbitrary passwords. So passing bcrypt(hash(pw), salt) can indeed result in a far weaker hash than bcrypt(pw, salt) if hash() returns a binary string.
Working Against Design
The way bcrypt (and other password hashing algorithms) were designed is to work with a salt. The concept of a pepper was never introduced. This may seem like a triviality, but it's not. The reason is that a salt is not a secret. It is just a value that can be known to an attacker. A pepper on the other hand, by very definition is a cryptographic secret.
The current password hashing algorithms (bcrypt, pbkdf2, etc) all are designed to only take in one secret value (the password). Adding in another secret into the algorithm hasn't been studied at all.
That doesn't mean it is not safe. It means we don't know if it is safe. And the general recommendation with security and cryptography is that if we don't know, it isn't.
So until algorithms are designed and vetted by cryptographers for use with secret values (peppers), current algorithms shouldn't be used with them.
Complexity Is The Enemy Of Security
Believe it or not, Complexity Is The Enemy Of Security. Making an algorithm that looks complex may be secure, or it may not be. But the chances are quite significant that it's not secure.
Significant Problems With Peppers
It's Not Maintainable
Your implementation of peppers precludes the ability to rotate the pepper key. Since the pepper is used at the input to the one way function, you can never change the pepper for the lifetime of the value. This means that you'd need to come up with some wonky hacks to get it to support key rotation.
This is extremely important as it's required whenever you store cryptographic secrets. Not having a mechanism to rotate keys (periodically, and after a breach) is a huge security vulnerability.
And your current pepper approach would require every user to either have their password completely invalidated by a rotation, or wait until their next login to rotate (which may be never)...
Which basically makes your approach an immediate no-go.
It Requires You To Roll Your Own Crypto
Since no current algorithm supports the concept of a pepper, it requires you to either compose algorithms or invent new ones to support a pepper. And if you can't immediately see why that's a really bad thing:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break.
Bruce Schneier
NEVER roll your own crypto...
The Better Way
So, out of all the problems detailed above, there are two ways of handling the situation.
Just Use The Algorithms As They Exist
If you use bcrypt or scrypt correctly (with a high cost), all but the weakest dictionary passwords should be statistically safe. The current record for hashing bcrypt at cost 5 is 71k hashes per second. At that rate even a 6 character random password would take years to crack. And considering my minimum recommended cost is 10, that reduces the hashes per second by a factor of 32. So we'd be talking about only 2200 hashes per second. At that rate, even some dictionary phrases or modifications may be safe.
Additionally, we should be checking for those weak classes of passwords at the door and not allowing them in. As password cracking gets more advanced, so should password quality requirements. It's still a statistical game, but with a proper storage technique, and strong passwords, everyone should be practically very safe...
Encrypt The Output Hash Prior To Storage
There exists in the security realm an algorithm designed to handle everything we've said above. It's a block cipher. It's good, because it's reversible, so we can rotate keys (yay! maintainability!). It's good because it's being used as designed. It's good because it gives the user no information.
Let's look at that line again. Let's say that an attacker knows your algorithm (which is required for security, otherwise it's security through obscurity). With a traditional pepper approach, the attacker can create a sentinel password, and since he knows the salt and the output, he can brute force the pepper. Ok, that's a long shot, but it's possible. With a cipher, the attacker gets nothing. And since the salt is randomized, a sentinel password won't even help him/her. So the best they are left with is to attack the encrypted form. Which means that they first have to attack your encrypted hash to recover the encryption key, and then attack the hashes. But there's a lot of research into the attacking of ciphers, so we want to rely on that.
TL/DR
Don't use peppers. There are a host of problems with them, and there are two better ways: not using any server-side secret (yes, it's ok) and encrypting the output hash using a block cipher prior to storage.
First we should talk about the exact advantage of a pepper:
The pepper can protect weak passwords from a dictionary attack, in the special case, where the attacker has read-access to the database (containing the hashes) but does not have access to the source code with the pepper.
A typical scenario would be SQL injection, thrown-away backups, discarded servers... These situations are not as uncommon as they sound, and often not under your control (server hosting). If you use...
A unique salt per password
A slow hashing algorithm like BCrypt
...strong passwords are well protected. It's nearly impossible to brute-force a strong password under those conditions, even when the salt is known. The problem is weak passwords that are part of a brute-force dictionary or are derivations of them. A dictionary attack will reveal those very fast, because you test only the most common passwords.
The second question is how to apply the pepper?
An often recommended way to apply a pepper, is to combine the password and the pepper before passing it to the hash function:
$pepperedPassword = hash_hmac('sha512', $password, $pepper);
$passwordHash = bcrypt($pepperedPassword);
There is another even better way though:
$passwordHash = bcrypt($password);
$encryptedHash = encrypt($passwordHash, $serverSideKey);
This not only allows adding a server-side secret, it also allows exchanging the $serverSideKey should this become necessary. This method involves a bit more work, but once the code exists (as a library), there is no reason not to use it.
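Since encrypt() and bcrypt() above are pseudocode, here is one hedged, runnable way to do the second variant in PHP 7.2+ with libsodium; the environment variable used to hold $serverSideKey is just an illustrative choice:
<?php
// bcrypt the password, then encrypt the hash with a server-side key kept outside the database.
function loadServerSideKey(): string {
    return sodium_hex2bin(getenv('PASSWORD_ENC_KEY'));   // assumed: 32-byte key, hex-encoded in an env var
}

function storePassword(string $password): string {
    $hash  = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    // Store nonce || ciphertext; rotating the key later only means re-encrypting these blobs.
    return base64_encode($nonce . sodium_crypto_secretbox($hash, $nonce, loadServerSideKey()));
}

function verifyPassword(string $password, string $stored): bool {
    $raw   = base64_decode($stored);
    $nonce = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $hash  = sodium_crypto_secretbox_open(substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES), $nonce, loadServerSideKey());
    return $hash !== false && password_verify($password, $hash);
}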
The point of salt and pepper is to increase the cost of a pre-computed password lookup, called a rainbow table.
In general, trying to find a collision for a single hash is hard (assuming the hash is secure). However, with short inputs it is possible to use a computer to generate all possible hashes and store them in a lookup table on disk. This is called a rainbow table. If you create a rainbow table, you can then go out into the world and quickly find plausible passwords for any (unsalted, unpeppered) hash.
The point of a pepper is to make the rainbow table needed to crack your password list unique, forcing the attacker to waste more time constructing a new rainbow table.
The point of the salt however is to make the rainbow table for each user be unique to the user, further increasing the complexity of the attack.
Really, the point of computer security is almost never to make an attack (mathematically) impossible, just mathematically and physically impractical (for example, in a secure system it would take all the entropy in the universe (and more) to compute a single user's password).
I want this to be a discussion of salts & peppers and not specific algorithms but I'm using a secure one
Every secure password hashing function that I know of takes the password and the salt (and the secret/pepper if supported) as separate arguments and does all of the work itself.
Merely by the fact that you're concatenating strings and that your hash_function takes only one argument, I know that you aren't using one of those well tested, well analyzed standard algorithms, but are instead trying to roll your own. Don't do that.
Argon2 won the Password Hashing Competition in 2015, and as far as I know it's still the best choice for new designs. It supports pepper via the K parameter (called "secret value" or "key"). I know of no reason not to use pepper. At worst, the pepper will be compromised along with the database and you are no worse off than if you hadn't used it.
If you can't use built-in pepper support, you can use one of the two suggested formulas from this discussion:
Argon2(salt, HMAC(pepper, password)) or HMAC(pepper, Argon2(salt, password))
Important note: if you pass the output of HMAC (or any other hashing function) to Argon2 (or any other password hashing function), either make sure that the password hashing function supports embedded zero bytes or else encode the hash value (e.g. in base64) to ensure there are no zero bytes. If you're using a language whose strings support embedded zero bytes then you are probably safe, unless that language is PHP, but I would check anyway.
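If you can't reach Argon2's native secret parameter from your platform, here is a hedged sketch of the first formula, Argon2(salt, HMAC(pepper, password)), in PHP 7.3+; the environment-variable pepper is an assumption, and the base64 step is exactly the zero-byte precaution described above:
<?php
// Pre-hash with HMAC(pepper, password), base64-encode so no zero bytes reach the password hash,
// then run Argon2id. The pepper must live outside the database (env var, KMS, config file).
$pepper   = getenv('APP_PEPPER');
$password = 'correct horse battery staple';   // illustrative input

$preHash = base64_encode(hash_hmac('sha256', $password, $pepper, true));
$stored  = password_hash($preHash, PASSWORD_ARGON2ID);

// Verification recomputes the same pre-hash.
$ok = password_verify(base64_encode(hash_hmac('sha256', $password, $pepper, true)), $stored);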
I can't see storing a hardcoded value in your source code as having any security relevance. It's security through obscurity.
If a hacker acquires your database, he will be able to start brute forcing your user passwords. It won't take long for that hacker to identify your pepper if he manages to crack a few passwords.
I'm not talking in particular about encryption, but security as a whole. Are there any security measures that can be put in place to protect data and/or a system that can withstand even a hypothetical amount of resources being pitted against it over a hypothetical amount of time?
I think the answer is no, but I thought I'd double check before saying this out loud to people because I'm no security expert.
UPDATE: I should point out, I'm not asking this because I need to implement something. It's idle curiosity. I should also mention that I'm ok dealing with hypotheticals here. Feel free to bring things like quantum computing into the equation if there's any relevance.
The One-time pad is such an encryption technique: it's fundamentally secure against brute force, in other words, information-theoretically secure. If you don't have the key, it cannot be "broken" regardless of what computation power you throw at it. The trick is that it's impossible to distinguish the correct answer from all other possible answers, because every answer is equally likely.
Read more on Wikipedia
Unfortunately the one-time pad is almost useless in practice, because the key must be as long as your plaintext, the key may never be re-used, and it has to be random. All of this means that you can't derive the key from a memorable password, so you need a secure storage method for the key itself. But if you can already secure a massive key, you might as well put your plaintext there without encryption.
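Just to make the mechanics concrete, a toy sketch; the inline key generation here is the very thing that's impractical, as noted above:
<?php
// One-time pad: XOR with a truly random key that is as long as the message and never reused.
$plaintext  = 'ATTACK AT DAWN';
$key        = random_bytes(strlen($plaintext));   // random, same length, used exactly once
$ciphertext = $plaintext ^ $key;                  // PHP's byte-wise string XOR
$recovered  = $ciphertext ^ $key;                 // XOR with the same key restores the plaintext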
The first thing that comes to mind is shutting down access (at least for some time) after a number of failed attempts, such as a bank card becoming invalid after the wrong PIN has been entered a couple of times, or a phone that deletes its own data after you fail to unlock it repeatedly.
Of course, this will not work for files that the attacker can copy to his own machine.
First of all, you'd be better off trying this on ITsec.SE.
Now, to answer your question:
Yes, of course there are.
Brute force attacks can accomplish two things: "guessing" some sort of secret (e.g. password, encryption key, etc), and overwhelming resources (i.e. flooding, or Denial of service - DoS).
Any countermeasure aimed at preventing some other form of attack would be irrelevant to brute force.
For example, take the standard recommendations to protect against SQL injection: input validation, stored procedures (or parameterized queries), command/parameter objects, and the like.
What would you try to brute-force here? If the code is written correctly, there is no "secret" to guess.
Now, if you're asking "How do I prevent brute force attacks?", the answer depends on what the attacker is trying to brute-force.
Assuming we're talking about brute-forcing a password/login screen, there are several options: a strong password policy (to make guessing harder), account lockout (to limit the number of attempts), throttling (again, to limit the attempt rate), and more.
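A rough sketch of the lockout/throttling idea; APCu as the attempt store and the 5-attempt / 15-minute thresholds are arbitrary assumptions, and a real system would persist this in a database or Redis and probably also throttle per IP:
<?php
// Refuse to even check the password once an account has too many recent failures.
function tooManyFailures(string $username): bool {
    return (apcu_fetch("login_failures:$username") ?: 0) >= 5;
}

function recordFailure(string $username): void {
    $key = "login_failures:$username";
    $failures = apcu_fetch($key) ?: 0;
    apcu_store($key, $failures + 1, 900);   // counter (and effectively the lockout) expires after 15 minutes
}

function tryLogin(string $username, string $password, string $storedHash): bool {
    if (tooManyFailures($username)) {
        return false;                        // locked out: don't reveal whether the password was right
    }
    if (password_verify($password, $storedHash)) {
        apcu_delete("login_failures:$username");
        return true;
    }
    recordFailure($username);
    return false;
}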
Ideally no, but typically an additional step can be introduced in the solution you provide: the data that could be subjected to direct brute force can be obfuscated to make the attack harder or the result meaningless.
For example, a password that is encrypted and sent over the wire can be subjected to brute force, but if it is first transformed into some other form and then sent, brute force may not help unless the attacker also knows the transformation functions.
You can always try to look for repeated / large volume attempts (to log in for example) and ban the source (IP) temporarily or even permanently.
Talking about a distributed attack it's much more difficult of course, but you can still issue mass temporary bans and scale services down for unknown users.
I'm not sure if there's any silver bullet; just be creative :) Having a home-brewed solution will probably make your chances better, as there are no known exploits for it.
I've seen a few questions and answers on SO suggesting that MD5 is less secure than something like SHA.
My question is, Is this worth worrying about in my situation?
Here's an example of how I'm using it:
On the client side, I'm providing a "secure" checksum for a message by appending the current time and a password and then hashing it using MD5. So: MD5(message+time+password).
On the server side, I'm checking this hash against the message that's sent using my knowledge of the time it was sent and the client's password.
In this example, am I really better off using SHA instead of MD5?
In what circumstances would the choice of hashing function really matter in a practical sense?
Edit:
Just to clarify - in my example, is there any benefit moving to an SHA algorithm?
In other words, is it feasible in this example for someone to send a message and a correct hash without knowing the shared password?
More Edits:
Apologies for repeated editing - I wasn't being clear with what I was asking.
Yes, it is worth worrying about in practice. MD5 is so badly broken that researchers have been able to forge fake certificates that matched a real certificate signed by a certificate authority. This meant that they were able to create their own fake certificate authority, and thus could impersonate any bank or business they felt like with browsers completely trusting them.
Now, this took them a lot of time and effort using a cluster of PlayStation 3s, and several weeks to find an appropriate collision. But once broken, a hash algorithm only gets worse, never better. If you care at all about security, it would be better to choose an unbroken hash algorithm, such as one of the SHA-2 family (SHA-1 has also been weakened, though not broken as badly as MD5 is).
edit: The technique used in the link I provided involved being able to choose two arbitrary message prefixes and a common suffix, from which it could generate, for each prefix, a block of data that could be inserted between that prefix and the common suffix to produce a message with the same MD5 sum as the message constructed from the other prefix. I cannot think of a way in which this particular vulnerability could be exploited in the situation you describe, and in general, using a secure hash for message authentication is more resistant to attack than using it for digital signatures, but I can think of a few vulnerabilities you need to watch out for, which are mostly independent of the hash you choose.
As described, your algorithm involves storing the password in plain text on the server. This means that you are vulnerable to any information-disclosure attacks that may be able to discover passwords on the server. You may think that if an attacker can access your database then the game is up, but your users would probably prefer that, even if your server is compromised, their passwords not be. Because of the proliferation of passwords online, many users use the same or similar passwords across services. Furthermore, information-disclosure attacks may be possible even in cases when code-execution or privilege-escalation attacks are not.
You can mitigate this attack by storing the password on your server hashed with a random salt; you store the pair <salt,hash(password+salt)> on the server, and send the salt to the client so that it can compute hash(password+salt) to use in place of the password in the protocol you mention. This does not protect you from the next attack, however.
If an attacker can sniff a message sent from the client, he can do an offline dictionary attack against the client's password. Most users have passwords with fairly low entropy, and a good dictionary of a few hundred thousand existing passwords plus some time randomly permuting them could make finding a password given the information an attacker has from sniffing a message pretty easy.
The technique you propose does not authenticate the server. I don't know if this is a web app that you are talking about, but if it is, then someone who can perform a DNS hijacking attack, or DHCP hijacking on an insecure wireless network, or anything of the sort, can just do a man-in-the-middle attack in which they collect passwords in clear text from your clients.
While the current attack against MD5 may not work against the protocol you describe, MD5 has been severely compromised, and a hash will only ever get weaker, never stronger. Do you want to bet that you will find out about new attacks that could be used against you and will have time to upgrade hash algorithms before your attackers have a chance to exploit it? It would probably be easier to start with something that is currently stronger than MD5, to reduce your chances of having to deal with MD5 being broken further.
Now, if you're just doing this to make sure no one forges a message from another user on a forum or something, then sure, it's unlikely that anyone will put the time and effort in to break the protocol that you described. If someone really wanted to impersonate someone else, they could probably just create a new user name that has a 0 in place of a O or something even more similar using Unicode, and not even bother with trying to forge message and break hash algorithms.
If this is being used for something where the security really matters, then don't invent your own authentication system. Just use TLS/SSL. One of the fundamental rules of cryptography is not to invent your own. And then even for the case of the forum where it probably doesn't matter all that much, won't it be easier to just use something that's proven off the shelf than rolling your own?
In this particular case, I don't think that the weakest link in your application is using MD5 rather than SHA. The sense in which MD5 is "broken" is collision finding: given md5(K) = V, other inputs K' with md5(K') = V exist (the output space is limited), but K' is not necessarily K, and there is no practical shortcut for recovering K itself. So even if an attacker could find some P' such that md5(M+T+P') = V, that would only give a valid check for the same message; the message remains unchanged and P hasn't been compromised. If the attacker tries to forge a message M' with a timestamp T', it is highly unlikely that md5(M'+T'+P') = md5(M'+T'+P) unless P' = P, in which case they would have brute-forced the password. And if they have brute-forced the password, it doesn't matter whether you used SHA or MD5, since checking md5(M+T+P) = V is equivalent to checking sha(M+T+P) = V (except that SHA might take a constant amount longer to calculate, which doesn't affect the complexity of the brute force on P).
However, given the choice, you really ought to just go ahead and use SHA. There is no sense in not using it unless there is a serious drawback to doing so.
A second thing is that you probably shouldn't store the user's password in your database in plain text. What you should store is a hash of the password, and then use that. In your example, the hash would be md5(message + time + md5(password)), and you could safely store md5(password) in your database. However, an attacker stealing your database (through something like SQL injection) would still be able to forge messages. I don't see any way around this.
Brian's answer covers the issues, but I do think it needs to be explained a little less verbosely.
You are using the wrong kind of crypto algorithm here.
MD5 is wrong here, SHA-1 is wrong here, SHA-2xx is wrong here, and Skein is wrong here.
What you should be using is something like RSA.
Let me explain:
Your secure hash is effectively sending the password out for the world to see.
You mention that your hash is "time + payload + password". If a third party gets a copy of your payload and knows the time, they can find the password (using a brute-force or dictionary attack). So it's almost as if you are sending the password in clear text.
Instead, you should look at public-key cryptography: have your server send out public keys to your agents, and have the agents encrypt the data with the public key.
No man in the middle will be able to tell what's in the messages, and (if you also sign them) no one will be able to forge them.
On a side note, MD5 is plenty strong most of the time.
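Here is a hedged sketch of the public-key idea, using libsodium's sealed boxes (Curve25519) rather than raw RSA, since that's what ships with PHP 7.2+; note that encryption alone hides the contents but doesn't prove who sent the message, so you'd still add signatures, or just use TLS:
<?php
// Server side: generate a key pair and hand the public half to the agents.
$keyPair   = sodium_crypto_box_keypair();
$publicKey = sodium_crypto_box_publickey($keyPair);

// Agent side: encrypt to the server's public key; only the secret-key holder can read it.
$sealed = sodium_crypto_box_seal('payload from agent', $publicKey);

// Server side: decrypt with the full key pair.
$payload = sodium_crypto_box_seal_open($sealed, $keyPair);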
It depends on how valuable the contents of the messages are. The SHA family is demonstrably more secure than MD5 (where "more secure" means "harder to fake"), but if your messages are twitter updates, then you probably don't care.
If those messages are the IPC layer of a distributed system that handles financial transactions, then maybe you care more.
Update: I should add, also, that the two digest algorithms are essentially interchangeable in many ways, so how much more trouble would it really be to use the more secure one?
Update 2: this is a much more thorough answer: http://www.schneier.com/essay-074.html
Yes, someone can send a message and a correct hash without knowing the shared password. They just need to find a string that hashes to the same value.
How common is that? In 2007, a group from the Netherlands announced that they had predicted the winner of the 2008 U.S. Presidential election in a file with the MD5 hash value 3D515DEAD7AA16560ABA3E9DF05CBC80. They then created twelve files, all identical except for the candidate's name and an arbitrary number of spaces following, that hashed to that value. An MD5 hash value is worthless as a checksum here, because deliberately colliding files can be constructed.
This is the same scenario as yours, if I'm reading you right. Just replace "candidate's name" with "secret password". If you really want to be secure, you should probably use a different hash function.
If you are going to generate a hash MAC, don't invent your own scheme; use HMAC. There are issues with doing HASH(secret-key || message) and HASH(message || secret-key). If you are using a password as a key, you should also be using a key derivation function; have a look at PBKDF2.
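A minimal sketch of that advice in PHP; the shared $password stands in for a proper key (deriving one with hash_pbkdf2() first, and the 5-minute freshness window, are assumptions for illustration):
<?php
// Client side: authenticate the message with HMAC instead of md5(message . time . password).
$password = 'shared secret';                  // illustrative shared secret
$message  = 'the payload to protect';
$time     = (string) time();
$mac      = hash_hmac('sha256', $time . '|' . $message, $password);
// Send $message, $time and $mac.

// Server side: recompute, compare in constant time, and reject stale timestamps.
$expected = hash_hmac('sha256', $time . '|' . $message, $password);
$valid    = hash_equals($expected, $mac) && abs(time() - (int) $time) < 300;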
Yes, it is worth worrying about which hash to use in this case. Let's look at the attack model first. An attacker might not only try to generate values md5(M+T+P), but might also try to find the password P. In particular, if the attacker can collect tuples of values Mi, Ti, and the corresponding md5(Mi+Ti+P), then he/she might try to find P. This problem hasn't been studied as extensively for hash functions as finding collisions. My approach to this problem would be to try the same types of attacks that are used against block ciphers, e.g. differential attacks. And since MD5 is already highly susceptible to differential attacks, I can certainly imagine that such an attack could be successful here.
Hence I do recommend that you use a stronger hash function than MD5 here. I also recommend that you use HMAC instead of just md5(M+T+P), because HMAC has been designed for the situation that you describe and has accordingly been analyzed.
There is nothing especially insecure about using MD5 in this manner. MD5 is broken only in the sense that collisions can be constructed: given a bunch of data A, additional data B can be generated so that two different inputs produce the same hash. That is not the same as being able to recover a password from its hash, and the colliding inputs the known attacks produce are long, structured blocks, so if you limit passwords to 20 or 30 characters you're still probably safe.
The main reason to use SHA-1 over MD5 is that MD5 functions are being phased out. For example, the Silverlight .NET library does not include an MD5 cryptography provider.
MD5 is more prone to collisions than SHA, which means someone could actually get the same hash from a different input (but this is rare).
The SHA family is known for its reliability: SHA-1 has been the standard for everyday use, while SHA-256/SHA-512 are standard for government and banking applications.
For a personal website or forum, I suggest you consider SHA-1; if you are building something more serious, like a commerce site, I suggest SHA-256/SHA-512 (the SHA-2 family).
You can check the Wikipedia articles about MD5 and SHA.
Both MD5 and SHA-1 have cryptographic weaknesses. MD4 and SHA-0 are also compromised.
You can probably safely use MD6, Whirlpool, and RIPEMD-160.
See the following powerpoint from Princeton University, scroll down to the last page.
http://gcu.googlecode.com/files/11Hashing.pdf
I'm not going to comment on the MD5/SHA1/etc. issue, so perhaps you'll consider this answer moot, but something that amuses me very slightly is whenever the use of MD5 et al. for hashing passwords in databases comes up.
If someone's poking around in your database, then they might very well want to look at your password hashes, but it's just as likely they're going to want to steal personal information or any other data you may have lying around in other tables. Frankly, in that situation, you've got bigger fish to fry.
I'm not saying ignore the issue, and like I said, this doesn't really have much bearing on whether or not you should use MD5, SHA1 or whatever to hash your passwords, but I do get tickled slightly pink every time I read someone getting a bit too upset about plain text passwords in a database.
I am asking from a "more secure" perspective. I can imagine scenarios where requiring two different private keys for decryption might make this an attractive model. I believe it is not adding any additional security other than having to compromise two different private keys. I think that if it were any more secure, then encrypting one million times would be the best way to secure information.
Update a couple of years later: as Rasmus Faber points out, 3DES encryption was added to extend the life of DES encryption, which had widespread adoption. Encrypting twice (2DES, even with two different keys) suffers from the meet-in-the-middle attack, while encrypting a third time does in fact offer greater security.
I understand that it is more secure, provided you use different keys. But don't take my word for it; I'm not a cryptanalyst. I don't even play one on TV.
The reason I understand it to be more secure is that you're using extra information for encoding (both multiple keys and an unknown number of keys (unless you publish the fact that there's two)).
Double encryption using the same key makes many codes easier to crack. I've heard this for some codes but I know it to be true for ROT13 :-)
I think the security scheme used by Kerberos is a better one than simple double encryption.
They actually have one master key whose sole purpose is to encrypt the session key and that's all the master key is used for. The session key is what's used to encrypt the real traffic and it has a limited lifetime. This has two advantages.
Evil dudes don't have time to crack the session key since, by the time they've managed to do it, those session keys are no longer in use.
Those same evil dudes don't get an opportunity to crack the master key simply because it's so rarely used (they would need a great many encrypted packets to crack the key).
But, as I said, take that with a big grain of salt. I don't work for the NSA. But then I'd have to tell you that even if I did work for the NSA. Oh, no, you won't crack me that easily, my pretty.
Semi-useful snippet: Kerberos (or Cerberus, depending on your lineage) is the mythological three-headed dog that guards the gates of Hell, a well-chosen mascot for that security protocol. That same dog is called Fluffy in the Harry Potter world (I once had a girlfriend whose massive German Shepherd dog was called Sugar, a similarly misnamed beast).
It is more secure, but not much. The analogy with physical locks is pretty good. By putting two physical locks of the same type on a door, you ensure that a thief who can pick one lock in five minutes now needs to spend ten minutes. But you might be much better off buying a lock that is twice as expensive, which the thief cannot pick at all.
In cryptography it works much the same way: in the general case, you cannot ensure that encrypting twice makes it more than twice as hard to break the encryption. So if NSA normally can decrypt your message in five minutes, with double encryption, they need ten minutes. You would probably be much better off by instead doubling the length of the key, which might make them need 100 years to break the encryption.
In a few cases it makes sense to repeat the encryption, but you need to work through the math with the specific algorithm to prove it. For instance, Triple-DES is basically DES repeated three times with three different keys (except that you encrypt-decrypt-encrypt instead of just encrypting three times). But this also shows how unintuitively this works: while Triple-DES triples the number of encryptions, it only doubles the effective key length of the DES algorithm.
Encryption with multiple keys is more secure than encryption with a single key; it's common sense.
My vote is that it is not adding any additional security
No.
other than having to compromise two different private keys.
Yes, but consider: if you encrypt something with two ciphers, each using a different key, and one of the ciphers is found to be weak and can be cracked, the second cipher must also be weak for the attacker to recover anything.
Double encryption does not increase the security.
There are two modes of using PGP: asymmetric (public key, with a private key to decrypt), and symmetric (with a passphrase). With either mode the message is encrypted with a session key, which is typically a randomly generated 128-bit number. The session key is then encrypted with the passphrase or with the public key.
There are two ways that the message can be decrypted. One is if the session key can be decrypted. This is going to be either a brute-force attack on the passphrase or by an adversary that has your private key. The second way is an algorithmic weakness.
If the adversary can get your private key, then if you have two private keys the adversary will get both.
If the adversary can brute-force your passphrase or catch it with a keystroke logger, then the adversary can almost certainly get both of them.
If there is an algorithmic weakness, then it can be exploited twice.
So although it may seem like double encryption helps, in practice it does not help against any realistic threat.
The answer, like most things, is "it depends". In this case, it depends on how the encryption scheme is implemented.
In general, using double encryption with different keys does improve security, but it does not square the security, due to the meet-in-the-middle attack.
Basically, the attacker doesn't HAVE to break all possible combinations of the first key and the second key (squared security). They can break each key in turn (double security). This can be done in double the time of breaking the single key.
Doubling the time it takes isn't a significant improvement, however, as others have noted. If they can break one in 10 minutes, they can break two in 20 minutes, which is still totally in the realm of possibility. What you really want is to increase security by orders of magnitude, so that rather than taking 10 minutes it takes 1000 years. This is done by choosing a better encryption method, not by performing the same one twice.
The wikipedia article does a good job of explaining it.
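The arithmetic behind that, as a sketch with key length k: a naive attack on double encryption looks like a 2^(2k) search, but meet-in-the-middle trades memory for time by encrypting a known plaintext under every candidate first key, storing the results, then decrypting the ciphertext under every candidate second key and looking for a match:
\[
\text{time} \approx 2^{k} + 2^{k} = 2^{k+1}, \qquad \text{memory} \approx 2^{k}
\]
For DES (k = 56) that means double DES costs roughly 2^57 operations instead of the hoped-for 2^112, and even Triple-DES ends up with about 2^112 effective security from its 168 key bits.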
When using brute force to break encryption, the only way the attacker knows they have the right key is when the decrypted document makes sense. When the document is double-encrypted, it still looks like garbage even with the right key for one layer - hence you don't know you had the right key.
Is this too obvious or am I missing something?
It depends on the situation.
For those who gave a poor comparison like "locks on doors": think twice before you write something. That example is far from the reality of encryption. Mine is way better =)
When you wrap something, you can wrap it with two different things, and it becomes more secure from the outside... true. But imagine that to get to your wrapped sandwich, instead of unwrapping it, you simply cut through the wrapping material. Double wrapping now makes no sense, you get it?
WinRAR is VERY secure. There's a case where the government couldn't get into files on a laptop a guy was carrying from Canada. He used WinRAR. They tried to make him give them the password, and he took the 5th. It was on appeal for 2 years, and the courts finally said he didn't have to talk (every court said that during this process). I couldn't believe someone would even think he couldn't take the 5th. The government dropped the case when they lost their appeal, because they still hadn't cracked the files.