Why does HashMap need a cryptographically secure hashing function?

I'm reading a Rust book about HashMap hashing functions, and I can't understand these two sentences.
By default, HashMap uses a cryptographically secure hashing function that can provide resistance to Denial of Service (DoS) attacks. This is not the fastest hashing algorithm available, but the trade-off for better security that comes with the drop in performance is worth it.
I know what a cryptographically secure hash function is, but I don't understand the rationale behind it. From my understanding, a good hash function for a HashMap needs only three properties:
be deterministic (the same object always has the same hash value),
be VERY fast,
have a uniform distribution of bits in the hash value (which reduces collisions).
The other properties of a cryptographically secure hash function are not really relevant 99% (maybe even 99.99%) of the time for hash tables.
So my question is: what does "resistance to DoS attacks and better security" even mean in the context of a HashMap?

Let's start backward: how do you DoS a HashMap?
Over the years, there have been multiple attacks on various software stacks based on hash flooding. If you know which framework a site is powered by, and therefore which hash function it uses, and this hash function is not cryptographically secure, then you may be able to pre-compute, offline, a large set of strings that all hash to the same value.
Then you simply inject this set into the site, and each (simple) request causes a disproportionately large amount of work, since inserting N colliding elements takes O(N²) operations.
Rust was conceived with the benefit of hindsight, and therefore attention was paid to avoiding this attack by default, reasoning that users who really need performance out of HashMap would simply switch the hash function.
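To make that quadratic blow-up concrete, here is a rough sketch in Python rather than Rust (the effect is language-independent, and the BadKey type is purely illustrative): a key with a degenerate hash function turns dictionary insertion into O(N²) total work.
import time

class BadKey:
    """Key whose hash function is deliberately degenerate: every key collides."""
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        return 42                      # constant hash: all keys share one probe chain
    def __eq__(self, other):
        return self.value == other.value

def time_inserts(n):
    start = time.perf_counter()
    table = {}
    for i in range(n):
        table[BadKey(i)] = i           # each insert must be compared against all previous keys
    return time.perf_counter() - start

for n in (500, 1000, 2000):
    print(n, round(time_inserts(n), 3))   # time roughly quadruples each time n doubles
With a well-distributed (and, against an adversary, keyed) hash function, the same loop is linear.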

Let's say we use HashMap to store some user data in a web-application. Suppose that users can choose (part of) the key in some way – maybe the key is a username or a filename of an uploaded file or anything like that.
If we are not using a cryptographically secure hash function, this means that the attacker could possibly craft multiple inputs that all map to the same output. Of course, a hash map has to deal with collisions, because they occur naturally.
But when unnaturally many collisions occur, the hash map implementation might do strange things. For example, looking up some keys could have a runtime of O(n). Or the hash map might think that it has to grow because of all the collisions; but growing won't solve the problem, so the hash map grows until all memory is used. In either case, it's bad. Hash maps just assume that statistically, collisions rarely occur.
Of course, this is not a "stealing user data" attack -- at least not directly. But if one part of a system is weak, this makes it easier for attackers to find other weaknesses.
A cryptographically secure hash function prevents this attack, since the attacker cannot possibly craft multiple keys that map to the same value (at least not without trying out all keys).
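The usual defence in practice is a keyed (randomized) hash: the table's hash function is seeded with a random secret, so colliding keys cannot be precomputed offline. Python does the same thing by default, which makes for a quick illustration of the mechanism (this is just a sketch, not Rust code):
import os
import subprocess
import sys

env = dict(os.environ, PYTHONHASHSEED="random")   # make sure hash randomization is on
code = 'print(hash("user-controlled key"))'
for _ in range(3):
    out = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True, env=env)
    print(out.stdout.strip())          # a different value on each run: no offline precomputation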
The other properties of a cryptographically secure hash function are not really relevant 99% (maybe even 99.99%) of the time for hash tables.
Yes, probably. But this is difficult to balance. I guess we would all agree that if 20% of users would have security problems in their application due to an insecure hash function (while 80% don't care), it's still a good idea to use the "secure by default" approach. What about 5%/95%? What about 1%/99%? Hard to tell where the threshold is, right?
There has been a ton of discussion about this already. Because yes, most people only notice the slowness of the hash map. Maybe the situation I described above is incredibly rare and it isn't worth slowing down all other users' code by default. But this has been decided, the default hash function won't change, and luckily you can choose your own hash function.

If a server application stores user input (such as post data in a web application) in a hash table, a malicious user may try to provide a large number of inputs that all have the same hash value, leading to a large number of hash collisions and thus slowing down operations on the map significantly, to the point that it can be used as a DoS attack (as described in this article for example).
If the hash is cryptographically secure, attackers will have a much harder time trying to find inputs with the same hash value.

Related

Best Practices: Salting & peppering passwords?

I came across a discussion in which I learned that what I'd been doing wasn't in fact salting passwords but peppering them, and I've since begun doing both with a function like:
hash_function($salt.hash_function($pepper.$password)) [multiple iterations]
Ignoring the chosen hash algorithm (I want this to be a discussion of salts & peppers and not specific algorithms but I'm using a secure one), is this a secure option or should I be doing something different? For those unfamiliar with the terms:
A salt is a randomly generated value usually stored with the string in the database designed to make it impossible to use hash tables to crack passwords. As each password has its own salt, they must all be brute-forced individually in order to crack them; however, as the salt is stored in the database with the password hash, a database compromise means losing both.
A pepper is a site-wide static value stored separately from the database (usually hard-coded in the application's source code) which is intended to be secret. It is used so that a compromise of the database would not cause the entire application's password table to be brute-forceable.
Is there anything I'm missing, and is salting & peppering my passwords the best option to protect my users' security? Is there any potential security flaw to doing it this way?
Note: Assume for the purpose of the discussion that the application & database are stored on separate machines, do not share passwords etc. so a breach of the database server does not automatically mean a breach of the application server.
Ok. Seeing as I need to write about this over and over, I'll do one last canonical answer on pepper alone.
The Apparent Upside Of Peppers
It seems quite obvious that peppers should make hash functions more secure. I mean, if the attacker only gets your database, then your users' passwords should be secure, right? Seems logical, right?
That's why so many people believe that peppers are a good idea. It "makes sense".
The Reality Of Peppers
In the security and cryptography realms, "make sense" isn't enough. Something has to be provable and make sense in order for it to be considered secure. Additionally, it has to be implementable in a maintainable way. The most secure system that can't be maintained is considered insecure (because if any part of that security breaks down, the entire system falls apart).
And peppers fit neither the provable nor the maintainable model...
Theoretical Problems With Peppers
Now that we've set the stage, let's look at what's wrong with peppers.
Feeding one hash into another can be dangerous.
In your example, you do hash_function($salt . hash_function($pepper . $password)).
We know from past experience that "just feeding" one hash result into another hash function can decrease the overall security. The reason is that both hash functions can become a target of attack.
That's why algorithms like PBKDF2 use special operations to combine them (hmac in that case).
The point is that while it's not a big deal, it is also not a trivial thing to just throw around. Crypto systems are designed to avoid "should work" cases, and instead focus on "designed to work" cases.
While this may seem purely theoretical, it's in fact not. For example, bcrypt cannot accept arbitrary binary input: most implementations treat the password as a NUL-terminated string. So passing bcrypt(hash(pw), salt) can indeed result in a far weaker hash than bcrypt(pw, salt) if hash() returns a binary string containing zero bytes.
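A hedged sketch of that pitfall and its usual fix, assuming the third-party pyca bcrypt package is installed (values below are illustrative): encode the inner digest before handing it to bcrypt, so bcrypt never sees raw binary or zero bytes.
import base64
import hashlib

import bcrypt                                       # third-party package: pip install bcrypt

password = b"correct horse battery staple"

raw_digest = hashlib.sha256(password).digest()      # may contain b"\x00" -> risky to feed to bcrypt
safe_digest = base64.b64encode(raw_digest)          # printable, no zero bytes, fixed 44-byte length

stored = bcrypt.hashpw(safe_digest, bcrypt.gensalt(rounds=12))
print(bcrypt.checkpw(base64.b64encode(hashlib.sha256(password).digest()), stored))   # True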
Working Against Design
The way bcrypt (and other password hashing algorithms) were designed is to work with a salt. The concept of a pepper was never introduced. This may seem like a triviality, but it's not. The reason is that a salt is not a secret. It is just a value that can be known to an attacker. A pepper, on the other hand, is by its very definition a cryptographic secret.
The current password hashing algorithms (bcrypt, pbkdf2, etc.) are all designed to take in only one secret value (the password). Adding another secret into the algorithm hasn't been studied at all.
That doesn't mean it is not safe. It means we don't know if it is safe. And the general recommendation with security and cryptography is that if we don't know, it isn't.
So until algorithms are designed and vetted by cryptographers for use with secret values (peppers), current algorithms shouldn't be used with them.
Complexity Is The Enemy Of Security
Believe it or not, Complexity Is The Enemy Of Security. An algorithm that looks complex may be secure, or it may not be. But the chances are quite significant that it's not secure.
Significant Problems With Peppers
It's Not Maintainable
Your implementation of peppers precludes the ability to rotate the pepper key. Since the pepper is used at the input to the one way function, you can never change the pepper for the lifetime of the value. This means that you'd need to come up with some wonky hacks to get it to support key rotation.
This is extremely important as it's required whenever you store cryptographic secrets. Not having a mechanism to rotate keys (periodically, and after a breach) is a huge security vulnerability.
And your current pepper approach would require every user to either have their password completely invalidated by a rotation, or wait until their next login to rotate (which may be never)...
Which basically makes your approach an immediate no-go.
It Requires You To Roll Your Own Crypto
Since no current algorithm supports the concept of a pepper, it requires you to either compose algorithms or invent new ones to support a pepper. And if you can't immediately see why that's a really bad thing:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break.
Bruce Schneier
NEVER roll your own crypto...
The Better Way
So, out of all the problems detailed above, there are two ways of handling the situation.
Just Use The Algorithms As They Exist
If you use bcrypt or scrypt correctly (with a high cost), all but the weakest dictionary passwords should be statistically safe. The current record for hashing bcrypt at cost 5 is 71k hashes per second. At that rate even a 6 character random password would take years to crack. And considering my minimum recommended cost is 10, that reduces the hashes per second by a factor of 32. So we'd be talking about only 2200 hashes per second. At that rate, even some dictionary phrases or modifications may be safe.
Additionally, we should be checking for those weak classes of passwords at the door and not allowing them in. As password cracking gets more advanced, so should password quality requirements. It's still a statistical game, but with a proper storage technique, and strong passwords, everyone should be practically very safe...
Encrypt The Output Hash Prior To Storage
There exists in the security realm an algorithm designed to handle everything we've said above. It's a block cipher. It's good, because it's reversible, so we can rotate keys (yay! maintainability!). It's good because it's being used as designed. It's good because it gives the user no information.
Let's look at that line again. Let's say that an attacker knows your algorithm (which is required for security, otherwise it's security through obscurity). With a traditional pepper approach, the attacker can create a sentinel password, and since he knows the salt and the output, he can brute force the pepper. Ok, that's a long shot, but it's possible. With a cipher, the attacker gets nothing. And since the salt is randomized, a sentinel password won't even help him/her. So the best they are left with is to attack the encrypted form. Which means that they first have to attack your encrypted hash to recover the encryption key, and then attack the hashes. But there's a lot of research into the attacking of ciphers, so we want to rely on that.
TL/DR
Don't use peppers. There are a host of problems with them, and there are two better ways: not using any server-side secret (yes, it's ok) and encrypting the output hash using a block cipher prior to storage.
First we should talk about the exact advantage of a pepper:
The pepper can protect weak passwords from a dictionary attack, in the special case, where the attacker has read-access to the database (containing the hashes) but does not have access to the source code with the pepper.
A typical scenario would be SQL injection, thrown-away backups, discarded servers... These situations are not as uncommon as they sound, and are often not under your control (server hosting). If you use...
A unique salt per password
A slow hashing algorithm like BCrypt
...strong passwords are well protected. It's nearly impossible to brute-force a strong password under those conditions, even when the salt is known. The problem is weak passwords that are part of a brute-force dictionary or are derivations of them. A dictionary attack will reveal those very fast, because you test only the most common passwords.
The second question is how to apply the pepper?
An often recommended way to apply a pepper, is to combine the password and the pepper before passing it to the hash function:
$pepperedPassword = hash_hmac('sha512', $password, $pepper);
$passwordHash = bcrypt($pepperedPassword);
There is another even better way though:
$passwordHash = bcrypt($password);
$encryptedHash = encrypt($passwordHash, $serverSideKey);
This not only allows you to add a server-side secret, it also allows you to exchange the $serverSideKey should this become necessary. This method involves a bit more work, but once the code exists (in a library), there is no reason not to use it.
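For completeness, here is a rough Python equivalent of that second scheme, assuming the third-party bcrypt and cryptography packages (the function names below are illustrative). The point is that the server-side key can be rotated by decrypting and re-encrypting the stored values, without touching any password.
import bcrypt                           # pip install bcrypt
from cryptography.fernet import Fernet  # pip install cryptography

server_side_key = Fernet.generate_key() # kept outside the database (config file, KMS, ...)
fernet = Fernet(server_side_key)

def store(password: bytes) -> bytes:
    password_hash = bcrypt.hashpw(password, bcrypt.gensalt())
    return fernet.encrypt(password_hash)            # this value goes into the database

def verify(password: bytes, encrypted_hash: bytes) -> bool:
    password_hash = fernet.decrypt(encrypted_hash)
    return bcrypt.checkpw(password, password_hash)

record = store(b"hunter2")
print(verify(b"hunter2", record))       # True
print(verify(b"wrong guess", record))   # False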
The point of salt and pepper is to increase the cost of a pre-computed password lookup, called a rainbow table.
In general, trying to find a collision for a single hash is hard (assuming the hash is secure). However, with short inputs, it is possible to use a computer to generate the hashes of all possible inputs and store them in a lookup table on disk. This is called a rainbow table. If you create a rainbow table, you can then go out into the world and quickly find plausible passwords for any (unsalted, unpeppered) hash.
The point of a pepper is to make the rainbow table needed to crack your password list unique. The attacker is thus forced to spend more time constructing a rainbow table specific to your site.
The point of the salt however is to make the rainbow table for each user be unique to the user, further increasing the complexity of the attack.
Really the point of computer security is almost never to make it (mathematically) impossible, just mathematically and physically impractical (for example in secure systems it would take all the entropy in the universe (and more) to compute a single user's password).
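A toy illustration of why the salt breaks precomputed lookups (a real rainbow table is a more space-efficient time/memory trade-off, but the principle is the same; the word list and values here are made up):
import hashlib
import os

wordlist = ["password", "letmein", "123456", "dragon"]

# Attacker precomputes hash -> password once for unsalted MD5 and reuses it everywhere.
lookup = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

stolen_unsalted = hashlib.md5(b"letmein").hexdigest()
print(lookup.get(stolen_unsalted))          # 'letmein' -- reversed instantly

# With a random per-user salt, the precomputed table no longer applies.
salt = os.urandom(16)
stolen_salted = hashlib.md5(salt + b"letmein").hexdigest()
print(lookup.get(stolen_salted))            # None -- a new table would be needed per salt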
I want this to be a discussion of salts & peppers and not specific algorithms but I'm using a secure one
Every secure password hashing function that I know of takes the password and the salt (and the secret/pepper if supported) as separate arguments and does all of the work itself.
Merely by the fact that you're concatenating strings and that your hash_function takes only one argument, I know that you aren't using one of those well tested, well analyzed standard algorithms, but are instead trying to roll your own. Don't do that.
Argon2 won the Password Hashing Competition in 2015, and as far as I know it's still the best choice for new designs. It supports pepper via the K parameter (called "secret value" or "key"). I know of no reason not to use pepper. At worst, the pepper will be compromised along with the database and you are no worse off than if you hadn't used it.
If you can't use built-in pepper support, you can use one of the two suggested formulas from this discussion:
Argon2(salt, HMAC(pepper, password)) or HMAC(pepper, Argon2(salt, password))
Important note: if you pass the output of HMAC (or any other hashing function) to Argon2 (or any other password hashing function), either make sure that the password hashing function supports embedded zero bytes or else encode the hash value (e.g. in base64) to ensure there are no zero bytes. If you're using a language whose strings support embedded zero bytes then you are probably safe, unless that language is PHP, but I would check anyway.
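As a minimal sketch of the first formula together with that encoding advice, using HMAC and PBKDF2 from the Python standard library, with PBKDF2 standing in for Argon2 (the real Argon2 call, its parameters, and the pepper value below are placeholders for whatever binding you use):
import base64
import hashlib
import hmac
import os

pepper = b"server-side secret kept outside the database"     # illustrative value

def hash_password(password: bytes):
    salt = os.urandom(16)
    # HMAC(pepper, password), base64-encoded so the KDF never sees raw zero bytes.
    prehash = base64.b64encode(hmac.new(pepper, password, hashlib.sha512).digest())
    derived = hashlib.pbkdf2_hmac("sha256", prehash, salt, 600_000)   # stand-in for Argon2(salt, ...)
    return salt, derived                                              # store both

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    prehash = base64.b64encode(hmac.new(pepper, password, hashlib.sha512).digest())
    candidate = hashlib.pbkdf2_hmac("sha256", prehash, salt, 600_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password(b"hunter2")
print(verify(b"hunter2", salt, stored))   # True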
I can't see storing a hardcoded value in your source code as having any security relevance. It's security through obscurity.
If a hacker acquires your database, he will be able to start brute forcing your user passwords. It won't take long for that hacker to identify your pepper if he manages to crack a few passwords.

Hashing Passwords With Multiple Algorithms

Does using multiple algorithms make passwords more secure? (Or less?)
Just to be clear, I'm NOT talking about doing anything like this:
key = Hash(Hash(salt + password))
I'm talking about using two separate algorithms and matching both:
key1 = Hash1(user_salt1 + password)
key2 = Hash2(user_salt2 + password)
Then requiring both to match when authenticating. I've seen this suggested as a way to eliminate collision matches, but I'm wondering about unintended consequences, such as creating a 'weakest link' scenario or providing information that makes the user database easier to crack, since this method provides more data than a single key does - e.g. something like combining information from the two hashes to recover the password more easily. Also, if collisions were truly eliminated, you could theoretically brute force the actual password, not just a matching password. In fact, you'd have to in order to brute force the system at all.
I'm not actually planning to implement this, but I'm curious whether or not this is actually an improvement over the standard practice of single key = Hash(user_salt + password).
EDIT:
Many good answers, so just to summarize here: this should have been obvious looking back, but you do create a weakest link by using both, because matches from the weaker of the two algorithms can be tried against the other. For example, if you used a weak (fast) MD5 and a PBKDF2, I'd brute force the MD5 first, then try any match I found against the other, so by including the MD5 (or whatever) you actually make the situation worse. Also, even if both are among the more secure set (bcrypt+PBKDF2 for example), you double your exposure to one of them breaking.
The only thing this would help with would be reducing the possibility of collisions. As you mention, there are several drawbacks (weakest link being a big one).
If the goal is to reduce the possibility of collisions, the best solution would simply be to use a single secure algorithm (e.g. bcrypt) with a larger hash.
Collisions are not a concern with modern hashing algorithms. The point isn't to ensure that every hash in the database is unique. The real point is to ensure that, in the event your database is stolen or accidentally given away, the attacker has a tough time determining a user's actual password. And the chance of a modern hashing algorithm recognizing the wrong password as the right password is effectively zero -- which may be more what you're getting at here.
To be clear, there are two big reasons you might be concerned about collisions.
A collision between the "right" password and a supplied "wrong" password could allow a user with the "wrong" password to authenticate.
A collision between two users' passwords could "reveal" user A's password if user B's password is known.
Concern 1 is addressed by using a strong/modern hashing algorithm (and avoiding terribly anti-brilliant things, like looking for user records based solely on their password hash). Concern 2 is addressed with proper salting -- a "lengthy" unique salt for each password. Let me stress, proper salting is still necessary.
But, if you add hashes to the mix, you're just giving potential attackers more information. I'm not sure there's currently any known way to "triangulate" message data (passwords) from a pair of hashes, but you're not making significant gains by including another hash. It's not worth the risk that there is a way to leverage the additional information.
To answer your question:
Having a unique salt is better than having a generic salt. H(S1 + PW1) , H(S2 + PW2)
Using multiple algorithms may be better than using a single one H1(X) , H2(Y)
(But probably not, as svidgen mentions)
However,
The spirit of this question is a bit wrong for two reasons:
You should not be coming up with your own security protocol without guidance from a security expert. I know it's not your own algorithm, but most security problems start because they were used incorrectly; the algorithms themselves are usually air-tight.
You should not be using hash(salt+password) to store passwords in a database. This is because hashing was designed to be fast - not secure. It's somewhat easy with today's hardware (especially with GPU processing) to find hash collisions in older algorithms. You can of course use a newer secure Hashing Algorithm (SHA-256 or SHA-512) where collisions are not an issue - but why take chances?
You should be looking into Password-Based Key Derivation Functions (like PBKDF2), which are designed to be slow in order to thwart this type of attack. Usually they combine a salt with a secure hashing algorithm (e.g. SHA-256) and iterate a couple hundred thousand times.
Making the function take about a second is no problem for a user logging in where they won't notice such a slowdown. But for an attacker, this is a nightmare since they have to perform these iterations for every attempt; significantly slowing down any brute-force attempt.
Take a look at libraries supporting PBKDF encryption as a better way of doing this. Jasypt is one of my favorites for Java encryption.
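If you just want to see the shape of it, here is a minimal standard-library sketch in Python (the iteration count is illustrative; tune it so a single derivation costs as much time as you can afford per login):
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)
iterations = 600_000                     # illustrative; raise it until one call is "slow enough"

start = time.perf_counter()
derived_key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
elapsed = time.perf_counter() - start

print(f"{elapsed:.2f}s for {iterations} iterations")   # an attacker pays this cost per guess
# Store (salt, iterations, derived_key); verify by re-deriving with the same parameters.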
See this related security question: How to securely hash passwords
and this loosely related SO question
A salt is added to password hashes to prevent the use of generic pre-built hash tables. The attacker would be forced to generate new tables based on their word list combined with your random salt.
As mentioned, hashes were designed to be fast for a reason. To use them for password storage, you need to slow them down (large number of nested repetitions).
You can create your own password-specific hashing method. Essentially, nest your preferred hashes over the salt+password and repeat.
string MyAlgorithm(string data) {
    string temp = data;
    for i = 0 to X {
        temp = Hash3(Hash2(Hash1(temp)));
    }
    return temp;
}
result = MyAlgorithm(salt + password);
Where "X" is a large number of repetitions, enough so that the whole thing takes at least a second on decent hardware. As mentioned elsewhere, the point of this delay is to be insignificant to the normal user (who knows the correct password and only waits once), but significant to the attacker (who must run this process for every combination). Of course, this is all for the sake of learning and probably simpler to just use proper existing APIs.

Why doesn't having the code to the MD5 function help hackers break it?

I believe I can download the code to PHP or Linux or whatever and look directly at the source code for the MD5 function. Could I not then reverse engineer the encryption?
Here's the code - http://dollar.ecom.cmu.edu/sec/cryptosource.htm
It seems like any encryption method would be useless if "the enemy" has the code it was created with. Am I wrong?
That is actually a good question.
MD5 is a hash function -- it "mixes" input data in such a way that it should be unfeasible to do a number of things, including recovering the input given the output (it is not encryption, there is no key and it is not meant to be inverted -- rather the opposite). A handwaving description is that each input bit is injected several times in a large enough internal state, which is mixed such that any difference quickly propagates to the whole state.
MD5 has been public since 1992. There is no secret, and there has never been any secret, in the design of MD5.
MD5 has been considered cryptographically broken since 2004, the year of publication of the first collision (two distinct input messages which yield the same output); it was considered "weak" since 1996 (when some structural properties were found, which were believed to ultimately help in building collisions). However, there are other hash functions which are as public as MD5 is, and for which no weakness is known yet: the SHA-2 family. Newer hash functions are currently being evaluated as part of the SHA-3 competition.
The really troubling part is that there is no known mathematical proof that a hash function may actually exist. A hash function is a publicly described efficient algorithm, which can be embedded as a logic circuit of a finite, fixed and small size. For the practitioners of computational complexity, it is somewhat surprising that it is possible to exhibit a circuit which cannot be inverted. So right now we only have candidates: functions for which nobody has found weaknesses yet, rather than functions for which no weakness exists. On the other hand, the case of MD5 shows that, apparently, getting from known structural weaknesses to actual collisions to attacks takes a substantial amount of time (weaknesses in 1996, collisions in 2004, applied collisions -- to a pair of X.509 certificates -- in 2008), so the current trend is to use algorithm agility: when we use a hash function in a protocol, we also think about how we could transition to another, should the hash function prove to be weak.
It is not encryption, but a one-way hashing mechanism. It digests the string and produces a (hopefully) unique hash.
If it were a reversible encryption, zip and tar.gz formats would be quite verbose. :)
The reason it doesn't help hackers too much (obviously, knowing how one is made is beneficial) is that if they find a password to a system that is hashed, e.g. 2fcab58712467eab4004583eb8fb7f89, they need to know the original string used to create it, and also whether any salt was used. That is because when you log in, for obvious reasons, the password string is hashed with the same method as when it was generated, and the resulting hash is compared to what is stored.
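A small sketch of that login check (MD5 here only to mirror the example hash above; as the rest of this page argues, a slow salted hash like bcrypt or PBKDF2 is what you would actually use, and the salt value is made up):
import hashlib
import hmac

salt = "9f3b1c"                                                # illustrative stored salt
stored_hash = hashlib.md5((salt + "s3cret").encode()).hexdigest()

def login(candidate: str) -> bool:
    candidate_hash = hashlib.md5((salt + candidate).encode()).hexdigest()
    return hmac.compare_digest(candidate_hash, stored_hash)   # compare hashes, never plaintext

print(login("s3cret"))   # True
print(login("guess"))    # False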
Also, many developers are migrating to bcrypt, which incorporates a work factor; if the hashing takes 1 second as opposed to 0.01 seconds, it greatly slows down generating a rainbow table for your application, and those old PHP sites using only md5() become the low-hanging fruit.
Further reading on bcrypt.
One of the criteria of good cryptographic operations is that knowledge of the algorithm should not make it easier to break the encryption. So an encryption should not be reversible without knowledge of the algorithm and the key, and a hash function must not be reversible regardless of knowledge of the algorithm (the term used is "computationally infeasible").
MD5 and other hash functions (like SHA-1, SHA-256, etc.) perform a one-way operation on data that creates a digest or "fingerprint" that is usually much smaller than the plaintext. This one-way function cannot be reversed to retrieve the plaintext, even when you know exactly what the function does.
Likewise, knowledge of an encryption algorithm doesn't make it any easier (assuming a good algorithm) to recover plaintext from ciphertext. The reverse process is "computationally infeasible" without knowledge of the encryption key used.

Given a hashing algorithm, is there a more efficient way to 'unhash' besides bruteforce?

So I have the code for a hashing function, and from the looks of it, there's no way to simply unhash it (lots of bitwise ANDs, ORs, Shifts, etc). My question is, if I need to find out the original value before being hashed, is there a more efficient way than just brute forcing a set of possible values?
Thanks!
EDIT: I should add that in my case, the original message will never be longer than several characters, for my purposes.
EDIT2: Out of curiosity, are there any ways to do this on the fly, without precomputed tables?
Yes; rainbow table attacks. This is especially true for hashes of short strings, i.e. hashes of small strings like 'true', 'false', 'etc' can be stored in a dictionary and used as a comparison table. This speeds up the cracking process considerably. Also, if the hash size is short (e.g. MD5), the algorithm becomes especially easy to crack. Of course, the way around this issue is to combine 'cryptographic salts' with passwords before hashing them.
There are two very good sources of info on the matter: Coding Horror: Rainbow Hash Cracking and
Wikipedia: Rainbow table
Edit: Rainbow tables can take tens of gigabytes, so downloading (or reproducing) them may take weeks just to run simple tests. Instead, there seem to be some online tools for reversing simple hashes: http://www.onlinehashcrack.com/ (e.g. try to reverse 463C8A7593A8A79078CB5C119424E62A, which is the MD5 hash of the word 'crack')
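To answer EDIT2 directly: for inputs of only a few characters you do not even need precomputed tables; exhaustive search on the fly is enough. A rough sketch (the target word and alphabet are illustrative):
import hashlib
import itertools
import string

target = hashlib.md5(b"abc").hexdigest()      # pretend this is the stolen hash
alphabet = string.ascii_lowercase

def crack(target_hex, max_len=4):
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None                               # not found within max_len characters

print(crack(target))   # 'abc' -- under half a million hashes for lengths up to 4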
"Unhashing" is called a "preimage attack": given a hash output, find a corresponding input.
If the hash function is "secure", then there is no better attack than trying possible inputs until a hit is found; for a hash function with an n-bit output, the average number of hash function invocations will be about 2^n, i.e. Way Too Much for current earth-based technology if n is greater than 180 or so. To state it otherwise: if an attack method faster than this brute-force method is found for a given hash function, then the hash function is deemed irreparably broken.
MD5 is considered broken, but for other weaknesses (there is a published method for preimages with cost 2^123.4, which is thus about 24 times faster than the brute-force cost -- but it is still so far into the technologically infeasible that it cannot be confirmed).
When the hash function input is known to be part of a relatively small space (e.g. it is a "password", so it could fit in the brain of a human user), then one can optimize preimage attacks by using precomputed tables: the attacker still has to pay the search cost once, but he can reuse his tables to attack multiple instances. Rainbow tables are precomputed tables with a space-efficient compressed representation: with rainbow tables, the bottleneck for the attacker is CPU power, not the size of his hard disks.
Assuming the "normal case", the original message will be many times longer than the hash. Therefore, it is in principle absolutely impossible to derive the message from the hash, simply because you cannot calculate information that is not there.
However, you can guess what's probably the right message, and there exist techniques to accelerate this process for common messages (such as passwords), for example rainbow tables. It is very likely that something that looks sensible is the right message if the hash matches.
Finally, it may not be necessary at all to find the original message, as long as one can be found which will pass. This is the subject of a known attack on MD5. This attack lets you create a different message which gives the same hash.
Whether this is a security problem or not depends on what exactly you use the hash for.
This may sound trivial, but if you have the code to the hashing function, you could always override a hash table container class's hash() function (or similar, depending on your programming language and environment). That way, you can hash strings of say 3 characters or less, and then you can store the hash as a key by which you obtain the original string, which appears to be exactly what you want. Use this method to construct your own rainbow table, I suppose. If you have the code to the program environment in which you want to find these values out, you could always modify it to store hashes in the hash table.

When is it safe to use a broken hash function?

It is trivial to use a secure hash function like SHA-256, and continuing to use MD5 for security is reckless behavior. However, there are some complexities to hash function vulnerabilities that I would like to better understand.
Collisions have been generated for MD4 and MD5. According to NIST, MD5 is not a secure hash function. It only takes 2^39 operations to generate a collision and should never be used for passwords. However SHA-1 is vulnerable to a similar collision attack in which a collision can be found in 2^69 operations, whereas brute force is 2^80. No one has generated a SHA-1 collision and NIST still lists SHA-1 as a secure message digest function.
So when is it safe to use a broken hash function? Even though a function is broken it can still be "big enough". According to Schneier a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key and a collision cannot be found until this key is obtained. Once you have the key used in an HMAC it's already broken, so it's a moot point. What hash function vulnerabilities would undermine the security of an HMAC?
Let's take this property a bit further. Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password? Keep in mind the MD4 and MD5 attacks are prefixing attacks, and if a salt is prepended then an attacker cannot control the prefix of the message. If the salt is truly a secret, and isn't known to the attacker, then does it matter if it's appended to the password? Is it safe to assume that an attacker cannot generate a collision until the entire message has been obtained?
Do you know of other cases where a broken hash function can be used in a security context without introducing a vulnerability?
(Please post supporting evidence because it is awesome!)
Actually, collisions are easier than what you list, on both MD5 and SHA-1. MD5 collisions can be found in time equivalent to 2^26.5 operations (where one "operation" is the computation of MD5 over a short message). See this page for some details and an implementation of the attack (I wrote that code; it finds a collision within an average of 14 seconds on a 2.4 GHz Core2 x86 in 64-bit mode).
Similarly, the best known attack on SHA-1 is in about 2^61 operations, not 2^69. It is still theoretical (no actual collision was produced yet) but it is within the realm of the feasible.
As for implications on security: hash functions are usually said to have three properties:
No preimage: given y, it should not be feasible to find x such that h(x) = y.
No second preimage: given x1, it should not be feasible to find x2 (distinct from x1) such that h(x1) = h(x2).
No collision: it should not be feasible to find any x1 and x2 (distinct from each other) such that h(x1) = h(x2).
For a hash function with an n-bit output, there are generic attacks (which work regardless of the details of the hash function) in 2^n operations for the first two properties, and 2^(n/2) operations for the third. If, for a given hash function, an attack is found which, by exploiting special details of how the hash function operates, finds a preimage, a second preimage or a collision faster than the corresponding generic attack, then the hash function is said to be "broken".
However, not all usages of hash functions rely on all three properties. For instance, digital signatures begin by hashing the data which is to be signed, and then the hash value is used in the rest of the algorithm. This relies on the resistance to preimages and second preimages, but digital signatures are not, per se, impacted by collisions. Collisions may be a problem in some specific signature scenarios, where the attacker gets to choose the data that is to be signed by the victim (basically, the attacker computes a collision, has one message signed by the victim, and the signature becomes valid for the other message as well). This can be counteracted by prepending some random bytes to the signed message before computing the signature (the attack and the solution were demonstrated in the context of X.509 certificates).
HMAC security relies on another property that the hash function must fulfill; namely, that the "compression function" (the elementary brick on which the hash function is built) acts as a Pseudo-Random Function (PRF). Details on what a PRF is are quite technical, but, roughly speaking, a PRF should be indistinguishable from a Random Oracle. A random oracle is modeled as a black box which contains a gnome, some dice and a big book. On some input data, the gnome selects a random output (with the dice) and writes down in the book the input message and the output which was randomly selected. The gnome uses the book to check whether he already saw the same input message: if so, the gnome returns the same output as previously. By construction, you can know nothing about the output of a random oracle on a given message until you try it.
The random oracle model allows the HMAC security proof to be quantified in invocations of the PRF. Basically, the proof states that HMAC cannot be broken without invoking the PRF a huge number of times, and by "huge" I mean computationally infeasible.
Unfortunately, we do not have random oracles, so in practice we must use hash functions. There is no proof that hash functions with the PRF property really exist; right now, we only have candidates, i.e. functions for which nobody has been able to show (yet) that their compression functions are not PRFs.
If the compression function is a PRF, then the hash function is automatically resistant to collisions. That's part of the magic of PRFs. Therefore, if we can find collisions for a hash function, then we know that the internal compression function is not a PRF. This does not turn the collisions into an attack on HMAC. Being able to generate collisions at will does not help in breaking HMAC. However, those collisions demonstrate that the security proof associated with HMAC does not apply. The guarantee is void. That's just the same as with a laptop computer: opening the case does not necessarily break the machine, but afterwards you are on your own.
In the Kim-Biryukov-Preneel-Hong article, some attacks on HMAC are presented, in particular a forgery attack on HMAC-MD4. The attack exploits the shortcomings of MD4 (its "weaknesses") which make it a non-PRF. Variants of the same weaknesses were used to generate collisions on MD4 (MD4 is thoroughly broken; some attacks generate collisions faster than the computation of the hash function itself!). So the collisions do not imply the HMAC attack, but both attacks feed on the same source. Note, though, that the forgery attack has cost 2^58, which is quite high (no actual forgery was produced, the result is still theoretical) but substantially lower than the resistance level expected from HMAC (with a robust hash function with an n-bit output, HMAC should resist up to a 2^n work factor; n = 128 for MD4).
So, while collisions do not per se imply weaknesses on HMAC, they are bad news. In practice, collisions are a problem for very few setups. But knowing whether collisions impact a given usage of hash functions is tricky enough, that it is quite unwise to keep on using a hash function for which collisions were demonstrated.
For SHA-1, the attack is still theoretical, and SHA-1 is widely deployed. The situation has been described like this: "The alarm is on, but there is no visible fire or smoke. It is time to walk towards the exits -- but not to run."
For more information on the subject, begin by reading the chapter 9 of the Handbook of Applied Cryptography, by Menezes, van Oorschot and Vanstone, a must-read for the apprentice cryptographer (not to be confused with "Applied Cryptography" by B. Schneier, which is a well-written introduction but nowhere as thorough as the "Handbook").
The only time it is safe to use a broken hash function is when the consequences of a collision are harmless or trivial, e.g. when assigning files to a bucket on a filesystem.
When you don't care whether it's safe or not.
Seriously, it doesn't take any extra effort to use a secure hash function in pretty much every language, and performance impact is negligible, so I don't see why you wouldn't.
[Edit after actually reading your question]
According to Schneier, a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key and a collision cannot be found until this key is obtained.
Actually, it's essentially because being able to generate a collision for a hash does not necessarily help you generate a collision for the hash-of-a-hash (combined with the XORing used by HMACs).
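For reference, the construction in question is HMAC(K, m) = H((K' XOR opad) || H((K' XOR ipad) || m)): the attacker would have to produce collisions that survive this nested, keyed structure, not just a bare collision on H. A small self-check in Python (MD5 is used only because it is the broken-but-still-HMAC-usable example under discussion):
import hashlib
import hmac

def hmac_md5(key: bytes, msg: bytes) -> bytes:
    block_size = 64                                  # MD5 block size in bytes
    if len(key) > block_size:
        key = hashlib.md5(key).digest()              # long keys are hashed first
    key = key.ljust(block_size, b"\x00")             # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.md5(ipad + msg).digest()         # inner, keyed hash
    return hashlib.md5(opad + inner).digest()        # outer, keyed hash of the inner result

key, msg = b"secret key", b"message"
assert hmac_md5(key, msg) == hmac.new(key, msg, hashlib.md5).digest()
print(hmac_md5(key, msg).hex())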
Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password?
No, not if the hash has a preimage attack which allows you to prepend data to the input. For instance, if the hash was H(pass + salt), we'd need a preimage attack which allows us to find pass2 such that H(pass2 + salt) = H(pass + salt).
There have been append attacks in the past, so I'm sure prepend attacks are possible.
Download sites use MD5 hash as a checksum to determine if the file was corrupted during download, and I would say a broken hash is good enough for that purpose.
Let's say that a MITM decides to modify the file (say a zip archive, or an exe). Now, the attacker has to do two things -
Find a hash collision and create a modified file out of it
Ensure that the newly created file is also a valid exe or a zip archive
With a broken hash, 1 is a bit easier. But ensuring that the collision simultaneously meets other known properties of the file is too expensive computationally.
This is totally my own answer, and I could be terribly wrong.
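In code, that checksum use case is just hashing the downloaded file and comparing against the published value; a minimal sketch (the file name and expected digest are placeholders):
import hashlib

expected_md5 = "0123456789abcdef0123456789abcdef"    # placeholder: value published on the download page

def file_md5(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_md5("downloaded.zip") == expected_md5)    # placeholder file name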
The answer entirely depends on what you're using it for. If you need to prevent somebody producing a collision within a few milliseconds, I'd be less worried than if you need to prevent somebody producing a collision within a few decades.
What problem are you actually trying to solve?
Most of the worry about using something like MD4 for a password is related less to currently known attacks, than to the fact that once it has been analyzed to the point that collision generation is easy, it is generally presumed to be considerably more likely that somebody will be able to use that knowledge to create a preimage attack -- and when/if that happens, essentially all possible uses of that hash function become vulnerable.

Resources