Assuming a SHA 256 hash and a completely random password using the extended ASCII charset, is there a specific length after which additional characters offer no increase in entropy, and if so what is this?
Thanks.
SHA-256 has 256 bits, obviously. The minimum UTF-8 character length is one byte, i.e. 8 bits. Therefore, any password longer than 256/8 = 32 characters is extremely likely to collide with a shorter one.
Is this what you meant?
A hash doesn't increase entropy; it just, so to speak, distills it. Since SHA-256 produces 256 bits of output, if you supply it with a password that's completely unpredictable (i.e., each bit of input represents one bit of entropy), then anything beyond 256 bits of input is more or less wasted.
Other than from a truly random source, however, it's really hard to get input that has one bit of entropy for every bit of input. For typical English text, Shannon's testing showed about one bit of entropy per character.
I have come to roughly the same conclusion as the others did, but with a different rationale.
Generally speaking, a preimage (brute-force) attack on SHA-256 requires 2^256 evaluations, regardless of password length. In other words, a hash of a "password" that is thousands of characters long would still take on the order of 2^256 tries to duplicate. 2^256 is about 1.2 x 10^77. A very short password, however, where the number of possibilities is less than 2^256, is easier to break than that.
The threshold is passed when the number of possibilities is greater than 2^256.
If you are using ISO 8859-1, which has 191 characters, there are 191^n possible random passwords of length n. 191^33 is about 1.9 x 10^75 and 191^34 is about 3.6 x 10^77, so the threshold would be at 34 characters.
If you were using plain ASCII, with 128 characters, there would be 128^n possible random passwords of length n. 128^36 is about 7.2 x 10^75 and 128^37 is about 9.3 x 10^77, so the threshold would be at 37 characters.
Some of the other answers seem to imply that the threshold is always at 32 characters. However, if my logic is correct, the threshold varies, depending on the number of characters you have in your character set.
In fact, suppose that you used only the characters a-z and 0-9: you would continue to add password strength up until your password was 50 characters long! (36^49 is about 1.8 x 10^76, still below 2^256, while 36^50 is about 6.6 x 10^77.)
Hopefully this answer gives you a mathematical basis for answering the question.
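For anyone who wants to reproduce these numbers, here is a minimal Python sketch (the function name is my own); it computes the smallest length n at which charset_size^n reaches 2^hash_bits, i.e. the point beyond which extra characters add no brute-force cost:

import math

def threshold_length(charset_size, hash_bits=256):
    # Smallest n such that charset_size**n >= 2**hash_bits.
    return math.ceil(hash_bits / math.log2(charset_size))

print(threshold_length(191))  # ISO 8859-1: 34 characters
print(threshold_length(128))  # plain ASCII: 37 characters
print(threshold_length(36))   # a-z plus 0-9: 50 characters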
As a side note, if a birthday (collision) attack were possible on SHA-256, it would theoretically require only 2^128 evaluations (on average), which is about 3.4 x 10^38. In that case, the threshold for ISO 8859-1 would be at only 17 characters (191^16 is about 3.1 x 10^36, while 191^17 is about 6.0 x 10^38). Thankfully, such an attack has not yet been publicly demonstrated.
Please see the Wikipedia articles on SHA-2, preimage attacks, and birthday attacks.
I don't think there is an "effective" limit. A password of any length will be effective if it is effectively created (the usual rules: no dictionary words; mixed numbers, letters, cases, and special characters). It is better to force users to follow these rules than to limit length. A minimum length should be imposed, though, something like 8-10 characters, to save users from themselves.
I am trying to understand SHA uniqueness in simple terms.
For example, let us assume there are only messages with a maximum length of 4 bits (binary) in the whole world. The number of possible messages of different lengths is:
2 for single bit length
2^2 for double bit length
2^3 for 3 bit length
2^4 for 4 bit length
that would be 2 + 4 + 8 + 16 = 30 (31 if we count the empty message, 2^0 = 1).
Let us consider SHA-3 (for example) with an output length of 3 bits (binary), so the maximum possible number of digests is 8.
How can a digest be unique if we need to map 30 messages to 8? And why is it hard to find a digest collision for 2 unique messages?
I'm not sure what you mean by "SHA uniqueness". An SHA value (any version) is not unique, it cannot be, because it maps an infinite number of inputs (an input of any length) to a finite number of outputs.
A cryptographic hash function has three important properties (which make it a crypto hash, over a regular hash):
strong collision resistance: it is very difficult (computationally infeasible, ie. "not practically possible") to find two inputs that produce the same output (even if you can choose both)
weak collision resistance: for a given input, it is computationally infeasible to find another input that gives the same hash value (you can choose one input to match the output of a given input)
preimage resistance: for a hash value, it's computationally infeasible to find an input that produces that output (it's "one-way")
The only problem in your example is the size. With such small numbers it doesn't make sense, of course. But if the hash value is, say, 512 bits, brute-forcing a collision suddenly becomes extremely time-consuming and hence practically impossible.
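To make the pigeonhole effect concrete, here is a toy Python sketch (my own construction, not a real 3-bit SHA): it truncates SHA-256 to 3 bits and hashes all 30 messages of 1 to 4 bits, so collisions are unavoidable because 30 inputs must share at most 8 digests:

import hashlib
from collections import defaultdict

def toy_digest(message, bits=3):
    # Truncate SHA-256 to `bits` bits -- a stand-in for a tiny hash.
    h = int.from_bytes(hashlib.sha256(message).digest(), 'big')
    return h >> (256 - bits)

buckets = defaultdict(list)
for length in range(1, 5):          # all messages of 1..4 bits
    for value in range(2 ** length):
        msg = format(value, '0%db' % length).encode()  # e.g. b'101'
        buckets[toy_digest(msg)].append(msg)

for digest, msgs in sorted(buckets.items()):
    print(digest, msgs)             # 30 messages crowd into at most 8 buckets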
"SHA3 which has digest length of 3bits"
I think this question is based on a misunderstanding. SHA-3 is a family of hashes with the same output sizes as SHA-2. SHA-2 has bit sizes 224, 256, 384, and 512 for SHA-224, SHA-256, SHA-384, and SHA-512 respectively.
Of course, SHA-2 already took those identifiers, so SHA-3 will have SHA3-224, SHA3-256, SHA3-384 and SHA3-512. There were some proposals to use a different acronym, but those failed.
Still, SHA-3 hashes accept near-infinite input, so many inputs will map to the same value. However, since it is not possible to reverse any SHA-3 algorithm, it should be computationally infeasible to find a collision. That is, unless SHA-3 is broken someday, as it is not provably secure.
Any SHA-3 variant will have digests of at least 224 bits. The terminology has probably confused you: SHA-256 has 256 output bits, while SHA-3 is the third generation of SHA algorithms (and does NOT have a 3-bit output length).
Generally speaking, it's not hard to find a hash collision by brute force (alas, it's time-consuming); what is difficult is producing a collision that is also meaningful in its context. For example, assume you have a source file for an important application that hashes to a given digest. If an attacker tried to alter the source file in a way that introduces a vulnerability while also hashing to the same digest, he'd have to introduce a lot of random gibberish, making the attack obvious.
On some sites there are certain restrictions on what characters must be used in passwords. For example, a password must contain at least 1 digit, 1 letter, etc. Does this really make passwords harder to guess? It seems that brute-forcing such a password is easier than brute-forcing an arbitrary one. I've looked for similar questions, but those address password length restrictions, which seem reasonable to me (a minimum length, of course).
By making passwords meet a larger set of conditions, some feel that they increase the security of their systems. I would argue against that. Let's take a minor example:
A password of 4 characters where 1 must be a capital letter, 1 must be a number, and all entries are letters or numbers. Then you have:
26 letters
10 numbers
62 letters/numbers
62 letters/numbers
That gives
26*10*62*62 combinations (for one ordering)
However, if we simply limit to all letters/numbers only then we get
62*62*62*62 combinations
It's obvious which is larger.
Now, remove the limitation of letters/numbers and allow every UTF-8 character (including space, of course!) and that number gets much larger.
By requiring certain characteristics of a password other than minimum length, the total number of combinations is reduced, and that implies the overall security is reduced.
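A quick inclusion-exclusion count in Python (a sketch with my own variable names) confirms this for the 4-character example, counting all orderings this time:

total = 62 ** 4               # unconstrained letters/numbers: 14,776,336
no_upper = 36 ** 4            # passwords with no uppercase letter
no_digit = 52 ** 4            # passwords with no digit
neither = 26 ** 4             # lowercase only (no uppercase, no digit)

# At least one uppercase AND at least one digit, any ordering:
constrained = total - no_upper - no_digit + neither
print(total, constrained)     # 14776336 6242080 -- less than half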
EDIT: It helps, and does not hurt, to have a list of disallowed passwords, for example cuss words, common pet names, etc., since such predictable choices make passwords easier to crack.
In math, this kind of counting is covered by permutations and combinations.
http://betterexplained.com/articles/easy-permutations-and-combinations/
For easy examples:
a 5-digit number only: there are 10*10*10*10*10 possibilities.
ddddd: 10*10*10*10*10
5 alphanumeric characters (upper, lower, digits): there are (26+26+10)^5 possibilities.
xxxxx: (26+26+10)^5
More possibilities mean more time to brute-force your password.
I apologize if this has been answered before, but I was not able to find anything. This question was inspired by a comment on another security-related question here on SO:
How to generate a random, long salt for use in hashing?
The specific comment is as follows (sixth comment of accepted answer):
...Second, and more importantly, this will only return hexadecimal
characters - i.e. 0-9 and A-F. It will never return a letter higher
than an F. You're reducing your output to just 16 possible characters
when there could be - and almost certainly are - many other valid
characters.
– AgentConundrum Oct 14 '12 at 17:19
This got me thinking. Say I had some arbitrary series of bytes, with each byte being randomly distributed over 2^(8) values. Let this key be A. Now suppose I transformed A into its hexadecimal string representation, key B (e.g. 0xde 0xad 0xbe 0xef => "deadbeef").
Some things are readily apparent:
len(B) = 2 len(A)
The symbols in B are limited to 2^(4) discrete values while the symbols in A range over 2^(8)
A and B represent the same 'quantities', just using different encoding.
My suspicion is that, in this example, the two keys will end up being equally secure (otherwise every password-cracking tool would just convert one representation to another for quicker attacks). External to this contrived example, however, I suspect there is an important security moral to take away; especially when selecting a source of randomness.
So, in short, which is more desirable from a security stand point: longer keys or keys whose values cover more discrete symbols?
I am really interested in the theory behind this, so an extra bonus gold star (or at least my undying admiration) to anyone who can also provide the math / proof behind their conclusion.
If the number of different symbols usable in your password is x, and the length is y, then the number of different possible passwords (and therefore the strength against brute-force attacks) is x ** y. So you want to maximize x ** y. Either adding to x or adding to y will do that; which one makes the greater total depends on the actual numbers involved and what your practical limits are.
But generally, increasing x gives only polynomial growth while adding to y gives exponential growth. So in the long run, length wins.
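A few quick numbers (a sketch, assuming a 26-letter lowercase alphabet as the baseline) make the point:

print(26 ** 8)   # 208,827,064,576: 8 lowercase letters
print(52 ** 8)   # ~5.3 x 10^13: double the symbols, 256x stronger
print(26 ** 16)  # ~4.4 x 10^22: double the length, ~2 x 10^11 times stronger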
Let's start with a binary string of length 8. The possible combinations are all values from 00000000 to 11111111. This gives us a keyspace of 2^8, or 256 possible keys. Now let's look at option A:
A: Adding one additional bit.
We now have a 9-bit string, so the possible values are between 000000000 and 111111111, which gives us a keyspace size of 2^9, or 512 keys. We also have option B, however.
B: Adding an additional value to the keyspace (NOT the keyspace size!):
Now let's pretend we have a trinary system, where the accepted numbers are 0, 1, and 2. Still assuming a string of length 8, we have 3^8, or 6561 keys...clearly much higher.
However! Trinary does not exist!
Let's look at your example. Please be aware I will be clarifying some parts of it that you may have been confused about. Begin with a 4-byte (32-bit) bitstring:
11011110 10101101 10111110 11101111 (this is, btw, the bitstring equivalent to 0xDEADBEEF)
Since our possible values for each digit are 0 or 1, the base of our exponent is 2. Since there are 32 bits, we have 2^32 as the strength of this key. Now let's look at your second key, DEADBEEF. Each "digit" can be a value from 0-9, or A-F. This gives us 16 values. We have 8 "digits", so our exponent is 16^8...which also equals 2^32! So those keys are equal in strength (also, because they are the same thing).
But we're talking about REAL passwords, not just those silly little binary things. Consider an alphabetical password with only lowercase letters of length 8: we have 26 possible characters, and 8 of them, so the strength is 26^8, or 208.8 billion (takes about a minute to brute force). Adding one character to the length yields 26^9, or 5.4 trillion combinations: 20 minutes or so.
Let's go back to our 8-character string, but add a symbol to the alphabet: the space character. Now we have 27^8, which is about 282 billion... FAR LESS than what adding an additional character of length gave us!
The proper solution, of course, is to do both: for instance, 27^9 is 7.6 trillion combinations, or about half an hour of cracking. An 8-character password using upper case, lower case, numbers, special symbols, and the space character would take around 20 days to crack....still not nearly strong enough. Add another character, and it's 5 years.
As a reference, I usually make my passwords upwards of 16 characters, and they have at least one Cap, one space, one number, and one special character. Such a password at 16 characters would take several (hundred) trillion years to brute force.
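To reproduce the arithmetic above, here is a small Python sketch; the guess rate is an assumption chosen to roughly match the answer's "about a minute for 26^8" figure:

def crack_time_seconds(alphabet_size, length, guesses_per_second=3.5e9):
    # Worst-case brute force: try every combination at the given rate.
    return alphabet_size ** length / guesses_per_second

print(crack_time_seconds(26, 8) / 60)    # ~1 minute
print(crack_time_seconds(26, 9) / 60)    # ~26 minutes
print(crack_time_seconds(27, 9) / 3600)  # ~0.6 hours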
I'm currently using a SHA1 to somewhat shorten an url:
Digest::SHA1.hexdigest("salt-" + url)
How safe is it to use only the first 8 characters of the SHA1 as a unique identifier, like GitHub does for commits apparently?
To calculate the probability of a collision given a length and the number of hashes that you have, see the birthday problem. I don't know the number of hashes that you are going to have, but here are some examples. 8 hexadecimal characters is 32 bits, so for about 100 hashes the probability of a collision is about 1/1,000,000; for 10,000 hashes it's about 1/100; for 100,000 it's about 0.7; etc.
See the table in the Birthday attack article on Wikipedia to find a good hash length that would satisfy your needs. For example if you want the collision to be less likely than 1/1,000,000,000 for a set of more than 100,000 hashes then use 64 bits, or 16 hexadecimal digits.
It all depends on how many hashes you are going to have and what probability of a collision you are willing to accept (because there is always some probability, even if insanely small).
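The approximation behind those numbers is the standard birthday bound; here is a small Python sketch of it:

import math

def collision_probability(num_hashes, hash_bits):
    # Birthday approximation: p ~ 1 - exp(-n(n-1) / 2N).
    n, space = num_hashes, 2.0 ** hash_bits
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

for n in (100, 10000, 100000):
    print(n, collision_probability(n, 32))  # ~1e-6, ~0.01, ~0.7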
If you're talking about a SHA-1 in hexadecimal, then you're only getting 4 bits per character, for a total of 32 bits. By the birthday bound, you can expect a collision once you have roughly the square root of that many values, i.e. around 2^16 = 65,536 entries. If your URL shortener gets used much, it probably won't take terribly long before you start to see collisions.
As for alternatives, the most obvious is probably to just maintain a counter. Since you need to store a table of URLs to translate your shortened URL back to the original, you basically just store each new URL in your table. If it was already present, you give its existing number. Otherwise, you insert it and give it a new number. Either way, you give that number to the user.
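A minimal sketch of that counter approach (in Python rather than the question's Ruby, with in-memory dicts standing in for the real table):

import string

ALPHABET = string.digits + string.ascii_letters  # 62 symbols

def encode_base62(n):
    # Turn a counter value into a compact short-URL token.
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return ''.join(reversed(out))

url_to_id, id_to_url = {}, {}

def shorten(url):
    # Reuse the existing number if the URL was seen before.
    if url not in url_to_id:
        new_id = len(id_to_url)
        url_to_id[url] = new_id
        id_to_url[new_id] = url
    return encode_base62(url_to_id[url])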
It depends on what you are trying to accomplish. The output of SHA1 is effectively random with regards to the input (the output of a good hash function changes in half of its bits based on a one-bit change in the input, and SHA1, while not perfect, is pretty good), and by taking a 32-bit (assuming 8 hex digits) subset of the 160-bit output, you reduce the output space from 2^160 to 2^32 values. All things being equal, which they never are, this would significantly reduce the difficulty of finding a collision.
However, if the hash function's input must be a valid URL, that significantly reduces the number of possible inputs. #rsp points out the birthday problem, but given this, I'm not sure exactly how applicable it is at least in its simple form. Also, it largely assumes that there are no other precautions in place.
I would be more interested in why you are doing this. Is this about URLs that the user will need to remember and type? If so, tacking on a bunch of random hexadecimal digits is probably a bad idea. Is it a URL or URL parameter that will just be passed around programmatically? Then, I wouldn't care much about length. Either way, there are probably better ways to do what you are trying to accomplish.
If you use a binary output for SHA1 and Base64 encode the result, you will get much higher information density per character; you can have the same 8-character names, but rather than only 16^8 (2^32) possibilities, you'll have 64^8 (2^48) possibilities.
Using the assumption that the 50% probability of collision scales with 1.177*sqrt(N), a Base64-style encoding will require 256 times more inputs than the hex output before reaching a 50% chance of collision.
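A quick Python sketch of the density difference: the same 8 leading characters carry 32 bits from a hex digest but 48 bits from a Base64 encoding (the salt and URL here are placeholders):

import hashlib
from base64 import urlsafe_b64encode

digest = hashlib.sha1(b'salt-http://example.com').digest()
hex_id = digest.hex()[:8]                        # 8 chars x 4 bits = 32 bits
b64_id = urlsafe_b64encode(digest).decode()[:8]  # 8 chars x 6 bits = 48 bits
print(hex_id, b64_id)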
Join me in the fight against weak password hashes.
A PBKDF2 password hash should contain the salt, the number of iterations, and the hash itself so it's possible to verify later. Is there a standard format, like RFC2307's {SSHA}, for PBKDF2 password hashes? BCRYPT is great but PBKDF2 is easier to implement.
Apparently, there's no spec. So here's my spec.
>>> from base64 import urlsafe_b64decode
>>> password = u"hashy the \N{SNOWMAN}"
>>> salt = urlsafe_b64decode('s8MHhEQ78sM=')
>>> encoded = pbkdf2_hash(password, salt=salt)
>>> encoded
'{PBKDF2}1000$s8MHhEQ78sM=$hcKhCiW13OVhmLrbagdY-RwJvkA='
Update: http://www.dlitz.net/software/python-pbkdf2/ defines a crypt() replacement. I updated my little spec to match his, except his starts with $p5k2$ instead of {PBKDF2}. (I have the need to migrate away from other LDAP-style {SCHEMES}).
That's {PBKDF2}, the number of iterations in lowercase hexadecimal, $, the urlsafe_base64-encoded salt, $, and the urlsafe_base64-encoded PBKDF2 output. The salt should be 64 bits, the number of iterations should be at least 1000, and the PBKDF2-with-HMAC-SHA1 output can be any length; in my implementation it is always 20 bytes (the length of a SHA-1 hash) by default.
The password must be encoded to utf-8 before being sent through PBKDF2. No word on whether it should be normalized into Unicode's NFC.
This scheme should be on the order of iterations times more costly to brute force than {SSHA}.
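For reference, here is a minimal sketch of a pbkdf2_hash matching the format above, built on Python 3's hashlib.pbkdf2_hmac; reading the example's "1000" as lowercase hex (0x1000 = 4096 iterations) is my own interpretation of the spec:

import hashlib
from base64 import urlsafe_b64encode

def pbkdf2_hash(password, salt, iterations=0x1000, dklen=20):
    # The spec says the password must be UTF-8 encoded before hashing.
    dk = hashlib.pbkdf2_hmac('sha1', password.encode('utf-8'),
                             salt, iterations, dklen)
    # {PBKDF2}, iterations in lowercase hex, $, base64 salt, $, base64 key.
    return '{PBKDF2}%x$%s$%s' % (iterations,
                                 urlsafe_b64encode(salt).decode('ascii'),
                                 urlsafe_b64encode(dk).decode('ascii'))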
There is a specification for the parameters (salt and iterations) of PBKDF2, but it doesn't include the hash. This is included in PKCS #5 version 2.0 (see Appendix A.2). Some platforms have built-in support for encoding and decoding this ASN.1 structure.
Since PBKDF2 is really a key derivation function, it doesn't make sense for it to specify a way to bundle the "hash" (which is really a derived key) together with the derivation parameters; in normal usage, the key must remain secret and is never stored.
But for usage as a one-way password hash, the hash can be stored in a record with the parameters, but in its own field.
I'll join you in the fight against weak hashes.
OWASP has a Password Storage Cheat Sheet (https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet) with some guidance; they recommend 64,000 PBKDF2 iterations minimum as of 2012, doubling every two years (i.e. 90,510 in 2013).
Note that storing a long, cryptographically random per-userid salt is always a baseline requirement.
Note that having a widely variable per-userid number of iterations and storing the number of iterations along with the salt will add some complexity to cracking software, and may help preclude certain optimizations. For instance, "bob" gets hashed with 135,817 iterations, while "alice" uses 95,121 iterations, i.e. perhaps a minimum of (90,510 + RAND(90,510)) for 2013.
Note also that all of this is useless if users are allowed to choose weak passwords like "password", "Password1!", "P#$$w0rd", and "P#$$w0rd123", all of which will be found by rules-based dictionary attacks very quickly indeed (the latter is simply "password" with the following rules: uppercase the first letter, 1337-speak, add a three-digit number to the end). Take a basic dictionary list (phpbb, for a good, small starter wordlist), apply rules like this to it, and you'll crack a great many passwords where people try "clever" tricks.
Therefore, when checking new passwords, don't just apply "all four of upper, lower, number, symbol, at least 11 characters long", since "P#$$w0rd123" complies with this seemingly very tough rule. Instead, use that basic dictionary list and see if basic rules would crack the candidate; it's a lot simpler than actually attempting a crack. You can lower-case your list and their word, and then simply write code like "if the last 4 characters are a common year, check all but the last four characters against the wordlist", "if the last 3 characters are digits, check all but the last 3 characters against the wordlist", "check all but the last two characters against the wordlist", and "de-1337 the password (turn #'s into a, 3 into e, and so on), then check it against the wordlist and try those other rules too".
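A rough Python sketch of such a check (the wordlist and the de-1337 table are illustrative, not a complete cracker rule set):

# Tiny illustrative wordlist; in practice use a real list such as phpbb's.
WORDLIST = {'password', 'dragon', 'monkey', 'letmein'}
LEET = str.maketrans('@#$013', 'aasoie')  # '#' -> 'a', '3' -> 'e', etc.

def is_weak(candidate):
    word = candidate.lower()
    stems = [word]
    for k in (4, 3, 2):                  # strip a trailing year / digits
        if len(word) > k and word[-k:].isdigit():
            stems.append(word[:-k])
    stems += [s.translate(LEET) for s in list(stems)]  # de-1337 each stem
    return any(s in WORDLIST for s in stems)

print(is_weak('P#$$w0rd123'))  # True: strips '123', de-1337s to 'password'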
As far as passphrases go, they are in general a great idea, particularly if some other characters are added to the middle of words, but if and only if they're long enough, since you're giving up a lot of possible combinations per character.
Note that modern machines with GPUs can run tens of billions of hash computations (MD5, SHA1, SHA-256, SHA-512, etc.) per second, even in 2012. As far as word-combination "correct horse battery staple" type passwords go, that one is at best a very modest password: it's only 4 all-lowercase English words of length 7 or less, with spaces. So, suppose we go looking for XKCD-style passphrases with an 18-billion-guesses-per-second setup. A modern small American English dictionary has:
6k words of length 5 or less
21k words of length 7 or less
36k words of length 9 or less
46k words of length 11 or less
49k words of length 13 or less
With an XKCD-style passphrase, and without bothering to filter words by popularity ("correct" vs. "chair's" vs. "dumpier" vs. "hemorrhaging"), we have 21k^4, which is only about 2 x 10^17 possibilities. With the 18 billion/sec setup (a single machine with 8 GPUs if we're facing a single SHA1 iteration), that's about 4 months to exhaustively search the keyspace. If we had ten such setups, that's about two weeks. If we excluded unlikely words like "dumpier", a quick first pass would be a lot faster.
Now, if you get words out of a "huge" Linux American English wordlist, with entries like "Balsamina" or "Calvinistically" (both chosen using the "go to row" feature), then we'd have:
30k words of length 5 or less
115k words of length 7 or less
231k words of length 9 or less
317k words of length 11 or less
362k words of length 13 or less
Even with the 7-character word-length limit, with this huge dictionary as a base and randomly chosen words, we have 115k^4 ~= 1.8 x 10^20 possibilities, or about 12 years if the setup is kept up to date (doubling in power every 18 months). This is extremely similar to a 13-character, lowercase-plus-numbers-only password. "300 years" is what most estimates will tell you, but they fail to take Moore's Law into account.
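The passphrase arithmetic above, as a Python sketch (dictionary sizes and guess rate are the answer's own figures; the Moore's-law correction is left out for simplicity):

words = 21000            # small-dictionary words of length 7 or less
keyspace = words ** 4    # four random words, XKCD style: ~1.9 x 10^17
rate = 18e9              # guesses/second for one 8-GPU box

months = keyspace / rate / 86400 / 30
print(keyspace, months)  # ~1.9e17 possibilities, ~4 months for one box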