Adler-32 in Multilevel Security Environment Safe Practices - security

I'm implementing a multilevel security environment on several web servers running Debian. I've done quite a bit of reading on fast hash checking algorithms to complement the other security components.
It seems Adler-32 is quite fast and compact (which I'm quite fond of), although I understand it can be 'easily' forged. This makes me a bit nervous, so is there some way to safeguard against it being forged?

No. CRCs can also be easily forged. If you are worried about forging (and make sure that you understand why you are worried about forging), then you need to use a cryptographically secure hash. E.g. SHA-2.

No. You cannot condition Adler-32 to be secure, for a simple reason that would apply even to a perfect, non-reversible 32-bit hash function.
No 32-bit checksum or digest can be meaningfully resistant to attack, because there are only about four billion possible outputs. Brute-forcing a modified message that matches a given checksum therefore takes a comparatively trivial amount of time.
To put this in real-world terms, hashcash is giving me an estimate of 874 seconds to brute-force 32 bits of a SHA-1 digest. Any checksum you choose for speed is going to be proportionally easier to attack.
That's even before you start to consider potential weaknesses in the algorithm which might yield a more efficient approach than brute force, and the use of GPU computing to accelerate the attack.
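Illustrated in PHP for concreteness (the file path and key handling below are placeholders of my own, not from the question), swapping the fast checksum for a cryptographic digest, or for a keyed HMAC if the stored digest itself could be replaced, is a small change:
<?php
// Hedged sketch: replacing a fast checksum with a cryptographic digest.
$path = '/var/www/html/index.php';            // placeholder file

$checksum = hash_file('adler32', $path);      // fast, but trivially forgeable
$digest   = hash_file('sha256', $path);       // collision-resistant digest

// If whoever can modify the file could also replace the stored digest,
// compare against a keyed HMAC instead, with the key held off-host.
$key  = random_bytes(32);                     // in practice, load from secure storage
$hmac = hash_hmac_file('sha256', $path, $key);
?>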

Related

Do I need MD5 as a companion to SHA-1?

Do I need both MD5 and SHA-1 values to be sure the downloaded file is
a) Untouched by hackers. For example, when I need to download some app's .iso via torrents
and
b) Not corrupted during technical issues? For example, some unstable network connection during download.
Or would the SHA-1 value alone be enough for both checks?
Also, is SHA-1 (without MD5) enough to be sure that a file downloaded years ago and stored somewhere on my HDD hasn't degraded?
From a security perspective MD5 is utterly broken.
SHA-1 is considered suspicious, and avoided for most uses if at all possible. For new projects: don't use it at all.
SHA-2 (aka SHA-256, SHA-512, etc.) is still widely used for fast hashes.
SHA-3 has been the designated successor since 2012, and nothing is stopping you from using it already. I see little reason not to use it for new projects.
What's the problem with older ones:
Their resistance to collisions is below par: a collision attack means an attacker constructing two different contents that have the same hash, both created at the same time. This problem exists for MD5 and SHA-1, and it's bad, but it requires the attacker to create both versions up front (after which they can swap one for the other at any time, undetected).
Their resistance to length extension attacks is relatively weak. This is especially true for MD5, but SHA-1 and even SHA-2 suffer from it to some degree.
When is it not a problem: to ensure your disk has not produced an error, any hash will do; even a simple CRC32 works wonders (and I'd recommend the simpler CRC check), or use a RAID array, which can fix errors rather than just detect them.
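For the corruption-only case, a minimal sketch in PHP of what that check might look like (the file name and the recorded checksum are placeholders of my own):
<?php
// Hedged sketch: detecting accidental corruption (bit rot, a bad transfer)
// by comparing against a checksum recorded when the file was known good.
$recorded = 'recorded_crc32_hex_value_here';   // placeholder value
$current  = hash_file('crc32b', 'backup.iso'); // placeholder file name

if ($current !== $recorded) {
    echo "File is corrupted or has changed\n";
}
?>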
Use both?
Well, having to find a collision on one hash where the same pair of plaintexts also collides on another hash is probably more difficult. This approach has been used in the past; the original PGP did something like it. If I'm not mistaken it calculated a number of values, one of them simply the length (which would prevent the extension attack above).
So yes, it likely adds something, but MD5, SHA-1 and SHA-2 work quite similarly internally, and that's the worrisome part: they are too much alike to be sure just how much the combination adds against a highly sophisticated attacker (think the level of the NSA and their counterparts).
So why not use one of the more modern versions of SHA-2, or even better SHA-3? They have no known practical weaknesses and have been heavily peer-reviewed. As such, for any commercial-level use, they should be more than enough.
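As a concrete sketch of the download check the question asks about (the file name and the published digest below are placeholders, not real values), computing and comparing a SHA-256 digest in PHP looks like this. Keep in mind that a digest published on the same server as the download mainly protects against corruption: anyone able to tamper with the file can usually replace the published digest too.
<?php
// Hedged sketch: verifying a downloaded ISO against a published SHA-256 digest.
$published = 'published_sha256_hex_digest_here';     // placeholder value
$computed  = hash_file('sha256', 'downloaded.iso');  // placeholder file name

if (hash_equals($published, $computed)) {            // PHP 5.6+; a plain === also works here
    echo "Digest matches\n";
} else {
    echo "Digest mismatch: do not use this file\n";
}
?>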
Refs:
https://en.wikipedia.org/wiki/Length_extension_attack
https://en.wikipedia.org/wiki/Collision_attack
https://stackoverflow.com/questions/tagged/sha-3

PHP Password Hashing in 2011

I'm bringing this up after spending a few hours trawling through a number of posts on SO about the most secure way to handle passwords in PHP/MySQL. Most answers seem to be fairly out of date, as are the links people are directed to. Many recommend MD5 and SHA-1.
We all know that MD5 and SHA-1 are no longer worth using for passwords, both because practical attacks against them exist and because there are databases out there that have built up millions of precomputed md5/sha1 hashes. Now, obviously you get around the precomputed tables with salt, which I intend to do.
I have however recently started playing around with whirlpool, which seems much more secure, and up to date. Would I be right in thinking whirlpool+salt is ample protection for passwords?
I was actually considering something like this:
<?php
// Application-wide secret ("pepper"), hard-coded outside the database
$static_salt = 'some_static_salt_string_hard_coded';
$password = 'some_password_here';
// Per-user salt, generated at registration and stored with the user record
$salt = 'unique_salt_generated_here';
$encoded = hash('whirlpool', $static_salt . $password . $salt);
?>
What do you think? Overkill or sensible?
This is probably good enough for most applications.
However, salts become (almost) useless if your DB is leaked -- including the static one if your configuration file is leaked too. They are a good protection against rainbow tables, but nowadays it's easier to use a bunch of GPUs to brute-force a given hash.
IMHO, currently the best solution is to use bcrypt. It's apparently supported in PHP 5.3+, and here's an example of how to use it.
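A minimal sketch of bcrypt through crypt(), which is what PHP 5.3 exposes ($2y$ needs 5.3.7+); the cost of 10 and the helper names are my own illustrative choices, and from PHP 5.5 onward password_hash()/password_verify() wrap all of this for you:
<?php
// Hedged sketch: bcrypt via crypt(). Requires the OpenSSL extension for the salt bytes.
function bcrypt_hash($password, $cost = 10) {
    // 22-character salt drawn from the bcrypt alphabet [./A-Za-z0-9]
    $salt = substr(strtr(base64_encode(openssl_random_pseudo_bytes(16)), '+', '.'), 0, 22);
    return crypt($password, sprintf('$2y$%02d$%s', $cost, $salt));
}

function bcrypt_check($password, $stored_hash) {
    // crypt() reuses the salt and cost embedded in the stored hash
    return crypt($password, $stored_hash) === $stored_hash;
}

$hash = bcrypt_hash('some_password_here');
var_dump(bcrypt_check('some_password_here', $hash)); // bool(true)
?>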
This will be enough (however, there is no sense in a static hard-coded salt). And why not use SHA-256? Whirlpool is rarely used.
It's fairly meaningless to discuss the merits of particular algorithms without a much wider consideration of the threat model and the specifics of the implementation.
Yes, Whirlpool does appear to have some advantages in terms of how effective it is as a hash, but as Nickolay says that may be deceptive, simply because it is less widely used. There are other considerations too: for some purposes, storing a 128-character string for each account may be an unnecessary overhead. For everyone it's a question of what the software supports (and some people might want to use the same account record to control access to different systems).
At the end of the day, it doesn't matter how sophisticated your hashing algorithm is:
given a free choice, users pick bad, guessable passwords
users will use the same password for different services
If it works for you - then great - but there is no universal solution.

Password hashing: Is this a way to avoid collisions?

I was thinking about using 2 keys for hashing each user password, obtaining 2 different hashes. This way, it would be (almost?) impossible to find a password that works, other than the actual password.
Is that right? Is it worth it?
An important rule to learn is "never try to invent your own cryptography". You are just wasting time at best and introducing security holes at worst.
If you are unsure whether you are an exception to this rule, then you are not an exception to this rule.
The designers of cryptographic hashes already worried about collisions so you do not have to. Just pick one (SHA-256 is a fine choice) and focus your efforts on the rest of your application.
You might use SHA-256 as the hashing algorithm instead. No collisions have been found to date, and it's highly unlikely that any will turn up for passwords in the future.
You could just use a longer hash. SHA-512, for example, is 512 bits, and (assuming it's uniform) far, far less likely to clash than SHA-256. But personally, I wouldn't worry about it. Most passwords are themselves less than 32 bytes (256 bits), and so should have an extremely low probability of clashing under SHA-256.
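To put a rough number on that (a back-of-the-envelope birthday-bound estimate of my own, not from the answers above): with n distinct inputs and a k-bit hash, the chance of any collision is roughly n^2 / 2^(k+1).
<?php
// Hedged sketch: birthday-bound collision estimates for SHA-256 and SHA-512.
$n = pow(2, 30);                            // about a billion distinct passwords
$p_sha256 = pow($n, 2) / pow(2, 257);       // k = 256
$p_sha512 = pow($n, 2) / pow(2, 513);       // k = 512
printf("SHA-256: %.1e, SHA-512: %.1e\n", $p_sha256, $p_sha512);
// Both are astronomically small; collisions are not the thing to worry about here.
?>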

AES vs Blowfish for file encryption

I want to encrypt a binary file. My goal is to prevent anyone who doesn't have the password from reading the file.
Which is the better solution, AES or Blowfish with the same key length? We can assume that the attacker has great resources (software, knowledge, money) for cracking the file.
Probably AES. Blowfish was the direct predecessor to Twofish. Twofish was Bruce Schneier's entry into the competition that produced AES. It was judged as inferior to an entry named Rijndael, which was what became AES.
Interesting aside: at one point in the competition, all the entrants were asked to give their opinion of how the ciphers ranked. It's probably no surprise that each team picked its own entry as the best -- but every other team picked Rijndael as the second best.
That said, there are some basic differences in the goals of Blowfish vs. AES that can (arguably) favor Blowfish in terms of absolute security. In particular, Blowfish attempts to make a brute-force (key-exhaustion) attack difficult by making the initial key setup a fairly slow operation. For a normal user, this is of little consequence (it's still less than a millisecond), but if you're trying out millions of keys per second to break it, the difference is quite substantial.
In the end, I don't see that as a major advantage, however. I'd generally recommend AES. My next choices would probably be Serpent, MARS and Twofish in that order. Blowfish would come somewhere after those (though there are a couple of others that I'd probably recommend ahead of Blowfish).
It is a not-often-acknowledged fact that the block size of a block cipher is also an important security consideration (though nowhere near as important as the key size).
Blowfish (like most other block ciphers of the same era, such as 3DES and IDEA) has a 64-bit block size, which is considered insufficient for the large file sizes that are common these days (the larger the file and the smaller the block size, the higher the probability of a repeated block in the ciphertext, and such repeated blocks are extremely useful in cryptanalysis).
AES, on the other hand, has a 128 bit block size. This consideration alone is justification to use AES instead of Blowfish.
In terms of the algorithms themselves I would go with AES, for the simple reason that it has been accepted by NIST and will be peer-reviewed and cryptanalyzed for years. However, I would suggest that in practical applications, unless you're storing some file that the government wants to keep secret (in which case the NSA would probably supply you with a better algorithm than both AES and Blowfish), using either of these algorithms won't make much of a difference. All the security should be in the key, and both of these algorithms are resistant to brute-force attacks. Blowfish has only been shown to be weak in implementations that don't make use of the full 16 rounds. And while AES is newer, that fact should make you lean more towards Blowfish (if you were only taking age into consideration). Think of it this way: Blowfish has been around since the 90s and nobody (that we know of) has broken it yet.
Here is what I would pose to you: instead of looking at these two algorithms and trying to choose between them, why don't you look at your key generation scheme? A potential attacker who wants to decrypt your file is not going to sit there and come up with a theoretical set of keys and then run a brute-force attack that can take months. Instead he is going to exploit something else, such as attacking your server hardware, reverse engineering your assembly to see the key, trying to find some config file that has the key in it, or maybe blackmailing your friend to copy a file from your computer. That is where you are most vulnerable, not the algorithm.
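To make that concrete, here is a sketch in PHP of a reasonable key generation scheme (the algorithm choice, iteration count and salt handling are my own illustrative choices): either derive the key from the password with a deliberately slow KDF, or generate it randomly and protect it.
<?php
// Hedged sketch: deriving a 256-bit encryption key from a password with PBKDF2.
$password   = 'user_supplied_passphrase';   // placeholder
$salt       = random_bytes(16);             // store alongside the ciphertext
$iterations = 200000;                       // illustrative; tune to your hardware
$key = hash_pbkdf2('sha256', $password, $salt, $iterations, 32, true);

// Alternatively, skip passwords entirely and protect a random key instead:
$random_key = random_bytes(32);
?>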
AES.
(I am also assuming you mean Twofish, not the much older and weaker Blowfish.)
Both AES and Twofish are good algorithms. However, even if they were equal, or Twofish were slightly ahead on technical merit, I would STILL choose AES.
Why? Publicity. AES is THE standard for government encryption, and thus millions of other entities also use it. A talented cryptanalyst simply gets more "bang for the buck" finding a flaw in AES than in the much less known and less used Twofish.
Obscurity provides no protection in encryption. More bodies looking, studying, probing and attacking an algorithm is always better. You want the most "vetted" algorithm possible, and right now that is AES. If an algorithm isn't subject to intense and continual scrutiny, you should place lower confidence in its strength. Sure, Twofish hasn't been compromised. Is that because of the strength of the cipher, or simply because not enough people have taken a close look... yet?
The algorithm choice probably doesn't matter that much. I'd use AES since it's been better researched. What's much more important is choosing the right operation mode and key derivation function.
You might want to take a look at the TrueCrypt format specification for inspiration if you want fast random access. If you don't need random access, then XTS isn't the optimal mode, since it has weaknesses other modes don't. And you might want to add some kind of integrity check (or message authentication code) too.
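As a sketch of what "the right mode plus an integrity check" can look like in PHP (PHP 7.1+ for GCM support; the file names and key handling are placeholders of my own), an AEAD mode such as AES-256-GCM produces the authentication tag as part of encryption:
<?php
// Hedged sketch: authenticated encryption of a file with AES-256-GCM via OpenSSL.
$key = random_bytes(32);                      // in practice, derive with a KDF or store securely
$plaintext = file_get_contents('secret.bin'); // placeholder file name

$iv  = random_bytes(12);                      // 96-bit nonce; never reuse with the same key
$tag = '';
$ciphertext = openssl_encrypt($plaintext, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);
file_put_contents('secret.bin.enc', $iv . $tag . $ciphertext);

// Decryption returns false if the ciphertext or tag has been tampered with.
$blob = file_get_contents('secret.bin.enc');
$recovered = openssl_decrypt(substr($blob, 28), 'aes-256-gcm', $key, OPENSSL_RAW_DATA,
                             substr($blob, 0, 12), substr($blob, 12, 16));
?>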
I know this answer violates the terms of your question, but I think the correct answer to your intent is simply this: use whichever algorithm allows you the longest key length, then make sure you choose a really good key. Minor differences in the performance of most well regarded algorithms (cryptographically and chronologically) are overwhelmed by a few extra bits of a key.
Both algorithms (AES and twofish) are considered very secure. This has been widely covered in other answers.
However, since AES is much more widely used now in 2016, it has been specifically hardware-accelerated on several platforms such as ARM and x86. While it was not significantly faster than Twofish before hardware acceleration, AES is now much faster thanks to the dedicated CPU instructions.

How secure is 64-bit RC2?

In encryption, would two symmetric algorithms be considered to be equal in terms of security if their key sizes are equivalent? (i.e. does a 64-bit RC2 algorithm provide the same exact security that a 64-bit AES algorithm would?)
How secure (or insecure) would it be to use a 64-bit RC2 algorithm?
How long could I expect it to take for a brute force attack to crack this kind of encryption?
What kind of data would it be okay to secure with this algorithm? (e.g. I'm guessing that credit card info would not be okay to encrypt with this algorithm since the algorithm is not secure enough).
In general, equivalent key sizes do not imply equivalent security, for a variety of reasons:
First, it's simply the case that some algorithms have known attacks where others do not. The size of the key is just an upper bound on the effort it would take to break the cipher; in the worst case, you can always try every possible key and succeed (on average) after checking half the key space. That doesn't mean this is the best possible attack. Here's an example: AES with 128-bit keys uses 10 rounds. If you used AES with a 128-bit key but only one round, it would be trivially breakable even though the key is the same size. For many algorithms, there are known attacks which can break the algorithm much faster than searching the entire key space.
In the case of block ciphers, there are other considerations as well, because block ciphers process data in fixed-size chunks of bits. Various combinatorial properties come into play once you've encrypted large amounts of data. For instance, using the common CBC mode, you start running into problems after encrypting about 2^(n/2) blocks (this problem is intrinsic to CBC). For a cipher with 64-bit blocks like RC2, that means 2^32 64-bit blocks, or 32 GiB, which while large is quite easy to imagine (e.g. you encrypt a disk image with it). Whereas for a cipher with 128-bit blocks like AES, the problem only starts to crop up after about 2^64 128-bit blocks, or roughly 295 exabytes. In a scenario like this, AES with a 64-bit key would in fact be much more secure than RC2 with a 64-bit key.
Here we get to the epistemology portion of the answer: even if there are no known attacks, it doesn't mean that there are no attacks possible. RC2 is quite old and is rarely used; even when it was a fairly current cipher there was rather less analysis of it than, say, DES. It's quite likely that nobody in the last 5 years has bothered to go back and look at how to break RC2 using the latest attack techniques, simply because in the relatively academic publish-or-perish model that modern public cryptography research operates under, there is less gain to be had; it's much much better if you're seeking tenure (or looking to beef up your reputation to get more consulting work) to publish even a very marginal improvement on attacking AES than it would be to utterly demolish RC2, because nobody uses it anymore.
And with a 64-bit key, you've immediately constrained yourself to that upper bound, and 2^64 effort is really quite low; possibly within reach not just for intelligence agencies but even for reasonably sized corporations (or botnet herders).
Finally, I'll point out that RC2 was designed to be fast on 286/386-era processors. On modern machines it is substantially (roughly 4-6x) slower than AES or similar ciphers designed in the last 10 years.
I really can't see any upside to using RC2 for anything; the only use I can imagine that would make sense would be compatibility with some ancient (in computer time) system. Use AES (or one of the 4 other AES finalists if you must).
Here is my personal explanation of the expression "attack on n out of p rounds" that you can find on the page http://en.wikipedia.org/wiki/Block_cipher_security_summary . But beware: I am posting this as an answer precisely so that people can tell me if I'm wrong. No one ever explained this to me, and I am not a specialist; this is just the only explanation I could figure out that makes sense.
Cryptographers consider anything that requires fewer operations than brute force to be a successful attack. When a cipher is said to have an attack on "n out of p rounds", I take it to mean that if the cipher were defined as n rounds of the basic function, instead of the p rounds it is actually defined as, there would be an attack on it. Perhaps the technique keeps working for more than n rounds, but n is the cut-off point beyond which it becomes more expensive than brute force. In other words, this is a very fine distinction for an algorithm that is not broken, and it tells us how close we are to understanding abstractly the mathematical function it implements. This explains the seemingly arbitrary numbers that occur as values of "n" when this expression is employed.
To reiterate, a cipher that has an attack on n out of p rounds is a cipher that is not broken.
Also, an algorithm that is "broken" because it has an attack in 2^100 operations for a 128-bit key can still be useful. The worry in this case is that further mathematical discoveries will continue to eat away at the number of operations it takes to crack it. But 2^100 is just as impractical as 2^128.

Resources