How secure is 64-bit RC2?

In encryption, would two symmetric algorithms be considered to be equal in terms of security if their key sizes are equivalent? (i.e. does a 64-bit RC2 algorithm provide the same exact security that a 64-bit AES algorithm would?)
How secure (or insecure) would it be to use a 64-bit RC2 algorithm?
How long could I expect it to take for a brute force attack to crack this kind of encryption?
What kind of data would it be okay to secure with this algorithm? (e.g. I'm guessing that credit card info would not be okay to encrypt with this algorithm since the algorithm is not secure enough).

In general, equivalent key sizes do not imply equivalent security, for a variety of reasons:
First, it's simply the case that some algorithms have known attacks while others do not. The size of the key is just an upper bound on the effort it takes to break the cipher; in the worst case, you can always try every possible key and succeed (on average) after checking half the key space. That doesn't mean this is the best possible attack. Here's an example: AES with 128-bit keys uses 10 rounds. If you used AES with a 128-bit key but only one round, it would be trivially breakable even though the key is the same size. For many algorithms, there are known attacks which break the algorithm much faster than searching the entire key space.
In the case of block ciphers, there are other considerations as well, because block ciphers process data in chunks of bits. Various combinatorial properties come into play once you've encrypted large amounts of data. For instance, using the common CBC mode, you start running into problems after encrypting about 2^(n/2) blocks (this problem is intrinsic to CBC). For a 64-bit cipher like RC2, that means 2^32 64-bit blocks, or 32 GiB, which while large is quite easy to imagine (e.g. you encrypt a disk image with it). Whereas for a 128-bit cipher like AES, the problem only starts to crop up after about 2^64 128-bit blocks, or roughly 295 exabytes. In a scenario like this, AES with a 64-bit key would in fact be much more secure than RC2 with a 64-bit key.
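A quick back-of-envelope check of those figures in Python (nothing assumed beyond the 2^(n/2) rule of thumb):

    # Birthday bound: after ~2^(n/2) blocks in CBC mode, repeated
    # ciphertext blocks become likely for an n-bit block cipher.
    for name, n in [("RC2", 64), ("AES", 128)]:
        blocks = 2 ** (n // 2)            # ~2^(n/2) blocks before trouble
        total = blocks * (n // 8)         # bytes encrypted at that point
        print(f"{name} ({n}-bit blocks): ~{total / 2**30:,.0f} GiB")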
Here we get to the epistemology portion of the answer: even if there are no known attacks, that doesn't mean no attacks are possible. RC2 is quite old and is rarely used; even when it was a fairly current cipher there was rather less analysis of it than of, say, DES. It's quite likely that nobody in the last 5 years has bothered to go back and look at how to break RC2 using the latest attack techniques, simply because in the relatively academic publish-or-perish model that modern public cryptography research operates under, there is less gain to be had: it's much better, if you're seeking tenure (or looking to beef up your reputation to get more consulting work), to publish even a very marginal improvement on attacking AES than to utterly demolish RC2, because nobody uses it anymore.
And with a 64-bit key, you've immediately constrained yourself to that upper bound, and 2^64 effort is really quite low: possibly within reach not just for intelligence agencies but even for reasonably sized corporations (or botnet herders).
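For a sense of scale, a back-of-envelope sketch (the rate and node count are made-up assumptions, not measurements):

    # Back-of-envelope: expected time to brute-force a 64-bit key.
    keys_per_sec = 1e9            # one hypothetical node testing 10^9 keys/s
    nodes = 100_000               # e.g. a sizeable botnet
    expected_tries = 2 ** 63      # on average, half of the 2^64 key space
    seconds = expected_tries / (keys_per_sec * nodes)
    print(f"~{seconds / 86400:.1f} days")   # ~1.1 days at these rates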
Finally, I'll point out that RC2 was designed to be fast on 286/386-era processors. On modern machines it is substantially (roughly 4-6x) slower than AES or similar ciphers designed in the last 10 years.
I really can't see any upside to using RC2 for anything; the only use I can imagine making sense would be compatibility with some ancient (in computer time) system. Use AES (or one of the 4 other AES finalists if you must).

Here is my personal explanation of the expression "attack on n out of p rounds" that you can find on the page http://en.wikipedia.org/wiki/Block_cipher_security_summary . But beware: I am actually posting this as an answer so that people can tell me if I'm wrong. No-one ever explained this to me, and I am not a specialist; this is just the only explanation I could come up with that makes sense.
Cryptographers consider any method that requires fewer operations than brute force to be a successful attack. When a cipher is said to have an attack on "n out of p rounds", I take it to mean that if the cipher were defined as n rounds of the basic function it is actually defined as p rounds of, there would be an attack on it. Perhaps the attack actually keeps working for more than n rounds, but the cut-off point where it becomes more expensive than brute force is n. In other words, this is a very fine distinction for an algorithm that is not broken, and it tells us how close we are to understanding abstractly the mathematical function it implements. This explains the seemingly arbitrary numbers that occur as values of "n" when this expression is employed.
To reiterate, a cipher that has an attack on n out of p rounds is a cipher that is not broken.
Also, an algorithm that is "broken" because it has an attack in 2^100 operations for a 128-bit key can still be useful. The worry in this case is that further mathematical discoveries will continue to eat away at the number of operations it takes to crack it. But 2^100 is just as impractical as 2^128.

Related

Adler-32 in Multilevel Security Environment Safe Practices

I'm implementing a multilevel security environment on several web servers running Debian. I've done quite a bit of reading on fast hash-checking algorithms to complement the other security components.
It seems Adler-32 is quite fast and compact (which I'm quite fond of), although I understand it can be 'easily' forged. This aspect of it makes me a bit nervous, so is there some way to safeguard against it being forged?
No. CRCs can also be easily forged. If you are worried about forging (and make sure that you understand why you are worried about forging), then you need to use a cryptographically secure hash. E.g. SHA-2.
No. You cannot condition Adler-32 to be secure, for a simple reason which will affect even a perfect non-reversible hash function you might hope to find.
No 32-bit checksum or digest can be meaningfully resistant to attack, because there are only about four billion possible outputs for a modified message. This means that brute-forcing a collision takes a comparatively trivial amount of time.
To put this in real-world terms, hashcash gives me an estimate of 874 seconds to brute-force 32 bits of a SHA-1 digest. Any checksum which you choose for speed is going to be proportionally easier.
That's even before you start to consider potential weaknesses in the algorithm which might yield a more efficient approach than brute force, and the use of GPU computing to accelerate the attack.
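As a concrete illustration of how little effort a 32-bit checksum resists, here's a small brute-force collision search against Adler-32, using only the standard-library zlib module:

    import os
    import zlib

    # Birthday search: with at most 2^32 distinct Adler-32 values, two
    # random inputs sharing a checksum appear after roughly 2^16 tries
    # (far fewer in practice, since Adler-32 is badly distributed on
    # short inputs). This finishes in well under a second.
    seen = {}
    while True:
        msg = os.urandom(8)
        h = zlib.adler32(msg)
        if h in seen and seen[h] != msg:
            print(f"collision: {seen[h].hex()} vs {msg.hex()} -> {h:#010x}")
            break
        seen[h] = msg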

Is XXTEA a good encryption algorithm for a PIC microcontroller?

I need a good encryption algorithm for a PIC microcontroller. After some googling, it seems XXTEA is the only option; however, "XXTEA is vulnerable to a chosen-plaintext attack requiring 2^59 queries and negligible work".
I am not good at cryptography, so I would like to ask: how accurate is the above statement? Could I use XXTEA in a commercial security application? If not, is there any available algorithm I could use for my embedded system?
You cannot know what makes an encryption algorithm secure. Nobody knows what makes an encryption algorithm secure. The best we have are "algorithms which have sustained heavy scrutiny from hundreds of cryptographers during many years, and are still relatively unscathed". This is the case for AES, not for XXTEA. We may note that the attack on XXTEA is still very expensive, on the verge of the feasible and probably not applicable to most "commercial" situations; but still, this algorithm has been shown to be flaky. As such, if you value your security, don't get creative with your crypto; use well-vetted standards.
Why would you want to use XXTEA? What does it do for you that AES does not? You may want to have a look at this question for some pointers to implementations of AES for some PIC microcontrollers.
(The main design criterion of TEA and its derivatives like XXTEA was to have compact source code, so that it could be learned by heart and typed again on a computer. This does not immediately translate to compactness of compiled code. (X*)TEA algorithms tend to be slow and to rely on 32-bit operations which are a poor fit for small microcontrollers.)
One might look to encryption methods like XXTEA when a block size smaller than 128 bits is needed. For example, on a communication medium with very low bandwidth, like powerline communication in a noisy environment, a useful transmission may carry only 4 or 5 bytes of payload. If AES is used in that situation, every message is padded up to the 16-byte block size, which creates a lot of overhead on the available bandwidth.
XXTEA's block size is only 64 bits (8 bytes), so it creates far less overhead.
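The overhead argument is easy to quantify with a little arithmetic (the 5-byte payload is just the example figure from above):

    # Padding overhead when a tiny payload is rounded up to a full block.
    payload = 5                                # bytes, as in the example above
    for name, block in [("XXTEA", 8), ("AES", 16)]:
        sent = -(-payload // block) * block    # round up to the block size
        waste = 100 * (sent - payload) / sent
        print(f"{name}: {sent} bytes on the wire, {waste:.0f}% padding")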

Password hashing: Is this a way to avoid collisions?

I was thinking about using 2 keys for hashing each user password, obtaining 2 different hashes. This way, it would be (almost?) impossible to find a password that works, other than the actual password.
Is that right? Is it worth it?
An important rule to learn is "never try to invent your own cryptography". You are just wasting time at best and introducing security holes at worst.
If you are unsure whether you are an exception to this rule, then you are not an exception to this rule.
The designers of cryptographic hashes already worried about collisions so you do not have to. Just pick one (SHA-256 is a fine choice) and focus your efforts on the rest of your application.
You might use SHA-256 as a hashing algorithm instead. No collisions have been found to date, and it's highly unlikely that any will be seen on passwords in the future.
You could just use a longer hash. SHA-512, for example, is 512 bits and (assuming it's uniform) far, far less likely to clash than SHA-256. But personally, I wouldn't worry about it. Most passwords are themselves less than 32 bytes (256 bits), and so should have an extremely low probability of clashing under SHA-256.
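If you want to convince yourself numerically, the standard birthday approximation makes the point (the one-billion-users figure is an arbitrary assumption):

    # Birthday approximation: chance of any collision among k random
    # n-bit hashes is about k^2 / 2^(n+1).
    k = 10 ** 9          # hypothetical: a billion stored password hashes
    for n in (256, 512):
        p = k ** 2 / 2 ** (n + 1)
        print(f"SHA-{n}: collision probability ~ {p:.2e}")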

AES vs Blowfish for file encryption

I want to encrypt a binary file. My goal is to prevent anyone who doesn't have the password from reading the file.
Which is the better solution, AES or Blowfish with the same key length? We can assume that the attacker has great resources (software, knowledge, money) for cracking the file.
Probably AES. Blowfish was the direct predecessor to Twofish. Twofish was Bruce Schneier's entry into the competition that produced AES. It was judged as inferior to an entry named Rijndael, which was what became AES.
Interesting aside: at one point in the competition, all the entrants were asked to give their opinion of how the ciphers ranked. It's probably no surprise that each team picked its own entry as the best -- but every other team picked Rijndael as the second best.
That said, there are some basic differences in the basic goals of Blowfish vs. AES that can (arguably) favor Blowfish in terms of absolute security. In particular, Blowfish attempts to make a brute-force (key-exhaustion) attack difficult by making the initial key setup a fairly slow operation. For a normal user, this is of little consequence (it's still less than a millisecond) but if you're trying out millions of keys per second to break it, the difference is quite substantial.
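You can observe the expensive key schedule directly; here's a rough timing sketch, assuming the PyCryptodome package provides both ciphers (the numbers are illustrative only):

    # pip install pycryptodome -- assumed.
    import timeit
    from Crypto.Cipher import AES, Blowfish

    key = b"\x00" * 16
    # Creating a cipher object runs its key schedule; Blowfish's is
    # deliberately heavy (it re-runs the cipher to fill its S-boxes).
    for name, new in [("AES", lambda: AES.new(key, AES.MODE_ECB)),
                      ("Blowfish", lambda: Blowfish.new(key, Blowfish.MODE_ECB))]:
        t = timeit.timeit(new, number=1000)
        print(f"{name}: {t * 1000:.2f} ms per 1000 key setups")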
In the end, I don't see that as a major advantage, however. I'd generally recommend AES. My next choices would probably be Serpent, MARS and Twofish in that order. Blowfish would come somewhere after those (though there are a couple of others that I'd probably recommend ahead of Blowfish).
It is a not-often-acknowledged fact that the block size of a block cipher is also an important security consideration (though nowhere near as important as the key size).
Blowfish (and most other block ciphers of the same era, like 3DES and IDEA) has a 64-bit block size, which is considered insufficient for the large file sizes that are common these days (the larger the file and the smaller the block size, the higher the probability of a repeated block in the ciphertext, and such repeated blocks are extremely useful in cryptanalysis).
AES, on the other hand, has a 128 bit block size. This consideration alone is justification to use AES instead of Blowfish.
In terms of the algorithms themselves I would go with AES, for the simple reason that it's been accepted by NIST and will be peer-reviewed and cryptanalyzed for years. However, I would suggest that in practical applications, unless you're storing some file that the government wants to keep secret (in which case the NSA would probably supply you with a better algorithm than both AES and Blowfish), using either of these algorithms won't make too much of a difference. All the security should be in the key, and both of these algorithms are resistant to brute-force attacks. Blowfish has only been shown to be weak in implementations that don't make use of the full 16 rounds. And while AES is newer, that fact should make you lean more towards Blowfish (if you were only taking age into consideration). Think of it this way: Blowfish has been around since the 90's and nobody (that we know of) has broken it yet...
Here is what I would pose to you... instead of looking at these two algorithms and trying to choose between the algorithm, why don't you look at your key generation scheme. A potential attacker who wants to decrypt your file is not going to sit there and come up with a theoretical set of keys that can be used and then do a brute force attack that can take months. Instead he is going to exploit something else, such as attacking your server hardware, reverse engineering your assembly to see the key, trying to find some config file that has the key in it, or maybe blackmailing your friend to copy a file from your computer. Those are going to be where you are most vulnerable, not the algorithm.
AES.
(I am also assuming you mean Twofish, not the much older and weaker Blowfish.)
Both (AES & Twofish) are good algorithms. However, even if they were equal or Twofish was slightly ahead on technical merit, I would STILL choose AES.
Why? Publicity. AES is THE standard for government encryption, and thus millions of other entities also use it. A talented cryptanalyst simply gets more "bang for the buck" finding a flaw in AES than in the much less known and used Twofish.
Obscurity provides no protection in encryption. More bodies looking, studying, probing, attacking an algorithm is always better. You want the most "vetted" algorithm possible, and right now that is AES. If an algorithm isn't subject to intense and continual scrutiny, you should place a lower confidence in its strength. Sure, Twofish hasn't been compromised. Is that because of the strength of the cipher, or simply because not enough people have taken a close look... YET?
The algorithm choice probably doesn't matter that much. I'd use AES since it's been better researched. What's much more important is choosing the right operation mode and key derivation function.
You might want to take a look at the TrueCrypt format specification for inspiration if you want fast random access. If you don't need random access, then XTS isn't the optimal mode, since it has weaknesses other modes don't. And you might want to add some kind of integrity check (or message authentication code) too.
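To make the "mode plus key derivation" point concrete, here is a minimal sketch, assuming PBKDF2 from the standard-library hashlib plus AES-GCM from the pyca/cryptography package (GCM gives you the integrity check for free); this is a sketch, not a vetted file format:

    # pip install cryptography -- assumed.
    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_file(path, password):
        salt, nonce = os.urandom(16), os.urandom(12)
        # Derive the AES key from the password; the iteration count
        # slows down offline guessing (tune it for your hardware).
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        with open(path, "rb") as f:
            data = f.read()
        # GCM is an authenticated mode, so tampering fails decryption.
        return salt + nonce + AESGCM(key).encrypt(nonce, data, None)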
I know this answer violates the terms of your question, but I think the correct answer to your intent is simply this: use whichever algorithm allows you the longest key length, then make sure you choose a really good key. Minor differences in the performance of most well regarded algorithms (cryptographically and chronologically) are overwhelmed by a few extra bits of a key.
Both algorithms (AES and twofish) are considered very secure. This has been widely covered in other answers.
However, since AES is much more widely used now in 2016, it has been specifically hardware-accelerated on several platforms such as ARM and x86. While not significantly faster than Twofish before hardware acceleration, AES is now much faster thanks to the dedicated CPU instructions.

How bad is 3 as an RSA public exponent

I'm creating an application where I have to use RSA to encrypt some stuff using a public key. I want this encryption to be really fast. Initially, I tried a 2048 bit key with F4 (=65537) as the exponent but it is not fast enough. So now I'm considering the following 2 options:
2048 bit modulus, e=3
1024 bit modulus, e=65537
Both satisfy my performance requirements but which one provides better security? I should also note that I use the PKCS#1 padding scheme.
If you use random padding such as OAEP in PKCS#1, most (all?) of the known weaknesses from using low exponents are no longer relevant.
Also have you tried using e=17? There's no rule saying you have to choose either 3 or 65537.
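For what it's worth, here is a minimal sketch with the pyca/cryptography package (which accepts e=3 "for legacy purposes", though its docs recommend 65537) showing e=3 behind OAEP:

    # pip install cryptography -- assumed.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    key = rsa.generate_private_key(public_exponent=3, key_size=2048)
    ct = key.public_key().encrypt(b"secret", oaep)
    assert key.decrypt(ct, oaep) == b"secret"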
Provided that you use a good padding scheme, then there is no known reason why e=3 should have worse security than any other public exponent. Using a short exponent has issues if you also do not use a good padding scheme, but the problem more lies in the padding scheme than in the exponent.
The "gut feeling" of many researcher is that e=3 is not better than any other public exponent, and e=3 might turn out to be slightly weaker at some unspecified future date, although nothing points at such a weakness right now.
Key length has a much higher practical impact on security. A 768-bit RSA key has been cracked recently (this was not easy! Four years of work with big computers and bigger brains). A 1024-bit key is deemed adequate for the short term, but long-term uses (e.g. the encrypted data has high value and must still be confidential in the year 2030) would mandate something bigger, e.g. 2048 bits. See this site for much information on how cryptographic strength can be estimated, and has been estimated, by various researchers and organizations.
If you are after very fast asymmetric encryption, you may want to investigate the Rabin-Williams encryption scheme which is faster than RSA, while providing at least the same level of security for the same output length (but there is no easy-to-use detailed standard for that scheme, contrary to RSA with PKCS#1, so you are a bit on your own here).
While there is currently no known attack against it if a correct padding is used, small exponents are more likely to lead to exploits in case of implementation errors. And implementation errors are unfortunately still a threat. E.g. this is a vulnerability that was quite "popular". (Note, this is for signatures. I just want to show that even commercial software can have serious bugs.)
If you have to cut corners, then you have to consider the potential implications of your actions. I.e. choosing a small modulus or a small exponent both have their own drawbacks.
If you choose a small (1024 bit) modulus then you can't assume that your data can be kept confidential for decades.
If you choose a small exponent you might be more susceptible to implementation errors.
In the first case, you pretty much know when your secrets are in danger, since it is quite easy to follow the progress made in factoring. (This assumes, of course, that agencies that don't publish, e.g. the NSA, are not your enemy.)
In the second case (implementation errors), you don't know when you made a mistake. You might be safe using e=3 or you might have made a big blunder. I.e. in one case you have a rather good way to estimate your risk, and in the other case you have not.
Therefore, I'd recommend not to use e=3 at all.
I'd use more safety margin against those threats that are hard to predict, than those threats that are widely publicized.
In their book 'Practical Cryptography', Bruce Schneier and Niels Ferguson suggest using a public exponent of 3 for signatures and 5 for encryption. You should double-check the other criteria they recommend, which avoid catastrophes. Section 13.4 covers this (p229ff), and discusses the not very complex requirement that, given n = pq (where p and q are random primes), neither (p-1) nor (q-1) can be a multiple of 3 or 5. But still, double-check the book for the details.
(I believe there is a new edition of the book due out in 2010.)
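Here's a sketch of that criterion at key-generation time, assuming the sympy package purely for prime generation (this illustrates the stated requirement, not the book's exact procedure):

    # pip install sympy -- assumed, just for randprime.
    from sympy import randprime

    def good_prime(bits):
        """Random prime p with p-1 divisible by neither 3 nor 5, so
        both e=3 and e=5 remain invertible mod (p-1)(q-1)."""
        while True:
            p = randprime(2 ** (bits - 1), 2 ** bits)
            if (p - 1) % 3 != 0 and (p - 1) % 5 != 0:
                return p

    p, q = good_prime(1024), good_prime(1024)
    n = p * q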
To cite Don Coppersmith's 1997 paper "Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities":
RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message.
While this may not be a problem if the RSA-OAEP padding scheme is used, the PKCS#1 padding scheme (which the OP is using) is vulnerable if public exponent 3 is used.
FYI, see this for a bit of history:
http://chargen.matasano.com/chargen/2006/9/18/rsa-signature-forgery-explained-with-nate-lawson-part-iv.html
If your exponent is low and the value of m^e is less than the modulus, you can just take the eth root of the ciphertext to decrypt.
This is from my notes on crypto from two years ago. But, in answer to your question, it would seem that option 2 is better.
Someone who is more eager to do math might be able to give you a better explanation why.
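To see why the eth root works, here is the attack in a few lines of plain Python (toy setup: e=3, no padding, and a message small enough that m^3 stays below a 2048-bit modulus, so the "mod n" never kicks in):

    def icbrt(x):
        """Integer cube root by binary search (floats lose precision)."""
        lo, hi = 0, 1 << (x.bit_length() // 3 + 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** 3 <= x:
                lo = mid
            else:
                hi = mid - 1
        return lo

    m = int.from_bytes(b"attack at dawn", "big")
    c = pow(m, 3)                        # m^3 < n: no modular reduction occurred
    print(icbrt(c).to_bytes(14, "big"))  # b'attack at dawn' -- no key needed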

Resources