How secure is 64-bit RC2?

In encryption, would two symmetric algorithms be considered to be equal in terms of security if their key sizes are equivalent? (i.e. does a 64-bit RC2 algorithm provide the same exact security that a 64-bit AES algorithm would?)

How secure (or insecure) would it be to use a 64-bit RC2 algorithm?

How long could I expect it to take for a brute force attack to crack this kind of encryption?

What kind of data would it be okay to secure with this algorithm? (e.g. I'm guessing that credit card info would not be okay to encrypt with this algorithm since the algorithm is not secure enough).

There are 2 answers

Jack Lloyd (Best Answer)

In general, equivalent key sizes do not imply equivalent security, for a variety of reasons:

First, it's simply the case that some algorithms have known attacks while others do not. The size of the key is just an upper bound on the effort it would take to break the cipher; in the worst case, you can always try every possible key and succeed (on average) after checking half the key space. That doesn't mean this is the best possible attack. Here's an example: AES with 128-bit keys uses 10 rounds. If you used AES with a 128-bit key but only one round, it would be trivially breakable even though the key is the same size. For many algorithms, there are known attacks which can break the algorithm much faster than searching the entire key space.
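
To make the "half the key space on average" point concrete, here is a toy Python sketch (the 16-bit key space and the helper name are invented for illustration; no real cipher is involved):

```python
# Toy illustration: exhaustive search over a tiny 16-bit key space succeeds,
# on average, after trying about half the keys.
import random

KEY_BITS = 16
KEY_SPACE = 2 ** KEY_BITS

def attempts_to_find(secret_key):
    """Try keys 0, 1, 2, ... until the secret is hit; return the count."""
    for guess in range(KEY_SPACE):
        if guess == secret_key:
            return guess + 1
    raise AssertionError("unreachable")

trials = [attempts_to_find(random.randrange(KEY_SPACE)) for _ in range(1000)]
print("average attempts:", sum(trials) / len(trials))   # ~ 2**15 = 32768
print("worst case:      ", KEY_SPACE)                    # 2**16 = 65536
```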

In the case of block ciphers, there are other considerations as well, because block ciphers process data in fixed-size chunks of bits (the block size). There are various combinatorial properties which come into play once you've encrypted large amounts of data. For instance, using the common CBC mode, you start running into problems after encrypting about 2^(n/2) blocks (this problem is intrinsic to CBC). For a 64-bit cipher like RC2, that means 2^32 64-bit blocks, or 32 GiB, which while large is quite easy to imagine (e.g. you encrypt a disk image with it). Whereas for a 128-bit cipher like AES, the problem only starts to crop up after about 2^64 128-bit blocks, or roughly 295 exabytes. In a scenario like this, AES with a 64-bit key would in fact be much more secure than RC2 with a 64-bit key.
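
A quick sanity check of those figures in Python (just arithmetic; the function name is made up for the example):

```python
# Back-of-the-envelope check of the CBC birthday-bound figures above.
# For an n-bit block size, trouble starts after roughly 2^(n/2) blocks.
def cbc_comfort_limit_bytes(block_bits):
    blocks = 2 ** (block_bits // 2)      # ~2^(n/2) blocks before collisions matter
    return blocks * (block_bits // 8)    # total bytes encrypted at that point

rc2_limit = cbc_comfort_limit_bytes(64)      # 64-bit blocks (RC2)
aes_limit = cbc_comfort_limit_bytes(128)     # 128-bit blocks (AES)
print(f"RC2/CBC limit: ~{rc2_limit / 2**30:.0f} GiB")        # ~32 GiB
print(f"AES/CBC limit: ~{aes_limit / 10**18:.0f} exabytes")  # ~295 exabytes
```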

Here we get to the epistemology portion of the answer: even if there are no known attacks, that doesn't mean no attacks are possible. RC2 is quite old and is rarely used; even when it was a fairly current cipher, there was rather less analysis of it than of, say, DES. It's quite likely that nobody in the last 5 years has bothered to go back and look at how to break RC2 using the latest attack techniques, simply because in the rather academic publish-or-perish model that modern public cryptography research operates under, there is less gain to be had: if you're seeking tenure (or looking to beef up your reputation to get more consulting work), it's much better to publish even a very marginal improvement on attacking AES than to utterly demolish RC2, because nobody uses it anymore.

And with a 64-bit key, you've immediately constrained yourself to that upper bound, and 2^64 effort is really quite low; possibly within reach not just for intelligence agencies but also for reasonably sized corporations (or botnet herders).
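
To put 2^64 in perspective, here is a rough Python estimate; the key-search rates are assumptions for illustration, not measurements:

```python
# How long exhaustive search over a 64-bit key space takes at assumed rates.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

assumed_rates = {
    "single machine (assumed 10 million keys/s)": 1e7,
    "large distributed effort (assumed 1 trillion keys/s)": 1e12,
}
for label, keys_per_second in assumed_rates.items():
    years = 2 ** 64 / keys_per_second / SECONDS_PER_YEAR
    print(f"{label}: ~{years:,.1f} years for the full key space")
# Roughly 58,000 years for the first, well under a year for the second,
# which is why 2^64 is considered within reach for well-funded attackers.
```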

Finally, I'll point out that RC2 was designed to be fast on 286/386-era processors. On modern machines it is substantially (roughly 4-6x) slower than AES or similar ciphers designed in the last 10 years.
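
If you want to see the speed difference yourself, a minimal timing sketch along these lines works, assuming the PyCryptodome package is installed (its ARC2 module is an RC2 implementation); the exact ratio will vary by machine and library:

```python
# Rough throughput comparison of RC2 vs AES. ECB mode is used here only to
# time the raw block ciphers; do not use ECB to protect real data.
import time
from Crypto.Cipher import ARC2, AES   # PyCryptodome

data = b"\x00" * (16 * 1024 * 1024)   # 16 MiB of input

def mib_per_second(cipher):
    start = time.perf_counter()
    cipher.encrypt(data)
    return len(data) / (time.perf_counter() - start) / 2**20

rc2 = ARC2.new(b"\x01" * 8, ARC2.MODE_ECB)    # 64-bit RC2 key
aes = AES.new(b"\x01" * 16, AES.MODE_ECB)     # 128-bit AES key
print(f"RC2: {mib_per_second(rc2):.0f} MiB/s")
print(f"AES: {mib_per_second(aes):.0f} MiB/s")
```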

I really can't see any upside to using RC2 for anything; the only use I can imagine that would make sense would be compatibility with some ancient (in computer time) system. Use AES (or one of the 4 other AES finalists if you must).
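
For completeness, here is what "use AES" might look like in Python with the widely used cryptography package (an authenticated mode like GCM is the usual choice; the example plaintext is obviously made up):

```python
# AES-128 in GCM mode via the `cryptography` package: encryption plus
# integrity protection, which is what you generally want in practice.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # random 128-bit key
nonce = os.urandom(12)                      # must never repeat for the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"example plaintext", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"example plaintext"
```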

Pascal Cuoq

Here is my personal explanation of the expression "attack on n out of p rounds" that you can find on the page http://en.wikipedia.org/wiki/Block_cipher_security_summary. But beware: I am actually posting this as an answer so that people can tell me if I'm wrong. No-one ever explained this to me, and I am not a specialist; this is just the only explanation I could come up with that makes sense.

Cryptographers consider any attack that requires fewer operations than brute force to be a successful attack. When a cipher is said to have an attack on "n out of p rounds", I guess it means that if the cipher were defined as n rounds of the basic round function (instead of the p rounds it is actually defined as), there would be such an attack against it. Perhaps the attack technique keeps working for more than n rounds, but n is the cut-off point beyond which it becomes more expensive than brute force. In other words, this is a very fine distinction for an algorithm that is not broken, and it tells us how close we are to understanding, abstractly, the mathematical function it implements. This explains the seemingly arbitrary numbers that occur as values of "n" when this expression is employed.

To reiterate, a cipher that has an attack on n out of p rounds is a cipher that is not broken.

Also, an algorithm that is "broken" because it has an attack requiring 2^100 operations for a 128-bit key can still be useful. The worry in this case is that further mathematical discoveries can continue to eat away at the number of operations it takes to crack it. But 2^100 is just as impractical as 2^128.
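
A quick Python illustration of that last point; the rate of 10^15 operations per second is an assumption picked to be generous to the attacker:

```python
# Both 2^100 and 2^128 translate to absurd timescales, even at an assumed
# (and very optimistic) 10**15 operations per second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
ASSUMED_RATE = 10 ** 15   # operations per second, for illustration only

for exponent in (100, 128):
    years = 2 ** exponent / ASSUMED_RATE / SECONDS_PER_YEAR
    print(f"2^{exponent}: ~{years:.1e} years")
# Roughly 4e7 years for 2^100 and 1e16 years for 2^128: both far out of reach,
# even though 2^128 is about 2^28 (~270 million) times more work.
```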