Friday, October 26, 2007

Hardware Assisted Brute Force Attacks


Jeff Atwood recently blogged about a hardware assisted brute force password cracking product developed by Elcomsoft.

Using Elcomsoft's figures from a PDF they released, Atwood concludes that we can attempt this many passwords per second:


Dual Core CPU    10,000,000 passwords/sec
GPU             200,000,000 passwords/sec


...and using Elcomsoft's relatively weak 8-character alpha-based password (a-z, A-Z), there are this many possible passwords:


52^8 = 53,459,728,531,456


Some more basic math gives us:


53,459,728,531,456 / 10,000,000 pps / 60 / 60 / 24 = 61.9 days
53,459,728,531,456 / 200,000,000 pps / 60 / 60 / 24 = 3.1 days

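For anyone who wants to reproduce the arithmetic, here's a quick Python sketch. The function name is mine, and the rates are simply the figures quoted above; this has nothing to do with Elcomsoft's actual tooling:

def days_to_exhaust(charset_size, length, passwords_per_second):
    # Total number of candidate passwords for this charset and length.
    keyspace = charset_size ** length
    # Worst case: we have to try every single candidate.
    seconds = keyspace / passwords_per_second
    return seconds / 60 / 60 / 24

print(days_to_exhaust(52, 8, 10_000_000))    # CPU: ~61.9 days
print(days_to_exhaust(52, 8, 200_000_000))   # GPU:  ~3.1 days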

To his credit, Atwood correctly assaults Elcomsoft's most questionable assumptions: allowing people only 8 characters, and using an extremely limited character set. Add more characters and it becomes clear that the exponent plays a rather sizable role. Here's the amount of time for twelve characters:


52^12 / 200,000,000 pps / 60 / 60 / 24 = 22,620,197 days

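Plugging twelve characters into the days_to_exhaust sketch from above:

print(days_to_exhaust(52, 12, 200_000_000))   # ~22,620,197 days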

We can all agree that ~22 million days (about 61,000 years) is an awfully long time. But this is where Atwood makes an error:

For those of you keeping score at home, with a 12 character password this hardware assisted brute-force attack would take 61,973 years. Even if we increased the brute force attack rate by a factor of a thousand, it would still take 62 years.


The error here is pretty basic: over the next 62 years, what sort of new computing techniques will we develop? How much faster will GPUs run five years from now? Ten years from now? What did computing resemble 62 years ago?

I'll give you a hint:

[Photo of ENIAC]

This is ENIAC. It was the "first large-scale, electronic, digital computer capable of being reprogrammed to solve a full range of computing problems." In one second, it could perform:
  • 5,000 add or subtract operations
  • 357 multiply operations with two ten digit operands
  • 35 division or square root operations
...all of this, at a svelte 30 tons, while consuming 150 kW. This was state of the art in computing, roughly 61 years ago.

Considering that the operations involved in hashing a password are substantially more complicated than simple arithmetic, how many passwords could ENIAC hash in a second? To be honest, I think it's more likely that this figure would be expressed in seconds per password rather than passwords per second, but I'll be the first to admit that I'm not familiar enough with the NTLM hashing algorithm to say with certainty.

For the sake of argument (and to avoid discussing a computing architecture that dates me by a good 35 years), let's assume that ENIAC can do one hash a second. This means our modern CPU is ten million times faster than ENIAC, or somewhere between 2^23 and 2^24 times the speed of ENIAC. The GPU is 2^27 to 2^28 times the speed of ENIAC. These are both commodity products, and ENIAC was state of the art, so a fairer comparison would be with something like Blue Gene, but I digress.
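
If you want to sanity-check those powers of two, the base-2 logarithm does it directly (again, just a sketch using the rates quoted above and my assumed 1 hash/sec ENIAC baseline):

import math

# How many doublings separate the assumed 1-hash/sec ENIAC from today?
print(math.log2(10_000_000))    # ~23.25 -> between 2^23 and 2^24 (CPU)
print(math.log2(200_000_000))   # ~27.58 -> between 2^27 and 2^28 (GPU)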

Using the above assumptions, the number of hashes we can do should double roughly every 2.25 years (about 27 doublings spread over ENIAC's 61 years gets us to today's GPU). This also means it'd take 22.5 years for our current computing power to increase 2^10 (or 1024) times. Let's reread Jeff's statement:

Even if we increased the brute force attack rate by a factor of a thousand, it would still take 62 years.


...so this means that 22.5 years from now, a computer would still need another 62 years to crack the 12-character password, for a grand total of 84.5 years. But if we increase our computing power by a factor of a million (2^20, or 45 years from now; no big deal, we just push the date out), it will only take 0.062 years (roughly 22 days), or 45.062 years from today. And if we increase our computing power by 2^27, it ends up being roughly four hours to crack a single 12-character password.
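
Here's a rough sketch of that projection. The 2.25-year doubling period and the 1 hash/sec ENIAC baseline are the assumptions made above; the outputs differ slightly from the rounded figures in the prose because I'm using 2^10 = 1,024 rather than a flat 1,000:

DOUBLING_PERIOD = 2.25               # years per doubling (assumed above)
DAYS_AT_TODAYS_SPEED = 22_620_197    # 52^12 at 200,000,000 passwords/sec

def projected_years(doublings):
    # Years until hardware that much faster exists,
    # plus years to run the attack on that hardware.
    wait = doublings * DOUBLING_PERIOD
    crack = DAYS_AT_TODAYS_SPEED / (2 ** doublings) / 365
    return wait, crack, wait + crack

print(projected_years(10))   # wait 22.5y, crack ~60.5y,  total ~83y
print(projected_years(20))   # wait 45y,   crack ~22 days, total ~45.06y
print(projected_years(27))   # wait ~61y,  crack ~4 hours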

Obviously all of this is total speculation, with wild figures and estimates on my part that are open to interpretation, and huge questions about where computing will be 40 years from now. But the point I'm trying to make is: it's a fallacy to assume today's computing power is what you'll have access to in a year. It's a gigantic fallacy to assume today's computing power is what you'll have access to in 20 years (what were you doing in 1987?). Discussing what we'll have at our disposal in 62 years is mind-boggling.

When we're computing the amount of time it takes to do something, it makes no sense to hold computing power constant, especially once we start talking about anything in excess of a year. If we think it's going to take ten years to perform some computation, one viable option is to wait five years (at which point your computing power has roughly quadrupled), kick off that batch job, and take a 2.5-year coffee break. It still took you 7.5 years, but that's 2.5 years saved.
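
As a toy illustration of that tradeoff: the ten-year job is the example above, and the 2.5-year doubling period is the implicit assumption that makes five years of waiting equal a quadrupling:

def total_time(job_years, wait_years, doubling_period=2.5):
    # Years spent waiting, plus the (now shorter) job on faster hardware.
    return wait_years + job_years / (2 ** (wait_years / doubling_period))

print(total_time(10, 0))   # start today:     10.0 years
print(total_time(10, 5))   # wait five years:  7.5 years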
