Now, let's consider the problem from an attacker's perspective. We want to
brute-force a hashed password, assuming we know which hash function was used.
The simplest approach is to compute the hash of every possible password until
we find the right one.

In this way we should, in theory, retrieve the plain password (or a
collision of it). However, if there are many possible passwords, the
computation can take far too long. For example, if we assume that the
password is composed of exactly 9 alphanumeric characters (lower- or
upper-case), there are 62 possibilities per character, i.e. 62^{9}
candidates, which roughly corresponds to 2^{54}. Assuming we can compute
32 million hashes per second (≈ 2^{25} hashes/s), this exhaustive attack
would take approximately 500 million seconds (≈ 2^{29} s), that is, about
17 years (this estimate uses an Intel i5-5300U CPU, 2.3 GHz, with 2 cores).
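The exhaustive search above can be sketched in a few lines. For tractability, the toy parameters below (3 lowercase letters, MD5) are illustrative assumptions; the article's scenario would use 9 characters over the full 62-character alphanumeric alphabet.

```python
import hashlib
import itertools
import string

# Assumption: a toy search space so the demo runs instantly.
# The real attack would use 62 characters and length 9.
ALPHABET = string.ascii_lowercase
LENGTH = 3

def brute_force(target_hash):
    """Hash every candidate password until one matches the target digest."""
    for chars in itertools.product(ALPHABET, repeat=LENGTH):
        candidate = "".join(chars)
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # not found within this search space

# Example: recover "abc" from its MD5 digest.
print(brute_force(hashlib.md5(b"abc").hexdigest()))  # → abc
```

The cost is proportional to the size of the search space, which is exactly why the 62^{9} case above takes years on a commodity CPU.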

Another approach is to perform this computation once on a powerful machine
and store the (plain, hash) pairs in a table. Then, to retrieve a plain
password, we simply look up the hash in the table and read off the
corresponding plain text.

With this method, the problem is storing the table, which in the previous
example contains 2^{54} (plain, hash) pairs. Considering that a password
takes 9 characters (so 9 bytes) and a hash 16 bytes (*e.g.* MD5), we get
2^{54} × 25 bytes to store, that is more than 2^{58} bytes, or roughly
250,000 TBytes!
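On a toy space the precomputation-plus-lookup idea fits in a dictionary; the parameters below (3 lowercase letters) are the same illustrative assumptions as before, which is precisely why the table fits in RAM here but not for the 62^{9} case.

```python
import hashlib
import itertools
import string

# Assumption: toy parameters so the full table (26^3 = 17,576 entries)
# fits in memory. At 62^9 entries this is the 250,000 TB problem.
ALPHABET = string.ascii_lowercase
LENGTH = 3

# Precompute once: map each digest back to its plain password.
table = {
    hashlib.md5("".join(chars).encode()).hexdigest(): "".join(chars)
    for chars in itertools.product(ALPHABET, repeat=LENGTH)
}

# A lookup is now a single dictionary access instead of a full search.
print(table[hashlib.md5(b"dog").hexdigest()])  # → dog
```

The trade-off is clear: all the time cost is paid up front, but the memory cost grows with the full search space.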

These issues lead us to other brute-forcing methods. One of them consists
in building dictionaries containing the most common passwords together with
every imaginable variation of them, still limited to 9 characters (Figure 3).

With that method, we definitely reduce the cost of a brute-force attack.
But obviously, this method is not exhaustive: if a client uses an uncommon
password (*e.g.* ag14d23a4), it has only a small probability of being in
our dictionary.
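A dictionary attack can be sketched as follows. The word list and the variation rules below are hypothetical stand-ins for a real dictionary (such as a leaked-password list) and real mangling rules.

```python
import hashlib

# Assumption: a tiny stand-in word list; real attacks use lists with
# millions of entries (e.g. leaked-password collections).
COMMON = ["password", "qwerty", "letmein", "dragon"]

def variations(word):
    """Yield a few imaginable declinations of a base word (illustrative rules)."""
    yield word
    yield word.capitalize()
    yield word + "1"
    yield word + "123"

def dictionary_attack(target_hash):
    """Try each common word and its variations against the target digest."""
    for base in COMMON:
        for candidate in variations(base):
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

print(dictionary_attack(hashlib.md5(b"dragon123").hexdigest()))  # → dragon123
```

A random-looking password such as ag14d23a4 is produced by none of these rules, which is the non-exhaustiveness discussed above.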

There is actually another approach, which makes an efficient compromise
between time and memory while keeping the search exhaustive. Imagine a
dictionary of which only 2 columns are stored (passwords in the first,
hash results in the other), but with hidden intermediate columns that are
not stored and instead recomputed on demand by applying intermediate
(reduction) functions; together, all these columns form an exhaustive
table, called a Rainbow Table. In this ingenious way, Rainbow Tables allow
attackers to significantly reduce memory and time requirements: in our
previous example, the quantity of stored data drops from 250,000 TBytes to
690 GBytes, a gain factor of roughly 360,000.

Even with this last method, which offers a good compromise between time and
memory, the brute-force cost remains high, in particular if you want to
crack a password longer than 9 characters. This leads us to wonder what
the best ways to accelerate these computations have been in recent years.