Getting the best of both worlds: Space-time trade-offs in algorithms. by@singhal.amogh1995

December 24th 2017

The Fibonacci sequence is an integer sequence where each term after the first two is the sum of the preceding two terms. For a better understanding, here is how it looks: 0, 1, 1, 2, 3, 5, 8, 13, 21, …

If I were to ask you to tell me the **fifth** term in this series, you would simply calculate the sum of the two terms preceding it.

Put simply, an algorithm is a **well-defined list of steps for solving a particular problem**. There are two major factors which govern the efficiency of an algorithm.

- **Space**: the data storage (*RAM, HDD,* etc.) consumed in performing a given task.
- **Time**: the time (*computation time* or *response time*) consumed in performing a given task.

The space-time trade-off, in a general sense, states that:

You can decrease the time complexity of your algorithm in exchange for greater space, or consume less space in exchange for slower execution.

Most computers have a large amount of space, but not infinite space. Also, most people are willing to wait a little while for a big calculation, but not forever. So if your problem is taking a long time but not much memory, a space-time trade-off would let you use more memory and solve the problem more quickly. Or, if it could be solved very quickly but requires more memory than you have, you can try to spend more time solving the problem in the limited memory.

The most common situation is an algorithm using a *lookup table*: the answers to some question can be written down in advance for every possible input. One way of solving the problem is to write down the entire lookup table, which lets you find answers very quickly but uses a lot of space. Another way is to calculate each answer without writing anything down, which uses very little space but might take a long time.
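As a small sketch in Python (factorials are just an illustrative choice of "question"), precomputing a table trades memory for instant answers, while recomputing trades time for zero storage:

```python
import math

# Space-heavy: precompute a lookup table of factorials up to 20 once.
# Every later query is a single dictionary lookup.
FACT_TABLE = {n: math.factorial(n) for n in range(21)}

def fact_fast(n):
    """Answer straight from the table: fast, but the table occupies memory."""
    return FACT_TABLE[n]

def fact_slow(n):
    """Recompute from scratch on every call: no extra storage, more work."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Both functions return the same answers; they just sit at opposite ends of the trade-off.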

A space-time trade-off can also be seen in the problem of data storage. If data is stored uncompressed, it takes more space but less time than if the data were stored compressed (**since compressing the data decreases the amount of space it takes, but it takes time to run the compression algorithm**).
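A minimal sketch using Python's standard-library `zlib` module makes the trade concrete: we pay CPU time on every compress and decompress, and in exchange the stored form is smaller.

```python
import zlib

# A highly repetitive payload, the kind that compresses well.
data = b"space-time trade-off " * 1000

compressed = zlib.compress(data)        # spend time now to save space
restored = zlib.decompress(compressed)  # spend time again to read it back

# The round trip is lossless, but every access costs computation.
assert restored == data
assert len(compressed) < len(data)
```

How favorable the trade is depends entirely on the data: random bytes barely shrink, while repetitive data shrinks dramatically.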

Larger code size can be used to increase program speed when using *loop unwinding*. This technique makes the program code longer for each iteration of a loop, but saves the computation time needed for jumping back to the beginning of the loop at the end of each iteration.
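A rough sketch of the idea in Python (note: in CPython the actual speed-up is often negligible; unrolling pays off mainly in compiled languages, where optimizing compilers frequently apply it for you):

```python
def sum_simple(values):
    """One loop-control step (test + jump) per element."""
    total = 0
    for v in values:
        total += v
    return total

def sum_unrolled(values):
    """Process four elements per iteration: longer code, fewer loop-control steps."""
    total = 0
    i = 0
    n = len(values)
    while i + 4 <= n:
        total += values[i] + values[i + 1] + values[i + 2] + values[i + 3]
        i += 4
    while i < n:  # handle the leftover elements when n is not a multiple of 4
        total += values[i]
        i += 1
    return total
```

The program text grows (more space for code) so that the running program does less bookkeeping (less time).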

In the field of *cryptography*, a space-time trade-off lets an attacker decrease the exponential time required for a *brute-force attack*. Rainbow tables use partially precomputed values in the hash space of a cryptographic hash function to crack passwords in minutes instead of weeks; decreasing the size of the rainbow table increases the time required to iterate over the hash space. The meet-in-the-middle attack uses a space-time trade-off to find the cryptographic key in only 2^{n+1} encryptions (and O(2^{n}) space), compared to the expected 2^{2n} encryptions (but only O(1) space) of the naive attack.
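As a toy illustration (a real rainbow table is far more space-efficient, using precomputed hash chains rather than a full dictionary; this hypothetical 4-digit-PIN cracker only shows the precomputation idea):

```python
import hashlib

# Space-heavy precomputation: hash every 4-digit PIN once up front.
# 10,000 entries of stored hashes buy us instant reversal later.
PRECOMPUTED = {
    hashlib.sha256(str(pin).zfill(4).encode()).hexdigest(): str(pin).zfill(4)
    for pin in range(10000)
}

def crack(digest):
    """O(1) dictionary lookup instead of re-hashing every candidate per query."""
    return PRECOMPUTED.get(digest)
```

The pure-time alternative would hash all 10,000 candidates on every query; the pure-space version above answers each query in constant time.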

Dynamic programming is another example where the time of solving problems can be decreased by using more memory. The above Fibonacci problem can be solved faster with DP.
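A minimal bottom-up sketch in Python, assuming we simply store each term as it is computed instead of recursing naively:

```python
def fib(n):
    """Compute the n-th Fibonacci number with dynamic programming.

    Storing every term takes O(n) space, but each term is computed
    exactly once, so the whole run takes O(n) time, versus the
    exponential time of the naive recursive solution.
    """
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

(Keeping only the last two terms would cut the space to O(1), another point on the same trade-off curve.)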

This takes O(n) time and O(n) space: a huge improvement in time over the naive exponential recursion, at the cost of storing the results of previous computations instead of recomputing them on demand.

Keep in mind that this is just one example; there are many cases that explore this space-time trade-off (and the choice might not be as 'clear-cut' as this one).

If you have made it this far, here's some good news for you. Something extra from the bag:

Space-time trade-offs are also prevalent in biology. How, you ask?

Using stored knowledge or encoding stimuli reactions as “instincts” in the DNA avoids the need for “calculation” in time-critical situations.

Thank you for your time. If you like my writing, you can find more of my content on Quora. **Do these concepts make sense to you?** If you have 12 seconds, I'd love to read your comment below.