
by Mike Roks, March 8th, 2024

Hi everybody, my name is Mike Roks, and today I'm going to give you a comprehensive understanding of what Dynamic Programming actually is, how the tasks involving it are structured, and what the general approach to solving them looks like.

The base principle in all dynamic programming tasks is finding a way to get the return value for an input `X` (the "large X") so that the answer:

- is "Given" (obvious, known),
- or can always be calculated using the answers for inputs `x` (the "small x"), where `x` is an input "smaller" than `X` (simpler, logically ordered before it in the context of the given task).

That's pretty much it, but it takes clear examples to understand in practice, so let's check one out:

**Easy** - Fibonacci numbers

In the example of Fibonacci numbers, the "Given" answers are for inputs 1 (== 0) and 2 (== 1). The "logically larger" input here is simply a larger number on the axis [1, +inf), and to calculate the answer for X, you need the answers for X-1 and X-2. So the "direction" in which X grows is towards +inf on the integer scale, and the way to get the return value is:

- If X is 1 or 2, return 0 or 1 respectively.
- If X > 2, return fib(X-1) + fib(X-2).

This way, getting return values for a "larger X" can always be done using the values for "smaller x", and the values for the "smallest x" are known. So we have a valid solution to this problem using the base principle of dynamic programming. The non-optimized but obvious code is:

```
public class FibFinderBad {
    public static void main(String[] args) {
        int numToFindFibFor = 8;
        System.out.println(fib(numToFindFibFor));
    }

    private static int fib(int x) {
        if (x < 1) return -1;
        // the "obvious" answers - "base cases"
        if (x == 1) return 0;
        if (x == 2) return 1;
        // the answer which uses "smaller" x's, but not "larger" x's
        return fib(x - 1) + fib(x - 2);
    }
}
```

As you can see, each call of `fib` either 1) returns straight away if x is below 1, equal to 1, or equal to 2, or 2) makes two further calls of the same method `fib`. So if `numToFindFibFor` is 0 (or any number below 1), we make a single call that returns the error value -1. If `numToFindFibFor` is 1 or 2, we also make a total of 1 call, since those are base cases. If `numToFindFibFor` is 3, we make 1 call for 3, 1 call for 2, and 1 call for 1, for a total of 3 calls. If `numToFindFibFor` is 4, we make 1 call for 4, 1 call for 3, 2 calls for 2, and 1 call for 1, for a total of 5 calls.
Here is a picture of the calls made while calculating `fib(5)`:

But in this graphic, you can notice we are doing the same operation multiple times: in the same run of calculating `fib` of 5, calculating `fib` of 2 comes up numerous times! Same for `fib` of 3. You can imagine that if the initial `x` becomes a huge number, we will calculate `fib` of 2, 3, 4… thousands or millions of times, completely unnecessarily, as `fib` of 3 is the same no matter how many times you calculate it!
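To see the duplication concretely, here is a small instrumented variant (a hypothetical `FibCallCounter`, not part of the original solution) that records how many times `fib` is invoked for each `x` during one naive run:

```java
import java.util.HashMap;
import java.util.Map;

public class FibCallCounter {
    // how many times fib(x) was invoked for each x during a run
    static final Map<Integer, Integer> callCounts = new HashMap<>();

    public static void main(String[] args) {
        fib(5);
        // just for fib(5), fib(2) is already computed 3 times and fib(3) twice
        System.out.println(callCounts);
    }

    static int fib(int x) {
        callCounts.merge(x, 1, Integer::sum); // record this invocation
        if (x < 1) return -1;
        if (x == 1) return 0;
        if (x == 2) return 1;
        return fib(x - 1) + fib(x - 2);
    }
}
```

For `fib(5)` this already adds up to 9 calls in total, and the totals grow roughly like `2^x` as `x` increases.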

So, we introduce an optimization mechanism which is not necessary for a task to be solved in a `Dynamic Programming manner`, but which often helps us drastically reduce the Time Complexity of our solution: `Caching` (also known as memoization).

The number of calls of `fib` in the initial solution was of the order of `2^numToFindFibFor`, which is already a monstrous number when `numToFindFibFor` goes beyond 40; to calculate `fib` of 300 with this code within your lifespan, even a supercomputer wouldn't help 😀. Let's cache the previously calculated values of `fib` instead of just throwing them away. A cache can be any data structure; it just needs to allow us to save the currently computed result & look up a previously calculated result:

```
import java.util.HashMap;
import java.util.Map;

public class FibFinderBadWithCaching {
    final static Map<Integer, Integer> fibCache = new HashMap<>();

    public static void main(String[] args) {
        int numToFindFibFor = 7;
        System.out.println(fib(numToFindFibFor));
    }

    private static int fib(int x) {
        if (x < 1) return -1;
        // the "obvious" answers - "base cases"
        if (x == 1) return 0;
        if (x == 2) return 1;
        // reuse a previously calculated result if we have one
        if (fibCache.containsKey(x)) return fibCache.get(x);
        // the answer which uses "smaller" x's, but not "larger" x's
        // (note: int overflows for results beyond roughly fib(47);
        // a BigInteger cache would be needed for really large x)
        final int res = fib(x - 1) + fib(x - 2);
        fibCache.put(x, res);
        return res;
    }
}
```

And now, when you run `fib(300)`, you'll have to call `fib` only about 600 times in total: each value from 3 to 300 is computed exactly once. That's linear complexity!

This code can be optimized even further to require constant Space, instead of keeping as many items in `fibCache` as there are unique calls to `fib`, but that optimization is not necessary for the purpose of this tutorial. We have already gone from a TC of O(2^N) to a `TC of O(N)`, which is a qualitative difference: before, we could not run `fib` on anything much above 40, and now we can.
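For completeness, here is a minimal sketch of that constant-space idea (my own illustration, not code from the article): instead of caching every result, iterate "bottom-up" from the base cases and keep only the last two values.

```java
public class FibFinderIterative {
    public static void main(String[] args) {
        System.out.println(fib(8)); // prints 13
    }

    static int fib(int x) {
        if (x < 1) return -1;
        if (x == 1) return 0;
        int prev = 0; // fib(x-2), starts as fib(1)
        int curr = 1; // fib(x-1), starts as fib(2)
        for (int i = 3; i <= x; i++) {
            int next = prev + curr; // same recurrence, no recursion
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```

The time complexity stays O(N), but the space drops from O(N) cache entries to just two `int` variables.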
That's it for now; in part 2 we will see a more complicated task & use the same base principle to solve it!
