Hi everybody, my name is Mike Roks, and today I'm going to try to give you a comprehensive understanding of what Dynamic Programming actually is, how the tasks regarding it are structured, and what the general approach to solving them is.

The base principle in all Dynamic Programming tasks is finding a way to get the return value for a large input `X`, so that the answer:

- is "Given" (obvious, known), or
- can always be calculated using the answers for small `x`'s, where `x` is an input "smaller" (simpler, in the context of a given task, logically ordered before) than `X`.

That's pretty much it, but it requires clear examples to be understood in practice, so let's check them out:

- Fibonacci numbers (Easy)

In the example of Fibonacci numbers, the "Given" answers are for inputs 1 (== 0) and 2 (== 1); the "logically larger" input, in this case, is a larger number on the axis [1, +inf), and to calculate the answer for `X`, you need the answers for `X-1` and `X-2`. So the "direction" in which `X` grows is towards +inf on an integer scale, and the way to get the return value is:

- If `X` is 1 or 2, return 0 or 1.
- If `X` > 2, return `fib(X-1) + fib(X-2)`.

This way, getting return values for a "larger X" can always be done using values for "smaller x", and the values for "the smallest x" are known. So we have a valid solution to this problem using the base principle of dynamic programming. The non-optimized but obvious code is:

```java
public class FibFinderBad {
    public static void main(String[] args) {
        int numToFindFibFor = 8;
        System.out.println(fib(numToFindFibFor));
    }

    private static int fib(int x) {
        if (x < 1) return -1;
        // the "obvious" answers - "base cases"
        if (x == 1) return 0;
        if (x == 2) return 1;
        // the answer which uses "smaller" x's, but not "larger" x's
        return fib(x - 1) + fib(x - 2);
    }
}
```

As you can see, in each call of `fib`, we either return straight away if `x` is < 1, 1, or 2, or make 2 other calls of the same method. So if `numToFindFibFor` is 1, we make a total of 1 call to `fib`, and if it is 2, we also make a total of 1 call.
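These counts are easy to check yourself. Here is a quick sketch (a hypothetical `FibCallCounter` class, not part of the tutorial's code) that runs the same naive `fib` and increments a counter on every invocation:

```java
// Hypothetical helper class: counts how many times the naive fib is invoked.
public class FibCallCounter {
    static long calls = 0;

    static int fib(int x) {
        calls++;                  // count every invocation, including base cases
        if (x < 1) return -1;     // same sentinel as in FibFinderBad
        if (x == 1) return 0;     // base cases, answered directly
        if (x == 2) return 1;
        return fib(x - 1) + fib(x - 2);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 8; n++) {
            calls = 0;
            int result = fib(n);
            System.out.println("fib(" + n + ") = " + result + ", calls = " + calls);
        }
    }
}
```

Running it, you can watch the call count grow roughly like 2^x as the input grows.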
If `numToFindFibFor` is 3, we make 1 call for 3, 1 call for 2, and 1 call for 1, so a total of 3 calls. If it is 4, we make 1 call for 4, 3 calls for the fib(3) subtree, and 1 call for fib(2), so a total of 5 calls. Here is a picture of the calls made when calculating fib(5):

[Call tree of fib(5)]

But in this graphic, you can notice we are doing the same operation multiple times. For example, in the same run of calculating fib(5), calculating fib(2) comes up numerous times! Same for fib(3). You can imagine that if the initial `x` becomes a huge number, we will calculate fib of 2, 3, 4… thousands or millions of times, completely unnecessarily, as fib(3) is the same no matter how many times you calculate it!

So, we introduce an optimization mechanism, which is not necessary for a task to be solved in a Dynamic Programming manner, but which often helps us to drastically reduce the Time Complexity of our solution: Caching.

The number of calls of `fib` in the initial solution was of the order of 2^numToFindFibFor, which is already a monstrous number when `numToFindFibFor` goes beyond 40, and to calculate fib of 300 with this code in your lifespan, you'd need a supercomputer 😀. Let's cache the previously calculated values of `fib` instead of just throwing them away.
A cache can be any data structure; it just needs to allow us to save the currently computed result & to look up a previously calculated result.

```java
import java.util.HashMap;
import java.util.Map;

public class FibFinderBadWithCaching {
    final static Map<Integer, Integer> fibCache = new HashMap<>();

    public static void main(String[] args) {
        int numToFindFibFor = 7;
        System.out.println(fib(numToFindFibFor));
    }

    private static int fib(int x) {
        if (x < 1) return -1;
        // the "obvious" answers - "base cases"
        if (x == 1) return 0;
        if (x == 2) return 1;
        if (fibCache.containsKey(x)) return fibCache.get(x);
        // the answer which uses "smaller" x's, but not "larger" x's
        final int res = fib(x - 1) + fib(x - 2);
        fibCache.put(x, res);
        return res;
    }
}
```

And now, when you run fib(300), `fib` is called only about 600 times in total: each value is computed once, making 2 calls of its own. That's linear complexity! (Note that the actual value of fib(300) overflows `int`; to get the real number you'd switch to `BigInteger`, but the call count stays the same.)

This code can be even further optimized to require constant space instead of keeping as many items in `fibCache` as there are unique calls to `fib`, but that optimization is not necessary for the purpose of this tutorial. We have already gone from a TC of O(2^N) to a TC of O(N), which is a qualitative difference: before, we could not run `fib` on anything much above 40, and now we can.

That's it for now; in part 2 we will see a more complicated task & use the same base principle to solve it!
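P.S. For the curious, the constant-space optimization mentioned above can be sketched roughly like this (a hypothetical `FibFinderIterative` class, using `long` instead of `int` only to postpone overflow; the base cases and the `-1` sentinel mirror the recursive versions):

```java
// Hypothetical constant-space variant: instead of a cache keyed by every x,
// keep only the two most recent values while walking up from the base cases.
public class FibFinderIterative {
    static long fib(int x) {
        if (x < 1) return -1;          // same sentinel as before
        if (x == 1) return 0;          // base cases
        if (x == 2) return 1;
        long prev = 0, curr = 1;       // fib(1) and fib(2)
        for (int i = 3; i <= x; i++) {
            long next = prev + curr;   // fib(i) = fib(i-1) + fib(i-2)
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(8));    // prints 13 with these base cases
    }
}
```

Same O(N) time, but only two variables of state instead of a whole map.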