Dynamic programming (competitive programming)

This is an introductory guide to dynamic programming problems and techniques that might come up during competitive programming and/or programming interviews.

Top-down Memoization vs Bottom-up tabulation
There are two main approaches to implementing dynamic programming: bottom-up tabulation and top-down memoization.

It is generally a good idea to practice both approaches. The main advantage of the bottom-up approach is that you can exploit the order of evaluation to save memory, and you avoid the stack overhead of a recursive solution. Meanwhile, the top-down memoization approach can be more intuitive to implement, as it formulates the problem as a standard recursive one with a way to memoize results that have already been computed - in other words, a cache. Additionally, because of its recursive formulation, the top-down approach only solves the subproblems that are reachable from your main problem, while bottom-up generally attempts to solve all subproblems, whether or not they are needed for the final answer.

Example - Minimum Falling Path Sum

 * Leetcode Link
 * Problem Explanation

In this problem, you are given an $$N \times N$$ matrix and want to find the minimum sum of a falling path from the top row to the bottom row, where from each cell you may move to one of the (up to) three adjacent cells in the row below. The code for the two approaches is below:

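A minimal Python sketch of both approaches (function names are illustrative), assuming the LeetCode formulation described above:

```python
from functools import lru_cache

def min_falling_path_top_down(matrix):
    """Top-down: recurse from each cell of the top row, caching results."""
    n = len(matrix)

    @lru_cache(maxsize=None)
    def best(row, col):
        # Out-of-bounds columns can never be part of a valid path.
        if col < 0 or col >= n:
            return float("inf")
        # Base case: the last row contributes only its own value.
        if row == n - 1:
            return matrix[row][col]
        # Branch into the three adjacent cells of the row below.
        return matrix[row][col] + min(
            best(row + 1, col - 1),
            best(row + 1, col),
            best(row + 1, col + 1),
        )

    return min(best(0, col) for col in range(n))

def min_falling_path_bottom_up(matrix):
    """Bottom-up: fill a table row by row, starting from the last row."""
    n = len(matrix)
    # dp[r][c] = best falling-path sum starting at cell (r, c)
    dp = [row[:] for row in matrix]
    for r in range(n - 2, -1, -1):
        for c in range(n):
            lo = max(c - 1, 0)
            hi = min(c + 1, n - 1)
            dp[r][c] += min(dp[r + 1][lo:hi + 1])
    return min(dp[0])
```

Both functions compute the same answer; they differ only in whether the table is filled on demand (via the cache) or in a fixed row order.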

The plain recursive solution is clearly going to run too slowly at $$N = 100$$: each cell branches 3 ways over $$N$$ rows, or roughly $$O(3^N)$$ operations. By using memoization to cache results, note that there are only $$O(N^2)$$ distinct subproblems, each doing a constant amount of work, thus making the solution $$O(N^2)$$ time. As we need to store a result for each cell, this is also $$O(N^2)$$ space.

The bottom-up approach is also $$O(N^2)$$ but with tighter loops, as there is no recursion overhead and memory locality is generally better. It's easy to see that this is $$O(N^2)$$ space as well. Furthermore, we can optimize the space requirement to $$O(N)$$ by noting that each subproblem only depends on the results of the row below it - so we only need to keep the previous row and the current row in memory, which is $$O(N)$$.
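A sketch of that rolling-row optimization, keeping only the row below the one currently being filled:

```python
def min_falling_path_one_row(matrix):
    """Bottom-up with O(N) space: only the row below is remembered."""
    n = len(matrix)
    below = matrix[-1][:]  # best path sums for the row below the current one
    for r in range(n - 2, -1, -1):
        current = []
        for c in range(n):
            lo = max(c - 1, 0)
            hi = min(c + 1, n - 1)
            current.append(matrix[r][c] + min(below[lo:hi + 1]))
        below = current  # the current row becomes "below" for the next pass
    return min(below)
```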

This optimization is akin to computing the Fibonacci sequence with dynamic programming and reducing the space from $$O(N)$$ to $$O(1)$$: a given $$f_i$$ depends only on $$f_{i-1}$$ and $$f_{i-2}$$, so a constant number of variables suffices.
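For instance, a sketch of the $$O(1)$$-space Fibonacci, assuming the convention $$f_0 = 0$$, $$f_1 = 1$$:

```python
def fib(n):
    """n-th Fibonacci number in O(n) time and O(1) space."""
    prev, curr = 0, 1  # f_0 and f_1
    for _ in range(n):
        # Shift the window: (f_{i-1}, f_i) -> (f_i, f_{i+1})
        prev, curr = curr, prev + curr
    return prev
```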

Dynamic Programming Patterns

 * Prefix/Suffix DP
 * Prefix/Suffix DP with O(N) cost per transition / Shortest/Longest Path in a DAG
 * Subsequence DP
 * Subsequence DP with O(N) cost per transition / Divide and Conquer
 * Pseudo-polynomial DP
 * Try all subset DP