Dynamic Programming (DP) is a problem solving technique that entails breaking down a problem
into easier, simpler subproblems of a similar structure. By storing the optimal solutions to each
subproblem and reusing them as needed, the runtime for a recursively defined solution can be reduced dramatically, often from exponential to polynomial time.
A classic example of a problem which can be solved using dynamic programming is the task of
computing the nth Fibonacci number. The Fibonacci numbers are defined by F(n) = F(n-1) + F(n-2)
and F(1) = F(2) = 1. A naïve recursive approach might attempt to simply define a base case and a
recurrence relation, computing the desired result in O(2^N) time. However, a DP approach reduces
this runtime to a much more favorable O(N). This can be done in one of two ways: top-down
or bottom-up. The top-down approach to DP is very similar to recursion. The same structure
of base case and recurrence is used; however, we store the values we calculate, essentially turning
them into additional base cases. This technique, known as memoization, eliminates unnecessary
recomputation of results. Conversely, the bottom-up approach begins with the first two values,
and builds up to the final result using the recurrence. Generally, most of the work in a
DP solution is formulating the subproblems and recurrence relations. Once you’re sure that your
solution is correct, the actual implementation is usually quite simple compared to other types of
algorithms. For the rest of this lecture we will describe the various types of DP problems that
appear on USACO contests, and provide a few practice problems for each type. When it comes to
USACO problems, DP is one of the most useful techniques you can learn. You can pretty much
expect at least one DP problem on every Silver and Gold contest.
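To make the two approaches concrete, here is a minimal sketch of both on the Fibonacci problem. The function names and the use of 0 as a "not yet computed" sentinel are illustrative choices, not part of the original text.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Top-down (memoization): plain recursion, but each computed value is
// stored so it is never recomputed. memo[n] == 0 means "not yet computed".
int64_t fibMemo(int n, std::vector<int64_t>& memo) {
    if (n <= 2) return 1;              // base cases: F(1) = F(2) = 1
    if (memo[n] != 0) return memo[n];  // reuse a previously stored result
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

// Bottom-up: start from the first two values and build toward F(n)
// using the recurrence F(i) = F(i-1) + F(i-2).
int64_t fibBottomUp(int n) {
    std::vector<int64_t> dp(std::max(n + 1, 3));
    dp[1] = dp[2] = 1;
    for (int i = 3; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
```

Both versions run in O(N) time; the top-down version simply lets recursion decide the evaluation order, while the bottom-up version fixes it explicitly.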
Dynamic programming problems in computing contests tend to fall into a few different categories.
This is by no means intended to be an exhaustive list, but it should be a helpful reference.
Recognizing problems as knapsack-like or interval/tree based can lead to a solution more quickly,
and thinking about how to deal with the base cases in each of these types is an instructive exercise.
The general integer knapsack problem goes like this: given a knapsack of a particular capacity and
objects each of which have a size and a value, find the set of objects which maximizes the total
value but whose total size is constrained to fit within the knapsack. Some knapsack problems will
allow you to take an unlimited number of each type of object; others will only allow a limited
number of objects. Our solution is to do dynamic programming on the maximum value achievable
by a knapsack of each size. Suppose we have a table dp[1..N], where dp[i] is the maximum value
that a knapsack of size i can hold. To consider adding another item of size
S, iterate over the dp array and see if placing the current object into the knapsack of siz...
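The iteration described above can be sketched as follows for the unlimited-copies variant. The function name and the item arrays `size` and `value` are hypothetical, introduced only for illustration.

```cpp
#include <algorithm>
#include <vector>

// Unbounded knapsack sketch: dp[c] holds the maximum value achievable
// by a knapsack of capacity c. `size` and `value` are hypothetical
// parallel arrays describing each object type.
int knapsack(int capacity, const std::vector<int>& size,
             const std::vector<int>& value) {
    std::vector<int> dp(capacity + 1, 0);
    for (size_t i = 0; i < size.size(); i++)
        // Iterate over the dp array and see if placing object i into a
        // knapsack of capacity c improves on the best value found so far.
        for (int c = size[i]; c <= capacity; c++)
            dp[c] = std::max(dp[c], dp[c - size[i]] + value[i]);
    return dp[capacity];
}
```

Looping capacities upward lets each object be reused, giving the unlimited variant; for the 0/1 variant, where each object may be taken at most once, the inner loop instead runs from `capacity` down to `size[i]`.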