Dynamic Programming
A powerful algorithmic paradigm that solves complex problems by breaking them down into simpler overlapping subproblems and storing the results.
Dynamic Programming (DP) is an optimization technique found in many algorithms. It breaks complex problems into simpler overlapping subproblems and solves each subproblem only once, storing the results for future use.
Top-Down (Memoization)
Starts from the main problem and recursively calls subproblems. It uses a cache (memo) to store results.
solve(n) {
  if (cache[n]) return cache[n];
  cache[n] = solve(n-1) + ...;
  return cache[n];
}
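The pseudocode above can be made concrete with Fibonacci, a minimal sketch of the top-down approach using Python's built-in cache decorator:

```python
from functools import lru_cache

# Top-down (memoized) Fibonacci: each fib(k) is computed once,
# then every later request is answered from the cache.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n <= 1:          # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, computed in O(n) instead of O(2^n)
```

Without the cache, `fib_memo(50)` would make billions of redundant calls; with it, each of the 51 subproblems is solved exactly once.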
Bottom-Up (Tabulation)
Starts from the smallest subproblems and iteratively builds up to the main problem solution.
solve(n) {
  dp[0] = base_case;
  for (i = 1 to n) {
    dp[i] = dp[i-1] + ...;
  }
  return dp[n];
}
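The same Fibonacci problem, sketched bottom-up: the table is filled from the smallest subproblems forward, with no recursion at all.

```python
def fib_tab(n: int) -> int:
    """Bottom-up (tabulated) Fibonacci."""
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1                # base cases
    for i in range(2, n + 1):          # build up from smallest subproblems
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_tab(50))  # 12586269025, same answer as the memoized version
```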
"Those who cannot remember the past are condemned to repeat it."
Intuition
Imagine you have to solve a complex math problem and write down the solution. If you see the same problem again later, would you solve it from scratch? No, you'd check your previous notes! Dynamic Programming (DP) is exactly that: remembering past answers so you don't repeat work.
Concept
Dynamic Programming applies when a problem has two key properties:
- Overlapping Subproblems: The problem can be broken down into smaller, repetitive subproblems. (e.g., Fibonacci: F_5 needs F_4 and F_3, and F_4 needs F_3 and F_2, so F_3 repeats).
- Optimal Substructure: The optimal solution to the problem can be constructed from optimal solutions of its subproblems.
There are two main approaches:
- Memoization (Top-Down): Start from the main problem, recurse down, and cache results.
- Tabulation (Bottom-Up): Start from the smallest subproblems, build a table (array/matrix) up to the main problem.
How it Works
The Core Workflow:
- Identify State: What variables define a subproblem? (e.g., index, capacity).
- Recurrence Relation: How does the current state depend on previous states? (e.g., dp[i] = dp[i-1] + dp[i-2]).
- Base Cases: What are the trivial solutions? (e.g., dp[0] = 0).
- Order of Computation: Decide if you want to recurse (Memoization) or iterate (Tabulation).
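The four steps above can be walked through on a classic example, Climbing Stairs (count the distinct ways to climb n steps taking 1 or 2 at a time). This is a sketch; the problem name and function are chosen here for illustration:

```python
def climb_stairs(n: int) -> int:
    """Count distinct ways to climb n steps taking 1 or 2 steps at a time."""
    # 1. State: dp[i] = number of ways to reach step i.
    # 2. Recurrence: dp[i] = dp[i-1] + dp[i-2] (last move was 1 or 2 steps).
    # 3. Base cases: dp[0] = 1 (do nothing), dp[1] = 1.
    # 4. Order: iterate bottom-up from 2 to n (tabulation).
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(climb_stairs(5))  # 8
```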
Step-by-Step Breakdown
Visualizing the difference:
- Recursion (Without DP): Grows like a tree, recalculating the same subproblems many times. O(2^n) for Fibonacci.
- Memoization: Prunes the recursion tree. If a subproblem has already been solved, its stored value is returned immediately. O(n).
- Tabulation: Fills a table row by row or cell by cell, with no recursion overhead. O(n).
When to Use
Common Scenarios:
- Maximizing or minimizing a value over valid combinations (e.g., Knapsack, Longest Common Subsequence).
- Counting total arrangements (e.g., Coin Change, Unique Paths).
- String edit distance problems.
- Graph problems on Directed Acyclic Graphs (DAGs).
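As one instance of the "counting arrangements" scenario, a minimal sketch of Coin Change (count the combinations of coins that sum to a target amount):

```python
def count_ways(coins, amount):
    """Count combinations of coins summing to amount (order ignored)."""
    dp = [0] * (amount + 1)
    dp[0] = 1                  # one way to make 0: use no coins
    for coin in coins:         # outer loop over coins avoids counting
        for a in range(coin, amount + 1):  # permutations as distinct ways
            dp[a] += dp[a - coin]
    return dp[amount]

print(count_ways([1, 2, 5], 5))  # 4: {5}, {2,2,1}, {2,1,1,1}, {1,1,1,1,1}
```

Swapping the two loops would count ordered sequences instead of combinations, a subtle but common DP bug.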
When NOT to Use
- No Overlap: If the problem splits into non-overlapping subproblems, use Divide and Conquer (e.g., Merge Sort).
- No Optimal Substructure: If the local optimal choice doesn't lead to the global optimum (or dependencies are cyclic).
How to Identify
Look for phrases like "Maximize/Minimize the value", "Count the ways to...", "Longest/Shortest sequence", or "Can you reach...?".
