Greedy Algorithms

What is the Greedy Method?

A paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit.

Greedy Choice Property

We can make a choice that looks best at the moment and solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices.

  • Eat the largest cookie first.
  • Pick the nearest unvisited city.
Common Pitfalls

Greedy algorithms do not always find the global optimal solution. They are short-sighted.

Example: greedy fails on Coin Change with non-canonical coin systems (e.g., {1, 3, 4}) and on the Longest Path problem.

Greedy vs Dynamic Programming

Greedy

Fast; commits to a single path; never backtracks.

Dynamic Programming

Slower; explores choices exhaustively (with memoization); guarantees the optimum.

Intuition

Imagine you want to count a pile of money as quickly as possible. You would naturally start by picking the largest denominations first. This is a Greedy approach: making the locally optimal choice at each step with the hope of finding a global optimum.

Concept

Core Properties:

  • Greedy Choice Property: a globally optimal solution can be reached by making the locally optimal (greedy) choice at each step.
  • Optimal Substructure: an optimal solution to the problem contains optimal solutions to its sub-problems.

Greedy vs. DP:

  • Greedy: Makes a choice and never looks back. Generally faster, but doesn't guarantee a global optimum for all problems.
  • Dynamic Programming: Considers all possible choices (and solutions to subproblems) to find the guaranteed optimum.
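
The contrast can be made concrete with the minimum-coin-count problem. Below is a minimal Python sketch (function names are illustrative) comparing a greedy count against a bottom-up DP on a non-canonical coin system, where greedy misses the optimum:

```python
def greedy_coins(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    # DP: best[a] = fewest coins summing to a; tries every coin at every amount.
    best = [0] + [float('inf')] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]

# Non-canonical system {1, 3, 4}, amount 6:
print(greedy_coins([1, 3, 4], 6))  # 3 coins (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2 coins (3 + 3) -- DP finds the optimum
```

With the canonical US system {25, 10, 5, 1} both routines agree, which is why greedy looks deceptively safe on familiar inputs.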

How it Works

General Framework:

  1. Candidate Set: A list of items to choose from.
  2. Selection Function: Chooses the best candidate to add to the solution.
  3. Feasibility Function: Checks if the candidate can contribute to a solution.
  4. Objective Function: Assigns a value to a solution (maximize or minimize).
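
The four components above can be sketched as a generic skeleton. This is an illustrative outline, not a standard library API; the names `selection_key` and `feasible` are assumptions for the sake of the example:

```python
def greedy(candidates, selection_key, feasible):
    """Generic greedy skeleton: scan candidates in selection order,
    keeping each one the feasibility check accepts."""
    solution = []
    for candidate in sorted(candidates, key=selection_key):
        if feasible(solution, candidate):
            solution.append(candidate)
    return solution

# Example objective: pack item weights into a capacity-10 bag, lightest first.
packed = greedy([4, 8, 1, 5, 3],
                selection_key=lambda w: w,
                feasible=lambda sol, w: sum(sol) + w <= 10)
print(packed)  # [1, 3, 4] -- adding 5 or 8 would exceed the capacity
```

The objective function is implicit here (maximize the number of packed items); in practice each problem supplies its own selection order and feasibility test.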

Step-by-Step Breakdown

Classic Example (Coin Change):

  • Target: 36 cents. Coins: {25, 10, 5, 1}.
  • Step 1: Pick largest <= 36. Pick 25. Remaining: 11.
  • Step 2: Pick largest <= 11. Pick 10. Remaining: 1.
  • Step 3: Pick largest <= 1. Pick 1. Remaining: 0.
  • Result: 3 coins (25, 10, 1). Optimal!
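
The walkthrough above maps directly to a few lines of Python. A minimal sketch that returns the coins chosen largest-first:

```python
def greedy_change(coins, amount):
    """Return the coins picked greedily (largest-first) for `amount`."""
    picked = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:   # take this coin as long as it fits
            amount -= coin
            picked.append(coin)
    return picked

print(greedy_change([25, 10, 5, 1], 36))  # [25, 10, 1]
```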

When to Use

Standard Problems:

  • Activity Selection / Interval Scheduling.
  • Huffman Coding (Compression).
  • Minimum Spanning Tree (Prim's, Kruskal's).
  • Dijkstra's Algorithm (Shortest Path).
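
As one concrete instance from the list above, Activity Selection is solved greedily by sorting intervals by finish time and taking each activity that starts after the last chosen one ends. A minimal sketch (the sample `meetings` data is made up for illustration):

```python
def select_activities(intervals):
    """Greedy interval scheduling: sort by finish time, keep each
    activity that starts at or after the last chosen finish."""
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
print(select_activities(meetings))  # [(1, 4), (5, 7), (8, 11)]
```

Sorting by finish time (not start time or duration) is the choice that makes this greedy provably optimal.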

When NOT to Use

  • Knapsack Problem (0/1): Greedy fails here; requires DP.
  • Longest Path: Greedy often fails.
  • Any problem where a local optimal choice prevents a global solution.
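
The 0/1 Knapsack failure is easy to demonstrate. A plausible-looking greedy, taking items by value-to-weight ratio, gets beaten by the true optimum (the item values and weights below are a standard illustrative instance):

```python
def greedy_knapsack(items, capacity):
    """Greedy by value/weight ratio -- plausible but wrong for 0/1 knapsack."""
    total_value = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total_value += value
    return total_value

# items as (value, weight), capacity 50
items = [(60, 10), (100, 20), (120, 30)]
print(greedy_knapsack(items, 50))  # 160, but the true optimum is 220 (100 + 120)
```

Taking the best-ratio item (60, 10) first locks out the combination that actually maximizes value, which is exactly the "local choice prevents the global solution" trap.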

How to Identify

"Find minimum number of...", "Maximize total...", sorted inputs often suggest Greedy.
