Coding patterns — refresher + drills
Pattern-first study beats problem-first. This page is the structured refresher: one chapter per pattern with how to recognize it, the mental model, the template, a walked LeetCode example, and a drill list. Extra depth on DP (the 5-step framework), backtracking (universal recipe), and multithreading (every LC concurrency problem solved).
What you'll learn
- The 60-second pattern decoder (cheatsheet)
- Two pointers
- Sliding window
- Prefix sum
- Hashmap counting
- Monotonic stack
- Monotonic deque
- Binary search on answer
- Heap / top-K
- Union-Find
- Trie
- BFS
- Tree DFS / recursion
- Graph traversal — DFS, topo sort, cycle detection
- Backtracking — the universal framework
- Backtracking — 6 sub-patterns (subsets, combinations, permutations, partitions, constraints, grid)
- DP — the 5-step framework
- DP types — linear, knapsack, state-machine
- DP types — grid, sequence/LCS, LIS
- DP types — interval, bitmask, tree, digit
- Bit manipulation
- Multithreading — primitives & patterns
- Multithreading — every LC concurrency problem solved
- Drill schedule (4-week plan)
- Canonical hard problems Sr Staff loops ask
- Pattern-recognition quiz
Read the problem prompt, scan for signal phrases, name the pattern in < 30 seconds. If you can't, you haven't internalized the pattern yet — go back to its chapter.
| When you see… | Reach for… |
|---|---|
| "sorted array", "two indices", "find pair" | Two pointers |
| "longest", "shortest", "subarray with property" | Sliding window |
| "sum of subarrays", "range sum query" | Prefix sum |
| "k largest / smallest", "top k", "scheduling", "merge K" | Heap (min-heap of size k) |
| "next greater / smaller", "largest rectangle", "trap water" | Monotonic stack |
| "sliding window max / min" | Monotonic deque |
| "min of max" / "max of min" + monotonic predicate | Binary search on answer |
| "shortest path on unweighted graph" | BFS |
| "all permutations / subsets / combinations / partitions" | Backtracking |
| "course schedule", "build order", "dependencies" | Topological sort |
| "merge accounts", "connected components", "redundant edge" | Union-Find |
| "min cost / max value with overlapping subproblems" | DP — 1D first, then 2D |
| "merge / split intervals to minimize total cost" | Interval DP |
| "small N (≤ 20), need to track subsets" | Bitmask DP |
| "single number among duplicates", XOR tricks | Bit manipulation |
| "reverse linked list", "k-th from end" | Two pointers (slow/fast on linked list) |
| "detect cycle in linked list" | Floyd's tortoise & hare |
| "LRU / LFU / O(1) operations" | Hashmap + doubly linked list |
| "frequent strings", "autocomplete" | Trie |
| "interval scheduling", "meeting rooms" | Sort + heap OR sort + sweep |
| "alternate output between threads", "synchronize" | Multithreading — semaphores |
- The signal is the prompt language. Train your eye to read it.
- 30 seconds → pattern. 3-5 minutes → brute force. Then optimize.
- If two patterns fit, start with the simpler one. Optimize only if asked.
When the input is sorted (or can be sorted) and you're looking for a pair / triplet / specific structure, two pointers walks the array in O(N) using two indices that move toward each other or in the same direction.
How to recognize it
- "Given a sorted array…"
- "Find two numbers / a pair…"
- "In-place partition / rearrange"
- "Container with most water" / "trap rain water" geometric framings
The mental model
One pointer on the left, one on the right. They move toward each other based on a comparison. The key invariant: the answer is always between the two pointers, so we can safely move the worse-positioned pointer inward. This works because sorting eliminates ambiguity — sum-too-small means move left forward; sum-too-large means move right backward.
Template
```python
def two_sum_sorted(arr, target):
    left, right = 0, len(arr) - 1
    while left < right:
        s = arr[left] + arr[right]
        if s == target:
            return [left, right]
        elif s < target:
            left += 1
        else:
            right -= 1
    return [-1, -1]
```
Find all unique triplets that sum to 0. Sort first, then for each i, do two-pointer on (i+1, n-1) for target = -arr[i]. Skip duplicates at each level to dedupe. O(N²) total.
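The walked approach above can be sketched as follows (a minimal implementation; the helper name is mine):

```python
def three_sum(nums):
    """All unique triplets summing to 0. Sort, fix i, two-pointer the rest. O(N^2)."""
    nums.sort()
    res = []
    n = len(nums)
    for i in range(n - 2):
        if i > 0 and nums[i] == nums[i-1]:
            continue  # skip duplicate first elements
        left, right = i + 1, n - 1
        target = -nums[i]
        while left < right:
            s = nums[left] + nums[right]
            if s == target:
                res.append([nums[i], nums[left], nums[right]])
                left += 1
                while left < right and nums[left] == nums[left-1]:
                    left += 1  # skip duplicate second elements
                right -= 1
            elif s < target:
                left += 1
            else:
                right -= 1
    return res
```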
Drill problems
- LC 167 Two Sum II (sorted) — canonical two-pointer warmup
- LC 15 3Sum — extension with dedup
- LC 11 Container with most water — move shorter side
- LC 42 Trapping rain water — two-pointer with max tracking
- LC 26 Remove duplicates from sorted
- LC 283 Move zeroes — partition variant
- LC 75 Sort colors — Dutch national flag (three pointers)
- LC 125 Valid palindrome
- Sort first if not sorted; it's almost always required.
- For dedup in N-sum problems: `while i+1 < n and arr[i] == arr[i+1]: i += 1` after a match.
- Linked-list two-pointer (slow + fast at 2×) gives middle + cycle detection.
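The linked-list slow/fast tip can be sketched like this (the `ListNode` class is a minimal stand-in for illustration):

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def middle(head):
    """When fast hits the end, slow (moving at half speed) sits at the middle."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    return slow

def has_list_cycle(head):
    """Floyd's tortoise & hare: fast laps slow iff there is a cycle."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False
```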
Anywhere you see "longest / shortest subarray (or substring) with property X," you're looking at a sliding window. Two pointers left and right bound the window. Expand right; when invariant breaks, shrink left until valid; record best.
How to recognize it
- "Longest substring without repeating characters" → variable-size window
- "Max sum of k contiguous elements" → fixed-size window
- "Smallest subarray with sum ≥ S" → variable window
- Any "contiguous" + extremum problem
The mental model
Imagine a window of two indices (left, right). Advance right to expand. If the window violates a constraint, advance left to shrink. The window is always a valid range; we just record the best valid size we ever achieved.
Template — variable window
```python
def longest_with_property(s):
    left = 0
    state = {}  # what we track (counter, set, sum)
    best = 0
    for right, c in enumerate(s):
        # 1. expand right
        state[c] = state.get(c, 0) + 1
        # 2. shrink left while invalid
        while not valid(state):  # valid() is problem-specific
            state[s[left]] -= 1
            if state[s[left]] == 0:
                del state[s[left]]
            left += 1
        # 3. record best (only when valid)
        best = max(best, right - left + 1)
    return best
```
Template — fixed window
```python
def max_sum_window_k(arr, k):
    s = sum(arr[:k]); best = s
    for i in range(k, len(arr)):
        s += arr[i] - arr[i-k]
        best = max(best, s)
    return best
```
Hashmap last_seen[c] = index. As right advances, if c was seen at position ≥ left, jump left = last_seen[c]+1. Otherwise extend window. Record max(right−left+1). O(N).
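The `last_seen` jump described above looks like this in code (a minimal sketch of the approach):

```python
def length_of_longest_substring(s):
    """Longest substring without repeating characters. O(N)."""
    last_seen = {}
    left = best = 0
    for right, c in enumerate(s):
        if c in last_seen and last_seen[c] >= left:
            left = last_seen[c] + 1  # jump left past the previous occurrence
        last_seen[c] = right
        best = max(best, right - left + 1)
    return best
```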
Drill problems
- LC 3 Longest substring w/o repeating — variable window template
- LC 76 Minimum window substring — the hard sliding window canon
- LC 209 Min size subarray sum
- LC 424 Longest repeating char replacement — track max char count
- LC 438 Find all anagrams — fixed-size window with char count
- LC 567 Permutation in string
- LC 992 Subarrays with K different — exactly-K trick: at-most(K) − at-most(K-1)
- LC 904 Fruit into baskets
- Variable window: expand right, contract left, record best inside the loop after shrinking.
- Fixed window: maintain rolling sum/counter; subtract leaving element, add entering element.
- "Exactly K" → atMost(K) − atMost(K-1). Memorize this trick.
Range sum queries in O(1) after O(N) preprocessing. The killer combo: prefix sum + hashmap finds "subarrays whose sum equals K" in O(N).
Recognize it
- "Sum / count of subarrays satisfying property" → prefix sum + hashmap
- "Range sum query" → prefix array
- "Equal partition of array" → prefix sum equality
Template
```python
# Prefix array: pre[i] = sum(arr[:i])
# Range sum arr[l..r] inclusive = pre[r+1] - pre[l]

def subarray_sum_equals_k(nums, k):
    """Count subarrays whose sum == k. O(N)."""
    count = 0
    prefix = 0
    seen = {0: 1}  # empty prefix sum has count 1
    for x in nums:
        prefix += x
        # if prefix - k was seen, a subarray ending here sums to k
        count += seen.get(prefix - k, 0)
        seen[prefix] = seen.get(prefix, 0) + 1
    return count
```
For each index i, ask: how many earlier indices j had prefix[j] = prefix[i] - k? If we know that count, we know how many subarrays ending at i sum to k. Hashmap stores the count of each prefix sum seen so far. Initialize with {0: 1} to handle subarrays starting at index 0.
Drill problems
- LC 303 Range sum query (immutable)
- LC 304 Range sum 2D
- LC 560 Subarray sum equals K
- LC 525 Contiguous array — map 0→-1, find equal prefix
- LC 974 Subarray sums divisible by K
- LC 238 Product of array except self — prefix + suffix product
- LC 1413 Min start value for positive prefix
- Initialize hashmap with `{0: 1}` for the empty prefix.
- 2D prefix sum uses inclusion-exclusion: `sum(r1..r2, c1..c2) = P[r2+1][c2+1] − P[r1][c2+1] − P[r2+1][c1] + P[r1][c1]`.
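The 2D inclusion-exclusion formula in code, with one row and column of zero padding so no edge cases are needed (a sketch; names are mine):

```python
def build_prefix_2d(matrix):
    """P[i][j] = sum of matrix[0..i-1][0..j-1]; padded so P[0][*] = P[*][0] = 0."""
    m, n = len(matrix), len(matrix[0])
    P = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            P[i+1][j+1] = matrix[i][j] + P[i][j+1] + P[i+1][j] - P[i][j]
    return P

def range_sum_2d(P, r1, c1, r2, c2):
    """Inclusive rectangle sum via inclusion-exclusion."""
    return P[r2+1][c2+1] - P[r1][c2+1] - P[r2+1][c1] + P[r1][c1]
```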
Hashmaps + counters convert O(N²) brute force into O(N). Anywhere you'd search a list for "does X exist?" inside a loop, replace with a hashmap.
Recognize it
- "Find duplicates" / "first non-repeating" / "anagrams"
- Two-sum and its variants
- Frequency-based problems
- Group / bucket by some key
Template
```python
from collections import Counter, defaultdict

def first_unique(s):
    cnt = Counter(s)
    for i, c in enumerate(s):
        if cnt[c] == 1:
            return i
    return -1

def group_anagrams(words):
    groups = defaultdict(list)
    for w in words:
        key = tuple(sorted(w))  # or tuple of count[a..z]
        groups[key].append(w)
    return list(groups.values())
```
Drill problems
- LC 1 Two Sum
- LC 49 Group anagrams
- LC 242 Valid anagram
- LC 387 First unique character
- LC 128 Longest consecutive sequence — start from x only if x-1 not in set
- LC 349 Intersection of two arrays
- LC 217 Contains duplicate
- `Counter` from collections is your friend.
- For grouping, the key is a canonical form (sorted tuple, character counts).
- Hashmap + prefix sum unlocks subarray problems — see the previous chapter.
"Next greater / smaller element," "largest rectangle in histogram," "trapping rain water" — all monotonic stack. The stack stores indices in monotonic order; you pop when the invariant breaks. O(N).
Recognize it
- "Next greater / next smaller element"
- "Largest rectangle / histogram" problems
- "Daily temperatures" / span / waiting time problems
- Brackets / parentheses matching variants
The mental model
The stack holds indices in strictly increasing / decreasing order of their values. When a new element breaks the order, you pop — and the popped element's "answer" is determined right then (next greater, next smaller, or width of rectangle).
Template — next greater element
```python
def next_greater(nums):
    n = len(nums)
    res = [-1] * n
    stack = []  # indices, values STRICTLY decreasing
    for i, v in enumerate(nums):
        while stack and nums[stack[-1]] < v:
            res[stack.pop()] = v
        stack.append(i)
    return res
```
For each bar, find the rectangle with this bar as the shortest height. Stack of indices with strictly increasing heights. When current is shorter, pop top: the popped bar's rectangle extends from (new top + 1) to (current − 1) → width = current − new_top − 1. After loop, treat right boundary as n.
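The histogram walkthrough above translates to this sketch (a `0` sentinel appended to the heights flushes the stack at the end, so the "right boundary = n" case needs no special code):

```python
def largest_rectangle_area(heights):
    """LC 84 via monotonic stack of indices with increasing heights."""
    stack = []
    best = 0
    for i, h in enumerate(heights + [0]):  # sentinel forces a final flush
        while stack and heights[stack[-1]] >= h:
            height = heights[stack.pop()]
            left = stack[-1] + 1 if stack else 0  # rectangle starts after new top
            best = max(best, height * (i - left))
        stack.append(i)
    return best
```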
Drill problems
- LC 496 Next greater element I — canonical
- LC 739 Daily temperatures
- LC 84 Largest rectangle in histogram
- LC 85 Maximal rectangle — apply LC 84 row by row
- LC 42 Trapping rain water — mono stack OR two-pointer
- LC 907 Sum of subarray minimums
- LC 901 Online stock span
- Stack holds indices, not values — you need the position.
- Pop while the stack's top violates monotonicity. Each element pushed/popped once → O(N).
- For "histogram rectangle" remember: width = i − (new top after pop) − 1.
Sliding window max / min in O(N). The deque holds indices monotonically (descending for max, ascending for min); front is always the answer for the current window.
Template
```python
from collections import deque

def sliding_window_max(nums, k):
    dq = deque()  # indices, nums STRICTLY decreasing
    res = []
    for i, v in enumerate(nums):
        # pop smaller values from back (they'll never be max again)
        while dq and nums[dq[-1]] <= v:
            dq.pop()
        dq.append(i)
        # pop front if outside window
        if dq[0] <= i - k:
            dq.popleft()
        # record max once window is full
        if i >= k - 1:
            res.append(nums[dq[0]])
    return res
```
Drill problems
- LC 239 Sliding window maximum (the canonical monotonic deque problem; the template above solves it directly)
- Pop from back for monotonicity. Pop from front when outside window.
- Each element enters / exits deque once → O(N) total.
If you can write a can_do(x) predicate that's monotonic (once true at x, stays true for all larger x — or vice versa), binary search the answer space. Recognition phrase: "minimum X such that…" or "maximum X such that…"
Recognize it
- "Find the minimum X such that all items can be done in time T" → search X
- "Minimize the maximum" / "maximize the minimum"
- "Smallest divisor" / "min eating speed"
- Predicate is monotonic — you can check feasibility in O(N) for a given answer
Universal template
```python
def min_value_satisfying(lo, hi, predicate):
    """Find smallest x in [lo, hi] where predicate(x) is True.
    Predicate is monotonic: F...F T...T (False then True)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if predicate(mid):
            hi = mid  # mid might be the answer; don't exclude it
        else:
            lo = mid + 1  # mid too small
    return lo
```
Min speed k such that sum(ceil(p / k) for p in piles) ≤ H. The predicate "k works" is monotonic: faster always works. Binary search k in [1, max(piles)]. Answer found in O(N log(max)).
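Putting the walkthrough into code (a sketch following the universal template above):

```python
import math

def min_eating_speed(piles, H):
    """LC 875: smallest k such that all piles are eaten within H hours."""
    def feasible(k):
        # hours needed at speed k; monotonic: faster always works
        return sum(math.ceil(p / k) for p in piles) <= H
    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```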
Drill problems
- LC 704 Binary search — vanilla
- LC 34 First & last position
- LC 33 Search in rotated sorted array
- LC 153 Find min in rotated sorted
- LC 875 Koko eating bananas — canonical "binary search on answer"
- LC 1011 Capacity to ship packages
- LC 410 Split array largest sum — minimize max partition sum
- LC 4 Median of two sorted arrays — binary search on partition
- The hard part isn't binary search — it's noticing the answer space is searchable.
- Always write the predicate first. Verify monotonicity by trying x and x+1.
- Use the canonical template above — handles all "first true" problems cleanly.
"Top K" / "k largest / smallest" / "merge K sorted" / "median from stream" — all heap. Maintain a min-heap of size K to get the K largest in O(N log K). Or two heaps to get the running median in O(log N) per insert.
Template — top-K
```python
import heapq

def top_k_largest(nums, k):
    h = []
    for x in nums:
        heapq.heappush(h, x)
        if len(h) > k:
            heapq.heappop(h)  # pop smallest
    return h  # K largest, in heap order
```
Template — K-way merge
```python
def merge_k(lists):
    h = [(arr[0], i, 0) for i, arr in enumerate(lists) if arr]
    heapq.heapify(h)
    out = []
    while h:
        v, i, j = heapq.heappop(h)
        out.append(v)
        if j + 1 < len(lists[i]):
            heapq.heappush(h, (lists[i][j+1], i, j+1))
    return out
```
Template — median from stream (two heaps)
```python
class MedianFinder:
    def __init__(self):
        self.low = []   # max-heap (negate values)
        self.high = []  # min-heap

    def addNum(self, x):
        heapq.heappush(self.low, -x)
        heapq.heappush(self.high, -heapq.heappop(self.low))
        if len(self.high) > len(self.low):
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def findMedian(self):
        if len(self.low) > len(self.high):
            return -self.low[0]
        return (-self.low[0] + self.high[0]) / 2
```
Drill problems
- LC 215 Kth largest element
- LC 347 Top K frequent elements
- LC 23 Merge K sorted lists
- LC 295 Find median from data stream
- LC 253 Meeting rooms II — sort by start, heap by end
- LC 621 Task scheduler
- LC 973 K closest points to origin
- LC 1046 Last stone weight
- Python's `heapq` is a min-heap only. Negate values for max-heap.
- Top-K with min-heap of size K → O(N log K). Quickselect → O(N) average if you need it tighter.
- Two-heaps for streaming median — memorize the rebalance step.
Connectivity questions: "are these two in the same component?" / "merge accounts" / "redundant edge." Both find and union are O(α(N)) ≈ O(1) with path compression + union by rank.
Template (memorize cold)
```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
        self.r = [0] * n  # rank

    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path compression
            x = self.p[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb: return False
        if self.r[ra] < self.r[rb]: ra, rb = rb, ra
        self.p[rb] = ra
        if self.r[ra] == self.r[rb]: self.r[ra] += 1
        return True
```
Drill problems
- LC 200 Number of islands — DSU or DFS
- LC 323 Connected components
- LC 684 Redundant connection
- LC 721 Accounts merge
- LC 128 Longest consecutive sequence — DSU variant
- LC 778 Swim in rising water — DSU + sort
- Always use BOTH path compression (in find) AND union by rank.
- DSU answers "components" questions without needing BFS/DFS.
- For Kruskal's MST: sort edges by weight, add edge if it unifies two components.
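The Kruskal tip above, sketched end to end (a self-contained version with a minimal inline find/union rather than the full DSU class; edge format `(weight, u, v)` is my choice so plain sorting works):

```python
def kruskal_mst_cost(n, edges):
    """edges: (weight, u, v). Sort by weight; add an edge iff it joins two components."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    cost = used = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            cost += w
            used += 1
    return cost if used == n - 1 else None  # None: graph not connected
```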
String prefix matching in O(length). Autocomplete, word search, XOR-max problems all use tries.
Template
```python
class Trie:
    def __init__(self):
        self.children = {}
        self.end = False

    def insert(self, w):
        node = self
        for c in w:
            node = node.children.setdefault(c, Trie())
        node.end = True

    def search(self, w):
        node = self
        for c in w:
            if c not in node.children: return False
            node = node.children[c]
        return node.end

    def starts_with(self, p):
        node = self
        for c in p:
            if c not in node.children: return False
            node = node.children[c]
        return True
```
Drill problems
- LC 208 Implement Trie
- LC 211 Add & search words — supports `.` wildcard via DFS
- LC 212 Word search II — trie + DFS on grid
- LC 642 Design search autocomplete
- LC 421 Maximum XOR of two numbers — bit trie
- Use a dict for children (simpler than fixed 26 slots) unless explicitly told it's lowercase a-z only.
- Trie + DFS = the standard pattern for "search all words in grid" problems.
BFS gives shortest path on unweighted graphs. Process by layers (distance increases by 1 each layer). Use a deque; track visited.
Template
```python
from collections import deque

def bfs_shortest(start, target, neighbors):
    if start == target: return 0
    seen = {start}
    q = deque([(start, 0)])
    while q:
        node, dist = q.popleft()
        for nb in neighbors(node):
            if nb == target: return dist + 1
            if nb not in seen:
                seen.add(nb)
                q.append((nb, dist + 1))
    return -1

# Level-by-level processing
def bfs_levels(root):
    if not root: return []
    levels, q = [], deque([root])
    while q:
        level = []
        for _ in range(len(q)):  # snapshot level size
            node = q.popleft()
            level.append(node.val)
            for c in node.children:
                q.append(c)
        levels.append(level)
    return levels
```
Drill problems
- LC 102 Binary tree level order
- LC 127 Word ladder — bidirectional BFS optimization
- LC 200 Number of islands — flood fill
- LC 994 Rotting oranges — multi-source BFS
- LC 542 01 matrix — multi-source BFS from all 0s
- LC 1091 Shortest path in binary matrix — 8 directions
- LC 286 Walls and gates — multi-source BFS
- Snapshot the queue size (`for _ in range(len(q))`) to process one level at a time.
- Multi-source BFS: enqueue ALL sources at once with distance 0. Powerful for "min distance from any source."
- Use BFS for unweighted shortest path. Use Dijkstra for weighted.
Most tree problems reduce to: recurse on left and right; combine. The recursion's return value is the key design choice — what info does the parent need from each subtree?
The general recipe
- What does the parent need to know from each subtree? That's the return value.
- Recurse on left, recurse on right.
- Combine subtree returns to produce this node's return.
- Optionally update a global state along the way.
The parent needs: max gain from one side passing through this node. So recurse returns node.val + max(left_gain, right_gain, 0). But the answer (max path) might bend at this node: track node.val + max(left_gain, 0) + max(right_gain, 0) as a global max.
```python
def maxPathSum(root):
    best = float('-inf')
    def gain(node):
        nonlocal best
        if not node: return 0
        L = max(gain(node.left), 0)
        R = max(gain(node.right), 0)
        best = max(best, node.val + L + R)  # path may bend at this node
        return node.val + max(L, R)
    gain(root)
    return best
```
Drill problems
- LC 104 Max depth
- LC 543 Diameter
- LC 124 Max path sum
- LC 98 Validate BST — pass min/max bounds down
- LC 236 LCA
- LC 297 Serialize / deserialize
- LC 437 Path Sum III — prefix sum on a tree
- LC 968 Binary tree cameras — tree DP
- Always handle the null base case first in the recursion.
- Distinguish "what this node returns to parent" from "what answer this node contributes."
- Pass info DOWN via parameters, info UP via return value.
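The "pass info DOWN via parameters" tip from the drill list, shown on LC 98 Validate BST (a sketch; `TreeNode` is a minimal stand-in class):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(root, lo=float('-inf'), hi=float('inf')):
    """Pass (lo, hi) bounds down; every node must fall strictly inside them."""
    if not root:
        return True
    if not (lo < root.val < hi):
        return False
    return (is_valid_bst(root.left, lo, root.val) and
            is_valid_bst(root.right, root.val, hi))
```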
Graph problems decompose into: traverse (DFS / BFS), detect cycles (3-color DFS or DSU), or topologically sort (Kahn's BFS or DFS post-order). Memorize the three templates.
Topological sort (Kahn — BFS)
```python
from collections import defaultdict, deque

def topo_sort(n, edges):
    g = defaultdict(list); indeg = [0] * n
    for u, v in edges:
        g[u].append(v); indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft(); order.append(u)
        for v in g[u]:
            indeg[v] -= 1
            if indeg[v] == 0: q.append(v)
    return order if len(order) == n else []  # shorter order means a cycle
```
Cycle detection (DFS 3-color)
```python
def has_cycle(n, edges):
    g = defaultdict(list)
    for u, v in edges: g[u].append(v)
    color = [0] * n  # 0 white / 1 gray / 2 black
    def dfs(u):
        color[u] = 1
        for v in g[u]:
            if color[v] == 1: return True  # back edge = cycle
            if color[v] == 0 and dfs(v): return True
        color[u] = 2
        return False
    return any(dfs(i) for i in range(n) if color[i] == 0)
```
Drill problems
- LC 207 Course schedule
- LC 210 Course schedule II — return the order
- LC 269 Alien dictionary — derive edges from word order
- LC 133 Clone graph
- LC 417 Pacific Atlantic — DFS from edges
- LC 399 Evaluate division — DFS / DSU with ratios
- LC 743 Network delay time — Dijkstra
- Kahn's: indegree 0 → add to queue. Final order length = n means no cycle.
- DFS cycle: gray ⇒ back edge ⇒ cycle. Black ⇒ done.
- For weighted shortest path, use Dijkstra (heap-based BFS).
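The Dijkstra variant mentioned in the takeaways, sketched with lazy deletion (stale heap entries are skipped rather than decreased in place, the usual approach with Python's `heapq`):

```python
import heapq
from collections import defaultdict

def dijkstra(n, edges, src):
    """edges: directed (u, v, w). Returns shortest distances from src."""
    g = defaultdict(list)
    for u, v, w in edges:
        g[u].append((v, w))
    dist = [float('inf')] * n
    dist[src] = 0
    h = [(0, src)]
    while h:
        d, u = heapq.heappop(h)
        if d > dist[u]:
            continue  # stale entry; a shorter path was already found
        for v, w in g[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(h, (dist[v], v))
    return dist
```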
Whenever you need to generate or count ALL X — all permutations, all subsets, all valid configurations — backtracking is the answer. The framework: explore a choice, recurse, undo. Memorize the template once; everything else is variation.
Recognize it (3 signal phrases)
- "Find all …" — all permutations, all paths, all combinations
- "Generate every / list every …"
- "Place N items satisfying constraints" — N-queens, sudoku
The mental model
Imagine building the answer step by step. At each step you have a set of choices. Pick one, recurse. If the partial state is invalid, abandon. When you reach a complete solution, record it. After returning, undo the choice so the next iteration starts clean.
One template covers subsets, permutations, combinations, N-queens, partitions
```python
def backtrack(state, choices):
    if is_goal(state):
        results.append(snapshot(state))  # NEVER append state directly — snapshot it
        return
    for choice in choices:
        if not is_valid(state, choice):
            continue  # prune
        apply(state, choice)
        backtrack(state, next_choices(state, choice))
        undo(state, choice)  # the line that makes it backtracking
```
The 4 questions you must answer before writing code
- What is the state? The partial-built solution. List? Set? Index + path?
- What's the goal condition? "Length == k," "all chars used," "filled all rows"?
- What are the choices at each step? All chars not yet used? All values 0–9? All neighbors?
- How do you prune? Skip duplicates? Skip invalid placements? Branch-and-bound bound?
The pruning tricks that save you
- Sort first if the problem has duplicates — lets you skip with `if i > start and arr[i] == arr[i-1]: continue`
- Pass index, not remaining set — avoid rebuilding the choice list each call
- Use a `visited` set for permutations (where order matters and any element can come first)
- Branch-and-bound: if best-possible-from-here ≤ current best, prune
- Constraint propagation (N-queens, sudoku): maintain "available" sets and update before recursing
Common pitfall
results.append(path) appends a reference — when you mutate path later via undo, your recorded result mutates too. Always copy: results.append(path[:]) or results.append(list(path)).
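A four-line demonstration of the aliasing bug described above:

```python
path, results = [], []
path.append(1)
results.append(path)     # appends a reference — will mutate later
results.append(path[:])  # appends a snapshot — safe
path.pop()               # the "undo" step of backtracking
# results[0] is now [] (mutated!), results[1] is still [1]
```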
- Universal template = apply → recurse → undo. The "undo" line is the whole pattern.
- Always SNAPSHOT (copy) the state when appending to results.
- The 4 questions: state, goal, choices, pruning. Answer them before coding.
- Time complexity: usually O(branching^depth × work per leaf). Pruning saves you in practice.
Every backtracking problem maps to one of six sub-patterns. Recognize which → apply the matching template. The differences live in how choices are enumerated and how duplicates are handled.
① Subsets — at each index, include or skip
Signal: "generate all subsets / power set"
```python
def subsets(nums):
    res, path = [], []
    def bt(i):
        if i == len(nums):
            res.append(path[:])
            return
        # choice 1: skip
        bt(i + 1)
        # choice 2: include
        path.append(nums[i])
        bt(i + 1)
        path.pop()
    bt(0); return res  # output: 2^n subsets
```
Drill: LC 78 Subsets, LC 90 Subsets II (with dups)
② Combinations — pick k from n, no order
Signal: "all combinations of k items" / "all sums to target"
```python
def combine(n, k):
    res, path = [], []
    def bt(start):
        if len(path) == k:
            res.append(path[:])
            return
        for i in range(start, n + 1):
            path.append(i)
            bt(i + 1)  # next start = i+1, not start+1
            path.pop()
    bt(1); return res
```
Key idea: start parameter prevents re-using past elements → ensures combinations not permutations. Pass i+1 for "use each once," pass i for "reuse allowed."
Drill: LC 77 Combinations, LC 39 Combination sum (reuse), LC 40 Combination sum II (no reuse, dedup)
③ Permutations — order matters, every element used once
Signal: "all permutations" / "every arrangement"
```python
def permute(nums):
    res, path = [], []
    used = [False] * len(nums)
    def bt():
        if len(path) == len(nums):
            res.append(path[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            # dedup if sorted: skip duplicates at same level
            # if i > 0 and nums[i] == nums[i-1] and not used[i-1]: continue
            used[i] = True; path.append(nums[i])
            bt()
            used[i] = False; path.pop()
    bt(); return res
```
Drill: LC 46 Permutations, LC 47 Permutations II (dedup), LC 17 Letter combinations
④ Partitions — split string / array into k pieces
Signal: "partition string into palindromes" / "split into k equal-sum groups"
```python
def partition_palindromes(s):
    res, path = [], []
    def bt(start):
        if start == len(s):
            res.append(path[:])
            return
        for end in range(start + 1, len(s) + 1):
            sub = s[start:end]
            if sub == sub[::-1]:  # is palindrome
                path.append(sub)
                bt(end)
                path.pop()
    bt(0); return res
```
Drill: LC 131 Palindrome partitioning, LC 140 Word break II, LC 698 Partition to K equal sum subsets
⑤ Constraint satisfaction — N-Queens, Sudoku
Signal: "place N items satisfying constraints"
```python
def solve_n_queens(n):
    res = []
    cols, diag1, diag2 = set(), set(), set()
    placement = [-1] * n
    def bt(row):
        if row == n:
            board = ['.'*c + 'Q' + '.'*(n-c-1) for c in placement]
            res.append(board); return
        for col in range(n):
            if col in cols or row-col in diag1 or row+col in diag2:
                continue
            cols.add(col); diag1.add(row-col); diag2.add(row+col)
            placement[row] = col
            bt(row + 1)
            cols.remove(col); diag1.remove(row-col); diag2.remove(row+col)
    bt(0); return res
```
Drill: LC 51 N-Queens, LC 37 Sudoku solver
⑥ Grid backtracking — word search
Signal: "find word in grid going in 4 directions"
```python
def exist(board, word):
    R, C = len(board), len(board[0])
    def bt(r, c, i):
        if i == len(word): return True
        if r < 0 or r >= R or c < 0 or c >= C or board[r][c] != word[i]:
            return False
        tmp, board[r][c] = board[r][c], '#'  # mark visited
        found = (bt(r+1,c,i+1) or bt(r-1,c,i+1) or bt(r,c+1,i+1) or bt(r,c-1,i+1))
        board[r][c] = tmp  # unmark (backtrack)
        return found
    return any(bt(r, c, 0) for r in range(R) for c in range(C))
```
Drill: LC 79 Word search, LC 212 Word search II (trie + backtrack)
- 6 sub-patterns: subsets, combinations, permutations, partitions, constraints, grid.
- Subsets = include/skip. Combinations = start index. Permutations = used set.
- Always copy the state when recording (`path[:]`).
- Dedup with sort + "skip same as previous at same level."
DP feels mysterious until you realize every DP problem is solved with the same 5 steps. Once you can articulate state + transition + base case + order + optimization, the code writes itself. The hard part is recognizing that a problem is DP and defining the state.
When is it DP?
- "Find min cost / max value / count ways" + overlapping subproblems
- The answer at index `i` depends on answers at smaller indices
- Naive recursion would TLE due to recomputation
- Signal phrases: "minimum number of," "in how many ways," "longest / shortest with property"
The 5-step universal framework
- State: what variables uniquely identify a subproblem? (i.e., what do you need to remember?). If one index → 1D DP. Two indices → 2D. An interval [i,j] → interval DP. A subset → bitmask DP.
- Transition: how does `dp[state]` depend on smaller states? Write this as an equation first.
- Base case: what's the answer at the smallest state? Almost always trivial (0 or 1).
- Order: in what order do you fill the table so each `dp[state]` depends only on already-filled states? Usually bottom-up; or use memoization with recursion if order is tricky.
- Optimization: can you reduce dimensions? "Rolling array" if only the last row matters: O(2 × W) instead of O(N × W).
The "what's the state?" decision tree
| Problem shape | State |
|---|---|
| "At position i, what's the answer for first i elements?" | 1D: dp[i] |
| "At cell (i,j), what's the answer?" | 2D grid: dp[i][j] |
| "Compare two sequences" | 2D: dp[i][j] = answer for first i of A, first j of B |
| "At i, holding / not holding stock" | State machine: dp[i][state] |
| "Interval [i,j] — best way to merge" | Interval: dp[i][j] iterated by interval length |
| "Subset of N items (N ≤ 20)" | Bitmask: dp[mask] |
| "Tree, info per node" | Tree DP: post-order recursion returning info per subtree |
| "Count numbers in [a,b] with digit property" | Digit DP: dp[pos][tight][...] |
How to MEMORIZE: the state-naming trick
If you can't say it in words, you can't code it
Before writing code, say out loud: "dp[i][j] means: the minimum cost to convert the first i characters of A into the first j characters of B." If you can't articulate this in one sentence, your state definition is wrong. 90% of DP bugs come from a sloppy state definition.
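The example sentence above is the state definition of edit distance (LC 72). Once you can say it, the table fills itself (a sketch of the standard tabulation):

```python
def edit_distance(a, b):
    """dp[i][j] = min ops (insert/delete/replace) to turn a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i-1] == b[j-1]:
                dp[i][j] = dp[i-1][j-1]  # characters match, no op
            else:
                dp[i][j] = 1 + min(dp[i-1][j],    # delete from a
                                   dp[i][j-1],    # insert into a
                                   dp[i-1][j-1])  # replace
    return dp[m][n]
```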
Memoization vs bottom-up
Memoization (top-down)
- Write the natural recursion first
- Add `@functools.cache` or a manual dict
- Easier to write when transition is irregular
- Can blow recursion stack for big N
Bottom-up (tabulation)
- Fill table iteratively from base case
- No recursion stack issue
- Easier to space-optimize (rolling array)
- Need to figure out iteration order
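The memoization side of the comparison, on the smallest possible example (climbing stairs, LC 70; `@cache` requires Python 3.9+):

```python
from functools import cache

@cache
def climb(n):
    """Ways to climb n stairs taking 1 or 2 steps: the natural recursion,
    memoized with one decorator so each subproblem is computed once."""
    if n <= 1:
        return 1
    return climb(n - 1) + climb(n - 2)
```

Without the decorator this recursion is exponential; with it, each `climb(k)` is computed once, giving O(n) time at the cost of recursion-stack depth.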
- 5 steps: state, transition, base, order, optimization.
- Articulate `dp[i][j]` in plain English before coding.
- Use memoization first if the transition is irregular; rewrite to bottom-up if you need to space-optimize.
- If stuck, write the brute-force recursion; identify overlapping calls; that's your DP.
Three classic 1D DP shapes: linear (climb stairs, house robber), knapsack (coin change, subset sum), state-machine (best time to buy/sell stock with cooldown).
A. Linear DP
Signal: "minimum cost to reach N," "ways to reach N," "max value selecting non-adjacent items"
State: dp[i] = answer for first i elements / position i
```python
# House robber (LC 198): max sum w/o adjacent
def rob(nums):
    if not nums: return 0
    prev2, prev1 = 0, 0
    for x in nums:
        cur = max(prev1, prev2 + x)
        prev2, prev1 = prev1, cur
    return prev1

# Decode ways (LC 91): how many ways to decode "12" as letters
def numDecodings(s):
    if not s or s[0] == '0': return 0
    dp = [0] * (len(s) + 1); dp[0] = dp[1] = 1
    for i in range(2, len(s) + 1):
        if s[i-1] != '0': dp[i] += dp[i-1]
        two = int(s[i-2:i])
        if 10 <= two <= 26: dp[i] += dp[i-2]
    return dp[len(s)]
```
Drill: LC 70 Climbing stairs, LC 198 House robber, LC 213 House robber II, LC 91 Decode ways, LC 746 Min cost climbing stairs, LC 53 Maximum subarray (Kadane)
B. Knapsack DP
Signal: "select items with weight/value constraint" / "ways to make sum S"
State: dp[i][w] = max value (or count) using first i items with capacity w
0/1 Knapsack (each item used 0 or 1 times)
Iterate capacity in reverse inside the items loop — so you don't reuse the current item.
```python
def knapsack_01(weights, values, W):
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]
```
Unbounded knapsack (items reusable)
Iterate capacity forward — current item can be reused.
```python
def coin_change(coins, amount):
    dp = [float('inf')] * (amount + 1); dp[0] = 0
    for c in range(1, amount + 1):
        for coin in coins:
            if coin <= c:
                dp[c] = min(dp[c], dp[c - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
```
Drill: LC 322 Coin change, LC 518 Coin change II (count ways), LC 416 Partition equal subset sum, LC 494 Target sum, LC 1049 Last stone weight II
C. State-machine DP
Signal: "best time to buy and sell stock" with various constraints
State: dp[i][state] — state captures "what mode are we in?" (holding stock or not, cooldown, etc.)
# Best time to buy/sell stock with cooldown (LC 309)
def maxProfit(prices):
hold = -prices[0] # currently holding
sold = 0 # just sold (in cooldown)
rest = 0 # resting (no stock, no cooldown)
for p in prices[1:]:
new_hold = max(hold, rest - p)
new_sold = hold + p
new_rest = max(rest, sold)
hold, sold, rest = new_hold, new_sold, new_rest
return max(sold, rest)
Drill: LC 121 Buy/sell stock, LC 309 With cooldown, LC 188 At most K transactions
- 0/1 knapsack: iterate capacity reverse. Unbounded: forward.
- Coin change (min coins) is just unbounded knapsack with "min" instead of "max."
- State-machine DP: draw the states + transitions on paper before coding.
- House robber uses only the last 2 values → O(1) space.
Three classic 2D DPs: grid (move through cells), sequence comparison (LCS, edit distance), LIS (longest increasing subsequence — has an O(N log N) trick).
A. Grid DP
State: dp[i][j] = answer at cell (i, j). Transitions usually from (i-1, j) and (i, j-1).
# Unique paths (LC 62): right or down only
def uniquePaths(m, n):
dp = [[1] * n for _ in range(m)]
for i in range(1, m):
for j in range(1, n):
dp[i][j] = dp[i-1][j] + dp[i][j-1]
return dp[-1][-1]
# Min path sum (LC 64)
def minPathSum(grid):
m, n = len(grid), len(grid[0])
for i in range(m):
for j in range(n):
if i == 0 and j == 0: continue
up = grid[i-1][j] if i > 0 else float('inf')
left = grid[i][j-1] if j > 0 else float('inf')
grid[i][j] += min(up, left)
return grid[-1][-1]
Drill: LC 62 Unique paths, LC 64 Min path sum, LC 120 Triangle, LC 221 Maximal square, LC 174 Dungeon game (iterate from bottom-right)
B. Sequence comparison DP (LCS family)
State: dp[i][j] = answer for first i of A vs first j of B
Canonical: if A[i-1] == B[j-1], dp[i][j] = dp[i-1][j-1] + 1; else dp[i][j] = something using dp[i-1][j] and dp[i][j-1]
# Edit distance (LC 72)
def minDistance(s1, s2):
m, n = len(s1), len(s2)
dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(m + 1): dp[i][0] = i
for j in range(n + 1): dp[0][j] = j
for i in range(1, m + 1):
for j in range(1, n + 1):
if s1[i-1] == s2[j-1]:
dp[i][j] = dp[i-1][j-1]
else:
dp[i][j] = 1 + min(dp[i-1][j], # delete
dp[i][j-1], # insert
dp[i-1][j-1]) # replace
return dp[m][n]
Drill: LC 1143 Longest common subsequence, LC 72 Edit distance, LC 115 Distinct subsequences, LC 516 Longest palindromic subsequence, LC 10 Regex matching
C. Longest Increasing Subsequence (LIS)
O(N²) DP: dp[i] = 1 + max(dp[j] for j < i if nums[j] < nums[i])
O(N log N) trick: maintain a sorted "tails" array; for each x, binary-search the smallest tail ≥ x and replace it (or append if x is bigger than all). Length of tails = LIS length.
from bisect import bisect_left
def lengthOfLIS(nums):
tails = []
for x in nums:
i = bisect_left(tails, x)
if i == len(tails):
tails.append(x)
else:
tails[i] = x
return len(tails)
Drill: LC 300 LIS, LC 354 Russian doll envelopes (sort + LIS), LC 673 Number of LIS
- Grid DP: usually 2 transitions (up + left); base case is the first row/col.
- LCS family: if chars match, +1 from diagonal; else look at one of two adjacents.
- LIS has an O(N log N) trick — memorize bisect_left + replace.
The four "harder" DP shapes. Interval DP iterates by interval length. Bitmask DP stores subset state in an integer. Tree DP uses post-order recursion. Digit DP counts numbers with digit-level constraints.
A. Interval DP
Signal: "merge / split / pick from an interval [i,j] to optimize total cost"
State: dp[i][j] = best for interval [i,j]. Iterate by length (smallest intervals first).
# Burst balloons (LC 312): max coins from bursting balloons in some order
# Trick: think backwards — which balloon do you burst LAST in interval (i,j)?
def maxCoins(nums):
nums = [1] + nums + [1]
n = len(nums)
dp = [[0]*n for _ in range(n)]
for length in range(2, n): # interval length
for i in range(n - length):
j = i + length
for k in range(i+1, j): # k is last burst in (i,j)
dp[i][j] = max(dp[i][j],
nums[i]*nums[k]*nums[j] + dp[i][k] + dp[k][j])
return dp[0][n-1]
Drill: LC 312 Burst balloons, LC 1547 Min cost to cut a stick, LC 486 Predict the winner, LC 877 Stone game
B. Bitmask DP
Signal: N is small (≤ 20). You need to track "which subset has been used."
State: dp[mask] where mask is a bit set representing which items are used. 2^N states total.
# Travelling Salesman style: min cost to visit all cities
from collections import deque

def shortestPathVisitingAll(graph): # LC 847
n = len(graph)
target = (1 << n) - 1
# state: (node, mask). BFS gives shortest.
seen = {(i, 1 << i) for i in range(n)}
q = deque((i, 1 << i, 0) for i in range(n))
while q:
node, mask, dist = q.popleft()
if mask == target: return dist
for nb in graph[node]:
new_mask = mask | (1 << nb)
if (nb, new_mask) not in seen:
seen.add((nb, new_mask))
q.append((nb, new_mask, dist + 1))
Drill: LC 847 Shortest path visiting all nodes, LC 943 Shortest superstring, LC 2050 Parallel courses III
C. Tree DP
Signal: tree input, compute some best/sum/count per subtree.
Pattern: post-order recursion. Each call returns a tuple of values (one per state) for its subtree; the parent combines its children's tuples.
# House robber III (LC 337): rob houses on a tree, can't rob both parent + child
def rob(root):
def helper(node):
if not node: return (0, 0) # (rob, no_rob)
L = helper(node.left)
R = helper(node.right)
rob_here = node.val + L[1] + R[1]
no_rob = max(L) + max(R)
return (rob_here, no_rob)
return max(helper(root))
Drill: LC 337 House robber III, LC 968 Binary tree cameras, LC 543 Diameter
D. Digit DP (rarer but appears)
Signal: "count numbers in [a,b] with property on digits"
State: dp[pos][tight][...]. Build number digit by digit; "tight" tracks whether we're still bounded by the original upper bound.
Drill: LC 600 Non-negative integers without consecutive ones
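The digit-DP template above can be sketched on LC 600 itself. This is one possible memoized implementation (function name and structure are illustrative): build the binary representation bit by bit, remembering whether the previous bit was a 1 and whether we are still "tight" against the upper bound.

```python
from functools import lru_cache

def count_no_consecutive_ones(n):
    """Count integers in [0, n] whose binary form has no two adjacent
    1 bits (LC 600 style). `tight` means the prefix built so far still
    equals n's prefix, so the next digit is capped by n's next bit."""
    bits = bin(n)[2:]

    @lru_cache(maxsize=None)
    def go(pos, prev_one, tight):
        if pos == len(bits):
            return 1  # completed a full valid number
        limit = int(bits[pos]) if tight else 1
        total = 0
        for d in range(limit + 1):
            if prev_one and d == 1:
                continue  # would create consecutive ones
            total += go(pos + 1, d == 1, tight and d == limit)
        return total

    return go(0, False, True)

# count_no_consecutive_ones(5) → 5  (0, 1, 2, 4, 5 are valid; 3 = 0b11 is not)
```

The state is exactly dp[pos][prev][tight]; once `tight` turns False it never turns back on, which is why the cache stays small.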
- Interval DP: iterate by length. Don't iterate i, j as nested loops directly.
- Bitmask DP: only when N ≤ 20 (2^20 ≈ 1M states).
- Tree DP: return a tuple per subtree; combine in parent.
XOR is the killer trick: a ^ a = 0, a ^ 0 = a, XOR is associative + commutative. "Find the single number among duplicates" is XOR of all elements.
The XOR tricks
- Find single non-duplicate (LC 136): XOR all elements; pairs cancel.
- Find two non-duplicates (LC 260): XOR everything → A^B. Find a set bit; split by that bit; XOR each group.
- Number of bits set: bin(n).count("1") or Brian Kernighan: while n: n &= n - 1; cnt += 1.
- Subsets via bitmask: for mask in range(1 << n) — enumerate all subsets of n elements.
- Last set bit: n & -n.
# Single number (LC 136)
def singleNumber(nums):
res = 0
for x in nums: res ^= x
return res
# Number of 1 bits (LC 191, Brian Kernighan)
def hammingWeight(n):
cnt = 0
while n:
n &= n - 1 # clear lowest set bit
cnt += 1
return cnt
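The two-non-duplicates trick (LC 260) listed above deserves its own snippet — a minimal sketch of the XOR-then-split idea:

```python
def singleNumber3(nums):
    """LC 260 sketch: every element appears twice except two (a and b).
    XOR of everything gives a ^ b; any set bit of that result differs
    between a and b, so partition by that bit and XOR each group."""
    xor_all = 0
    for x in nums:
        xor_all ^= x
    diff = xor_all & -xor_all  # lowest bit where a and b differ
    a = 0
    for x in nums:
        if x & diff:           # one group contains a, the other b
            a ^= x
    return [a, xor_all ^ a]

# singleNumber3([1, 2, 1, 3, 2, 5]) → [3, 5] (in some order)
```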
Drill: LC 136 Single number, LC 260 Single number III, LC 191 Number of 1 bits, LC 338 Counting bits, LC 421 Max XOR (bit trie)
- XOR cancels duplicates. This is the trick for ~80% of bit-manip problems.
- n & (n-1) clears the lowest set bit — Brian Kernighan's trick.
- Bitmask DP enumerates subsets in for mask in range(1 << n).
The LC concurrency problems test 5 primitives: Lock, Semaphore, Condition, Event, Barrier. Once you know which primitive maps to which pattern, every problem is mechanical.
The 5 primitives in Python
| Primitive | Use case | Key ops |
|---|---|---|
threading.Lock | Mutual exclusion — only one thread at a time | acquire / release / with |
threading.Semaphore(n) | Counting — at most n threads, or signal/wait pattern | acquire (decrement, block if 0) / release (increment) |
threading.Condition | Wait until predicate true; another thread notifies | wait / notify / notify_all |
threading.Event | One-shot signal: "something happened" | set / wait / clear |
threading.Barrier(n) | Block until n threads all arrive | wait |
The 5 multithreading patterns
Pattern → primitive cheatsheet
- Strict ordering (A → B → C): use semaphores or events as "permission slips." Each thread acquires its semaphore, does work, releases the next thread's semaphore.
- Alternating output (foo / bar): two semaphores, one starts at 1, other at 0. Each thread acquires its own + releases the other after work.
- N-way coordination (zero / even / odd): one semaphore per "turn-type"; controller releases the right one.
- Bounded buffer (producer/consumer): condition variable + while-predicate-wait pattern. Or 2 semaphores (not_full, not_empty).
- Group-of-N (H2O: collect 2H + 1O): semaphore + barrier. Semaphores cap counts; barrier syncs at the "atomic moment."
Semaphore as "permission slip" — the mental model
Think of a semaphore as a stack of permission slips. acquire() takes a slip (blocks if none left). release() adds one. For ordering: start with 1 slip in the "first thread" semaphore, 0 in others. Each thread takes its slip, does work, gives a slip to the next thread. Threads sleep until their slip is available.
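The permission-slip model, as code — a minimal sketch of strict A → B → C ordering (function and variable names are illustrative, not from any LC problem):

```python
import threading

def make_ordered_printer():
    """Permission-slip sketch: B may not run until A hands over a slip,
    C not until B does. Both slip stacks start empty (Semaphore(0)),
    so B and C block immediately if scheduled first."""
    b_slip = threading.Semaphore(0)
    c_slip = threading.Semaphore(0)
    out = []

    def a():
        out.append('A')
        b_slip.release()   # hand B its slip

    def b():
        b_slip.acquire()   # sleep until A's slip arrives
        out.append('B')
        c_slip.release()   # hand C its slip

    def c():
        c_slip.acquire()
        out.append('C')

    return a, b, c, out
```

Start the three threads in any order; the output is always A, B, C — that is the whole point of the slips.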
- 5 primitives — know which to reach for. Most LC problems use Semaphore.
- Semaphore counts can be "permission slips" passed between threads.
- Condition variable = mutex + wait/notify. Always wait in a while loop (spurious wakeups).
Every LC concurrency problem solved below in Python. Pattern-match these → you can solve any concurrency interview question.
LC 1114 — Print in Order (ordering)
Pattern: Strict A → B → C ordering. Two events.
import threading  # the concurrency examples on this page assume this import

class Foo:
def __init__(self):
self.first_done = threading.Event()
self.second_done = threading.Event()
def first(self, printFirst):
printFirst()
self.first_done.set()
def second(self, printSecond):
self.first_done.wait()
printSecond()
self.second_done.set()
def third(self, printThird):
self.second_done.wait()
printThird()
LC 1115 — Print FooBar Alternately (alternation)
Pattern: Two threads alternate. Two semaphores; foo starts with 1, bar with 0.
class FooBar:
def __init__(self, n):
self.n = n
self.foo_sem = threading.Semaphore(1)
self.bar_sem = threading.Semaphore(0)
def foo(self, printFoo):
for _ in range(self.n):
self.foo_sem.acquire()
printFoo()
self.bar_sem.release()
def bar(self, printBar):
for _ in range(self.n):
self.bar_sem.acquire()
printBar()
self.foo_sem.release()
LC 1116 — Print Zero Even Odd (3-way coord)
Pattern: Three threads (zero, even, odd). Pattern: 0,1,0,2,0,3,0,4… Three semaphores; zero starts with 1, others with 0.
class ZeroEvenOdd:
def __init__(self, n):
self.n = n
self.zero_sem = threading.Semaphore(1)
self.odd_sem = threading.Semaphore(0)
self.even_sem = threading.Semaphore(0)
def zero(self, printNumber):
for i in range(1, self.n + 1):
self.zero_sem.acquire()
printNumber(0)
if i % 2 == 1: self.odd_sem.release()
else: self.even_sem.release()
def odd(self, printNumber):
for i in range(1, self.n + 1, 2):
self.odd_sem.acquire()
printNumber(i)
self.zero_sem.release()
def even(self, printNumber):
for i in range(2, self.n + 1, 2):
self.even_sem.acquire()
printNumber(i)
self.zero_sem.release()
LC 1117 — Building H2O (group-of-N)
Pattern: Collect 2H + 1O before any can proceed. Use semaphores to cap counts + a barrier for the "atomic moment."
class H2O:
def __init__(self):
self.h_sem = threading.Semaphore(2) # at most 2 H at a time
self.o_sem = threading.Semaphore(1) # at most 1 O at a time
self.barrier = threading.Barrier(3) # wait for 3 to form molecule
def hydrogen(self, releaseHydrogen):
self.h_sem.acquire()
self.barrier.wait()
releaseHydrogen()
self.h_sem.release()
def oxygen(self, releaseOxygen):
self.o_sem.acquire()
self.barrier.wait()
releaseOxygen()
self.o_sem.release()
LC 1188 — Bounded Blocking Queue (producer/consumer)
Pattern: Mutex + 2 condition variables. Wait while invariant violated.
class BoundedBlockingQueue:
def __init__(self, capacity):
self.cap = capacity
self.q = deque()
self.lock = threading.Lock()
self.not_full = threading.Condition(self.lock)
self.not_empty = threading.Condition(self.lock)
def enqueue(self, element):
with self.not_full:
while len(self.q) >= self.cap:
self.not_full.wait()
self.q.append(element)
self.not_empty.notify()
def dequeue(self):
with self.not_empty:
while not self.q:
self.not_empty.wait()
x = self.q.popleft()
self.not_full.notify()
return x
def size(self):
with self.lock:
return len(self.q)
LC 1195 — Fizz Buzz Multithreaded (4-way coord)
Pattern: Four threads (fizz / buzz / fizzbuzz / number). Use 4 semaphores; a controller (number thread) releases the right one based on i.
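One way to realize the controller pattern for LC 1195 — a sketch where the number thread decides who prints and then waits for its slip back:

```python
import threading

class FizzBuzz:
    """Sketch of LC 1195: `number` is the controller. For each i it hands
    a permission slip to exactly one thread; that thread prints and
    returns the slip via num_sem."""
    def __init__(self, n):
        self.n = n
        self.num_sem = threading.Semaphore(1)   # controller starts with the slip
        self.fizz_sem = threading.Semaphore(0)
        self.buzz_sem = threading.Semaphore(0)
        self.fb_sem = threading.Semaphore(0)

    def number(self, printNumber):
        for i in range(1, self.n + 1):
            self.num_sem.acquire()
            if i % 15 == 0:  self.fb_sem.release()
            elif i % 3 == 0: self.fizz_sem.release()
            elif i % 5 == 0: self.buzz_sem.release()
            else:
                printNumber(i)
                self.num_sem.release()          # keep the slip for next i

    def fizz(self, printFizz):
        for _ in range(self.n // 3 - self.n // 15):
            self.fizz_sem.acquire()
            printFizz()
            self.num_sem.release()

    def buzz(self, printBuzz):
        for _ in range(self.n // 5 - self.n // 15):
            self.buzz_sem.acquire()
            printBuzz()
            self.num_sem.release()

    def fizzbuzz(self, printFizzBuzz):
        for _ in range(self.n // 15):
            self.fb_sem.acquire()
            printFizzBuzz()
            self.num_sem.release()
```

Each worker knows in advance how many turns it gets (n//3 − n//15 fizz turns, etc.), so every loop terminates without extra signaling.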
LC 1226 — Dining Philosophers (deadlock-free)
Pattern: 5 philosophers, 5 forks. Naive grab-left-then-right deadlocks. Fix: enforce lock ordering (always grab lower-numbered fork first), OR have one philosopher grab right-first. OR use a semaphore that caps concurrent eaters at 4 (Dijkstra's solution).
class DiningPhilosophers:
def __init__(self):
self.forks = [threading.Lock() for _ in range(5)]
self.eaters = threading.Semaphore(4) # at most 4 eat at once → no deadlock
def wantsToEat(self, p, pickL, pickR, eat, putL, putR):
L, R = p, (p + 1) % 5
with self.eaters:
with self.forks[L], self.forks[R]:
pickL(); pickR()
eat()
putL(); putR()
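The lock-ordering fix mentioned above can be sketched too — same problem, no semaphore cap, deadlock prevented purely by always acquiring the lower-numbered fork first (class name is illustrative):

```python
import threading

class DiningPhilosophersOrdered:
    """Lock-ordering sketch: every philosopher grabs the lower-numbered
    fork first. With a global acquisition order, no cycle of waiting
    threads can form, so deadlock is impossible."""
    def __init__(self):
        self.forks = [threading.Lock() for _ in range(5)]

    def wantsToEat(self, p, pickL, pickR, eat, putL, putR):
        left, right = p, (p + 1) % 5
        first, second = min(left, right), max(left, right)
        with self.forks[first], self.forks[second]:
            pickL(); pickR()
            eat()
            putL(); putR()
```

Note the asymmetry this creates: philosopher 4's forks are 4 and 0, so they grab 0 first — exactly the "one philosopher reverses order" fix stated in prose.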
LC 1242 — Multithreaded Web Crawler (Anthropic favorite)
Pattern: Thread pool + shared queue + visited set with lock.
import threading
from concurrent.futures import ThreadPoolExecutor

def crawl(startUrl, htmlParser):
host = startUrl.split('/')[2]
visited = {startUrl}
lock = threading.Lock()
with ThreadPoolExecutor(max_workers=16) as ex:
def fetch(url):
new_links = []
for link in htmlParser.getUrls(url):
if link.split('/')[2] != host: continue
with lock:
if link in visited: continue
visited.add(link)
new_links.append(link)
return new_links
pending = [ex.submit(fetch, startUrl)]
while pending:
future = pending.pop()
for link in future.result():
pending.append(ex.submit(fetch, link))
return list(visited)
- Ordering → semaphores or events as permission slips.
- Alternation → 2 semaphores (one starts at 1, other at 0).
- N-way coord → controller releases the right semaphore.
- Producer/consumer → mutex + 2 condvars with while predicate-wait.
- Group-of-N → semaphore caps + barrier for atomic moment.
- Deadlock fix → lock ordering OR cap concurrent acquirers.
| Week | Patterns | Goal |
|---|---|---|
| 1 | Two pointers, sliding window, prefix sum, hashmap, monotonic stack | 30 problems, ~8 mediums + 2 hards |
| 2 | Binary search, heap, BFS, DFS/backtracking, tree | 30 problems, ~8 mediums + 2 hards |
| 3 | Graph (incl. union-find), DP framework, 1D DP, 2D DP | 25 problems, focus on hards in DP |
| 4 | Interval DP, bitmask, bit manip, multithreading + mixed timed mocks | 20 problems + 4 mock interviews |
Daily ritual
- 1 medium from a NEW pattern (refresher chapter first if rusty)
- 1 medium from a pattern you've already covered (reinforcement)
- 1 hard / week, on weekends, timed
What to do when you're stuck
- Restate the problem in your own words.
- Pick the pattern using the decoder table above.
- Write brute force first. O(N²) before O(N log N). Always.
- Talk through ONE example end-to-end on paper. Bugs usually surface here.
- Check data structure fit: hashmap, heap, sorted set, deque, trie, segment tree.
- 15-min rule: stuck 15 min → look at hint (not solution). Stuck 25 → look at solution outline. Don't grind 90 min — diminishing returns.
Don't drill all 200 hards. Drill these — they map to common Sr Staff difficulty and recur across companies.
- LC 4 Median of two sorted arrays — binary search on partition
- LC 10 Regex matching — DP on strings
- LC 23 Merge K sorted lists — heap
- LC 25 Reverse nodes in K-group — linked list in-place
- LC 32 Longest valid parentheses — stack or DP
- LC 42 Trapping rain water — two pointers
- LC 72 Edit distance — DP classic
- LC 76 Minimum window substring — sliding window
- LC 84 Largest rectangle in histogram — monotonic stack
- LC 124 Binary tree maximum path sum — tree DP
- LC 127 Word ladder — BFS
- LC 128 Longest consecutive sequence — hashmap or DSU
- LC 140 Word break II — DP + backtracking
- LC 146 LRU cache — extremely common, memorize cold
- LC 239 Sliding window maximum — monotonic deque
- LC 295 Find median from data stream — two heaps
- LC 297 Serialize/deserialize binary tree
- LC 312 Burst balloons — interval DP
- LC 460 LFU cache — pair with LC 146
- LC 895 Maximum frequency stack — composition design
- LC 1235 Max profit in job scheduling — DP + binary search
0 → hero coding-prep path
- foundation NeetCode Blind 75 — every Sr Staff candidate has done this. 1-2 weeks.
- foundation NeetCode 250 roadmap — full pattern coverage. 4-6 weeks.
- build Drill the 22 chapters above + canonical hards. ~80 problems.
- build Each company's known questions (coding by company).
- depth ML coding from scratch — for every frontier-lab loop.
- depth Exponent for mock interviews; interviewing.io for mocks with ex-FAANG.
- depth AlgoExpert for video walk-throughs (subscription).
Pattern-recognition quiz
Read the prompt; name the pattern in < 30 seconds.
- "Given a sorted array, find two indices that sum to a target."
Pattern + approach
Two pointers (sorted). Left at 0, right at n−1. If sum < target, left++; if sum > target, right−−. O(N) time, O(1) space.
- "Length of the longest substring without repeating characters."
Pattern + approach
Sliding window (variable). Hashmap of last seen index. Expand right; on conflict, jump left to last_seen[c]+1. O(N).
- "Find the K-th largest element in an unsorted array."
Pattern + approach
Min-heap of size K → O(N log K), safest answer. Or quickselect → O(N) average. State both; pick heap unless they push for tighter.
- "Given an integer array and integer k, count subarrays whose sum equals k."
Pattern + approach
Prefix sum + hashmap. Track prefix at each index; if (prefix − k) was seen, add the count. Init seen with {0: 1}. O(N).
- "Implement a data structure with O(1) insert, delete, getRandom."
Pattern + approach
Hashmap + dynamic array. Hashmap maps value → index in array. Delete: swap with last, pop, update map. LC 380.
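The swap-with-last delete is the part people fumble — a minimal sketch of the LC 380 structure:

```python
import random

class RandomizedSet:
    """LC 380 sketch: array holds the values, map holds value -> index.
    getRandom is O(1) via random.choice on the array; remove is O(1)
    by moving the last element into the removed slot."""
    def __init__(self):
        self.vals = []
        self.idx = {}

    def insert(self, v):
        if v in self.idx:
            return False
        self.idx[v] = len(self.vals)
        self.vals.append(v)
        return True

    def remove(self, v):
        if v not in self.idx:
            return False
        i, last = self.idx[v], self.vals[-1]
        self.vals[i] = last        # move last element into the hole
        self.idx[last] = i         # (harmless no-op when v IS the last)
        self.vals.pop()
        del self.idx[v]
        return True

    def getRandom(self):
        return random.choice(self.vals)
```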
- "Given a binary tree, find the maximum path sum (path can start and end anywhere)."
Pattern + approach
Tree DP. Per node return max gain through one direction. Also compute path-through-node = node + max(L, 0) + max(R, 0); update global max.
- "Given jobs with (start, end, profit), find max profit no two overlap."
Pattern + approach
DP + binary search. Sort by end. dp[i] = max(dp[i−1], profit_i + dp[j]) where j = largest index with end ≤ start_i (bisect).
- "Find smallest k such that you can finish all bananas in H hours at rate k."
Pattern + approach
Binary search on the answer. Predicate(k) = can_finish(k) is monotonic. LC 875.
- "Find all anagrams of pattern p in string s."
Pattern + approach
Sliding window (fixed size |p|). Maintain char count of window; compare with char count of p. O(N).
- "Course schedule: can you finish all courses given prerequisites?"
Pattern + approach
Topological sort. Build graph; compute indegrees; BFS from 0-indegree. If final order has all nodes, no cycle. LC 207.
- "Find the largest rectangle in a histogram."
Pattern + approach
Monotonic stack of indices with strictly increasing heights. For each bar, while top is taller, pop and compute rectangle. LC 84.
- "Find the maximum in every sliding window of size k."
Pattern + approach
Monotonic deque (decreasing). Push index, pop back while smaller, pop front if outside window. LC 239.
- "Implement an LRU cache (O(1) get and put)."
Pattern + approach
Hashmap + doubly linked list. Move-to-front on get/put; evict tail on overflow. Memorize cold. LC 146.
- "Find the median from a data stream."
Pattern + approach
Two heaps: max-heap for lower half, min-heap for upper. Balance sizes. Median = top of larger heap (or mean of tops). LC 295.
- "Generate all permutations of a list of integers."
Pattern + approach
Backtracking with a used set. Append to path, mark used, recurse, undo. For dedup with duplicates: sort + skip same-as-previous when previous not used. LC 46.
- "Minimum number of coins to make amount N from given denominations."
Pattern + approach
1D DP, unbounded knapsack. dp[c] = min(dp[c], dp[c − coin] + 1) for each coin. Iterate c forward. LC 322.
- "Word ladder: transform begin to end one letter at a time, given dictionary."
Pattern + approach
BFS on word graph. Neighbors = words differing by 1 letter (or try 26 swaps per position). BFS level = transformation length. Use bidirectional BFS for speed. LC 127.
- "Edit distance between two strings."
Pattern + approach
2D DP. dp[i][j] = min(dp[i−1][j]+1, dp[i][j−1]+1, dp[i−1][j−1] + (s1[i]≠s2[j])). Base: dp[i][0]=i, dp[0][j]=j. LC 72.
- "Burst balloons to maximize coins (interval DP signal)."
Pattern + approach
Interval DP. dp[i][j] = max over k in (i, j) of dp[i][k] + dp[k][j] + nums[i]*nums[k]*nums[j]. Iterate by interval length. Think "which balloon is burst LAST." LC 312.
- "Two threads need to alternate printing 'foo' and 'bar' N times."
Pattern + approach
Two semaphores. foo_sem starts with 1, bar_sem with 0. Each thread acquires its own semaphore, prints, releases the other's. LC 1115.