May get “lost” deep in graph, missing the shortest path
Avoids searching “shallow states” when the solution is long
BFS
Traverse left to right, level by level
Explore shallowest node from start node
(Queue - FIFO)
✅
✅
\(O(b^{m_s+1})\)
\(O(b^{m_s+1})\)
High memory usage if states have a high average number of children
Always finds shortest path
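The level-by-level, FIFO-queue expansion can be sketched in Python (the graph and node names below are hypothetical, purely for illustration):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand the shallowest node first (FIFO queue).
    Returns the path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])          # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:     # avoid repeated states
                visited.add(child)
                frontier.append(path + [child])
    return None

# Hypothetical toy graph for illustration
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"]}
```

On this graph `bfs(graph, "A", "E")` returns the shortest path `["A", "C", "E"]`; the `visited` set is what keeps the memory usage from blowing up further on graphs with many repeated states.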
Iterative-Deepening
Combination of DFS and BFS
Perform DFS with a depth bound, raising the bound one level per iteration
Not necessarily
\(O(b^d)\)
\(O(bd)\)
Repeated work (negligible though: \(O(1/b)\))
Prevents DFS from getting lost in infinite path
Depth threshold
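The increasing-depth-threshold loop can be sketched as follows (graph and names are hypothetical):

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    """DFS that gives up once the depth bound is reached."""
    if node == goal:
        return path
    if limit == 0:
        return None                      # depth bound hit: cut off
    for child in graph.get(node, []):
        if child not in path:            # avoid cycles on the current path
            found = depth_limited_dfs(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Repeat depth-bounded DFS, raising the bound one level at a time."""
    for depth in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, depth, [start])
        if result:
            return result
    return None

# Hypothetical unweighted graph for illustration
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"]}
```

Like BFS it finds a shallowest solution (here `["A", "C", "E"]`), but only ever stores one DFS path at a time, which is where the \(O(bd)\) space bound comes from.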
UCS (Uniform Cost)/ Branch & Bound
Orders by path/backward cost \(C(i, x)\)
Explore least cost node from start node
✅ (\(\iff\) all costs are non-negative)
✅
\(O(b^m)\)
\(O(b^m)\)
Explores options in “every direction”
Keeps cost low
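The least-cost-first expansion with a priority queue can be sketched as below (the weighted graph is made up for illustration):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the
    lowest path (backward) cost g. graph[u] is a list of (cost, v) pairs."""
    frontier = [(0, start, [start])]     # priority queue ordered by g
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for cost, child in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

# Hypothetical weighted graph: the cheapest A-to-E path is A-B-D-E, cost 4
graph = {"A": [(1, "B"), (4, "C")], "B": [(2, "D")],
         "C": [(1, "D")], "D": [(1, "E")]}
```

Note the goal test happens when a node is *popped*, not when it is generated; popping early is what guarantees optimality under non-negative costs.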
Bi-Directional
Performed on search graph
Two simultaneous searches: a forward search from the start vertex toward the goal vertex, and a backward search from the goal vertex toward the start vertex
✅
✅ \(\iff\) we use BFS in both searches
\(O(b^{d/2})\)
\(O(b^{d/2})\)
Can prune many options
Design decisions:
- Which goal state(s) to use
- Handling search overlap
- Which search to use in each direction
- Memory for 2 BFS frontiers
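The two-frontier idea with BFS on both sides can be sketched as follows; it assumes an undirected graph (adjacency lists are symmetric), and the graph itself is hypothetical:

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """BFS forward from start and backward from goal; stop when frontiers meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}               # node -> path from start
    bwd = {goal: [goal]}                 # node -> path from goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        # one BFS step on the forward frontier
        node = fq.popleft()
        for child in graph.get(node, []):
            if child not in fwd:
                fwd[child] = fwd[node] + [child]
                if child in bwd:         # frontiers met: stitch the two halves
                    return fwd[child] + bwd[child][-2::-1]
                fq.append(child)
        # one BFS step on the backward frontier
        node = bq.popleft()
        for child in graph.get(node, []):
            if child not in bwd:
                bwd[child] = bwd[node] + [child]
                if child in fwd:
                    return fwd[child] + bwd[child][-2::-1]
                bq.append(child)
    return None

# Hypothetical undirected graph
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
         "D": ["B", "C", "E"], "E": ["C", "D"]}
```

The overlap check (`child in bwd` / `child in fwd`) is exactly the "handling search overlap" design decision listed above.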
Informed
Greedy/Best-First
Explore the node with the lowest heuristic value, i.e. the one that appears closest to the goal
Orders by goal proximity/forward cost \(h(x, G)\)
Similar to UCS, but the priority queue is ordered by \(h\) rather than path cost
❌
❌
\(O(b^m)\)
\(O(b^m)\)
Tentative
May go directly to a wrong end state
May behave like a poorly-guided DFS
Helps find solution quickly
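A minimal sketch of ordering the queue by \(h\) alone (the graph and the heuristic values are invented, with \(h\) deliberately misjudging B as promising):

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Greedy best-first: always expand the frontier node with the
    lowest heuristic value h, ignoring the cost paid so far."""
    frontier = [(h[start], start, [start])]   # priority queue ordered by h only
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Hypothetical graph and heuristic values
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"]}
h = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 0}
```

With these \(h\) values the search returns A-B-D-E even though A-C-E is shorter, which is the non-optimality noted above.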
A*
Explore the node with lowest total cost value
\(f = C(i, x) + h(x, G)\)
Uses priority queue
Combination of UCS & Greedy
✅
✅ (\(\iff\) heuristic is admissible)
\(O(b^m)\)
\(O(b^m)\)
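Combining the two orderings, the \(f = C(i, x) + h(x, G)\) queue can be sketched like this (graph and heuristic values are hypothetical; the \(h\) shown is admissible):

```python
import heapq

def a_star(graph, start, goal, h):
    """A*: expand the node with the lowest f = g + h.
    graph[u] is a list of (step_cost, v) pairs."""
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for cost, child in graph.get(node, []):
            if child not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None

# Hypothetical weighted graph with an admissible heuristic
graph = {"A": [(1, "B"), (4, "C")], "B": [(2, "D")],
         "C": [(1, "D")], "D": [(1, "E")]}
h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}
```

Because \(h\) never overestimates the true remaining cost here, the first time the goal is popped its path cost (4, via A-B-D-E) is optimal.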
Hill-Climbing
Basically gradient-descent
❌
❌
\(O(bm)\)
\(O(b)\)
Irreversible
If bad heuristic, may prune away goals
Stuck at local minima/maxima
Skips ridges
Plateaus
Fast
Low memory
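The "keep moving to the best neighbour, stop when nothing improves" loop can be sketched as below; the objective and neighbour function are made up, and this version maximizes (the mirror image of gradient descent):

```python
def hill_climb(f, x0, neighbors):
    """Steepest-ascent hill-climbing: move to the best neighbor;
    stop as soon as no neighbor improves on the current state."""
    current = x0
    while True:
        best = max(neighbors(current), key=f)
        if f(best) <= f(current):
            return current               # local optimum (not necessarily global)
        current = best

# Hypothetical 1-D demo: maximize f(x) = -(x - 3)^2 over the integers
peak = hill_climb(lambda x: -(x - 3) ** 2, 0, lambda x: [x - 1, x + 1])
```

Only the current state is stored, which is the "low memory" advantage; on a multi-peaked objective the same loop would stop at whichever local maximum it reaches first.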
Beam
Compromise b/w hill-climbing & greedy
\(n=1:\) Hill-Climbing \(n=\infty:\) Best-First
\(n \in (1, \infty) :\) Beam width (number of nodes kept at each level)
❌
❌
\(O(nm)\)
\(O(bn)\)
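Keeping only the `width` best nodes per level can be sketched as follows (graph and heuristic values are hypothetical, reused from the greedy example):

```python
def beam_search(graph, start, goal, h, width):
    """Beam search: expand level by level, but keep only the `width`
    most promising paths (lowest h) at each level."""
    beam = [[start]]
    while beam:
        candidates = []
        for path in beam:
            if path[-1] == goal:
                return path
            for child in graph.get(path[-1], []):
                if child not in path:               # avoid cycles
                    candidates.append(path + [child])
        candidates.sort(key=lambda p: h[p[-1]])     # best h first
        beam = candidates[:width]                   # prune to the beam width
    return None

# Hypothetical graph and heuristic values
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"]}
h = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 0}
```

With `width=1` it behaves like hill-climbing and follows the misleading \(h\) to A-B-D-E; with `width=2` enough candidates survive the pruning to find the shorter A-C-E.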
IDA* (Iterative-Deepening A*)
Similar to Iterative Deepening, but uses A* cost threshold
Each threshold increase brings at least one new node into the search
✅
✅
\(O(b^m)\)
\(O(m)\)
Some redundant search, but negligible
Dangerous if \(h\) values are continuous or very close to the threshold
Ensures search never looks beyond optimal cost soln
- Initial threshold = \(h\)(root)
- New threshold = \(f\)(min_child)
min_child is the cut-off child with minimum \(f\)
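The threshold rule above (start at \(h\)(root), then raise the bound to the smallest cut-off \(f\)) can be sketched as below, reusing the hypothetical weighted graph and admissible heuristic from the A* example:

```python
import math

def ida_star(graph, start, goal, h):
    """IDA*: depth-first search bounded by an f = g + h threshold;
    the threshold grows to the smallest f value that exceeded it."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                      # cut off: report this child's f
        if node == goal:
            return path
        minimum = math.inf
        for cost, child in graph.get(node, []):
            if child not in path:
                result = search(path + [child], g + cost, bound)
                if isinstance(result, list):
                    return result         # goal found
                minimum = min(minimum, result)
        return minimum

    bound = h[start]                      # initial threshold = h(root)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None                   # nothing left below any bound
        bound = result                    # new threshold = f(min_child)

# Hypothetical weighted graph with an admissible heuristic
graph = {"A": [(1, "B"), (4, "C")], "B": [(2, "D")],
         "C": [(1, "D")], "D": [(1, "E")]}
h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 0}
```

Because the bound never exceeds the smallest cut-off \(f\), the search never explores anything costlier than the optimal solution, while storing only the current DFS path.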
RBFS (Recursive Best-First/Greedy)
Linear space variant of \(A^*\)
Backtrack if current node is worse than next best alternative
Perform \(A^*\) but discard subtrees when recursing
Keep track of the alternative (next best) subtree
Expand a subtree until \(f >\) bound
Update \(f\) before (from parent) and after (from child) each recursive call
\(O(2^m)\)
\(O(bm)\)
Stores more info than IDA*
More efficient than IDA*
SMA*
Simplified Memory-Bounded A*
Perform A*, but when memory is full, discard the worst leaf (highest \(f\))
Back up the value of the discarded node to its parent
Hill-Climbing
Random restart helps overcome local maxima/minima
Random sideways moves help escape from:
- shoulders
- loops on flat maxima
❌ (but trivially complete with random restart)
❌
Irreversible steps
Skips ridges
Fast
Low memory requirement
Local Beam
\(k\) hill climbs
Choose \(k\) random successors
Similar to natural selection
❌
❌
Inefficient
All \(k\) states may end up on the same local hill
\(k\)
Simulated Annealing
Trade-off b/w hill-climbing & random search
Randomness at high “temperature”
As the temperature cools, the probability of random moves decreases
Can find global optima when temperature chosen correctly
Temperature
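The temperature schedule can be sketched as below: worse moves are accepted with probability \(e^{\Delta/T}\), and the geometric cooling makes that probability fade over time. The objective, neighbour function, and schedule constants are all made up for illustration:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing for maximizing f: accept worse moves with
    probability exp(delta / T); T decays each step, so randomness fades."""
    random.seed(0)                        # fixed seed so the demo is repeatable
    current, temp = x0, t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = f(candidate) - f(current)
        # always accept improvements; accept downhill moves with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling                   # cool the temperature
    return current

# Hypothetical 1-D demo: maximize f(x) = -(x - 3)^2 over the integers
best = simulated_annealing(lambda x: -(x - 3) ** 2, 0,
                           lambda x: x + random.choice([-1, 1]))
```

Early on (high \(T\)) the walk is nearly random; once \(T\) is small the loop degenerates into hill-climbing, so the final state sits at or next to the optimum at \(x = 3\).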
Genetic Algorithm
where
\(b =\) max branching factor (nodes at each level)
\(m =\) max depth of the search tree
\(d, m_s =\) depth of the shallowest solution
Helps avoid repeated states:
- Do not return to parent/grand-parent states
- Do not create solution paths with cycles
- Do not generate repeated states as options (need to store & check more states)