Big-O Notation Cheat Sheet for Working Engineers

DevToolNow Editorial Team · ~10 min read

Big-O describes how an algorithm scales with input size. This cheat sheet is the minimum useful subset for engineers shipping production code — common operations, their complexities, and the trade-offs you actually face when picking a data structure.

1. The complexity hierarchy

From fastest to slowest, with concrete examples:

Complexity | Name | Ops at n = 1k | Ops at n = 1M | Example
O(1) | Constant | 1 | 1 | Hash map lookup, array index
O(log n) | Logarithmic | 10 | 20 | Binary search, balanced tree
O(n) | Linear | 1,000 | 1,000,000 | Linear scan, single loop
O(n log n) | Linearithmic | 10,000 | 20,000,000 | Quicksort, mergesort, heapsort
O(n²) | Quadratic | 1,000,000 | 10¹² | Nested loops, bubble sort
O(n³) | Cubic | 10⁹ | 10¹⁸ | Naive matrix multiplication
O(2ⁿ) | Exponential | 2¹⁰⁰⁰ | infeasible | Recursive Fibonacci, subset enumeration
O(n!) | Factorial | infeasible | infeasible | Brute-force traveling salesman

Practical limit: Modern hardware does ~10⁸–10⁹ simple operations per second. Anything beyond 10¹⁰ total operations is too slow for real-time use. A request with O(n²) on n=100k is already 10¹⁰ ops — you'll feel it.
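
To sanity-check a design before building it, run the arithmetic. A minimal back-of-envelope sketch (the 10⁸ ops/s figure is the conservative end of the range above):

const opsPerSecond = 1e8;    // conservative end of the ~10⁸–10⁹ range above
const n = 100_000;
const quadraticOps = n * n;  // an O(n²) pass → 10¹⁰ operations
console.log(`${quadraticOps / opsPerSecond} s`); // 100 s: far too slow for one request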

2. Data structure operation complexities

Structure | Access | Search | Insert | Delete | Notes
Array (dynamic) | O(1) | O(n) | O(n) | O(n) | Append amortized O(1)
Linked list | O(n) | O(n) | O(1)* | O(1)* | *If node ref already known
Hash map | N/A | O(1) avg | O(1) avg | O(1) avg | Worst O(n) on collision
Balanced BST (Red-Black) | O(log n) | O(log n) | O(log n) | O(log n) | Sorted iteration
Heap (binary) | O(1) | O(n) | O(log n) | O(log n) | Min/max access O(1)
Trie | N/A | O(k) | O(k) | O(k) | k = key length
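
The linked-list asterisk deserves a concrete example: insertion is O(1) only once you already hold a reference to the neighboring node; getting that reference is the O(n) part. A minimal sketch (the ListNode shape is illustrative, not any particular library's API):

interface ListNode<T> { value: T; next: ListNode<T> | null; }

// O(1): splice a node in after one we already hold a reference to
function insertAfter<T>(node: ListNode<T>, value: T): ListNode<T> {
  const fresh: ListNode<T> = { value, next: node.next };
  node.next = fresh;
  return fresh;
}

// Finding that insertion point is the O(n) step
function find<T>(head: ListNode<T> | null, value: T): ListNode<T> | null {
  for (let cur = head; cur !== null; cur = cur.next) {
    if (cur.value === value) return cur;
  }
  return null;
}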

3. Sorting algorithms

Algorithm | Best | Average | Worst | Space | Stable
Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No
Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes
Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No
Timsort (Python, Java) | O(n) | O(n log n) | O(n log n) | O(n) | Yes
Bubble/Insertion | O(n) | O(n²) | O(n²) | O(1) | Yes
Counting sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Yes

What your language uses: Python's list.sort and sorted use Timsort (a mergesort variant optimized for partially sorted data), and Java's Arrays.sort uses it for object arrays (primitive arrays get a dual-pivot quicksort). C++ std::sort uses introsort (quicksort with a heapsort fallback). Rust's stable slice::sort is likewise an adaptive, Timsort-style mergesort.
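
That Stable column matters whenever you sort by one key and need ties to keep their previous order. A quick illustration (the data is invented; JavaScript's Array.prototype.sort is guaranteed stable since ES2019):

type Row = { name: string; dept: string };
const rows: Row[] = [
  { name: "Ada", dept: "eng" },
  { name: "Bo",  dept: "ops" },
  { name: "Cy",  dept: "eng" },
];

// Stable sort: rows with equal dept keep their original relative order
rows.sort((a, b) => a.dept.localeCompare(b.dept));
// → Ada (eng), Cy (eng), Bo (ops): Ada is still ahead of Cy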

4. Graph algorithm complexities

Where V = vertices, E = edges:

  • BFS / DFS: O(V + E)
  • Dijkstra (binary heap): O((V + E) log V)
  • Bellman-Ford: O(V·E) — handles negative weights
  • Floyd-Warshall (all pairs): O(V³)
  • Topological sort: O(V + E)
  • Minimum spanning tree (Kruskal): O(E log E)
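
To see where the O(V + E) bound comes from, here is a minimal BFS sketch (the Map-of-adjacency-lists input shape is an assumption for illustration): every vertex enters the queue at most once, and every edge is examined at most once.

function bfs(adj: Map<number, number[]>, start: number): number[] {
  const visited = new Set<number>([start]);
  const order: number[] = [];
  const queue: number[] = [start];
  let head = 0; // index-based head avoids O(n) Array.shift()
  while (head < queue.length) {
    const v = queue[head++];            // each vertex dequeued once → O(V)
    order.push(v);
    for (const w of adj.get(v) ?? []) { // each edge examined once → O(E)
      if (!visited.has(w)) {
        visited.add(w);
        queue.push(w);
      }
    }
  }
  return order;
}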

5. Common patterns and their complexity

// O(1) — constant
arr[42]
hashMap.get(key)

// O(log n) — halving the search space
binarySearch(sortedArr, target)
balancedTree.insert(value)

// O(n) — single loop or scan
arr.filter(x => x > 0)
linkedList.find(value)

// O(n log n) — sort + linear, or divide-and-conquer
arr.sort()
mergeSort(arr)

// O(n²) — nested loops over the same input
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    // pair work
  }
}

// O(n + m) — two separate inputs
arr1.concat(arr2)
mergeSorted(a, b)
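
One pattern worth calling out in review: a linear includes() inside a loop silently turns O(n) into O(n·m). Hoisting the lookup into a Set restores linearity. A small sketch (arrays invented for the example):

const xs = [1, 2, 3, 4];
const ys = [3, 4, 5];

// O(n·m): includes() is itself a linear scan, run once per element of xs
const slow = xs.filter(x => ys.includes(x));

// O(n + m): build the Set once, then O(1)-average lookups
const ySet = new Set(ys);
const fast = xs.filter(x => ySet.has(x));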

6. Space complexity gotchas

Space complexity often gets ignored, but matters as much as time in memory-bounded systems:

  • Recursion depth: Recursive algorithms use O(depth) stack space. A naive recursive Fibonacci uses O(n) stack — and stack overflows are real (see the sketch after this list).
  • Auxiliary vs total: Mergesort is "O(n) extra space" — that's auxiliary. Heapsort is in-place (O(1) auxiliary). For limited RAM, in-place wins.
  • Hash map memory overhead: Open-addressing hash maps typically use 1.5–2× the slots needed (load factor 0.5–0.7). A million-entry map can cost 100MB+.
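
To make the recursion-depth point concrete, compare the two Fibonacci implementations below: the naive recursion is O(n) deep (and O(2ⁿ) in time), while the iterative version runs in O(1) auxiliary space. A minimal sketch:

// O(n) stack depth: the leftmost call chain is n frames deep before returning
function fibRecursive(n: number): number {
  if (n < 2) return n;
  return fibRecursive(n - 1) + fibRecursive(n - 2);
}

// O(1) auxiliary space, O(n) time
function fibIterative(n: number): number {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}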

7. When Big-O lies

Big-O hides constant factors — and constants often dominate at small scale:

  • Cache locality: Linear scan over a contiguous array (O(n)) is often faster than O(log n) tree search at small n, because the array fits in L1/L2 cache.
  • SIMD / vectorization: Modern CPUs process 4–8 elements per cycle for simple operations. An "O(n)" linear scan can be 10× faster than expected.
  • Memory allocation: A "free" O(1) hash insert that triggers a rehash is suddenly O(n). Amortized analysis covers this, but a single request can still spike.

Rule of thumb: Choose data structures by Big-O. Profile before optimizing constants.
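
Profiling can be as simple as timing both candidates on realistic input. A minimal sketch (assumes a runtime where performance.now() is global, e.g. Node 16+; n = 64 is an arbitrary "small" size). At this size the O(n) scan frequently holds its own, which is exactly the point above:

const n = 64;
const sorted = Array.from({ length: n }, (_, i) => i * 2);

// O(n), but contiguous and branch-friendly
function linearScan(a: number[], target: number): boolean {
  for (let i = 0; i < a.length; i++) if (a[i] === target) return true;
  return false;
}

// O(log n), but with less predictable branching
function binarySearch(a: number[], target: number): boolean {
  let lo = 0, hi = a.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (a[mid] === target) return true;
    if (a[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}

function time(label: string, fn: () => number): void {
  const start = performance.now();
  const result = fn(); // keep a live result so the JIT can't discard the work
  console.log(label, (performance.now() - start).toFixed(1), "ms", result);
}

time("linear", () => {
  let hits = 0;
  for (let i = 0; i < 1_000_000; i++) if (linearScan(sorted, (i * 7) % 128)) hits++;
  return hits;
});
time("binary", () => {
  let hits = 0;
  for (let i = 0; i < 1_000_000; i++) if (binarySearch(sorted, (i * 7) % 128)) hits++;
  return hits;
});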

FAQ

Q. Is O(log n) really faster than O(n)?

A. Yes, for any reasonably large n. log₂(1,000,000) ≈ 20 — meaning a binary search on a million-element sorted array does ~20 comparisons vs ~1,000,000 for linear search. The crossover is small (around n=10), so even moderately sized inputs benefit dramatically.

Q. When does Big-O actually matter?

A. When input size can grow past the point where constant factors stop mattering. For n < 100, O(n²) is often faster than O(n log n) thanks to cache locality and smaller constants. But for any production code whose inputs grow over time (web requests, batch jobs, log processing), asymptotic behavior eventually dominates. As a rule: optimize for clarity at small scale and for complexity at large scale.

Q. Why is hash map lookup O(1) average but O(n) worst case?

A. Hash maps rely on a hash function distributing keys uniformly across buckets. In the average case, each bucket holds ~1 item. But adversarial inputs (chosen to collide on the same bucket) can degrade lookup to O(n) — this is why production hash maps often use random seeds (Python 3.3+, Rust's SipHash) to prevent hash-DoS attacks.

Q. What's the difference between Big-O, Big-Theta, and Big-Omega?

A. Big-O (O) is upper bound: 'no worse than'. Big-Omega (Ω) is lower bound: 'no better than'. Big-Theta (Θ) is tight bound: 'exactly this asymptotic'. Most developers use Big-O loosely to mean Theta. Technically O(n) includes O(1), but in practice 'O(n)' implies Θ(n).
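
For reference, the standard definitions behind those phrases: f(n) = O(g(n)) means there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀; Ω flips the inequality to f(n) ≥ c·g(n); and Θ means both bounds hold at once.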

Q. Should I memorize all complexities?

A. No. Memorize the common operations on arrays, hash maps, and binary search trees. Then learn to derive others from first principles: a single loop over n is O(n); nested loops are O(n²); halving the search space is O(log n); divide-and-conquer that processes everything is O(n log n).

About the DevToolNow Editorial Team

DevToolNow's editorial team is made up of working software developers who use these tools every day. Every guide is reviewed against primary sources — IETF RFCs, W3C/WHATWG specifications, MDN Web Docs, and project repositories on GitHub — before publication. We update articles when standards change so the guidance stays current.

Sources we cite: IETF RFCs · MDN Web Docs · WHATWG · ECMAScript spec · Official project READMEs on GitHub