Demystifying Big O Notation: Time and Space Complexity
When you write code, it is easy to assume that if it runs fast on your laptop, it will run fast everywhere. However, an algorithm that processes ten items in a millisecond might take a hundred years to process a million items.
How do we talk about the efficiency of our code without relying on hardware specs or stopwatch measurements? We use Big O Notation.
Big O notation is a mathematical framework that describes how an algorithm’s runtime or memory requirements scale as the size of the input (usually denoted as n) grows towards infinity. Because it describes growth rather than absolute speed, the same analysis holds on any machine.
To truly understand Big O, we need to evaluate two separate dimensions of our code: Time Complexity and Space Complexity.
Time vs. Space Complexity
- Time Complexity: How many operations does the algorithm perform as the input size grows? (We measure steps, not seconds).
- Space Complexity: How much extra memory does the algorithm allocate as the input size grows? (We measure variables, arrays, and call stacks, not megabytes).
A great algorithm balances both, but often in software engineering, you will find yourself trading space for time (e.g., using caching/memoization).
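As a sketch of that space-for-time trade, the hypothetical `memoize` helper below (the function names are illustrative, not from any particular library) caches results in a Map, spending extra memory so repeated calls skip the recomputation:

```typescript
// Trading space for time: cache results of an expensive pure function.
function memoize(fn: (n: number) => number): (n: number) => number {
  const cache = new Map<number, number>(); // Space grows with distinct inputs
  return (n: number): number => {
    if (cache.has(n)) return cache.get(n)!; // O(1) average-case lookup
    const result = fn(n); // pay the expensive cost only once per input
    cache.set(n, result);
    return result;
  };
}

// Stand-in for any expensive pure computation.
const slowSquare = (n: number): number => n * n;
const fastSquare = memoize(slowSquare);

fastSquare(4); // computed, then cached
fastSquare(4); // served from the cache
```

The cache never evicts entries, so memory grows with every distinct input; real caches usually bound this with an eviction policy.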
Let’s break down the most common complexities you will encounter, starting with the simplest.
O(1) - Constant Time / Space
An algorithm is O(1) if its execution time or memory usage does not change, regardless of how massive the input becomes.
O(1) Time Example
Accessing an element in an array by its index takes the same amount of time whether the array has 10 items or 10 billion items.
function getFirstItem<T>(items: T[]): T | undefined {
// Time: O(1) - Just one operation regardless of items.length
// Space: O(1) - No extra variables are created that scale with input
return items[0];
}
O(N) - Linear Time / Space
An algorithm is O(N) if the time or space it requires grows directly in proportion to the input size. If the input doubles, the cost doubles.
O(N) Time Example
Searching for a specific value in an unsorted array requires checking every single item in the worst-case scenario.
function findItem(items: string[], target: string): boolean {
// Time: O(N) - In the worst case, we check every single item once
// Space: O(1) - We only allocate a few loop variables, no scaling memory
for (let i = 0; i < items.length; i++) {
if (items[i] === target) {
return true;
}
}
return false;
}
O(N) Space Example
If we need to create a copy of an array, the memory required scales exactly with the size of the original array.
function cloneArray<T>(items: T[]): T[] {
// Time: O(N) - We iterate over N items
// Space: O(N) - We create a new array holding exactly N items
const clone: T[] = [];
for (const item of items) {
clone.push(item);
}
return clone;
}
O(N²) - Quadratic Time / Space
When you have a loop inside of another loop, and both iterate over the input, you usually have O(N²) complexity. If your input size is 10, it takes 100 operations. If your input size is 1,000, it takes 1,000,000 operations. This scales poorly and should be avoided for large datasets.
O(N²) Time Example
Finding all possible pairs in an array.
function printAllPairs(items: string[]): void {
// Time: O(N²) - For every item, we loop through all items again
// Space: O(1) - We just print to the console, no extra data structures
for (let i = 0; i < items.length; i++) {
for (let j = 0; j < items.length; j++) {
console.log(`Pair: ${items[i]}, ${items[j]}`);
}
}
}
O(N²) Space Example
Creating a 2D grid (matrix) based on a single input size.
function createMatrix(size: number): number[][] {
// Time: O(N²) - Nested loops creating the grid
// Space: O(N²) - The resulting matrix holds N * N elements
const matrix: number[][] = [];
for (let i = 0; i < size; i++) {
const row: number[] = [];
for (let j = 0; j < size; j++) {
row.push(0);
}
matrix.push(row);
}
return matrix;
}
O(log N) - Logarithmic Time
Logarithmic complexity is incredibly efficient. It typically occurs when an algorithm cuts the remaining workload in half with each step.
The classic example is Binary Search. Imagine looking up a name in a phone book. You don’t read page by page (O(N)). You open the middle, see if the name comes before or after, and discard the other half.
function binarySearch(sortedItems: number[], target: number): number {
let left = 0;
let right = sortedItems.length - 1;
// Time: O(log N) - We halve the search space on each iteration
// Space: O(1) - Only pointers (left, right, mid) are kept in memory
while (left <= right) {
const mid = Math.floor((left + right) / 2);
if (sortedItems[mid] === target) return mid;
if (sortedItems[mid] < target) {
left = mid + 1;
} else {
right = mid - 1;
}
}
return -1;
}
If an array has 1,000,000 items, binary search takes at most ~20 steps to find the target!
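You can sanity-check that figure directly: the number of halvings needed to shrink N candidates down to one is ⌈log₂ N⌉.

```typescript
// Worst-case number of halvings binary search needs for N items.
const maxSteps = (n: number): number => Math.ceil(Math.log2(n));

console.log(maxSteps(1_000_000));     // 20 — a million items
console.log(maxSteps(1_000_000_000)); // 30 — a billion items
```

Note how adding a thousand times more data only adds about ten extra steps.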
O(N log N) - Linearithmic Time
You will see O(N log N) frequently, as it is the standard time complexity for the most efficient general-purpose sorting algorithms, like Merge Sort and Quick Sort.
It involves performing a logarithmic operation (log N) for every item (N) in the dataset. Under the hood, JavaScript’s Array.prototype.sort() (which your compiled TypeScript calls) is typically implemented with an O(N log N) algorithm such as Timsort in V8.
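To see where the N and the log N each come from, here is a minimal Merge Sort sketch (illustrative, not an engine’s actual implementation): the recursion halves the array log N times, and each level of recursion does O(N) total merging work.

```typescript
// Merge Sort sketch: O(N log N) time, O(N) extra space for the merged arrays.
function mergeSort(items: number[]): number[] {
  if (items.length <= 1) return items; // base case: already sorted
  const mid = Math.floor(items.length / 2);
  const left = mergeSort(items.slice(0, mid));  // halving gives the log N levels...
  const right = mergeSort(items.slice(mid));
  // ...and merging does O(N) work across each level.
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```

Multiplying the log N levels by the O(N) work per level yields the O(N log N) total.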
The Golden Rules of Big O
When calculating Big O for your own functions, remember these two fundamental rules:
1. Drop the Constants
Big O describes the rate of growth, not the exact number of steps.
function printTwice(items: string[]): void {
for (const item of items) console.log(item); // O(N)
for (const item of items) console.log(item); // O(N)
}
Technically, this is O(2N). But as N approaches infinity, the 2 becomes irrelevant. We simplify this and say the algorithm is O(N).
2. Drop Non-Dominant Terms
If an algorithm consists of multiple steps with different complexities, the highest complexity dictates the overall Big O.
function printAllNumbersThenAllPairs(numbers: number[]): void {
// Step 1: O(N)
for (const num of numbers) {
console.log(num);
}
// Step 2: O(N²)
for (let i = 0; i < numbers.length; i++) {
for (let j = 0; j < numbers.length; j++) {
console.log(numbers[i] + numbers[j]);
}
}
}
The total time complexity is O(N + N²). However, as N grows drastically large, the N² term will absolutely dominate the execution time. Therefore, we drop the N and confidently call this algorithm O(N²).
Conclusion
Understanding Big O notation shifts your mindset from “Does it work?” to “How well does it scale?”.
- When writing loops, keep an eye out for nested iterations (O(N²)).
- When building large arrays or objects, be mindful of your memory footprint (Space Complexity).
- Whenever possible, try to leverage Hash Maps (Map or Set in TypeScript) to turn O(N) search operations into O(1) lookups.
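For example, converting an array to a Set pays an O(N) build cost once so that every later membership check is an O(1) average-case lookup instead of a full O(N) scan (the user names here are hypothetical sample data):

```typescript
const allowedUsers = ["ada", "alan", "grace"];

// O(N) per call: includes() scans the array from the start.
allowedUsers.includes("grace");

// O(1) average per call: build the Set once (O(N) time and space)...
const allowedSet = new Set(allowedUsers);
// ...then every lookup is a constant-time hash check.
allowedSet.has("grace");
```

This is the space-for-time trade-off again: the Set costs O(N) extra memory, and it pays off as soon as you perform more than a handful of lookups.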
Performance isn’t about premature optimization; it’s about making informed architectural decisions before you write your first line of code.