Big-O Notation

IB Syllabus: B2.4.1 – Describe efficiency by calculating Big O to analyse scalability; time and space complexities; algorithm choice based on scalability and efficiency.

Table of Contents

  1. Key Concepts
    1. What is Big-O Notation?
    2. Common Complexity Classes
    3. Growth Rate Comparison
    4. Time Complexity
    5. Space Complexity
    6. Calculating Big-O from Code
    7. Choosing the Right Algorithm
  2. Worked Examples
    1. Example 1: Identify the Big-O
    2. Example 2: Compare Algorithms
  3. Quick Check
  4. Trace Exercise
  5. Spot the Error
  6. Fill in the Blanks
  7. Predict the Output
  8. Practice Exercises
    1. Core
    2. Extension
    3. Challenge
  9. Connections

Key Concepts

What is Big-O Notation?

When comparing algorithms, we need a way to describe how their performance changes as the input size grows. Big-O notation provides this – it describes the worst-case growth rate of an algorithm’s time or space requirements as a function of the input size n.

Big-O answers the question: “If I double the input size, how much longer will this algorithm take?”

Big-O describes the growth rate, not the exact number of operations. An O(n) algorithm might take 3n + 5 steps, but we drop the constants and lower-order terms because they become insignificant as n grows large. What matters is the shape of the growth curve.
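To see why the constants and lower-order terms can be dropped, consider a small sketch (the `StepCount` class and the exact 3n + 5 step count are illustrative, not from any real library): doubling n roughly doubles the total steps, regardless of the + 5 or the factor of 3.

```java
// Sketch: an O(n) routine that performs exactly 3n + 5 primitive steps.
// The formula 3n + 5 is a hypothetical example; only the linear shape matters.
public class StepCount {
    // Returns the number of "steps" for input size n.
    static long steps(long n) {
        return 3 * n + 5;
    }

    public static void main(String[] args) {
        System.out.println(steps(1_000));   // 3005
        System.out.println(steps(2_000));   // 6005 -- roughly double
        // As n grows, the ratio steps(2n)/steps(n) approaches exactly 2,
        // which is why Big-O keeps only the dominant term: O(n).
    }
}
```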

Common Complexity Classes

| Big-O      | Name         | Growth                               | Example                                                                  |
|------------|--------------|--------------------------------------|--------------------------------------------------------------------------|
| O(1)       | Constant     | Same time regardless of input size   | Accessing an array element by index: arr[5]                              |
| O(log n)   | Logarithmic  | Double the input, add one step       | Binary search – halves the data each step                                |
| O(n)       | Linear       | Double the input, double the time    | Linear search – checks every element                                     |
| O(n log n) | Linearithmic | Slightly worse than linear           | Efficient sorting algorithms (e.g., merge sort, quicksort average case)  |
| O(n²)      | Quadratic    | Double the input, quadruple the time | Bubble sort, selection sort – nested loops over the data                 |

Growth Rate Comparison

To understand why Big-O matters, consider how these complexity classes scale as input size increases:

| n       | O(1) | O(log n) | O(n)    | O(n log n) | O(n²)          |
|---------|------|----------|---------|------------|----------------|
| 10      | 1    | 3        | 10      | 33         | 100            |
| 100     | 1    | 7        | 100     | 664        | 10,000         |
| 1,000   | 1    | 10       | 1,000   | 9,966      | 1,000,000      |
| 10,000  | 1    | 13       | 10,000  | 132,877    | 100,000,000    |
| 100,000 | 1    | 17       | 100,000 | 1,660,964  | 10,000,000,000 |

At n = 100,000, an O(n²) algorithm performs 10 billion operations while an O(n log n) algorithm performs only about 1.7 million – roughly 6,000 times faster. This difference determines whether a program finishes in milliseconds or takes hours.

A common exam mistake is thinking O(n²) is “just a bit worse” than O(n). The table above shows the reality: quadratic growth becomes catastrophically slow for large inputs. This is why algorithm choice matters – selecting the right algorithm for the data size is a fundamental skill in computer science.
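The table above can be regenerated directly (the `Growth` class name is illustrative). Java's `Math.log` is natural log, so dividing by `Math.log(2)` gives log base 2; values are rounded to the nearest integer, matching the table.

```java
// Sketch: reproduce the growth-rate table by evaluating each
// complexity class at several input sizes.
public class Growth {
    public static void main(String[] args) {
        long[] sizes = {10, 100, 1_000, 10_000, 100_000};
        System.out.printf("%8s %9s %8s %11s %15s%n",
                "n", "O(log n)", "O(n)", "O(n log n)", "O(n^2)");
        for (long n : sizes) {
            double log2 = Math.log(n) / Math.log(2);  // log base 2 of n
            System.out.printf("%8d %9d %8d %11d %15d%n",
                    n, Math.round(log2), n, Math.round(n * log2), n * n);
        }
    }
}
```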

Time Complexity

Time complexity measures how the number of operations an algorithm performs grows with input size. It answers: “How much longer does this take as the data gets bigger?”

You have already seen time complexity in action:

  • Linear search checks each element one by one. In the worst case (target not found or at the end), it checks all n elements. Time complexity: O(n).
  • Binary search eliminates half the remaining elements with each comparison. The maximum number of comparisons for n elements is log₂(n). Time complexity: O(log n).
  • Bubble sort uses a nested loop – the outer loop runs n-1 times, and the inner loop runs up to n-1 times for each pass. Time complexity: O(n²).
  • Selection sort also uses a nested loop – for each of the n-1 positions, it scans the remaining unsorted elements to find the minimum. Time complexity: O(n²).
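The quadratic behaviour of the nested-loop sorts can be confirmed by instrumenting the code to count comparisons. The sketch below (class and method names are illustrative) shows bubble sort without the swapped-flag optimisation: it always makes n(n-1)/2 comparisons, which is O(n²).

```java
// Sketch: count the comparisons bubble sort performs on an array of length n.
// Without the swapped-flag optimisation it always makes n(n-1)/2 comparisons.
public class BubbleCount {
    static long sortAndCount(int[] arr) {
        long comparisons = 0;
        for (int i = 0; i < arr.length - 1; i++) {          // n-1 passes
            for (int j = 0; j < arr.length - 1 - i; j++) {  // shrinking inner loop
                comparisons++;
                if (arr[j] > arr[j + 1]) {                  // swap out-of-order pair
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] data = new int[n];
        for (int i = 0; i < n; i++) data[i] = n - i;  // worst case: reverse order
        System.out.println(sortAndCount(data));        // 100 * 99 / 2 = 4950
    }
}
```

Doubling n to 200 would give 200 × 199 / 2 = 19,900 comparisons – roughly four times as many, the signature of quadratic growth.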

Space Complexity

Space complexity measures how much additional memory an algorithm requires beyond the input data itself. It answers: “How much extra memory does this need?”

  • An algorithm that uses only a few extra variables (like temp, i, j) regardless of input size has O(1) space complexity – constant extra space.
  • An algorithm that creates a copy of the entire input array has O(n) space complexity – the extra memory grows linearly with input size.
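The difference is easiest to see side by side. This sketch (names are illustrative) reverses an array two ways: in place with a few variables, O(1) space, and by building a full copy, O(n) space.

```java
import java.util.Arrays;

// Sketch: two ways to reverse an array, illustrating O(1) vs O(n) extra space.
public class ReverseSpace {
    // O(1) space: swaps elements in place using only i, j, and temp.
    static void reverseInPlace(int[] arr) {
        for (int i = 0, j = arr.length - 1; i < j; i++, j--) {
            int temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }

    // O(n) space: allocates a whole new array the size of the input.
    static int[] reversedCopy(int[] arr) {
        int[] copy = new int[arr.length];
        for (int i = 0; i < arr.length; i++) {
            copy[i] = arr[arr.length - 1 - i];
        }
        return copy;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        reverseInPlace(a);
        System.out.println(Arrays.toString(a));  // [4, 3, 2, 1]
        System.out.println(Arrays.toString(reversedCopy(new int[]{1, 2, 3})));  // [3, 2, 1]
    }
}
```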

Both bubble sort and selection sort sort in place – they rearrange the original array using only a few extra variables. Their space complexity is O(1).

Binary search (iterative version) also uses O(1) extra space – just the low, mid, and high variables.

When an exam asks you to “evaluate” an algorithm, discuss both time and space complexity. An algorithm might be fast (good time complexity) but require a lot of extra memory (poor space complexity), or vice versa. The best choice depends on the constraints of the situation.

Calculating Big-O from Code

To determine the Big-O of a piece of code, look at the loop structure:

Single loop over n elements = O(n)

for (int i = 0; i < n; i++) {
    // constant-time operation
}

Nested loops, each over n elements = O(n²)

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // constant-time operation
    }
}

Loop that halves the data each iteration = O(log n)

int low = 0, high = n - 1;
boolean found = false;
while (low <= high && !found) {
    int mid = (low + high) / 2;
    // compare and adjust low or high
}

No loops (or loops that run a fixed number of times) = O(1)

int first = arr[0];  // constant time
int last = arr[arr.length - 1];  // constant time

When code contains sequential operations (one loop followed by another), add the complexities: O(n) + O(n) = O(n). When code contains nested operations (one loop inside another), multiply: O(n) * O(n) = O(n²). When adding, the higher-order term dominates: O(n²) + O(n) = O(n²).
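The add-vs-multiply rules can be verified by counting how many times each loop body actually executes (a minimal sketch; the class name is illustrative):

```java
// Sketch: verify the add-vs-multiply rules by counting loop-body executions.
public class LoopRules {
    public static void main(String[] args) {
        int n = 1_000;

        // Sequential loops: n + n = 2n executions -- still O(n).
        long sequential = 0;
        for (int i = 0; i < n; i++) sequential++;
        for (int j = 0; j < n; j++) sequential++;
        System.out.println(sequential);  // 2000

        // Nested loops: n * n executions -- O(n^2).
        long nested = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) nested++;
        }
        System.out.println(nested);      // 1000000
    }
}
```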

Choosing the Right Algorithm

Algorithm choice depends on the data and the situation. Consider these factors:

| Factor                    | Favours faster algorithm                                           | Favours simpler algorithm                       |
|---------------------------|--------------------------------------------------------------------|-------------------------------------------------|
| Data size                 | Large datasets need efficient algorithms                           | Small datasets run fast regardless              |
| Data order                | Pre-sorted or nearly-sorted data benefits from adaptive algorithms | Random data shows less difference               |
| Memory constraints        | In-place algorithms (O(1) space) are better when memory is limited | Extra memory is acceptable if available         |
| Implementation complexity | More complex algorithms may have bugs                              | Simple algorithms are easier to write and debug |

Example decision: You need to search a sorted array of 1 million student records for a specific ID.

  • Linear search: O(n) = up to 1,000,000 comparisons
  • Binary search: O(log n) = at most 20 comparisons

Binary search is clearly the right choice here – it is roughly 50,000 times faster. However, binary search requires the array to be sorted. If the data is unsorted and you only need to search once, sorting first (O(n²) with bubble sort, or O(n log n) with an efficient sort) plus one binary search might be slower than a single linear search.
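This trade-off can be estimated numerically. The sketch below (class name illustrative, counts order-of-magnitude estimates rather than exact instruction counts) compares "linear search every time" against "sort once with an O(n log n) sort, then binary search" for n items searched k times.

```java
// Sketch: rough operation counts for repeated searching of n items, k times.
// These are order-of-magnitude estimates, not exact instruction counts.
public class SearchTradeoff {
    public static void main(String[] args) {
        long n = 1_000_000;
        double log2n = Math.log(n) / Math.log(2);  // ~20 for one million items

        // A single search of unsorted data: linear search wins,
        // because sorting first costs far more than one linear scan.
        System.out.println("one linear search: " + n);
        System.out.println("sort + one binary: " + Math.round(n * log2n + log2n));

        // Many searches amortise the one-off sorting cost.
        long k = 10_000;
        System.out.println(k + " linear searches: " + (k * n));  // 10,000,000,000
        System.out.println("sort + " + k + " binary: " + Math.round(n * log2n + k * log2n));
    }
}
```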


Worked Examples

Example 1: Identify the Big-O

For each code snippet, determine the time complexity.

Snippet A:

int sum = 0;
for (int i = 0; i < arr.length; i++) {
    sum = sum + arr[i];
}

Answer: O(n) – a single loop that visits each element once. The number of operations grows linearly with the array length.

Snippet B:

boolean hasDuplicate = false;
for (int i = 0; i < arr.length && !hasDuplicate; i++) {
    for (int j = i + 1; j < arr.length && !hasDuplicate; j++) {
        if (arr[i] == arr[j]) {
            hasDuplicate = true;
        }
    }
}

Answer: O(n²) – nested loops where the outer loop runs n times and the inner loop runs up to n-1 times. Even though the inner loop starts at i + 1 (reducing the total comparisons to roughly n²/2), we drop the constant factor in Big-O notation.

Snippet C:

System.out.println(arr[0]);
System.out.println(arr[arr.length - 1]);

Answer: O(1) – accessing specific array indices is a constant-time operation regardless of array size. Whether the array has 10 elements or 10 million, this code performs the same number of operations.


Example 2: Compare Algorithms

A school stores 50,000 student records in an array sorted by student ID. The system needs to find a specific student’s record.

| Algorithm     | Time Complexity | Max Operations | Suitable?                 |
|---------------|-----------------|----------------|---------------------------|
| Linear search | O(n)            | 50,000         | Slow for frequent lookups |
| Binary search | O(log n)        | ~16            | Fast and efficient        |

Analysis: Binary search requires at most ⌈log₂(50,000)⌉ = 16 comparisons (log₂(50,000) ≈ 15.6). Linear search could require up to 50,000 comparisons. For a system that performs many lookups per second, binary search is the clear choice.

Space complexity: Both algorithms use O(1) extra space – just a few variables for indices and the target value. Neither requires additional memory proportional to the input size.

Trade-off: The array must remain sorted for binary search to work. If new students are added frequently, the cost of maintaining a sorted order should be considered alongside the search performance gain.


Quick Check

Q1. What does O(n²) time complexity mean?

Q2. Which algorithm has O(log n) time complexity?

Q3. What does O(1) space complexity mean?

Q4. A piece of code has two nested loops, each iterating over all n elements of an array. What is its time complexity?

Q5. An O(n) algorithm and an O(n²) algorithm both process 10,000 items. Approximately how many times slower is the O(n²) algorithm?


Trace Exercise

For each code snippet, count the number of operations (comparisons or assignments inside the loop body) and identify the Big-O complexity class.

Trace: Operation Counting

| Code Pattern                                     | n    | Operations | Big-O |
|--------------------------------------------------|------|------------|-------|
| Single loop: for i = 0 to n-1                    | 8    |            |       |
| Nested loop: for i, for j = 0 to n-1             | 4    |            |       |
| Halving loop: while n > 1, n = n/2               | 16   |            |       |
| Array index access: arr[0]                       | 1000 |            |       |
| Two sequential loops: for i; for j (not nested)  | 5    |            |       |

For the halving loop with n=16: 16 -> 8 -> 4 -> 2 -> 1, which is 4 iterations. log₂(16) = 4.


Spot the Error

A student is analysing the time complexity of the following pseudocode, but their analysis contains an error. Identify the line with the error and state the correct analysis.

  1. Loop 1: for i = 0 to n-1 → O(n)
  2. Loop 2: for j = 0 to n-1 → O(n)
  3. Loops are sequential, so total = O(n) * O(n) = O(n²)
  4. Space complexity: O(1) – only uses loop variables



Fill in the Blanks

Complete the Big-O complexity summary:

Algorithm Complexities
======================
Linear search:    Time = O(____)    Space = O(____)
Binary search:    Time = O(____)    Space = O(____)
Bubble sort:      Time = O(____)    Space = O(____)
Selection sort:   Time = O(____)    Space = O(____)

Key Rules
=========
Nested loops:     O(n) inside O(n) = O(____)
Sequential loops: O(n) then O(n) = O(____)

Predict the Output

This code counts how many times the inner operation executes. What value does count hold after both loops finish?

int n = 4;
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        count++;
    }
}
System.out.println(count);

What is printed?

This code counts how many times a value can be halved before reaching 1. What does it print?

int n = 32;
int count = 0;
while (n > 1) {
    n = n / 2;
    count++;
}
System.out.println(count);

What is printed?


Practice Exercises

Core

  1. Classify the complexity – For each scenario, state whether the time complexity is O(1), O(log n), O(n), or O(n²):
    • (a) Looking up a student’s grade by their index in an array
    • (b) Checking every email in a mailbox for spam
    • (c) Finding a word in a sorted dictionary by opening to the middle and narrowing down
    • (d) Comparing every student’s test score with every other student’s score
  2. Growth rate table – Create a table showing the number of operations for O(n), O(n log n), and O(n²) when n = 10, 100, 1,000, and 10,000. At what input size does the difference between O(n) and O(n²) become significant?

  3. Algorithm comparison – A library has 100,000 books sorted alphabetically. Compare using linear search versus binary search to find a specific book. State the maximum number of comparisons each algorithm would need and recommend which to use.

Extension

  1. Analyse this code – Determine the time and space complexity of the following pseudocode. Justify your answer by explaining what each loop does.
    for i = 0 to n-1
        for j = i+1 to n-1
            if arr[i] > arr[j]
                swap arr[i] and arr[j]
    
  2. Trade-off analysis – A dataset of 500 items needs to be searched 10,000 times. Compare two approaches: (a) search the unsorted data with linear search each time, (b) sort the data first with selection sort, then use binary search for each lookup. Calculate the approximate total operations for each approach and recommend the better option.

Challenge

  1. Algorithm design evaluation – A social media platform stores 10 million user profiles. Design a system that allows searching by username. Evaluate the trade-offs between keeping profiles sorted (for binary search) versus unsorted (for linear search), considering that new profiles are added frequently and searches happen constantly.

  2. Best, worst, and average case – Explain why bubble sort has a best-case time complexity of O(n) when using the swapped flag optimisation, but O(n²) in the worst case. Describe what kind of input data produces each case. Why does selection sort always take O(n²) regardless of the input data?


Connections

  • Prerequisites: Searching Algorithms – linear search O(n), binary search O(log n) are concrete examples of complexity classes
  • Prerequisites: Sorting Algorithms – bubble sort O(n²) and selection sort O(n²) demonstrate quadratic growth
  • Related: 1D Arrays – arrays are the data structure most algorithms operate on; array access is O(1)
  • Forward: Recursion (HL) – recursive algorithms have their own complexity analysis patterns (e.g., quicksort O(n log n) average case)


© EduCS.me — A resource hub for IB Computer Science
