Algorithm Complexity and Big O Notation
What is Algorithm Complexity?
Algorithm complexity measures how the performance of an algorithm changes as the input size grows. It helps us understand the efficiency and scalability of different algorithms.
Why Analyze Complexity?
- Compare algorithm efficiency
- Predict performance on large inputs
- Choose the best algorithm for a problem
- Optimize code performance
- Understand scalability limits
Big O Notation
Big O notation describes the upper bound of an algorithm's growth rate. It focuses on the worst-case scenario and ignores constant factors and lower-order terms.
Common Complexity Classes
| Notation | Name | Description | Example |
|---|---|---|---|
| O(1) | Constant | Time doesn't change with input size | Array access |
| O(log n) | Logarithmic | Time grows slowly with input size | Binary search |
| O(n) | Linear | Time grows proportionally with input | Linear search |
| O(n log n) | Linearithmic | Common in efficient sorting | Merge sort |
| O(n²) | Quadratic | Time grows with the square of input | Bubble sort |
| O(2ⁿ) | Exponential | Time grows exponentially | Recursive Fibonacci |
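To make these growth rates concrete, consider an input of size n = 16. The sketch below displays the idealized step count each class predicts; these are counts in the abstract model, not real timings.

```
# Idealized step counts for n = 16
DISPLAY("O(1): 1 step")
DISPLAY("O(log n): 4 steps")       # 2 * 2 * 2 * 2 = 16
DISPLAY("O(n): 16 steps")
DISPLAY("O(n log n): 64 steps")    # 16 * 4
DISPLAY("O(n^2): 256 steps")       # 16 * 16
DISPLAY("O(2^n): 65536 steps")     # 2 multiplied by itself 16 times
```

Even at this tiny size, the quadratic and exponential classes are already orders of magnitude more expensive than the others.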
O(1) - Constant Time
Operations that take the same time regardless of input size. These are the most efficient operations.
Array Access
Accessing an element in an array by index is a constant time operation.
# O(1) - Constant time array access
numbers <-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# These operations take the same time regardless of array size
first_element <-- numbers[1]
last_element <-- numbers[LENGTH(numbers)]
DISPLAY("First element: ")
DISPLAY(first_element)
DISPLAY("Last element: ")
DISPLAY(last_element)
Simple Arithmetic
Basic arithmetic operations are constant time.
# O(1) - Constant time arithmetic
PROCEDURE CHECK_IF_EVEN(number) {
# A single modulo comparison - constant time
IF (number % 2 == 0) {
RETURN "Even"
} ELSE {
RETURN "Odd"
}
}
# Test with different numbers
result1 <-- CHECK_IF_EVEN(42)
result2 <-- CHECK_IF_EVEN(17)
DISPLAY("42 is ")
DISPLAY(result1)
DISPLAY("17 is ")
DISPLAY(result2)
O(n) - Linear Time
Operations where time grows linearly with input size. In the worst case, the algorithm must examine each element once.
Linear Search
Searching through an array by checking each element is linear time.
# O(n) - Linear search
PROCEDURE LINEAR_SEARCH(array, target) {
# Must check each element until found
i <-- 1
REPEAT LENGTH(array) TIMES {
IF (array[i] == target) {
RETURN "Found at position " + i
}
i <-- i + 1
}
RETURN "Not found"
}
# Test linear search
numbers <-- [10, 20, 30, 40, 50]
search_result <-- LINEAR_SEARCH(numbers, 30)
DISPLAY(search_result)
Array Sum
Calculating the sum of all elements in an array requires visiting each element once.
# O(n) - Sum all elements in array
PROCEDURE SUM_ARRAY(array) {
total <-- 0
i <-- 1
REPEAT LENGTH(array) TIMES {
total <-- total + array[i]
i <-- i + 1
}
RETURN total
}
# Test array sum
numbers <-- [1, 2, 3, 4, 5]
sum_result <-- SUM_ARRAY(numbers)
DISPLAY("Sum of array: ")
DISPLAY(sum_result)
Finding Maximum
Finding the maximum value in an array requires checking each element.
# O(n) - Find maximum value
PROCEDURE FIND_MAX(array) {
max_value <-- array[1]
i <-- 2
REPEAT (LENGTH(array) - 1) TIMES {
IF (array[i] > max_value) {
max_value <-- array[i]
}
i <-- i + 1
}
RETURN max_value
}
# Test finding maximum
numbers <-- [5, 12, 3, 8, 20, 1]
maximum <-- FIND_MAX(numbers)
DISPLAY("Maximum value: ")
DISPLAY(maximum)
O(n²) - Quadratic Time
Operations where time grows with the square of input size. These often involve nested loops.
Bubble Sort
Bubble sort uses nested loops to compare and swap adjacent elements.
# O(n²) - Bubble sort
PROCEDURE BUBBLE_SORT(array) {
n <-- LENGTH(array)
i <-- 1
REPEAT n TIMES {
j <-- 1
REPEAT (n - 1) TIMES {
IF (array[j] > array[j + 1]) {
# Swap elements
temp <-- array[j]
array[j] <-- array[j + 1]
array[j + 1] <-- temp
}
j <-- j + 1
}
i <-- i + 1
}
RETURN array
}
# Test bubble sort
numbers <-- [5, 2, 8, 1, 9]
sorted <-- BUBBLE_SORT(numbers)
DISPLAY("Sorted array: ")
DISPLAY(sorted)
Finding Duplicates
Checking for duplicates using nested loops creates quadratic complexity.
# O(n²) - Find duplicates
PROCEDURE FIND_DUPLICATES(array) {
duplicates <-- []
i <-- 1
REPEAT LENGTH(array) TIMES {
j <-- i + 1
REPEAT (LENGTH(array) - i) TIMES {
IF (array[i] == array[j]) {
APPEND(duplicates, array[i])
BREAK
}
j <-- j + 1
}
i <-- i + 1
}
RETURN duplicates
}
# Test finding duplicates
numbers <-- [1, 2, 3, 2, 4, 5, 1]
duplicates <-- FIND_DUPLICATES(numbers)
DISPLAY("Duplicates: ")
DISPLAY(duplicates)
O(log n) - Logarithmic Time
Operations where time grows logarithmically with input size. These algorithms typically halve the remaining work at each step.
Binary Search
Binary search works on sorted arrays by dividing the search space in half each time.
# O(log n) - Binary search
PROCEDURE BINARY_SEARCH(array, target) {
left <-- 1
right <-- LENGTH(array)
REPEAT UNTIL (left > right) {
mid <-- FLOOR((left + right) / 2)
IF (array[mid] == target) {
RETURN "Found at position " + mid
} ELSE IF (array[mid] < target) {
left <-- mid + 1
} ELSE {
right <-- mid - 1
}
}
RETURN "Not found"
}
# Test binary search
numbers <-- [1, 3, 5, 7, 9, 11, 13, 15]
search_result <-- BINARY_SEARCH(numbers, 7)
DISPLAY(search_result)
Comparing Complexities
Let's compare how different algorithms perform on the same problem.
Linear vs Binary Search
Compare the number of steps needed for linear and binary search.
# Compare linear vs binary search
PROCEDURE LINEAR_SEARCH_STEPS(array, target) {
steps <-- 0
i <-- 1
REPEAT LENGTH(array) TIMES {
steps <-- steps + 1
IF (array[i] == target) {
RETURN steps
}
i <-- i + 1
}
RETURN steps
}
PROCEDURE BINARY_SEARCH_STEPS(array, target) {
steps <-- 0
left <-- 1
right <-- LENGTH(array)
REPEAT UNTIL (left > right) {
steps <-- steps + 1
mid <-- FLOOR((left + right) / 2)
IF (array[mid] == target) {
RETURN steps
} ELSE IF (array[mid] < target) {
left <-- mid + 1
} ELSE {
right <-- mid - 1
}
}
RETURN steps
}
# Test with sorted array
numbers <-- [1, 2, 3, 4, 5, 6, 7, 8]
target <-- 7
linear_steps <-- LINEAR_SEARCH_STEPS(numbers, target)
binary_steps <-- BINARY_SEARCH_STEPS(numbers, target)
DISPLAY("Linear search took ")
DISPLAY(linear_steps)
DISPLAY(" steps")
DISPLAY("Binary search took ")
DISPLAY(binary_steps)
DISPLAY(" steps")
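The gap widens quickly as input size grows, because the number of halvings behind binary search can be counted directly. A minimal sketch, assuming FLOOR rounds down to the nearest whole number:

```
# Count how many halvings reduce n to 1 - roughly log2 of n
PROCEDURE COUNT_HALVINGS(n) {
steps <-- 0
REPEAT UNTIL (n <= 1) {
n <-- FLOOR(n / 2)
steps <-- steps + 1
}
RETURN steps
}
# For an array of 1000000 elements, linear search may need
# 1000000 steps, while binary search needs only about 20
DISPLAY(COUNT_HALVINGS(1000000))
```

This is why binary search scales so well: doubling the array size adds only one extra step.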
Space Complexity
Space complexity measures how much additional memory an algorithm uses beyond the input.
Constant Space
Algorithms that use a fixed amount of extra space regardless of input size.
# O(1) space complexity
PROCEDURE FIND_MAX_MIN(array) {
# Uses only a few variables regardless of array size
max_val <-- array[1]
min_val <-- array[1]
i <-- 2
REPEAT (LENGTH(array) - 1) TIMES {
IF (array[i] > max_val) {
max_val <-- array[i]
}
IF (array[i] < min_val) {
min_val <-- array[i]
}
i <-- i + 1
}
DISPLAY("Maximum: ")
DISPLAY(max_val)
DISPLAY("Minimum: ")
DISPLAY(min_val)
}
# Test constant space algorithm
numbers <-- [5, 12, 3, 8, 20, 1]
FIND_MAX_MIN(numbers)
Linear Space
Algorithms that use space proportional to the input size.
# O(n) space complexity
PROCEDURE REVERSE_ARRAY(array) {
# Creates a new array of same size
reversed <-- []
i <-- LENGTH(array)
REPEAT LENGTH(array) TIMES {
APPEND(reversed, array[i])
i <-- i - 1
}
RETURN reversed
}
# Test linear space algorithm
original <-- [1, 2, 3, 4, 5]
reversed <-- REVERSE_ARRAY(original)
DISPLAY("Original: ")
DISPLAY(original)
DISPLAY("Reversed: ")
DISPLAY(reversed)
Practice Exercises
Practice analyzing algorithm complexity with these exercises.
Exercise 1: Analyze Complexity
Determine the time complexity of this algorithm.
# What is the time complexity of this algorithm?
PROCEDURE COUNT_PAIRS(array) {
count <-- 0
i <-- 1
REPEAT LENGTH(array) TIMES {
j <-- i + 1
REPEAT (LENGTH(array) - i) TIMES {
IF (array[i] == array[j]) {
count <-- count + 1
}
j <-- j + 1
}
i <-- i + 1
}
RETURN count
}
# Test the algorithm
numbers <-- [1, 2, 1, 3, 2, 1]
pairs <-- COUNT_PAIRS(numbers)
DISPLAY("Number of pairs: ")
DISPLAY(pairs)
DISPLAY("Time complexity: O(n²) - nested loops")
Exercise 2: Optimize Algorithm
Convert a quadratic algorithm to linear time. A truly linear solution requires a hash table with O(1) lookups; the version below imitates the idea with a "seen" list, so its worst case is still O(n²), but it returns as soon as the first duplicate appears.
# Early-exit duplicate check (O(n) with a hash set; O(n²) with a list)
PROCEDURE FIND_DUPLICATE_OPTIMIZED(array) {
# Track elements already seen
seen <-- []
i <-- 1
REPEAT LENGTH(array) TIMES {
# Check if element already seen
found <-- 0
j <-- 1
REPEAT LENGTH(seen) TIMES {
IF (seen[j] == array[i]) {
found <-- 1
BREAK
}
j <-- j + 1
}
IF (found == 1) {
RETURN "Duplicate found: " + array[i]
}
APPEND(seen, array[i])
i <-- i + 1
}
RETURN "No duplicates"
}
# Test optimized algorithm
numbers <-- [1, 2, 3, 4, 2, 5]
result <-- FIND_DUPLICATE_OPTIMIZED(numbers)
DISPLAY(result)
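For contrast, here is what a genuinely O(n) duplicate check could look like if a hash set were available. SET_CREATE, SET_CONTAINS, and SET_ADD are hypothetical helpers assumed to run in O(1) average time; they are not part of the pseudocode used elsewhere in this lesson.

```
# O(n) average time - assumes constant-time set operations
PROCEDURE FIND_DUPLICATE_LINEAR(array) {
seen <-- SET_CREATE()
i <-- 1
REPEAT LENGTH(array) TIMES {
IF (SET_CONTAINS(seen, array[i])) {
RETURN "Duplicate found: " + array[i]
}
SET_ADD(seen, array[i])
i <-- i + 1
}
RETURN "No duplicates"
}
```

Replacing the inner scan with a constant-time membership test is exactly the trade described in the space complexity section: extra memory (the set) in exchange for less time.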
Exercise 3: Compare Algorithms
Compare the performance of different sorting algorithms.
# Simple selection sort (O(n²))
PROCEDURE SELECTION_SORT(array) {
n <-- LENGTH(array)
i <-- 1
REPEAT n TIMES {
min_idx <-- i
j <-- i + 1
REPEAT (n - i) TIMES {
IF (array[j] < array[min_idx]) {
min_idx <-- j
}
j <-- j + 1
}
# Swap elements
temp <-- array[i]
array[i] <-- array[min_idx]
array[min_idx] <-- temp
i <-- i + 1
}
RETURN array
}
# Test selection sort
numbers <-- [5, 2, 8, 1, 9]
sorted <-- SELECTION_SORT(numbers)
DISPLAY("Selection sort result: ")
DISPLAY(sorted)
DISPLAY("Time complexity: O(n²)")