Big O
Big O notation is a mathematical notation used in computer science to describe the performance or complexity of an algorithm. It expresses how the running time (or space) grows as the size of the input data increases, and it is most commonly used to describe the worst case: the maximum amount of work an algorithm may do for an input of a given size.
O(1)
Constant Time: The running time of the algorithm is constant. It does not grow with the size of the input.
O(log n)
Logarithmic Time: The running time grows logarithmically with the size of the input.
O(n)
Linear Time: The running time grows linearly with the size of the input.
O(n log n)
Log-Linear Time: The running time grows in proportion to n log n.
O(n^2)
Quadratic Time: The running time grows quadratically with the size of the input.
O(2^n)
Exponential Time: The running time grows exponentially with the size of the input.
O(n!)
Factorial Time: The running time grows in proportion to the factorial of the input size.
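The exponential and factorial classes above blow up very quickly even for small inputs. As a quick illustration, here is a minimal Python sketch (the naive recursive Fibonacci, a standard textbook example) whose call tree roughly doubles at every level, giving O(2^n) time:

```python
def fib(n):
    # Naive recursion: each call spawns two more calls,
    # so the call tree has roughly 2^n nodes -> O(2^n) time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```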
Always consider the worst case.
Remove constants, e.g. O(2n): drop the constant 2 and it becomes O(n).
Use different terms for different inputs, e.g. O(a + b) for two separate loops over two inputs, or O(a * b) for nested loops (see the sketch after this list).
Drop non-dominant terms, e.g. O(n + n^2): drop O(n), which is non-dominant compared to O(n^2), leaving O(n^2).
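Below is a minimal Python sketch of the "different terms for different inputs" rule (the function names and arguments are only illustrative): looping over two inputs one after the other gives O(a + b), while nesting the loops gives O(a * b).

```python
def sequential(list_a, list_b):
    # One pass over each input: O(a) + O(b) = O(a + b).
    for item in list_a:
        print(item)
    for item in list_b:
        print(item)

def nested(list_a, list_b):
    # Every element of list_a is paired with every element
    # of list_b: O(a * b).
    for x in list_a:
        for y in list_b:
            print(x, y)
```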
NOTE:
n: represents the number of elements.
H: represents the height (depth) of a graph or tree.
L: represents the length of a string, etc.
Time Complexity: O(1)
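A minimal constant-time sketch in Python (the function name and sample list are only illustrative): the amount of work does not depend on how many elements the list holds.

```python
def get_first(items):
    # Indexing a Python list is O(1): the cost is the same
    # whether the list has 3 elements or 3 million.
    return items[0]

print(get_first([7, 3, 9]))  # 7
```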
Time Complexity: O(n)
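A minimal linear-time sketch (illustrative function name and data): in the worst case every element is visited once.

```python
def contains(items, target):
    # Worst case: the target is missing and every element
    # is checked once -> O(n).
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16], 23))  # False
```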
Time Complexity: O(n^2)
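A minimal quadratic-time sketch (illustrative): the nested loops compare every element against every other element.

```python
def has_duplicate(items):
    # n iterations of the outer loop, each doing n iterations
    # of the inner loop -> O(n^2).
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

print(has_duplicate([1, 2, 3, 2]))  # True
```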
Time Complexity: O(n log n)
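A minimal merge sort sketch, the classic O(n log n) example: the list is halved about log n times, and each level does O(n) work merging.

```python
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # recurse on halves: ~log n levels
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    # Merging two sorted halves is O(n) per level.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```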
Time Complexity: O(n log n)
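Another common O(n log n) pattern (a sketch, not tied to any particular source example): sort first, then make a single linear pass; the sort dominates.

```python
def find_duplicates(items):
    ordered = sorted(items)           # Timsort: O(n log n)
    duplicates = []
    # One linear pass over adjacent pairs: O(n).
    for prev, curr in zip(ordered, ordered[1:]):
        if prev == curr:
            duplicates.append(curr)
    # O(n log n) + O(n) = O(n log n) after dropping the
    # non-dominant O(n) term.
    return duplicates

print(find_duplicates([3, 1, 3, 2, 1]))  # [1, 3]
```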
Time Complexity: O(log n * log n) = O(log^2 n)
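A sketch of how O(log^2 n) can arise (illustrative function and data): an outer loop that halves its counter runs about log n times, and each iteration performs a binary search that is itself O(log n).

```python
import bisect

def log_squared_demo(sorted_items):
    n = len(sorted_items)
    hits = 0
    i = n
    while i >= 1:                                   # outer loop: ~log n iterations
        pos = bisect.bisect_left(sorted_items, i)   # binary search: O(log n)
        if pos < n and sorted_items[pos] == i:
            hits += 1
        i //= 2
    # log n iterations * O(log n) each = O(log^2 n).
    return hits

print(log_squared_demo([1, 2, 4, 8, 16]))  # 2
```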
Time Complexity: O(log n + n^2) = O(n^2); drop the non-dominant O(log n) term.
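A sketch of dropping a non-dominant term (illustrative): a binary search, O(log n), followed by a pairwise scan, O(n^2). The total O(log n + n^2) simplifies to O(n^2).

```python
import bisect

def search_then_pairs(sorted_items, target):
    # Step 1: binary search -> O(log n).
    pos = bisect.bisect_left(sorted_items, target)
    found = pos < len(sorted_items) and sorted_items[pos] == target

    # Step 2: check every pair of elements -> O(n^2).
    pair_count = 0
    for x in sorted_items:
        for y in sorted_items:
            if x + y == target:
                pair_count += 1

    # O(log n + n^2) = O(n^2) once the non-dominant O(log n)
    # term is dropped.
    return found, pair_count

print(search_then_pairs([1, 2, 3, 4], 4))  # (True, 3)
```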