# Big O
Big O notation is a mathematical notation used in computer science to describe the performance or complexity of an algorithm. It is conventionally used for the worst-case scenario: the maximum amount of time an algorithm takes to complete as the size of the input grows.
| Big O Notation | Description |
| --- | --- |
| O(1) | Constant Time: The running time is constant. It does not grow with the size of the input. |
| O(log n) | Logarithmic Time: The running time grows logarithmically with the size of the input. |
| O(n) | Linear Time: The running time grows linearly with the size of the input. |
| O(n log n) | Log-Linear Time: The running time grows in proportion to n log n. |
| O(n^2) | Quadratic Time: The running time grows quadratically with the size of the input. |
| O(2^n) | Exponential Time: The running time grows exponentially with the size of the input. |
| O(n!) | Factorial Time: The running time grows in proportion to the factorial of the input size. |
## Rules

1. Always consider the worst case.
2. Remove constants, e.g. `O(2n)`. Here, drop the constant `2` and it becomes `O(n)`.
3. Use different terms for different inputs, e.g. `O(a + b)` for sequential loops over two inputs, `O(a * b)` for nested loops.
4. Drop non-dominant terms, e.g. `O(n + n^2)`. Here, drop `O(n)`, which is non-dominant compared to `O(n^2)`, leaving `O(n^2)`.
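The "drop non-dominant terms" rule can be sketched with a hypothetical function that runs one O(n) loop followed by an O(n^2) nested loop; the names below are illustrative, not from the original notes:

```python
def sum_and_pairs(items):
    total = 0
    for x in items:          # single pass: O(n)
        total += x
    pair_count = 0
    for x in items:          # nested pass: O(n^2)
        for y in items:
            pair_count += 1
    # Total cost is O(n + n^2); dropping the non-dominant O(n) term leaves O(n^2).
    return total, pair_count
```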
NOTE:

- `n`: represents the number of elements.
- `H`: represents the height (depth) of a `graph` or `tree`.
- `L`: represents the length of a string, etc.
## Examples
Time Complexity: O(1)
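A minimal Python sketch of a constant-time operation: indexing a list takes the same time no matter how large the list is.

```python
def get_first(items):
    # A single index access does not depend on len(items): O(1).
    return items[0]
```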
Time Complexity: O(n)
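A linear-time sketch: one pass over all n elements, doing constant work per element.

```python
def find_max(items):
    # Visits each of the n elements exactly once: O(n).
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best
```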
Time Complexity: O(n^2)
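A quadratic-time sketch: a nested loop that compares every pair of elements, so the work grows with n * n in the worst case.

```python
def has_duplicate(items):
    # Worst case (no duplicate found) compares all pairs: O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```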
Time Complexity: O(n log n)
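A classic O(n log n) sketch is merge sort: the input is halved O(log n) times, and each level of recursion does O(n) merging work.

```python
def merge_sort(items):
    # log n levels of splitting, O(n) merge work per level: O(n log n) overall.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```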
Time Complexity: O(n log n)
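Another way to reach O(n log n), sketched here as an assumed example: performing a binary search (O(log n)) once for each of n query elements.

```python
import bisect

def count_members(sorted_haystack, needles):
    # One O(log n) binary search per needle: O(n log n) in total.
    count = 0
    for x in needles:
        i = bisect.bisect_left(sorted_haystack, x)
        if i < len(sorted_haystack) and sorted_haystack[i] == x:
            count += 1
    return count
```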
Time Complexity: O(log n * log n) = O(log^2 n)
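A sketch of O(log^2 n): two nested loops where each loop variable doubles, so each loop runs O(log n) times.

```python
def log_squared_steps(n):
    # Outer loop: i doubles, so O(log n) iterations.
    # Inner loop: j doubles, so O(log n) iterations each time.
    # Total work: O(log n * log n) = O(log^2 n).
    steps = 0
    i = 1
    while i < n:
        j = 1
        while j < n:
            steps += 1
            j *= 2
        i *= 2
    return steps
```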
Time Complexity: O(log n + n^2) = O(n^2). Drop the non-dominant O(log n) term.
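This case can be sketched as a function (illustrative names, not from the original notes) that does a halving loop, O(log n), followed by a nested loop, O(n^2); the quadratic term dominates.

```python
def dominant_example(items):
    n = len(items)
    # Halving loop: O(log n).
    halves = 0
    size = n
    while size > 1:
        size //= 2
        halves += 1
    # Nested loop: O(n^2).
    pairs = 0
    for _ in items:
        for _ in items:
            pairs += 1
    # Total O(log n + n^2); drop the non-dominant O(log n) term, giving O(n^2).
    return halves, pairs
```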