
Asymptotic Notation and Analysis (Based on input size) in Complexity Analysis of Algorithms

Last Updated : 04 May, 2023

Asymptotic Analysis evaluates the performance of an algorithm in terms of input size rather than the actual running time: we calculate how the time (or space) taken by an algorithm grows as the input size grows. This sidesteps the machine-dependent and input-dependent problems of experimental measurement that are discussed later in this article.

Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on the input size. It is commonly used in complexity analysis to describe how an algorithm performs as the size of the input grows. The three most commonly used notations are Big O, Omega, and Theta.

  1. Big O notation (O): This notation provides an upper bound on the growth rate of an algorithm’s running time or space usage. It is commonly used to describe the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm’s running time is O(n), the running time grows at most linearly with the input size n.
  2. Omega notation (Ω): This notation provides a lower bound on the growth rate of an algorithm’s running time or space usage. It is commonly used to describe the best-case scenario, i.e., the minimum amount of time or space an algorithm needs to solve a problem. For example, if an algorithm’s running time is Ω(n), the running time grows at least linearly with the input size n.
  3. Theta notation (Θ): This notation provides both an upper and a lower bound on the growth rate of an algorithm’s running time or space usage. It represents a tight bound, i.e., the cost grows at the same rate as the given function, up to constant factors. For example, if an algorithm’s running time is Θ(n), the running time grows linearly (up to constant factors) with the input size n.

In general, the choice of asymptotic notation depends on the problem and the specific algorithm used to solve it. It is important to note that asymptotic notation does not provide an exact running time or space usage for an algorithm, but rather a description of how the algorithm scales with respect to input size. It is a useful tool for comparing the efficiency of different algorithms and for predicting how they will perform on large input sizes.
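As a quick, concrete illustration (a minimal sketch written for this article, with a hypothetical helper method name), consider searching for a key in an unsorted array; the comments show how the notations describe the same method:

// Searches an unsorted array for a key (hypothetical helper for illustration).
// Worst case (key absent or in the last slot): about n comparisons -> O(n).
// Best case (key in the first slot): 1 comparison                   -> Omega(1).
// The worst case also needs at least n comparisons, so the worst-case
// running time is Theta(n): bounded above and below by c * n.
static boolean contains(int[] arr, int key) {
    for (int value : arr) {
        if (value == key) {
            return true;  // found early: best case
        }
    }
    return false;         // scanned every element: worst case
}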

Why performance analysis? 

There are many important things that should be taken care of, like user-friendliness, modularity, security, maintainability, etc. Why worry about performance? The answer is simple: we can have all of the above only if our software performs well. So performance is like a currency with which we can buy all of those things. Another reason for studying performance is that speed is fun! To summarize, performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only 1 page per minute, or an image editor that takes 1 hour to rotate your image 90 degrees left, or … you get it. If a software feature cannot cope with the scale of tasks users need to perform, it is as good as dead.

How to study the efficiency of algorithms?

One way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while recording the time spent during each execution. A simple mechanism in Java is based on the currentTimeMillis() method of the System class, which reports the number of milliseconds that have passed since a benchmark time known as the epoch (January 1, 1970 UTC). The key idea is to record the time immediately before executing the algorithm and again immediately after it; the difference is the elapsed running time.

long start = System.currentTimeMillis(); // record the starting time
/* (run the algorithm) */
long end = System.currentTimeMillis();   // record the ending time
long elapsed = end - start;              // total time elapsed

Measuring the elapsed time in this way provides a reasonable reflection of an algorithm’s efficiency.
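Putting these pieces together, below is a minimal, self-contained sketch of such an experiment in Java. The use of Arrays.sort as the algorithm under test is only an illustration; any algorithm can be substituted at that point:

import java.util.Arrays;
import java.util.Random;

public class TimingDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000).toArray(); // test input

        long start = System.currentTimeMillis(); // record the starting time
        Arrays.sort(data);                       // run the algorithm being measured
        long end = System.currentTimeMillis();   // record the ending time

        long elapsed = end - start;              // total time elapsed in milliseconds
        System.out.println("Elapsed time: " + elapsed + " ms");
    }
}

For fast algorithms, System.nanoTime() is usually preferred over currentTimeMillis(), since millisecond resolution may be too coarse to register any difference.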

Given two algorithms for a task, how do we find out which one is better? 

One naive way of doing this is to implement both algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are many problems with this approach to the analysis of algorithms.

  • It might be possible that for some inputs the first algorithm performs better than the second, and for other inputs the second performs better.
  • It might also be possible that for some inputs the first algorithm performs better on one machine, while for some other inputs the second works better on another machine.

Asymptotic Analysis is the big idea that handles these issues: instead of measuring actual running times, we evaluate the performance of an algorithm in terms of input size and study how the time (or space) it takes increases as the input size grows.

For example, let us consider the search problem (searching a given item) in a sorted array. 

Solutions to the above search problem include (both are sketched below):

  • Linear Search (order of growth is linear) 
  • Binary Search (order of growth is logarithmic). 
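For reference, here are minimal Java sketches of the two approaches (written for this article’s example, not library implementations). Binary search assumes the array is sorted in ascending order:

// Linear search: checks elements one by one.
// Order of growth: linear, O(n) comparisons in the worst case.
static int linearSearch(int[] arr, int key) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == key) return i;
    }
    return -1; // not found
}

// Binary search: repeatedly halves the search range of a sorted array.
// Order of growth: logarithmic, O(log n) comparisons in the worst case.
static int binarySearch(int[] arr, int key) {
    int lo = 0, hi = arr.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;        // avoids overflow of (lo + hi)
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // not found
}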

To understand how Asymptotic Analysis solves the problems mentioned above in analyzing algorithms, 

  • let us say: 
    • we run the Linear Search on a fast computer A and 
    • Binary Search on a slow computer B and 
    • pick constant values for the two computers so that the cost formula tells us exactly how long, in seconds, the given machine takes to perform the search.
  • Let’s say the constant for A is 0.2 and the constant for B is 1000, which means that A is 5000 times more powerful than B.
  • For small values of the input array size n, the fast computer may take less time.
  • But, after a certain value of the input array size, Binary Search will definitely start taking less time than Linear Search, even though Binary Search is being run on the slow machine, as the table below shows.
Input Size    Running time on A (Linear Search)    Running time on B (Binary Search)
10            2 sec                                 ~ 1 h
100           20 sec                                ~ 1.8 h
10^6          ~ 55.5 h                              ~ 5.5 h
10^9          ~ 6.3 years                           ~ 8.3 h
  • The reason is that the order of growth of Binary Search with respect to input size is logarithmic, while the order of growth of Linear Search is linear.
  • So the machine-dependent constants can always be ignored after a certain value of input size. 

Running times for this example: 

  • Linear Search running time in seconds on A: 0.2 * n 
  • Binary Search running time in seconds on B: 1000 * log2(n)
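The table above follows directly from these two formulas. The short program below (illustrative only; it simply plugs the article’s assumed constants 0.2 and 1000 into the formulas, with the logarithm taken base 2) prints the estimated running times in seconds:

public class GrowthComparison {
    public static void main(String[] args) {
        long[] sizes = {10, 100, 1_000_000, 1_000_000_000L};
        for (long n : sizes) {
            double linearOnA = 0.2 * n;                             // fast machine A, Linear Search
            double binaryOnB = 1000 * (Math.log(n) / Math.log(2));  // slow machine B, Binary Search
            System.out.printf("n = %,d: A = %.1f s, B = %.1f s%n", n, linearOnA, binaryOnB);
        }
    }
}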

Challenges of Experimental Analysis:

Experimental running times of two algorithms are difficult to directly compare unless the experiments are performed in the same hardware and software environments. Experiments can be done only on a limited set of test inputs; hence, they leave out the running times of inputs not included in the experiment (and these inputs may be important).

To overcome the challenges of experimental analysis, Asymptotic Analysis is used.

Does Asymptotic Analysis always work? 

Asymptotic Analysis is not perfect, but it is the best way available for analyzing algorithms. For example, say there are two sorting algorithms that take 1000*n*log(n) and 2*n*log(n) time respectively on a machine. Both of these algorithms are asymptotically the same (their order of growth is n*log(n)). So, with Asymptotic Analysis, we can’t judge which one is better, as we ignore constants in Asymptotic Analysis.

Also, in Asymptotic Analysis, we always talk about input sizes larger than a constant value. It might be possible that those large inputs are never given to your software, and an asymptotically slower algorithm always performs better for your particular situation. So, you may end up choosing an algorithm that is asymptotically slower but faster for your software.
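To make this concrete, consider two hypothetical cost models (invented for this illustration, not from the article): an asymptotically slower algorithm costing 2*n^2 steps and an asymptotically faster one costing 1000*n*log2(n) steps. The sketch below searches for the input size at which the n*log(n) algorithm finally becomes cheaper; for every smaller input, the asymptotically "worse" algorithm actually wins:

public class CrossoverDemo {
    public static void main(String[] args) {
        // Hypothetical cost models: ignore units, compare step counts only.
        for (long n = 2; n <= 100_000; n++) {
            double quadratic = 2.0 * n * n;                           // 2 * n^2
            double nLogN = 1000.0 * n * (Math.log(n) / Math.log(2));  // 1000 * n * log2(n)
            if (nLogN < quadratic) {
                System.out.println("n*log(n) version becomes cheaper at n = " + n);
                break;
            }
        }
    }
}

Below the printed crossover point, picking an algorithm purely by its asymptotic class would be the wrong practical choice.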

Advantages and Disadvantages:

Advantages:

  1. Asymptotic analysis provides a high-level understanding of how an algorithm performs with respect to input size.
  2. It is a useful tool for comparing the efficiency of different algorithms and selecting the best one for a specific problem.
  3. It helps in predicting how an algorithm will perform on larger input sizes, which is essential for real-world applications.
  4. Asymptotic analysis is relatively easy to perform and requires only basic mathematical skills.

Disadvantages:

  1. Asymptotic analysis does not provide an accurate running time or space usage of an algorithm.
  2. It assumes that the input size is the only factor that affects an algorithm’s performance, which is not always the case in practice.
  3. Asymptotic analysis can sometimes be misleading, as two algorithms with the same asymptotic complexity may have different actual running times or space usage.
  4. It is not always straightforward to determine the best asymptotic complexity for an algorithm, as there may be trade-offs between time and space complexity.

