
What is Time Complexity And Why Is It Essential?

What is Time complexity?

Time complexity is defined as the amount of time taken by an algorithm to run, as a function of the length of the input. It measures the time taken to execute each statement of code in an algorithm. It does not give the total execution time of an algorithm on a particular machine; rather, it describes how the execution time varies (increases or decreases) as the number of operations in the algorithm increases or decreases. As the definition says, the amount of time taken is a function of the length of the input only.

Time Complexity Introduction

Space and Time define any physical object in the Universe. Similarly, Space and Time complexity define the effectiveness of an algorithm. While we know there is more than one way to solve a problem in programming, knowing how efficiently an algorithm works can add value to the way we program. Evaluating a program or algorithm using Space and Time complexity helps it run under the required optimal conditions and, in doing so, makes us efficient programmers.

While we reserve the space to understand Space complexity for the future, let us focus on Time complexity in this post. Time is Money! In this post, you will discover a gentle introduction to the Time complexity of an algorithm, and how to evaluate a program based on Time complexity.

Let’s get started.

Why is Time complexity Significant?

Let us first understand what defines an algorithm.

An Algorithm, in computer programming, is a finite sequence of well-defined instructions, typically executed in a computer, to solve a class of problems or to perform a common task. By this definition, a sequence of defined instructions must be given to the computer for it to execute the algorithm or perform the task. Variation can occur in the way those instructions are defined: any number of different instruction sets can be written to perform the same task. Moreover, with the freedom to choose any of the available programming languages, the instructions can take any form of syntax, along with the performance boundaries of the chosen language. Finally, because the algorithm runs on a computer, the operating system, processor, hardware, etc. that are used can also influence how it performs.

Now that we know different factors can influence the outcome of an algorithm being executed, it is wise to understand how efficiently such programs perform their task. To gauge this, we need to evaluate both the Space and Time complexity of an algorithm.

By definition, the Space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run, as a function of the length of the input, while the Time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input. Now that we know why Time complexity is so significant, it is time to understand what time complexity is and how to evaluate it.


To elaborate, Time complexity measures the time taken to execute each statement of code in an algorithm. If a statement is set to execute repeatedly, the time it contributes is the number of times it executes (N) multiplied by the time required to run it once.

The first algorithm is defined to print the statement only once, and the time taken to execute it is shown as ~0 nanoseconds. The second algorithm prints the same statement, but this time it is set to run inside a FOR loop 10 times. In the second algorithm, the time taken to execute both lines of code, the FOR loop and the print statement, is 2 milliseconds. And the time taken increases as the value of N increases, since the statement is executed N times.
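The snippets the post measured are not reproduced here, but the two algorithms can be sketched as below (a minimal Python sketch; the exact timings will differ on your machine):

```python
import time

def print_once():
    # Algorithm 1: a single statement, executed once
    print("Hello World")

def print_n_times(n):
    # Algorithm 2: the same statement, executed n times in a FOR loop
    for _ in range(n):
        print("Hello World")

start = time.perf_counter()
print_once()
elapsed_once = time.perf_counter() - start

start = time.perf_counter()
print_n_times(10)
elapsed_loop = time.perf_counter() - start
```

As N grows, `elapsed_loop` grows with it, while `elapsed_once` stays roughly constant.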

Note: This code was run in a Python Jupyter Notebook on Windows 64-bit with an Intel Core i7 ~2.4 GHz processor. The time values above can vary with different hardware, operating systems and programming languages.

By now, you may have concluded that statements which execute only once always require the same amount of time, while a statement inside a loop requires time that grows with the number of times the loop is set to run. And when an algorithm combines single-execution statements with LOOP statements, or with nested LOOP statements, the time increases proportionately, based on the number of times each statement gets executed.

This leads us to the next question: how do we determine the relationship between the input and time for a given statement in an algorithm? To answer this, each statement is assigned an order of notation describing its time complexity, called Big O Notation.

What are the Different Types of Time Complexity Notation Used?

As we have seen, Time complexity is given by time as a function of the length of the input. There exists a relation between the input data size (n) and the number of operations performed (N) with respect to time. This relation is denoted as the order of growth in Time complexity and is given the notation O(n), where O is the order of growth and n is the length of the input. It is also called ‘Big O Notation’.

Big O Notation expresses the run time of an algorithm in terms of how quickly it grows relative to the input ‘n’, by defining the number of operations N performed on it. The time complexity of an algorithm is then given by combining the O(n) terms assigned to each line of the function.

There are different types of time complexities used; let’s see them one by one:

1. Constant time – O(1)

2. Linear time – O(n)

3. Logarithmic time – O(log n)

4. Quadratic time – O(n^2)

5. Cubic time – O(n^3)

and many more complex notations, like Exponential time, Quasilinear time, Factorial time, etc., are used based on the type of functions defined.

Constant time – O(1)

An algorithm is said to have constant time with order O(1) when it does not depend on the input size n. Irrespective of the input size n, the runtime is always the same.
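A minimal sketch (the post’s original snippet, not shown here, is an array-indexing example along these lines):

```python
def get_first(arr):
    # A single index access costs the same no matter how long arr is
    return arr[0]

print(get_first([10, 20, 30]))          # 10
print(get_first(list(range(100000))))   # 0
```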

The above code shows that irrespective of the length of the array (n), the runtime to get the first element is the same. If the run time is considered 1 unit of time, then it takes only 1 unit of time to access the first element of either array, irrespective of its length. Thus, the function comes under constant time with order O(1).

Linear time – O(n)

An algorithm is said to have a linear time complexity when the running time increases linearly with the length of the input. When a function checks all the values in the input data, it has this order, O(n).
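A minimal sketch of such a function, a linear scan that may touch every element (the helper name is illustrative):

```python
def contains(arr, target):
    # Checks every value in the worst case, so the work grows linearly: O(n)
    for value in arr:
        if value == target:
            return True
    return False
```

Doubling the length of `arr` roughly doubles the worst-case work.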

The above code shows that the run time increases linearly with the length of the array (n). If the run time is considered 1 unit of time, then it takes n times 1 unit of time to process the array. Thus, the function runs linearly with input size, and this comes with order O(n).

Logarithmic time – O(log n)

An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step. This indicates that the number of operations does not grow at the same rate as the input size: the work per element falls as the input size increases. Such algorithms are found in binary trees and binary search functions. Binary search looks for a given value in a sorted array by splitting the array in two and continuing the search in only one half. This ensures the operation is not performed on every element of the data.
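Binary search on a sorted array is the classic example; a minimal sketch:

```python
def binary_search(arr, target):
    # Each iteration halves the remaining search range: O(log n)
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found
```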

Quadratic time – O(n^2)

An algorithm is said to have a quadratic time complexity when the running time increases non-linearly (as n^2) with the length of the input. Generally, nested loops come under this order: one loop takes O(n), and if the function involves a loop within a loop, it becomes O(n) * O(n) = O(n^2).

Similarly, if there are ‘m’ nested loops defined in the function, then the order is given by O(n^m); such functions are called polynomial time complexity functions.
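A minimal sketch of a nested-loop, O(n^2) function (the duplicate check is just an illustration):

```python
def has_duplicate(arr):
    # Loop within a loop: O(n) * O(n) = O(n^2) comparisons in the worst case
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False
```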

Thus, the above illustration gives a fair idea of how each function gets its order notation, based on the relation between run time, the input data size and the number of operations performed on it.

How to calculate time complexity?

We have seen how order notation is assigned to each function, and the relation between runtime, the number of operations and the input size. Now it is time to learn how to evaluate the Time complexity of an algorithm: assign an order of notation to each operation based on its input size, and compute the total run time required to run the algorithm for a given n.

Let us illustrate how to evaluate the time complexity of an algorithm with an example:

The algorithm is defined as:

1. Given 2 input matrices, which are square matrices of order n

2. The values of each element in both matrices are selected randomly using the np.random function

3. A result matrix is initially assigned 0 values, with order equal to that of the input matrices

4. Each row element of X is multiplied by each column element of Y, and the resultant values are accumulated in the result matrix

5. The resulting matrix is then converted to list type

6. Every element in the result list is added together to give the final answer
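The original snippet is not reproduced here, but the steps above can be sketched as follows (the exact statements and line numbering referenced later are assumptions):

```python
import numpy as np

n = 3
X = np.random.randint(0, 10, (n, n))      # steps 1-2: random square matrices
Y = np.random.randint(0, 10, (n, n))
result = [[0] * n for _ in range(n)]      # step 3: zero-filled result matrix

# step 4: triple nested loop, the O(n^3) part of the analysis
for i in range(n):
    for j in range(n):
        for k in range(n):
            result[i][j] += X[i][k] * Y[k][j]

# steps 5-6: flatten to a list and add every element together
flat = [value for row in result for value in row]
total = 0
for value in flat:
    total += value
```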

Let us assume the cost function C is the per-unit time taken to run a statement, while ‘n’ represents the number of times the statement is defined to run in the algorithm.

For example, if the time taken to run the print function is, say, 1 microsecond (C), and the algorithm is defined to run the PRINT function 1000 times (n),

then total run time = (C * n) = 1 microsecond * 1000 = 1 millisecond

Run time for each line is given by: 

Line 1 = C1 * 1 
Line 2 = C2 * 1 
Line 3,4,5 = (C3 * 1) + (C3 * 1) + (C3 * 1)
Line 6,7,8 = (C4*[n+1]) * (C4*[n+1]) * (C4*[n+1]) 
Line 9 = C4*[n] 
Line 10 = C5 * 1 
Line 11 = C2 * 1 
Line 12 = C4*[n+1] 
Line 13 = C4*[n] 
Line 14 = C2 * 1 
Line 15 = C6 * 1

Total run time = (C1*1) + 3(C2*1) + 3(C3*1) + (C4*[n+1]) * (C4*[n+1]) * (C4*[n+1]) + (C4*[n]) + (C5*1) + (C4*[n+1]) + (C4*[n]) + (C6*1)

Replacing all costs with C to estimate the order of notation,

Total Run Time

= C + 3C + 3C + ([n+1]C * [n+1]C * [n+1]C) + nC + C + [n+1]C + nC + C
= 7C + ((n^3)C + 3(n^2)C + 3nC + C) + 3nC + 3C
= 11C + (n^3)C + 3(n^2)C + 6nC
= C(n^3) + C(n^2) + C(n) + C
= O(n^3) + O(n^2) + O(n) + O(1)

(For simplicity, every constant cost, and every product of constant costs, is treated as a single unit C.)

By replacing all cost functions with C, we get 3 as the highest degree of the input size, which gives the order of time complexity of this algorithm. From the final equation, it is evident that the run time varies with a polynomial function of the input size ‘n’, as it contains cubic, quadratic and linear terms in n.

This is how the order of any given algorithm is evaluated, and how we estimate how its runtime spans out as the input size increases or decreases. Also note that, for simplicity, all cost values like C1, C2, C3, etc. were replaced with C to find the order of notation. In practice, we would need to know the value of every C to get the exact run time of the algorithm for a given input value ‘n’.

Sorting Algorithms

  • Quick Sort: Exhibits O(n log n) complexity, making it efficient for large datasets.
  • Merge Sort: Also has O(n log n) complexity, known for its stability in sorting.
  • Bubble Sort: With O(n²) complexity, it’s less efficient for large datasets.

Search Algorithms

  • Binary Search: O(log n) complexity makes it efficient for sorted arrays.
  • Linear Search: Simple but less efficient with O(n) complexity.

Space Complexity vs. Time Complexity

While time complexity focuses on the time an algorithm takes, space complexity deals with the amount of memory it requires. There’s often a trade-off between the two, where improving one can adversely affect the other.

Time Complexity of Sorting algorithms

Understanding the time complexities of sorting algorithms helps us pick the best sorting technique for a situation. Here are some sorting techniques:

What is the time complexity of insertion sort?

The time complexity of Insertion Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).
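A minimal sketch showing why the two bounds differ:

```python
def insertion_sort(arr):
    # Best case (already sorted): the while loop never runs -> O(n)
    # Worst case (reverse sorted): every element is shifted -> O(n^2)
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]   # shift larger elements to the right
            j -= 1
        arr[j + 1] = key
    return arr
```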

What is the time complexity of merge sort?

Merge Sort performs the same for all kinds of cases: in the best case it is O(n log n), and in the worst case the time complexity is also O(n log n). This is because Merge Sort performs the same number of sorting steps regardless of the input.
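A minimal sketch; the split-and-merge structure is the same no matter how the input is ordered, which is why all cases are O(n log n):

```python
def merge_sort(arr):
    # Always splits in half and merges, so best, average and worst
    # cases all perform the same O(n log n) work
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```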

What is the time complexity of bubble sort?

The time complexity of Bubble Sort in the best case is O(n). In the worst case, the time complexity is O(n^2).

What is the time complexity of quick sort?

Quick Sort in the best case is O(n log n). In the worst case, the time complexity is O(n^2). Quicksort is considered the fastest of the sorting algorithms in practice due to its O(n log n) performance in the best and average cases.
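A minimal sketch (a simple list-comprehension variant, not an in-place implementation); the worst case arises when the chosen pivots split the input badly:

```python
def quick_sort(arr):
    # Average case O(n log n); worst case O(n^2) when pivots
    # repeatedly split the list into very uneven halves
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(larger)
```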

Time Complexity of Searching algorithms

Let us now dive into the time complexities of some Searching Algorithms and understand which of them is faster.

Linear Search follows sequential access. The time complexity of Linear Search in the best case is O(1). In the worst case, the time complexity is O(n).

Binary Search is the faster of the two searching algorithms, although for smaller arrays linear search can do a better job. The time complexity of Binary Search in the best case is O(1). In the worst case, the time complexity is O(log n).

Space Complexity

You might have heard of this term, ‘Space Complexity’, which hovers around when talking about time complexity. What is Space Complexity? Well, it is the working space or storage that is required by an algorithm, and it grows with the amount of input that the algorithm takes. To calculate space complexity, calculate the space taken up by the variables in the algorithm. Generally, the less working space an algorithm requires, the better. It is also important to know that time and space complexity are distinct measures; as noted above, improving one often comes at the cost of the other.

Time Complexity Example

Example: Ride-Sharing App

Consider a ride-sharing app like Uber or Lyft. When a user requests a ride, the app needs to find the nearest available driver to match the request. This process involves searching through the available drivers’ locations to identify the one that is closest to the user’s location.

In terms of time complexity, let’s explore two different approaches for finding the nearest driver: a linear search approach and a more efficient spatial indexing approach.

  1. Linear Search Approach: In a naive implementation, the app could iterate through the list of available drivers and calculate the distance between each driver’s location and the user’s location. It would then select the driver with the shortest distance.
    Driver findNearestDriver(List<Driver> drivers, Location userLocation) {
        Driver nearestDriver = null;
        double minDistance = Double.MAX_VALUE;
        for (Driver driver : drivers) {
            double distance = calculateDistance(driver.getLocation(), userLocation);
            if (distance < minDistance) {
                minDistance = distance;
                nearestDriver = driver;
            }
        }
        return nearestDriver;
    }

The time complexity of this approach is O(n), where n is the number of available drivers. For a large number of drivers, the app’s performance might degrade, especially during peak times.

  2. Spatial Indexing Approach: A more efficient approach involves using spatial indexing data structures like Quad Trees or K-D Trees. These data structures partition the space into smaller regions, allowing for faster searches based on spatial proximity.
    Driver findNearestDriverWithSpatialIndex(SpatialIndex index, Location userLocation) {
        Driver nearestDriver = index.findNearestDriver(userLocation);
        return nearestDriver;
    }

The time complexity of this approach is typically better than O(n) because the search is guided by the spatial structure, which eliminates the need to compare distances with all drivers. It could be closer to O(log n) or even better, depending on the specifics of the spatial index.
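Real spatial indexes such as Quad Trees and K-D Trees are more involved, but the core idea, comparing only nearby candidates instead of every driver, can be sketched with a simple grid-bucket index (all names and the cell size here are illustrative assumptions, not any app’s actual API):

```python
import math
from collections import defaultdict

CELL = 1.0  # grid cell size in coordinate units (an assumption for illustration)

class GridIndex:
    """Drivers are bucketed by grid cell, so a query only measures
    distances within nearby cells rather than scanning all drivers."""

    def __init__(self):
        self.cells = defaultdict(list)

    def add(self, driver_id, x, y):
        self.cells[(int(x // CELL), int(y // CELL))].append((driver_id, x, y))

    def nearest(self, x, y, rings=2):
        # Only inspect cells within `rings` cells of the query point
        cx, cy = int(x // CELL), int(y // CELL)
        best, best_dist = None, float("inf")
        for dx in range(-rings, rings + 1):
            for dy in range(-rings, rings + 1):
                for driver_id, px, py in self.cells[(cx + dx, cy + dy)]:
                    d = math.hypot(px - x, py - y)
                    if d < best_dist:
                        best, best_dist = driver_id, d
        return best

index = GridIndex()
index.add("a", 0.5, 0.5)
index.add("b", 5.0, 5.0)
```

With many drivers spread across many cells, each query touches only a small, roughly constant number of buckets instead of the full list.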

In this example, the difference in time complexity between the linear search and the spatial indexing approach showcases how algorithmic choices can significantly impact the real-time performance of a critical operation in a ride-sharing app.

Summary

In this blog, we introduced the basic concepts of Time complexity and why it matters for the algorithms we design. We also saw the different types of time complexities used for various kinds of functions, and finally we learned how to assign the order of notation to any algorithm based on the cost function and the number of times each statement is defined to run.

Given the conditions of the VUCA world and the era of big data, the flow of data is increasing with every second, and designing effective algorithms to perform specific tasks is the need of the hour. Knowing the time complexity of an algorithm for a given input data size helps us plan our resources and processes and deliver results efficiently and effectively. Thus, knowing the time complexity of your algorithm helps you do that, and it also makes you an effective programmer. Happy Coding!

Feel free to leave your queries in the comments below and we’ll get back to you as soon as possible.

Great Learning Team
