Time Complexity
Analysis
With Time and Space Complexity Cheat Sheets!
Aditya Chatterjee
Ue Kiao, PhD.
Introduction to this Book
This book, Time Complexity Analysis, introduces you to the basics of Time Complexity notations, the meaning of the complexity values, and how to analyze various algorithmic problems.
We have tackled several significant problems, demonstrated the approach to analyzing them, and arrived at the Time and Space Complexity of the problems and algorithms.
This is a MUST-READ book for all Computer Science students and Programmers. Do not miss this opportunity.
You will get a better idea of which approach will work better and will be able to make better judgements in your development work.
See the Table of content to get the list of exciting topics you will learn about.
Some of the key points you will understand:
- Random Access Memory does not take O(1) time. It is complicated and in general, has a Time Complexity of O(N).
- Multiplication takes O(N^2) time with the school-book method, but the most efficient known Algorithm (developed in 2019) takes O(N logN) time, which is believed to be the theoretical limit.
- As per Time Complexity, finding the largest element and the i-th largest element takes the same order of time.
It is recommended that you go through this book twice. The first time, you may skip the minute details that you do not understand at first go and just get the overview.
In the second reading, you will pick up all the ideas, and this will strengthen your insights.
In the 1950s, Computing was not a Science.
It took a collective effort by several Computer Scientists, such as Robert Tarjan and Philippe Flajolet, who analyzed several computational problems to demonstrate that computational problems are just as complicated as problems in Physics and Mathematics.
The ideas captured in this book include some of these analyses which glorified Computer Science and made it a Scientific field.
Book: Time Complexity Analysis
Authors: Aditya Chatterjee; Ue Kiao, PhD.
Contributors (7): Vansh Pratap Singh, Shreya Shah, Vikram Shishupalsingh Bais, Mallika Dey, Siddhant Rao, Shweta Bhardwaj, K. Sai Drishya.
Published: August 2021.
Contact:
If you would like to participate in OpenGenus's remote Internship program, apply at: internship.opengenus.org.
It will help you gain valuable practical experience and contribute to the community.
Table of Contents

| # | Chapter |
|------|---------|
| 1 | Introduction to Time and Space Complexity (+ different notations) |
| 2 | How to calculate Time Complexity? |
| 3 | Meaning of different Time Complexity |
| 4 | Brief Background on NP and P |
| 5 | Does O(1) time exist?: Cost of accessing Memory |
| 6 | Time Complexity of Basic Arithmetic Operations |
| 6.1 | Bitwise operations |
| 6.2 | Addition |
| 6.3 | Subtraction |
| 6.4 | Multiplication |
| 6.5 | Division |
| 7 | Analysis of Array |
| 8 | Analysis of Dynamic Array |
| 9 | Find largest element |
| 10 | Find Second largest element |
| 11 | Find i-th largest element |
| 12 | Time Complexity Bound for comparison-based sorting |
| 12.1 | Analysis of Selection Sort |
| 12.2 | Analysis of Insertion Sort |
| 12.3 | Analysis of Bubble Sort |
| 12.4 | Analysis of Quick Sort |
| 13 | Bound for non-comparison-based sorting |
| 13.1 | Analysis of Counting Sort |
| 13.2 | Analysis of Bucket Sort |
| 14 | Analysis of Linked List |
| 15 | Analysis of Hash functions |
| 16 | Analysis of Binary Search |
| 17 | Time and Space Complexity Cheat Sheets |
In addition to this book, you should read:
Binary Tree Problems: Must for Interviews and Competitive Coding by Aditya Chatterjee, Ue Kiao and Srishti Guleria.
Introduction to Time and Space Complexity (+ different notations)
Time Complexity is a notation/analysis used to determine how the number of steps in an algorithm grows as the input size increases. Similarly, we analyze the space consumption of an algorithm for different operations.
This comes up in the analysis of Computing Problems, where a theoretical minimum time complexity is defined. Some Computing Problems are difficult, and for them no minimum time complexity is known.
For example, it is known that the problem of finding the i-th largest element has a theoretical minimum time complexity of O(N), while common approaches have O(N logN) time complexity.
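As an illustrative sketch (not the book's own implementation, and the function name is ours): sorting reaches the i-th largest element in O(N logN), while a quickselect-style partition achieves the O(N) average-case bound.

```python
import random

def ith_largest(arr, i):
    """Return the i-th largest element (i = 1 gives the maximum).
    Quickselect: O(N) average time, versus O(N logN) for sorting.
    Assumes 1 <= i <= len(arr); duplicates are counted separately."""
    pivot = random.choice(arr)
    greater = [x for x in arr if x > pivot]
    equal = [x for x in arr if x == pivot]
    less = [x for x in arr if x < pivot]
    if i <= len(greater):
        return ith_largest(greater, i)          # answer is above the pivot
    if i <= len(greater) + len(equal):
        return pivot                            # answer is the pivot itself
    return ith_largest(less, i - len(greater) - len(equal))

print(ith_largest([7, 2, 9, 4, 9, 1], 2))  # second largest -> 9
```

Each partition pass costs O(N), and on average the recursion shrinks the candidate list geometrically, giving O(N + N/2 + N/4 + ...) = O(N) expected time.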
One of the most used notations is known as Big-O notation.
For example, if an algorithm has a Time Complexity of O(N^2), then the number of steps is of the order of N^2, where N is the size of the input.
Note that the number of steps is not exactly N^2. The actual number of steps may be 4 * N^2 + N/3, but only the dominant term, without constant factors, is considered.
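A minimal Python sketch of this idea (the loop body and the specific counts are ours, for illustration): an algorithm that does 4 units of work in a doubly nested loop plus some linear extra work performs exactly 4 * N^2 + N/3 steps, yet the ratio of that count to N^2 settles at a constant, which is why we write O(N^2).

```python
def count_steps(n):
    """Count the steps of a hypothetical quadratic algorithm:
    4 units of work per inner-loop iteration, plus N/3 extra work."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 4          # 4 units of work per inner iteration
    steps += n // 3             # lower-order (linear) extra work
    return steps

# The exact count is 4*N^2 + N/3, but steps/N^2 approaches the constant 4,
# so the dominant term N^2 alone describes the growth:
for n in (10, 100, 1000):
    print(n, count_steps(n), count_steps(n) / n**2)
```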
Different notations of Time Complexity
Different notations of Time Complexity include:
- Big-O notation
- Little-O notation
- Big Omega notation
- Little-Omega notation
- Big-Theta notation
In short:
| Notation | Symbol | Meaning |
|----------|--------|---------|
| Big-O | O | Upper bound |
| Little-o | o | Strict upper bound (not tight) |
| Big Omega | Ω | Lower bound |
| Little omega | ω | Strict lower bound (not tight) |
| Big Theta | Θ | Tight bound (upper + lower) |
This will make sense as you go through the details. We will revisit this table once each notation has been covered.
1. Big-O notation
Big-O notation denotes a time complexity that is an upper bound for the function f(N) within a constant factor.
f(N) = O(G(N)), where G(N) is the Big-O bound and f(N) is the function we want to bound.
There exist a constant c > 0 and an N1 such that:
f(N) <= c * G(N) for all N >= N1
Therefore, a Big-O bound is always greater than or equal to the actual number of steps (within a constant factor).
The following are true for Big-O, but would not be true if you used little-o:
- x = O(x)
- x = O(x + x)
- x = O(200 * x)
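These claims can be sanity-checked numerically. The helper below (its name is ours, for illustration) tests the defining inequality f(N) <= c * G(N) over a finite range of N; this is a spot check, not a proof.

```python
def is_big_o_witness(f, g, c, n1, n_max=10_000):
    """Check f(N) <= c*g(N) for all N in [n1, n_max) -- a finite
    sanity check of the Big-O condition, not a proof."""
    return all(f(n) <= c * g(n) for n in range(n1, n_max))

f = lambda n: n            # f(N) = N
g = lambda n: 200 * n      # G(N) = 200 * N

# Big-O needs only ONE constant to work: c = 1 gives N <= 200*N.
print(is_big_o_witness(f, g, c=1, n1=1))       # True

# Little-o would require f(N) < c*G(N) for EVERY c > 0;
# already c = 1/400 fails, since N <= 0.5*N is false for N >= 1.
print(is_big_o_witness(f, g, c=1 / 400, n1=1))  # False
```

This is the essential difference between the two notations: x = O(200 * x) holds, but x = o(200 * x) does not.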
2. Little-o notation
Little-o notation denotes a time complexity that is a strict upper bound for the function f(N): f(N) grows strictly slower than G(N).
f(N) = o(G(N)), where G(N) is the little-o bound and f(N) is the function we want to bound.
For every constant c > 0, there exists an N1 such that:
f(N) < c * G(N) for all N >= N1
Note that in Big-O, <= was used instead of <, and the condition needed to hold for only a single constant c.
Therefore, a little-o bound always grows strictly faster than the actual number of steps.
The following are true for little-o: