Time complexity and Big O
Time complexity is a measure of computational complexity that describes how much time an algorithm takes to run.
For example, suppose we have an array A and we write a function that needs to visit all of its elements, as in the sketch below.
This runs in linear time, so the time complexity is O(N): every single time we add an element to the array, there is one more thing to do; the work is proportional to the input size.
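A minimal sketch of this case (the array contents and the function below are illustrative assumptions, not the original example):

```python
# Illustrative array with assumed values.
A = [3, 1, 4, 1, 5, 9, 2, 6]

def visit_all(arr):
    # One visit per element: the work grows proportionally with len(arr),
    # so the time complexity is O(N).
    for value in arr:
        print(value)

visit_all(A)
```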
If we instead wanted to calculate, for the same array, all the possible non-repeated pairs (combinations, not permutations), we would need two nested loops, as in the sketch below.
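A sketch of how those pairs could be generated (the function name and the sample array are assumptions for illustration):

```python
def all_pairs(arr):
    # Pair each element with every element that comes after it, so each
    # unordered pair appears exactly once (combinations, not permutations).
    pairs = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            pairs.append((arr[i], arr[j]))
    return pairs

print(all_pairs([3, 1, 4, 1]))
# [(3, 1), (3, 4), (3, 1), (1, 4), (1, 1), (4, 1)]  -> N*(N-1)/2 = 6 pairs
```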
This time complexity is much higher than just O(N), because we produce more output.
In this case the number of operations is N·(N−1)/2, i.e. N²/2 − N/2, because we have this sort of sliding window in which every position ends up being visited multiple times.
Excluding the constant factors and the non-dominant terms, the final complexity is O(N²).
This quadratic function grows very quickly; its graph becomes extremely steep as the input size increases.
After a certain value of N (known as the threshold), the quadratic complexity will always be worse (meaning it will always require more execution time) than a linear complexity like O(N).
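For instance, with purely illustrative constants, compare a linear cost of 100·N with a quadratic cost of N²: at N = 50 the costs are 5,000 versus 2,500, so the quadratic is still cheaper, but at N = 1,000 they are 100,000 versus 1,000,000. Past the threshold N = 100, the quadratic algorithm is always the slower one.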
Space complexity
Space complexity refers to the amount of auxiliary memory an algorithm requires relative to the input size, N.
Consider our initial array. If the goal is to square every element, there are two options (see the sketch after this list):
- Non-in-place solution: If we create a new array (of size N) to store the results, we are allocating extra memory that is proportional to the input size. This scenario results in an O(N) (linear) space complexity, because the required space grows as N grows.
- In-place solution: If, instead, we modify the original array by applying the square operation directly to its elements, we are not allocating any significant extra memory. This is considered a constant-space solution, and its space complexity is O(1) (constant).
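A minimal sketch of both approaches (function names and the sample array are assumed for illustration):

```python
def squared_copy(arr):
    # Non-in-place: allocates a new list of the same size as the input,
    # so the auxiliary space grows with N -> O(N) space complexity.
    return [x * x for x in arr]

def square_in_place(arr):
    # In-place: overwrites the existing elements, using only a fixed
    # amount of extra memory regardless of N -> O(1) space complexity.
    for i in range(len(arr)):
        arr[i] = arr[i] * arr[i]

A = [3, 1, 4, 1, 5]
B = squared_copy(A)   # A is unchanged, B is a new O(N)-sized list
square_in_place(A)    # A is modified directly, no extra array needed
```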
In summary, the space complexity is constant only when the amount of extra memory used does not depend on the size of the input, N.