kevin

Reputation: 147

Time complexity vs Space complexity

I'm trying to understand the difference between time and space complexity from a short algorithm. This code takes a list/array and returns the single number that occurs an odd number of times. In Python:

def foo(array_of_ints):
    hash_table = dict()
    for i in array_of_ints:
        hash_table[i] = hash_table.setdefault(i, 0) + 1
    for key, value in hash_table.items():
        if value % 2 != 0:
            return key

There are two for loops that iterate over the length of the array. But the first loop simply creates a hash table/dictionary in memory. So is the time complexity O(n^2), because we twice iterate over an array of length n, or is the time complexity and space complexity each O(n), since the first loop simply creates a new data structure in memory?

Upvotes: 0

Views: 2648

Answers (3)

code_dredd

Reputation: 6085

So is the time complexity O(n^2), because we twice iterate over an array of length n

The time complexity of your code is not O(N²).

Iterating over a collection of length N is considered to be O(N) in time complexity. The reason is that, in Big-Oh notation, you're always interested in the term that dominates the function. When you reach large-enough N, the constants start to become less influential on overall performance.

This is not to say that they are "not important"; only that, as N tends to infinity, the effect of these constants becomes analogous to adding another drop or bucket of water to the ocean. The notation is meant to give you a rough (not exact) understanding of what behavior to expect from the algorithm, i.e. how well it scales as the size of the input grows.

This means that your function could iterate over the same collection 5 times and it would be O(5N), which is still O(N).

But how do you get O(N²) then? You start to see it when you nest loops inside one another.

Example A - O(N)

def linear(integers):
    # no nesting
    for i in integers: print(i)
    for i in integers: print(i+1)

Example B - O(N²)

def quadratic(integers):
    # notice double nesting
    for i in integers:
        for j in integers:
            print(i+j)

Example C - O(N³)

def cubed(integers):
    # notice triple-nesting
    for i in integers:
        for j in integers:
            for k in integers:
                print(i+j+k)

You'll find examples of O(N³) algorithms if you work with matrices, at least if you're using naive implementations. In case you're not clear, this is called asymptotic notation.
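As a concrete illustration of that matrix case (a sketch I'm adding, not part of the original answer), here is the textbook naive multiplication of two N×N matrices, where the three nested loops each run N times:

```python
def matmul_naive(A, B):
    """Multiply two N x N matrices (lists of lists) the naive way.
    Three nested loops over N make this O(N^3) in time."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # N iterations
        for j in range(n):      # x N
            for k in range(n):  # x N  => N^3 basic operations
                C[i][j] += A[i][k] * B[k][j]
    return C
```
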

Also note that Big-Oh notation expresses an upper bound on an algorithm's running time. This means that it's a measure of its worst-case scenario.

For example, linearly searching for a non-existent item in a list will get your algorithm to hit its upper bound of O(N) because it has to check each element in the list.
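That linear search looks like this (a minimal sketch of my own, not from the original answer). When the target is absent, the loop runs to the end, visiting all N elements:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.
    A missing target forces a check of all N elements: the O(N) worst case."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1
```
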

or is the time complexity and space complexity each O(n), since the first loop simply creates a new data structure in memory?

What the loop is doing is itself not relevant to the measurement in this case. What's relevant are the operations that dominate here: the comparisons and increments that make the loops work. For example, value % 2 != 0 is a constant-time operation¹, or O(1), and won't make any substantial contribution to the running time of your code.

So what is the space-complexity?

The space required will also depend on the size and content of the input. The worst case for your algorithm is an array of distinct integers, meaning that each value is unique.

If each value is unique, then each value will cause a key/value pair to be added. Therefore, the algorithm requires O(N) space.

Note that it could be less, but big-O notation communicates an upper bound.
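A quick way to see both cases (a sketch I'm adding, reusing the question's counting loop): the table holds one entry per distinct value, so N distinct inputs give N entries, while heavily repeated inputs give far fewer.

```python
def count_occurrences(array_of_ints):
    """Build the same frequency table as foo(). Its size equals the
    number of distinct values, which is N in the worst case."""
    hash_table = {}
    for i in array_of_ints:
        hash_table[i] = hash_table.get(i, 0) + 1
    return hash_table

# All-distinct input: one entry per element (the O(N) worst case).
# Repeated input: far fewer entries than elements.
```
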

Just keep in mind that, usually, there's also a trade-off between time/space constraints, where faster algorithms may require more memory and more memory-efficient alternatives may require more time.
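To illustrate that trade-off on this very problem (a hypothetical alternative of my own, not from the answer): sorting the input and scanning for a run of odd length avoids the hash table's O(N) extra space at the cost of O(N log N) time. I'm treating the sort as in-place here; CPython's list.sort() may still use some temporary workspace, so read this as a sketch of the idea.

```python
def odd_one_by_sorting(array_of_ints):
    """Find the value that occurs an odd number of times without a
    hash table: sort (mutating the input), then count equal runs.
    Trades the O(N) table for O(N log N) sorting time."""
    array_of_ints.sort()              # equal values become adjacent
    i, n = 0, len(array_of_ints)
    while i < n:
        j = i
        while j < n and array_of_ints[j] == array_of_ints[i]:
            j += 1                    # advance past the run of equal values
        if (j - i) % 2 != 0:          # run length is odd
            return array_of_ints[i]
        i = j
```
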


¹This assumes that we've defined arithmetic operations like +, -, /, *, %, etc. as basic operations, which is commonly done.

Upvotes: 4

David Morton

Reputation: 74

In big O notation, 2n and n are not considered different, so the algorithm has O(n) behavior. To have O(n^2) behavior, your algorithm would have to traverse/construct the entire array once for every element in the array. Another way of saying it: you'd need nested loops to achieve O(n^2).

Upvotes: 0

Mangu Singh Rajpurohit

Reputation: 11420

Both the time and space complexity are O(n).

Upvotes: -1
