Auxiliary Space Complexity of Dictionaries whose Values are Iterables of Variable Size

Recently, I began delving into complexity analysis with dictionaries. More specifically, I have been looking at auxiliary space complexity. For the most part, this type of analysis has been straightforward. For instance, suppose we have the following function foo, which takes in an integer n:

def foo(n):
    d = {}

    # Insert n key-value pairs, one per integer in [0, n).
    for i in range(n):
        d[i] = 0

    return d

In a simple example like this, it is clear that the auxiliary space complexity is O(n), since the function creates a dictionary containing n key-value pairs. Most problems I have encountered follow this model, where each key maps to a single "scalar" object (usually a number).
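As a quick sanity check (a rough sketch of my own, not part of the original problem), we can watch both the entry count and the dictionary's own footprint grow with n. Note that sys.getsizeof reports only the dict's internal table, not the objects it references, and it grows in resize-sized steps rather than perfectly smoothly:

import sys

# Both the entry count and the dict's own allocation grow roughly linearly in n.
for n in (10, 100, 1000, 10000):
    d = foo(n)
    print(n, len(d), sys.getsizeof(d))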

What I find more complicated, however, is reasoning about dictionaries whose keys map to iterables that we extend or modify throughout the algorithm.

Suppose, for instance, that we are considering the function bar, which takes in an array of integers:

def bar(nums):
    d = {}

    for num in nums:
        # Group each number under its own key, creating the list on first sight.
        if num not in d:
            d[num] = []
        d[num].append(num)

    return d

In such a scenario, how do we discuss auxiliary space complexity? My guess is to view each key as a data "unit" and each element in each created list as a data "unit". Note that every element of the input lands in exactly one list, so the lists always hold N elements in total (where N is the size of the input list), no matter how they are distributed across keys. In the worst case, when all inputs are distinct, we also have N keys, each mapping to a list of length 1. That gives 2N data "units" in total (N keys plus N list elements), which is just O(N).
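To make that counting concrete, here is a small sketch (count_units is a hypothetical helper of my own, not standard terminology) that tallies both kinds of "units" for bar's output and checks that their total never exceeds 2N:

def count_units(d):
    # One "unit" per key, plus one per element stored across the value lists.
    return len(d) + sum(len(lst) for lst in d.values())

nums = [3, 1, 3, 7, 1, 3]
grouped = bar(nums)
# The lists always hold len(nums) elements in total; the number of keys is
# at most len(nums), so the unit count is bounded by 2 * len(nums).
assert count_units(grouped) <= 2 * len(nums)
print(count_units(grouped))  # 9 here: 3 keys + 6 list elements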

Is there a more rigorous way of thinking about auxiliary space complexity in these types of scenarios? Or should I continue to think of each key as a data "unit" and each element in each iterable as a data "unit"?
