Why is the running time of disjoint-sets calculated in terms of number of operations instead of size of input?


CLRS says "we shall analyze the running time of disjoint-set in terms of two parameters:

  1. n, the number of MAKE-SET operations
  2. m, the total number of MAKE-SET, UNION and FIND-SET operations"

Why is this different from most analyses of other algorithms, where the complexity is calculated in terms of input size?

arunk2 (accepted answer):

Any algorithm using the disjoint-set data structure will use these three operations, so we analyze the running time of a whole sequence of operations, with respect to the input size.

Typically,

  1. We start by creating n sets (one for every item in our input) with n MAKE-SET operations.
  2. We then perform MAKE-SET, UNION, and FIND-SET operations as the algorithm requires. Suppose the total number of operations is bounded by some m (including the initial n MAKE-SET operations).

These are the two numbers n and m described in CLRS. To rephrase:

  • n - the actual input size (the number of MAKE-SET operations that create the initial singleton sets)
  • m - the total number of operations (MAKE-SET, UNION, and FIND-SET) performed by the algorithm

Per Theorem 21.14:

Performing m operations on a disjoint-set forest with union by rank and path compression has a worst-case running time of O(m · α(n)), where α is the very slowly growing inverse Ackermann function.
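For concreteness, here is a minimal Python sketch of a disjoint-set forest with both heuristics; the class and method names are illustrative choices, not taken from CLRS:

```python
# Disjoint-set forest with union by rank and path compression.
class DisjointSet:
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        # New singleton set: x is its own root, rank 0.
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        # Path compression: point x (and its ancestors) directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Union by rank: attach the shorter tree under the taller one.
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

# Here n = 4 (the MAKE-SET calls), while m counts every operation performed.
ds = DisjointSet()
for item in "abcd":
    ds.make_set(item)
ds.union("a", "b")
ds.union("c", "d")
print(ds.find_set("b") == ds.find_set("a"))  # True
print(ds.find_set("a") == ds.find_set("c"))  # False
```

Note that n alone says nothing about how many FIND-SET and UNION calls follow, which is why the bound is stated in terms of both n and m.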

Hope it helps!