Complexity measurement of NP-complete problems


For example, the set-cover decision problem is known to be an NP-complete problem. The input of this problem is a universe U, a family S of subsets of U, and an integer k (the question being whether at most k of the sets in S suffice to cover U).

One thing that I'm confused about is that if we let k = 1, then obviously the problem can be solved in O(|S|) time, by simply checking each set in S. More generally, when k is a constant, the problem can be solved in time polynomial in |S|. So the running time blows up exponentially only when k is allowed to grow with the input, e.g. k = |S|/2, |S|/3, ...
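To make that observation concrete, here is a minimal Python sketch of the brute-force argument (the function and variable names are mine, purely for illustration): trying every k-subfamily of S costs O(|S|^k) set unions, which is polynomial for any fixed k but exponential once k grows with the input.

```python
from itertools import combinations

def set_cover_decision(universe, subsets, k):
    """Return True if some k of the given subsets cover the universe.

    Illustrative brute force: checks all (|S| choose k) = O(|S|^k)
    subfamilies, polynomial in |S| for any fixed constant k.
    """
    universe = frozenset(universe)
    for choice in combinations(subsets, k):
        covered = frozenset().union(*choice)   # union of the chosen sets
        if universe <= covered:                # all of U is covered
            return True
    return False

# k = 1 degenerates to a single linear scan over S:
U = {1, 2, 3}
S = [frozenset({1, 2}), frozenset({1, 2, 3})]
print(set_cover_decision(U, S, 1))  # True: {1, 2, 3} alone covers U
```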

So here are my questions:

  1. My current understanding is that the time complexity of an NP-complete problem is measured in terms of the WORST case. Can anybody please tell me whether this understanding is right?

  2. I saw somebody prove that another problem is NP-hard by showing that a set-covering decision problem with input <U, S, |U|/3> can be reduced to it. I'm just wondering why he proved it only for <U, S, |U|/3>, instead of <U, S, arbitrary k>? Is such a proof valid?

Many thanks!

1 Answer

Jakub Kotowski (accepted answer):

Time complexity is measured as a function of the size of the input instance, which can be measured in bits. The instance size grows when any of the inputs U, S, and k grows. So the question one is trying to answer is: how much more time does it take to solve an instance of size, say, 2n bits than an instance of size n bits?

So, put simply, the size of the whole input instance has to increase, and in this particular case that means increasing the size of U and/or S and/or k.
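As an illustration of "instance size in bits", here is one possible, purely hypothetical encoding (the function name is mine); any reasonable encoding differs from it by at most a polynomial factor:

```python
def instance_size_in_bits(U, S, k):
    """Rough bit size of a set-cover instance <U, S, k>.

    Hypothetical encoding: each element of U is a fixed-width integer,
    each set in S is a list of such integers, and k is one more integer.
    Assumes U is a nonempty set of non-negative ints.
    """
    width = max(1, max(U).bit_length())          # bits per element id
    bits_for_U = len(U) * width
    bits_for_S = sum(len(s) for s in S) * width  # every listed element
    bits_for_k = max(1, k.bit_length())
    return bits_for_U + bits_for_S + bits_for_k

U = set(range(9))
S = [frozenset({0, 1, 2}), frozenset({3, 4, 5}), frozenset({6, 7, 8})]
print(instance_size_in_bits(U, S, 3))  # grows if U, S, or k grows
```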

To answer your two questions:

  1. Yes, worst-case time complexity is used: for each input size n you look at the hardest instance of that size, and you correctly noticed that an instance (of the same size) generally becomes harder when you let several parameters grow at once rather than just one.
  2. It would be better to see the proof you are referring to, but the reasoning probably goes like this (see also the sketch after this list):

    I give a polynomial reduction from set-covering decision instances of size n to instances of my problem of size m. If the size of the set-covering instance grows to 2n, the reduction yields an instance of my problem of size 2m, because there is a direct correspondence between the sizes of U, S, and k and the size of my problem's input.

    So all set-covering decision instances of size n map to instances of my problem of size m. Thus, if I look for the hardest set-covering instance of size n, this reduction takes me to the hardest instance of my problem of size m.
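Schematically, and with purely hypothetical placeholder names for the reduction and the solver, the direction of that argument looks like this:

```python
def solve_set_cover_via_reduction(U, S, k, reduce_to_my_problem, solve_my_problem):
    """Illustrative only: both callables are hypothetical placeholders."""
    y = reduce_to_my_problem(U, S, k)  # polynomial time, |y| = poly(input size)
    return solve_my_problem(y)         # a fast solver here would make set cover fast
```

That is, a fast algorithm for "my problem" would yield a fast algorithm for set cover, which is what makes "my problem" NP-hard.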

EDIT

From the proof in your linked paper:

    Proof. We reduce an arbitrary 3-cover problem instance—in which we are given a universe U, a family S of subsets of U, such that each subset contains 3 elements, and we are asked whether we can (exactly) cover all of U using |U|/3 elements of S—to a game with homogeneous resources and schedules of size 3.

As you correctly say, they need to handle all instances of the problem they reduce from. But they are reducing from a different problem: the Exact 3-Cover problem, which is proven NP-complete in Garey, M. R. and Johnson, D. S., Computers and Intractability (1979).

The Exact 3-Cover problem is the set-cover decision problem restricted to |U| = 3t, with S a collection of 3-element subsets of U; the target k is implicitly fixed to |U|/3 = t, since t three-element sets can cover U only if they are pairwise disjoint, i.e. form an exact cover.
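A small sketch of that correspondence (the function name is mine, for illustration only): answering the set-cover decision question <U, S, |U|/3> on such a restricted instance is the same as answering Exact 3-Cover.

```python
from itertools import combinations

def exact_3_cover(U, S):
    """Exact 3-Cover by brute force, i.e. set cover with k = |U| // 3.

    Assumes |U| = 3t and every set in S has exactly 3 elements; then
    t chosen sets cover U if and only if they are pairwise disjoint,
    so covering with |U|/3 sets is the same as exactly covering U.
    """
    assert len(U) % 3 == 0 and all(len(s) == 3 for s in S)
    t = len(U) // 3
    for choice in combinations(S, t):
        if frozenset().union(*choice) == frozenset(U):
            return True
    return False

U = {1, 2, 3, 4, 5, 6}
S = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({4, 5, 6})]
print(exact_3_cover(U, S))  # True: {1,2,3} and {4,5,6} exactly cover U
```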