Is the class of poly-time functions recursively enumerable?


Define the poly-time functions as the functions that are computable by a Turing machine in at most polynomial(n) time, where n is the size of the input. Is the class of these functions recursively enumerable?


There are 3 answers

btilly

I'm pretty sure that the answer is no. A program to recursively enumerate Poly-time functions would have to at some point tell you whether a function that solves the traveling salesman problem is Poly-time. Whether or not that specific question is answerable is at this time still open.

I don't know what I was smoking with that terrible answer. If the traveling salesman problem is not Poly-time, the program never has to discover that fact. It just has to never get around to listing any solutions to that. Which is easy because it will never find one.

One important detail that I am unclear on is how you are representing functions, and what you consider a unique function.

Suppose that "function" means "program that runs in Poly-time", and you want to list all programs, regardless of whether they produce the same output. Then your question comes down to: "If a given program is Poly-time, is there always a proof of that fact?" That is clearly false. You can have a program that searches, for a polynomial amount of time, for a proof of some open question, and if it finds one, wastes exponential time before producing its final output. This program is Poly-time, but you can't prove it without proving that the open question is false.

Now suppose that "function" means "rule that associates inputs to outputs", and you're not OK with listing multiple programs that encode the same function. Modify the previous pathological program so that, if the proof is discovered, it changes its output rather than wasting time. Now you can prove that this program is Poly-time, but you can't prove whether it represents a different function from one that skips the whole proof step (and the possible output change).

Finally, suppose that "function" means "rule that associates inputs to outputs", and you are OK with listing multiple programs that encode the same function, but don't want every one. Suppose that prog is a program that may or may not be Poly-time and p(x) is a polynomial. It is easy to write a second program that emulates the first for p(x) steps and, if the first is still running, emits some fixed output. This second program is guaranteed to be Poly-time. If prog is in fact Poly-time, then some program of this form will represent the same function that prog does, and the list of outputs will therefore include every possible Poly-time function. (But the same function will be encoded in lots of different ways.)
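The clocked-wrapper construction in the last paragraph can be sketched in Python. This is a hypothetical sketch: the names `clocked` and `FIXED_OUTPUT`, and the convention that a "program" is a generator yielding once per step, are all illustrative assumptions, not part of any real machine model.

```python
FIXED_OUTPUT = None  # the fixed output emitted when the clock runs out

def clocked(prog, p):
    """Return a function that emulates `prog` for p(len(x)) steps.

    `prog` is modelled as a generator that yields once per "step"
    and finally returns its result, so that steps can be counted.
    """
    def wrapped(x):
        budget = p(len(x))         # polynomial step budget
        gen = prog(x)
        try:
            for _ in range(budget):
                next(gen)          # run one simulated step
        except StopIteration as done:
            return done.value      # prog finished within the budget
        return FIXED_OUTPUT        # clock expired: emit the fixed output
    return wrapped

# A toy "program" that takes len(x) steps, then returns its answer.
def length_prog(x):
    for _ in x:
        yield
    return len(x)

fast = clocked(length_prog, lambda n: n * n + 1)  # generous clock
print(fast("hello"))   # -> 5: the wrapped program finishes in time
```

By construction `wrapped` always halts within a polynomial number of simulated steps, and whenever `prog` itself is poly-time, some wrapper of this form computes exactly the same function as `prog`.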

Saiiiira

The answer is yes; I've also found the proof in a book. Thanks to all, the directions given helped me a lot :)

Andrea Asperti

It is relatively easy to give an enumeration of all functions computable in polynomial time: just give an enumeration pi_j of all polynomials, for j in N, and then consider, for each pair (i,j), the machine that works like the Turing machine M_i but with clock pi_j. Any function computable in polynomial time can be expressed in such a way.
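The enumeration of pairs (i,j) can be sketched by dovetailing over the diagonals of N x N. This is a minimal illustration under the assumption of some fixed Godel numberings M_i of machines and pi_j of polynomials; only the index pairs are generated here.

```python
from itertools import islice

def clocked_machine_indices():
    """Yield every pair (i, j) in N x N, diagonal by diagonal.

    The (i, j)-th entry stands for "machine M_i run with clock pi_j"
    (M_i and pi_j are assumed enumerations, not constructed here).
    """
    d = 0
    while True:
        for i in range(d + 1):
            yield (i, d - i)       # i + j == d on the d-th diagonal
        d += 1

# The first six index pairs, in enumeration order.
pairs = list(islice(clocked_machine_indices(), 6))
print(pairs)   # -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Every pair (i,j) appears after finitely many steps, so every clocked machine, and hence every polynomial-time computable function, occurs somewhere in the list.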

This is deeply different from the problem of deciding whether a generic Turing machine works in polynomial time, which is undecidable. The proof is not entirely trivial, since running in polynomial time is not an extensional property of programs, so we cannot apply Rice's theorem. You may find a proof in my article "The intensional content of Rice's theorem", POPL 2008 (pearl).

The problem of providing syntactical characterizations of subrecursive complexity classes such as P, Pspace, etc. has received much attention in the literature. The recent field of Implicit Complexity aims at studying the computational complexity of programs with no reference to a particular machine model and explicit bounds on time or memory, but instead relying on logical or computational principles that entail complexity properties, typically via a controlled use of the available resources. For an introduction to this topic, you may consult the special issue on Implicit Computational Complexity, ACM Trans. Comput. Log., Vol 10, n.4 2009.

Other interesting characterizations have been obtained by restricting the interpretation of programming languages to finite domains. The seminal result in this area is an old work of Gurevich ("Algebras of feasible functions", FOCS 1983), where he proved that by interpreting primitive recursive functions (resp. recursive functions) over finite structures one gets precisely the log-space (resp. polynomial time) computable functions.

Please have a look at my article "Computational Complexity via Finite Types", ACM Trans. Comput. Log., Vol 16, n.15 2015, for more references in this area.