Proving the inexpressibility of a function in a given language


I'm currently reading through John C. Mitchell's Foundations for Programming Languages. Exercise 2.2.3, in essence, asks the reader to show that the (natural-number) exponentiation function cannot be implicitly defined via an expression in a small language. The language consists of natural numbers and addition on said numbers (as well as boolean values, a natural-number equality predicate, & ternary conditionals). There are no loops, recursive constructs, or fixed-point combinators. Here is the precise syntax:

<bool_exp> ::= <bool_var> | true | false | Eq? <nat_exp> <nat_exp> |
               if <bool_exp> then <bool_exp> else <bool_exp>
<nat_exp>  ::= <nat_var> | 0 | 1 | 2 | … | <nat_exp> + <nat_exp> |
               if <bool_exp> then <nat_exp> else <nat_exp>
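For concreteness, here is a sketch of what evaluating this language could look like, with expressions encoded as nested tuples. The representation and the function name `eval_exp` are my own choices, not from the book.

```python
# Minimal interpreter sketch for the grammar above.
# Expressions are nested tuples, e.g.
#   ("if", ("eq", ("var", "n"), ("lit", 0)), ("lit", 1), ("var", "n"))

def eval_exp(exp, env):
    tag = exp[0]
    if tag == "lit":    # numeric or boolean literal: 0, 1, 2, ..., true, false
        return exp[1]
    if tag == "var":    # <nat_var> or <bool_var> lookup
        return env[exp[1]]
    if tag == "add":    # <nat_exp> + <nat_exp>
        return eval_exp(exp[1], env) + eval_exp(exp[2], env)
    if tag == "eq":     # Eq? <nat_exp> <nat_exp>
        return eval_exp(exp[1], env) == eval_exp(exp[2], env)
    if tag == "if":     # if <bool_exp> then ... else ...
        branch = exp[2] if eval_exp(exp[1], env) else exp[3]
        return eval_exp(branch, env)
    raise ValueError(f"unknown tag: {tag}")

# Example: "if Eq? n 0 then 1 else n + n"
e = ("if", ("eq", ("var", "n"), ("lit", 0)), ("lit", 1),
     ("add", ("var", "n"), ("var", "n")))
print(eval_exp(e, {"n": 0}))  # 1
print(eval_exp(e, {"n": 3}))  # 6
```

Note that evaluation always terminates in one pass over the expression, which is exactly why there is no obvious way to iterate.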

Again, the object is to show that the exponentiation function n^m cannot be implicitly defined via an expression in this language.

Intuitively, I'm willing to accept this. If we think of exponentiation as repeated multiplication, it seems like we "just can't" express that with this language. But how does one formally prove this? More broadly, how do you prove that an expression from one language cannot be expressed in another?



Answer by Ryan Culpepper (accepted):

No solution, only hints:

First, let me point out that if there are finitely many numbers in the language, then exponentiation is definable as an expression. (You'd have to define what it should produce when the true result is unrepresentable, e.g. wraparound.) Think about why.

Hint: Imagine that there are only two numbers, 0 and 1. Can you write an expression involving m and n whose result is n^m? What if there were three numbers: 0, 1, and 2? What if there were four? And so on...
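To make the two-number hint concrete: over the domain {0, 1}, one candidate is the expression `if Eq? m 0 then 1 else n` (my own candidate, taking 0^0 = 1). A quick exhaustive check, sketched in Python:

```python
# Over the domain {0, 1}, "if Eq? m 0 then 1 else n" computes n^m.
# This is a hypothetical partial solution in the sense of the hint,
# not from the original text.
def partial_solution_1(n, m):
    return 1 if m == 0 else n   # mirrors: if Eq? m 0 then 1 else n

# Exhaustively verify over the finite domain.
for n in (0, 1):
    for m in (0, 1):
        assert partial_solution_1(n, m) == n ** m
print("correct on all of {0,1} x {0,1}")
```

The same trick (a fixed nest of conditionals doing case analysis on the finitely many inputs) works for any finite domain, which is the point of the hint: each such expression has a fixed size, and the size needed grows with the domain.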

Why don't any of those solutions work? Let's index them and call the solution for {0,1} partial_solution_1, the solution for {0,1,2} partial_solution_2, and so on. Why isn't partial_solution_n a solution for the set of all natural numbers?

Maybe you can generalize that somehow with some metric f : Expression -> Nat so that every expression expr with f(expr) < n is wrong somehow...

You may find some inspiration from the strategy of Euclid's proof that there are infinitely many primes.

Answer by kaya3:

Here's a simple way to think about it: the expression has a fixed, finite size, and the only arithmetic operation it can use to produce numbers beyond its literals and its inputs is addition. So the largest number it can possibly produce is at most (the number of additions plus 1) times the largest number involved, i.e. the largest literal in the expression or input value.

So, given a proposed expression, let k be the number of additions in it, let c be the largest literal (or 1 if there is none) and choose m and n such that n^m > (k+1)*max(m,n,c). Then the result of the expression for that input cannot be the correct one.
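The counting argument can be sketched numerically. The values of k and c below are hypothetical, standing in for some proposed expression:

```python
# Sketch of the bound from the answer: an expression with k additions and
# largest literal c evaluates, on inputs n and m, to at most
# (k + 1) * max(m, n, c).
def value_bound(k, c, n, m):
    return (k + 1) * max(m, n, c)

# Hypothetical candidate expression: say k = 10 additions, largest literal c = 100.
k, c = 10, 100

# Choose inputs whose true power exceeds the bound.
n, m = 2, 20
assert n ** m > value_bound(k, c, n, m)   # 1048576 > 1100
print(f"n^m = {n ** m} exceeds the bound {value_bound(k, c, n, m)}")
```

Since such n and m exist for every finite k and c (n^m grows without bound while the bound is fixed once the inputs are capped by max(m, n, c) times a constant), no single expression can be correct on all inputs.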

Note that this proof relies on the language allowing arbitrarily large numbers, as noted in the other answer.