I've read about the Curry-Howard Correspondence. If I understood it correctly, it says that propositions in propositional logic correspond to types, and a proposition is provable exactly when its type is inhabited. The easiest way to show that a type is inhabited is to provide an expression of that type, so proofs correspond to expressions. That's neat: if I have to solve an exercise sheet in propositional logic and am not sure whether my solution is correct, I can translate it into an expression in my favourite programming language and let the compiler check it. If the compiler accepts it, my reasoning was correct.
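For instance, this is how I picture it (an illustrative sketch; the names are mine): a polymorphic type plays the role of the proposition, with type variables standing for propositional variables, and the definition is the proof. If GHC accepts the definitions, the propositions are (intuitionistic) tautologies.

```haskell
-- (A -> B) -> (B -> C) -> (A -> C): hypothetical syllogism.
hypotheticalSyllogism :: (a -> b) -> (b -> c) -> (a -> c)
hypotheticalSyllogism f g = g . f

-- (A /\ B -> C) -> (A -> B -> C): conjunction corresponds to pairing.
exportation :: ((a, b) -> c) -> (a -> b -> c)
exportation f x y = f (x, y)
```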
The sad thing is, I have to come up with the proof myself. It would certainly be more impressive if the computer could prove it for me; i.e., a theorem prover should be able to establish a provable proposition by repeatedly applying inference rules to the axioms until it reaches the theorem. Termination is, of course, not guaranteed, but if the process terminates, we have a proof.
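To make precise what I mean by "repeatedly applying inference rules", here is a naive sketch (all names invented by me) of a forward-chaining search that saturates a set of implicational formulas under modus ponens and then checks whether the goal was derived. This particular search always terminates, because every derived formula is a subformula of an axiom; with richer rule sets that guarantee is lost.

```haskell
import qualified Data.Set as Set

data Formula = Var String | Formula :->: Formula
  deriving (Eq, Ord, Show)
infixr 1 :->:

-- One round of modus ponens: from a :->: b and a, derive b.
step :: Set.Set Formula -> Set.Set Formula
step known = Set.union known (Set.fromList
  [ b | a :->: b <- Set.toList known, a `Set.member` known ])

-- Keep applying the rule until nothing new is derived.
saturate :: Set.Set Formula -> Set.Set Formula
saturate known
  | known' == known = known
  | otherwise       = saturate known'
  where known' = step known

proves :: [Formula] -> Formula -> Bool
proves axioms goal = goal `Set.member` saturate (Set.fromList axioms)

-- Example: from P, P -> Q and Q -> R we can derive R.
main :: IO ()
main = print (proves [Var "P", Var "P" :->: Var "Q", Var "Q" :->: Var "R"]
                     (Var "R"))
```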
Can I make a compiler do that for me? I.e., can I write a program such that compiling it may or may not terminate; if it does not terminate, I learn nothing, but if it does, the theorem encoded by my program is proved?
NB: I guess this is not fun in imperative languages; Haskell-related answers are particularly appreciated.
In Haskell you can use the functionality provided by Data.Type.Equality and Data.Type.Bool to turn the compiler into a theorem prover for simple propositions:
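For example, here is a minimal sketch (the concrete formula is just an illustration): GHC accepts `Refl` at type `lhs :~: rhs` only if both sides reduce to the same type, so type checking amounts to evaluating the closed boolean formula at the type level.

```haskell
{-# LANGUAGE DataKinds     #-}
{-# LANGUAGE TypeOperators #-}

import Data.Type.Bool     -- type-level Not, (&&), (||)
import Data.Type.Equality -- (:~:) and Refl

-- An instance of De Morgan's law at concrete truth values.  Both sides
-- reduce to 'True, so Refl is accepted; type checking is the proof.
deMorgan :: Not ('True && 'False) :~: (Not 'True || Not 'False)
deMorgan = Refl

main :: IO ()
main = deMorgan `seq` putStrLn "GHC accepted the proof"
```

Note that this only handles closed formulas, which the type families can fully reduce; a statement quantified over all type-level booleans needs additional machinery.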