It has always been hard for me to understand why OWL-DL reasoning is so efficient in real-world applications.
For example, I often use ontologies in the ALCQI fragment, processed by the FaCT++ reasoner, and it is well known that for this logic concept satisfiability is PSPACE-complete and logical implication is EXPTIME-complete.
Here, on slide 29, it is clearly stated that:
> the state-of-the-art DL reasoning systems are based on tableaux techniques and not on automata techniques
>
> + easier to implement
> - not computationally optimal (NEXPTIME, 2NEXPTIME)
> --> the systems are highly optimized
>
> despite the high computational complexity, the performance is surprisingly good in real-world applications:
>
> - knowledge bases with thousands of concepts and hundreds of axioms
> - outperform specialized modal logic reasoners
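For reference, my mental model of the "tableaux techniques" mentioned on the slide is a satisfiability search like the toy sketch below, for plain ALC concepts without a TBox (my own simplified encoding, certainly not FaCT++'s actual implementation; real reasoners additionally handle TBoxes with blocking, qualified number restrictions, inverse roles, and many optimizations):

```python
# Toy tableau satisfiability check for plain ALC concepts (no TBox).
# Concepts are nested tuples in negation normal form:
#   ("atom", "A")            atomic concept A
#   ("not", ("atom", "A"))   negated atom (NNF: negation only on atoms)
#   ("and", C, D), ("or", C, D)
#   ("some", "r", C)         exists r.C
#   ("all", "r", C)          forall r.C

def satisfiable(label):
    """Is the conjunction of the concepts in `label` satisfiable?"""
    label = frozenset(label)
    # clash check: an atom together with its negation
    for c in label:
        if c[0] == "not" and c[1] in label:
            return False
    # and-rule: put both conjuncts into the node label
    for c in label:
        if c[0] == "and" and not (c[1] in label and c[2] in label):
            return satisfiable(label | {c[1], c[2]})
    # or-rule: branch on the disjuncts -- the source of nondeterminism
    for c in label:
        if c[0] == "or" and c[1] not in label and c[2] not in label:
            return satisfiable(label | {c[1]}) or satisfiable(label | {c[2]})
    # some-rule: build an r-successor, propagating matching forall-restrictions
    for c in label:
        if c[0] == "some":
            _, role, filler = c
            successor = {filler} | {d[2] for d in label
                                    if d[0] == "all" and d[1] == role}
            if not satisfiable(successor):
                return False
    return True

A = ("atom", "A")
# exists r.A  and  forall r.(not A)  -- unsatisfiable
print(satisfiable([("some", "r", A), ("all", "r", ("not", A))]))  # False
# exists r.A  and  forall r.A        -- satisfiable
print(satisfiable([("some", "r", A), ("all", "r", A)]))           # True
```

The or-rule branching is where the exponential behaviour comes from, and as far as I understand it is exactly the point that optimizations like dependency-directed backtracking, caching, and absorption attack; but I still don't see why this works so well on real ontologies.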
On the one hand, it is proven that these systems are not computationally optimal; on the other hand, they are efficient in real-world applications even on large inputs. Yet I can't find sources online that explain why this is possible and how it is achieved.
Evidently I'm missing a step here, and it's the step I'm really trying to understand.
Does anybody here have an idea?