I am new to semantic technologies. I have created some OWL classes and am running the Pellet reasoner to check for inconsistencies. This is a snippet of what I have created so far:
```xml
<owl:NamedIndividual rdf:about="&object_test;obj_5678">
    <rdf:type rdf:resource="&object_test;WorkPiece"/>
    <xyz:widthOfObject rdf:datatype="&xsd;float">0.1</xyz:widthOfObject>   <!-- X -->
    <xyz:depthOfObject rdf:datatype="&xsd;float">0.1</xyz:depthOfObject>   <!-- Y -->
    <xyz:heightOfObject rdf:datatype="&xsd;float">0.2</xyz:heightOfObject> <!-- Z -->
</owl:NamedIndividual>

<owl:NamedIndividual rdf:about="&xyz;PQR_WorkPiece_5678">
    <rdf:type rdf:resource="&xyz;PQR"/>
    <xyz:eventOccursAt rdf:resource="&object_test;Transform_5678"/>
    <xyz:startTime rdf:resource="&object_test;timepoint_0"/>
    <xyz:objectActedOn rdf:resource="&object_test;obj_5678"/>
</owl:NamedIndividual>

<owl:NamedIndividual rdf:about="&object_test;Transform_5678">
    <rdf:type rdf:resource="&xyz_paramserver;Transform"/>
    <xyz:quaternion rdf:datatype="&xsd;string">0.0 0.0 1.0 0.0</xyz:quaternion>
    <xyz:translation rdf:datatype="&xsd;string">0.5 0.1 0.5</xyz:translation>
</owl:NamedIndividual>
```
When I run the Pellet reasoner with

```python
sync_reasoner_pellet(infer_property_values=True,
                     infer_data_property_values=True, debug=2)
```

it reports an inconsistency. This is the output of `pellet explain`:
```
Axiom: Thing subClassOf Nothing
Explanation(s):
1) Region subClassOf Abstract
   hasParticipant range Object
   hasRegionDataValue domain Region
   objectActedOn subPropertyOf preActor
   SemanticMapPerception_WorkPiece_1234 objectActedOn obj_1234
   obj_1234 depthOfObject 0.02f
   depthOfObject subPropertyOf hasDepth
   actor subPropertyOf hasParticipant
   Abstract disjointWith Object
   hasShapeParameter subPropertyOf hasRegionDataValue
   preActor subPropertyOf actor
   hasDepth subPropertyOf hasShapeParameter
```
I am not sure how to read this output. What is this inconsistency?
This refers to a logical inconsistency: it is as if the ontology said that some shape is both a square and a circle, which is impossible. Something similar is happening in your ontology.
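To make this concrete, here is a minimal sketch of the square-and-circle case in Owlready2 (the library your `sync_reasoner_pellet` call comes from). The ontology IRI and the class and individual names are made up for illustration:

```python
from owlready2 import (get_ontology, Thing, AllDisjoint,
                       sync_reasoner_pellet,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/shapes.owl")  # hypothetical IRI

with onto:
    class Square(Thing): pass
    class Circle(Thing): pass
    AllDisjoint([Square, Circle])   # axiom: Square disjointWith Circle

    s = Square("my_shape")          # assertion: my_shape is a Square
    s.is_a.append(Circle)           # assertion: my_shape is also a Circle

try:
    sync_reasoner_pellet()          # Pellet finds the contradiction
except OwlReadyInconsistentOntologyError:
    print("Inconsistent: my_shape cannot be both a Square and a Circle.")
```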
An explanation for an inconsistency consists of a minimal set of axioms and assertions that must all hold for the ontology to be inconsistent. Because the set is minimal, removing any one of these axioms or assertions from the ontology makes it consistent again (assuming this is the only explanation for the inconsistency; it is possible for an inconsistency to have multiple explanations).
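You can see the "minimal set" property in action with the hypothetical shapes example above: the conflicting set is the disjointness axiom plus the two type assertions, and removing any one member restores consistency.

```python
# Continuing the hypothetical shapes example: drop one member of the
# minimal conflicting set and the contradiction disappears.
s.is_a.remove(Circle)   # my_shape is now only a Square
sync_reasoner_pellet()  # completes without raising an error
```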
From what you provided, you are creating some individuals based on existing ontologies. However, the individuals you showed (`obj_5678`, `PQR_WorkPiece_5678`, `Transform_5678`) are not referred to in the explanation, so they are not the cause of the inconsistency (again, assuming the explanation above is the only one).
For more clarity: in the explanation, lines such as `Region subClassOf Abstract`, `hasParticipant range Object`, and the various `subPropertyOf` statements are axioms (schema-level statements from the ontology), while `SemanticMapPerception_WorkPiece_1234 objectActedOn obj_1234` and `obj_1234 depthOfObject 0.02f` are assertions (statements about individuals). Reading the chain: `depthOfObject` is a subproperty of `hasDepth`, which is a subproperty of `hasShapeParameter`, which is a subproperty of `hasRegionDataValue`, whose domain is `Region`; so the assertion `obj_1234 depthOfObject 0.02f` forces `obj_1234` to be a `Region`, and therefore an `Abstract`. Likewise, `objectActedOn` is (via `preActor` and `actor`) a subproperty of `hasParticipant`, whose range is `Object`; so the assertion `SemanticMapPerception_WorkPiece_1234 objectActedOn obj_1234` forces `obj_1234` to be an `Object`. Since `Abstract` and `Object` are disjoint, `obj_1234` cannot exist, which makes the whole ontology inconsistent (`Thing subClassOf Nothing`).
To figure this out, I suggest the following:

1. Based on the explanation, the problematic individuals seem to be `obj_1234` and `SemanticMapPerception_WorkPiece_1234`. Remove these, at least temporarily, and re-run the reasoner (see the sketch after this list for one way to do that programmatically). If this is the only explanation, your ontology should now be consistent, which means the way you made assertions about these individuals is incorrect. Hopefully you can find documentation on how to use the ontology, or you can contact its creators.
2. If there are multiple explanations, remove all your assertions (at least temporarily) and re-run the reasoner. If the ontology is still inconsistent, there is a problem with the axioms of the ontology itself, which you will need to take up with its creators.
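For completeness, here is one way to do the remove-and-re-run experiment programmatically in Owlready2. This is only a sketch: the ontology path is a placeholder, and the individual names are taken from your explanation output.

```python
from owlready2 import (get_ontology, destroy_entity,
                       sync_reasoner_pellet,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("file://my_ontology.owl").load()  # placeholder path

# Find and destroy the suspect individuals; search() matches IRIs by
# wildcard and returns an empty list if nothing matches.
for name in ("obj_1234", "SemanticMapPerception_WorkPiece_1234"):
    for entity in onto.search(iri="*" + name):
        destroy_entity(entity)

try:
    sync_reasoner_pellet()
    print("Consistent now: the assertions about these individuals "
          "were the problem.")
except OwlReadyInconsistentOntologyError:
    print("Still inconsistent: look for more explanations, or a "
          "problem in the ontology's axioms.")
```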