Reasoning and Inference


Reasoners are tools that can perform inference tasks based on data in a knowledge base. This allows you to infer new knowledge from existing knowledge, or to identify inconsistencies in that knowledge. Typically, the rules within a reasoner are based on controlled vocabularies such as RDFS (for set-based inference) or OWL DL (for ontology-based inference), or a combination of these (e.g. SWRL). These controlled vocabularies are a key enabler for inference - if someone associates two resources with the axiom owl:sameAs, the fact that IRIs are unique guarantees that there is only one thing in the world that the statement can possibly mean!

Some inference is relatively straightforward - for example, one triple might allow us to infer the existence of another - but interactions between the various OWL and RDFS constructs can allow some quite advanced inference to happen across many separate assertions. Because of this, reasoning can become computationally expensive.

Unlike statistical machine learning techniques, inferences made by semantic web reasoners are not predictions - a reasoner never "guesses", it only tells you about things it has inferred when it knows they are true based on your model and the information you've asserted to be true already. Additionally, inferences made by a reasoner are always fully explainable - this is very difficult to do with statistical machine learning and is an area of ongoing research.

Any triple that has been inferred is known as an "inferred triple", as opposed to an "asserted triple", which is one you have stated directly. An inferred triple may well go on to become an asserted triple - that is up to the user.

Simple Example of OWL Reasoning

Given the following triples:

:andy :fatherOf :bob .
:fatherOf owl:inverseOf :childOf .

A reasoner that interprets the definition of owl:inverseOf to mean "these two properties are the inverse of one another" would be able to produce the new knowledge:

:bob :childOf :andy .
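
If you want to see this end to end, here is a minimal sketch using the Python libraries rdflib and owlrl (a rule-based OWL 2 RL reasoner; OWL 2 RL covers owl:inverseOf). The http://example.org/ namespace is assumed purely for illustration.

# Materialise the inverseOf inference with rdflib + owlrl
# (pip install rdflib owlrl). The example.org namespace is illustrative.
from rdflib import Graph, Namespace
import owlrl

EX = Namespace("http://example.org/")

turtle = """
@prefix : <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:andy :fatherOf :bob .
:fatherOf owl:inverseOf :childOf .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# Expand the graph with everything derivable under the OWL 2 RL rules
# (including prp-inv1, the rule that handles owl:inverseOf).
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The inferred triple now sits in the graph alongside the asserted ones.
print((EX.bob, EX.childOf, EX.andy) in g)  # True

After expansion, the graph holds the asserted and inferred triples side by side - exactly the asserted/inferred distinction described above.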

By combining many rules together you can achieve some fairly complex inference, the dream scenario being that a user is introduced to new knowledge that they hadn't previously considered, or is prompted about knowledge that is provably inconsistent.

Do all reasoners implement the same set of rules?

Reasoners will often implement only part of the OWL or RDFS specifications. Sometimes elements are left out for performance reasons (a particular inference rule may be extremely expensive to run while bringing little perceived value), but also because parts of the OWL standard are computationally undecidable, meaning it can be proved that no algorithm can be constructed to perform the reasoning in full. An example of a construct that contributes to this undecidability can be seen in the following statement:

:MyClass rdf:type :MyClass .

This is perfectly legal in OWL Full, but it treats :MyClass as both a class and an instance at the same time (so-called metamodelling), and permitting this in general is one of the things that makes OWL Full undecidable. Some of the 'cuts' from the full OWL spec have been recognised and enshrined as "sub-flavours" of OWL (such as OWL DL or OWL Lite) in order to give reasoners something to aim for when deciding what to implement. The OWL DL spec contains a list of all of the things that were cut from OWL Full in order to make it decidable.

Class Relationship Inference

OWL is able to infer relationships between classes based on the way they are modelled. The following ontology demonstrates one of the simplest ways a class relationship can be inferred: an intersection of two or more classes is a subclass of each of the intersected classes.

# Given
:BeatlesAlbum a owl:Class ;
    owl:intersectionOf (
        :Album
        :ThingsWrittenByBeatles
    ) .

# We can infer
:BeatlesAlbum rdfs:subClassOf :Album .
:BeatlesAlbum rdfs:subClassOf :ThingsWrittenByBeatles .
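
The same inference can be reproduced with the rdflib + owlrl sketch from earlier - in OWL 2 RL terms this is the scm-int rule. As before, the example.org namespace is purely illustrative.

# Infer the subclass relationships from the intersection, using the
# same rdflib + owlrl approach as before (scm-int in OWL 2 RL terms).
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS
import owlrl

EX = Namespace("http://example.org/")

turtle = """
@prefix : <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:BeatlesAlbum a owl:Class ;
    owl:intersectionOf ( :Album :ThingsWrittenByBeatles ) .
"""

g = Graph()
g.parse(data=turtle, format="turtle")
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# Both subclass axioms are inferred purely from the shape of the model;
# no instance data is involved.
print((EX.BeatlesAlbum, RDFS.subClassOf, EX.Album) in g)                   # True
print((EX.BeatlesAlbum, RDFS.subClassOf, EX.ThingsWrittenByBeatles) in g)  # True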

This is a major departure from how developers typically think about classes, as we are now making inferences about our model's structure - details that developers normally have complete ownership and authorship of. Classes in OWL are as much a moving part as any other piece of data, and adding new data can add new relationships and new structure. Note also that we have not mentioned any individual piece of data or any instances above - these inferences have been drawn purely from the shape of the model!

Contradictions and Inconsistencies

As well as inferring new triples, OWL is also capable of inferring contradictions within your ontologies. Note, however, that whilst OWL can tell you of the existence of a contradiction, it cannot tell you which of your assertions is wrong (you asserted them, so they are all assumed to be true - OWL is just letting you know that the facts you've asserted cannot possibly all be true at once!).

Subtly different from contradictions are "unsatisfiable" classes - you can infer, for example, that nothing can ever be a member of a particular class. Suppose we make a couple of statements that make no sense:

  • Every person is both alive and dead
  • Nothing can be both alive and dead

Our reasoner will not necessarily register this as an inconsistency - in fact, it still allows us to infer things based on the unsatisfiable definition...

# Given this nonsense...
:Person owl:intersectionOf ( :AlivePeople :DeadPeople ) .
:AlivePeople owl:disjointWith :DeadPeople .

# We can infer...
:Person owl:equivalentClass owl:Nothing .

We have inferred that nothing can ever be a member of this class (owl:Nothing denotes the empty set). Note that the nonsense above is not a contradiction - it is an unsatisfiable class. If we were to add a resource to that class, though...

:Bob a :Person .

Now this is a contradiction. We have already inferred that :Person is an empty set, yet we have gone on to say that :Bob is a member. Both of these facts cannot be true, so our model now contains a contradiction, and our reasoner would complain that our ontology is inconsistent.
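
To see the contradiction surface in code, here is a sketch building on the earlier rdflib + owlrl examples. One caveat worth hedging: owlrl is a rule-based OWL 2 RL reasoner, so it will not derive the class-level :Person owl:equivalentClass owl:Nothing conclusion by itself - satisfiability checks like that are the territory of tableau reasoners such as HermiT or Pellet. Once :Bob is asserted, though, the rules do type him into both disjoint classes, and the clash can be spotted with a hand-rolled version of the cax-dw rule (the example.org namespace is again illustrative).

# Detect the :Bob contradiction with rdflib + owlrl; the clash check
# at the bottom is hand-rolled rather than part of either library's API.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL
import owlrl

EX = Namespace("http://example.org/")

turtle = """
@prefix : <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:Person owl:intersectionOf ( :AlivePeople :DeadPeople ) .
:AlivePeople owl:disjointWith :DeadPeople .
:Bob a :Person .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# The cls-int2 rule types :Bob into both :AlivePeople and :DeadPeople.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# Hand-rolled cax-dw: anything typed into two classes that are declared
# disjoint is a contradiction.
for c1, _, c2 in g.triples((None, OWL.disjointWith, None)):
    for x in g.subjects(RDF.type, c1):
        if (x, RDF.type, c2) in g:
            print(f"Contradiction: {x} is in disjoint classes {c1} and {c2}")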

Examples of Reasoners

Most reasoners are developed and hosted by universities, but there are open source and commercial offerings too. Examples include:

  • HermiT: University of Oxford - one of the few reasoners that attempts to fully support the OWL 2 specification. It is distributed as the default reasoner within the Protégé ontology tool.
  • FaCT++: University of Manchester - an OWL 2 reasoner optimised for speed.
  • Pellet: open source - this is the reasoner embedded into Stardog.

Sometimes reasoners are built directly into a larger tool (as is the case with AllegroGraph and RDFox), whereas other times they are libraries that allow you to embed reasoning capabilities directly into software you are building yourself (e.g. Apache Jena).

There has also been research into a standard XML-based API known as DIG (the Description Logic Implementation Group interface). Any reasoner can implement this API, allowing it to be plugged into any DIG-compatible tool (such as Protégé).