Domain Modeling with OWL, Part 2
why description logic matters?
in this second installment of the owl introductory series, we will be doing a little bit of math. if you ever plan to create a moderately complex owl 2 model, understanding the mathematical foundations will give you the right intuitions and spare you unpleasant surprises. the math is especially relevant to the reasoning portion of the owl toolset and tightly integrated reasoning services are one of the main distinguishing features of owl as opposed to many other knowledge representation languages.
last time, i gave you a bit of historical context on owl and mentioned that it is a fragment of first-order logic. this is strictly speaking correct, but a bit misleading as far as the original motivation of the formalism goes. owl 2.0 is based on description logic (henceforth dl), which was developed in order to put semantic networks (or "conceptual graphs") and frames on a firm foundation. several variants of these conceptual networks were proposed in the 70s, but the ideas at the core of dl first appeared as something called structured inheritance networks. specifically, those ideas are the use of classes, instances and properties combined with means to create complex logical expressions and, most importantly, the ability to make inferences about subclassing and class membership. this last aspect, the fact that nontrivial is-a relationships between two classes or between a class and an instance can be inferred, is the crucial contribution of that work and the reason it eventually got a comprehensive mathematical treatment, gaining the name description logic.
when you create a domain model in owl, you can state various constraints between classes and properties, and those constraints have logical consequences. you are actually able to create a rather rich logical model of your domain and then ask interesting questions about it. part of the knowledge that you would have represented is explicit. that's simply the collection of statements that you make to describe the domain. you can retrieve that knowledge by using the standard semantic web query language sparql or by direct api calls. no reasoning is involved here, just plain data querying. but another portion of your knowledge base is the implicit knowledge that can be derived as a consequence of your statements. accessing that implicit knowledge requires a precise logical interpretation, reasoning services and an appropriate expression language to formulate queries. that is what dl is about.
in what follows, i will introduce the actual mathematical formalism and show you sidebyside how it maps to owl 2.0. then we will discuss a few uncommon assumptions that owl reasoners make and what their consequences are. one of the goals of this introduction is to give you a head start should you want to read more about the subject of description logic.
for more information and an extensive literature review, please consult the description logic handbook.
let's get formal
formal languages, mathematical logic languages just like programming languages, are specified in two parts: syntax and semantics. logic languages in particular are traditionally specified through what is known as tarski-style semantics, where the meaning of an expression in the language is ascribed via a correspondence with set-theoretical constructs. this allows the whole apparatus of model theory to be applied, and classical proof techniques can be used to show whether a language is decidable and, if so, to what complexity class it belongs.
here is a summary of the dl language elements:

- atomic concepts, usually written in capital letters from the beginning of the alphabet: a, b, c, d etc. those correspond to owl classes.
- the special concepts top and bottom, denoted by ⊤ and ⊥ respectively. in owl, those are referred to as owl:thing and owl:nothing.
- roles, also in capitals but from another part of the alphabet: r, s etc. dl roles are equivalent to owl properties.
- individuals, usually written in lower case: a, b, c etc. in owl we call them individuals as well.
- logical operators like ⊓ (intersection), ⊔ (union), ∀ (for all), ∃ (there exists), ¬ (complement), as well as number comparisons ≤, ≥ etc. in owl, all those operators are represented differently depending on the owl syntax used: xml, functional, or the very concise manchester syntax.
the difference in terminology between owl and dl shouldn't lead to confusion. knowing the original dl terms and their history should help you understand the meaning owl gives to what are otherwise standard oo notions. for example, knowing that owl properties come from dl roles hints at the idea of a connection between two entities rather than one "belonging" to the other. i will be using both dl and owl terms freely, and they should be considered synonymous. the single-letter naming conventions are used only when doing the math. in modeling, both dl and owl use more descriptive names, usually camel case, starting with a capital letter for concepts and individuals, and with lower case for roles.
so the core elements in the language are concepts, roles and individuals. the interpretation of those elements is based on a universal set of things that are being talked about, a domain, and a mapping that assigns individual names to elements of that domain, concepts to subsets of the domain and roles to relations over the domain. thinking about concepts as sets is not far from seeing them as classes, both in the mathematical and in the programming sense.
more formally, the meaning of the language elements is defined by an interpretation function ℑ that assigns a set to each named dl concept. we write x^ℑ instead of ℑ(x) for the element that the interpretation maps x to. the domain of interpretation, or application domain in software engineering terms, is denoted by Δ^ℑ. one may use just Δ for the namespace of individuals, the domain of discourse. this is an important distinction in mathematical logic, as one is moving from formulas to what they are denoting and back. as an example, bottom and top are formally interpreted as the empty set and the whole domain respectively:

⊥^ℑ = ∅
⊤^ℑ = Δ^ℑ
when proving theorems about a logic language, one frequently reasons about different possible interpretations, and each interpretation is called a model, not to be confused with our domain models in software. a model may offer a different mapping of individuals, concepts and roles, but it may not change the meaning of the logical operators. for example, the intersection operation ⊓ is analogous to logical conjunction ∧ and is always interpreted as set intersection:
(c ⊓ d)^ℑ = c^ℑ ∩ d^ℑ
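since an interpretation is just an assignment of sets, the semantics of the operators can be played with directly in code. here is a minimal python sketch, with an invented domain and made-up concept names, where plain set operations give the meaning of ⊓, ⊔ and ¬:

```python
# a tiny sketch of tarski-style semantics over a hand-built finite
# interpretation: concept names map to python sets, and plain set
# operations give the meaning of the dl operators. the domain and the
# concept names below are invented for illustration.

delta = {"car1", "car2", "bike1"}        # the domain of interpretation
interp = {                               # the interpretation function
    "car": {"car1", "car2"},
    "red": {"car1", "bike1"},
}

def intersection(c, d):
    """(c ⊓ d)^I = c^I ∩ d^I"""
    return interp[c] & interp[d]

def union(c, d):
    """(c ⊔ d)^I = c^I ∪ d^I"""
    return interp[c] | interp[d]

def complement(c):
    """(¬c)^I = Δ \\ c^I"""
    return delta - interp[c]
```

for instance, `intersection("car", "red")` yields the set of red cars in this particular interpretation.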
intuitively, half of what dl allows you to do is express concept descriptions, i.e. descriptions of sets of individuals. the atomic concepts and the roles are elementary descriptions, and then you have operators in the syntax to make up more complex ones. the other half is descriptions of individuals in terms of how they are classified and how they are related to other individuals.
in logical terms, concepts can be seen as one-place predicates and roles as two-place predicates. in fact, the claim that dl is a fragment of first-order logic (fol) starts with that correspondence. then it is easy to see how the formulas and statements below can be translated into fol. so one can state in dl that a given individual belongs to a concept:

c(a)

meaning a belongs to c, akin to owl's classassertion axiom. or that two individuals are related by a role:

r(a,b)

meaning b is a filler of the role r (or an r-filler) for a. this is akin to owl's objectpropertyassertion that we saw last time.
note the phrasing here: b is "a" filler, not "the" filler, as there may be many. but the more interesting part is the complex concept descriptions that one is allowed to form.
describing concepts
the power of dl as a language lies in its ability to describe classes of entities via complex logical formulas. that is what makes it into a useful logic language. in the table below you can see the list of available constructors for building those complex descriptions. the third column shows the equivalent owl syntax in both the standard owl functional syntax and the manchester syntax. i won't be covering the bloated and ugly xml syntax. imho, pushing xml as the default serialization mechanism for rdf/owl is probably an important reason for the slowish adoption of the technology. the functional syntax is both complete and user-friendly. the manchester syntax is incomplete but even better looking than dl's own, and it is used in protege whenever class expressions are needed. so i'm showing both of those.
dl syntax | name | owl syntax | meaning
⊥ | bottom | owl:nothing | the empty set.
⊤ | top | owl:thing | the entire domain of interest.
c ⊓ d | intersection | objectintersectionof(c d) | the set of individuals that belong to both c and d.
c ⊔ d | union | objectunionof(c d) | the individuals that belong either to c or to d (or to both!).
¬c | complement | objectcomplementof(c) | the set of all things that do not belong to the concept c.
∀r.c | universal value restriction | objectallvaluesfrom(r c) | the individuals all of whose r-fillers belong to c. in owl terms, the class of objects where all the values of the property r are of type c.
∃r.c | existential quantification | objectsomevaluesfrom(r c) | the individuals that have at least one r-filler that belongs to c. in owl terms, all objects that have at least one property r whose value is of type c.
{a, b, c, ...} | enumeration | objectoneof(a b c ...) | the concept consisting of exactly the individuals a, b, c etc.
r:a | individual value restriction | objecthasvalue(r a) | the individuals having a as an r-filler. in owl terms, the objects that have property r with value a.
≥ n r.c | minimum cardinality | objectmincardinality(n r c) | the individuals that have at least n fillers of the role r belonging to the concept c.
≤ n r.c | maximum cardinality | objectmaxcardinality(n r c) | the individuals that have at most n fillers of the role r belonging to the concept c.
= n r.c | exact cardinality | objectexactcardinality(n r c) | shorthand for ≤ n r.c and ≥ n r.c combined.
each and every construct listed above has a precise formal, settheoretic interpretation. for example, universal value restriction is interpreted thus:
(∀r.c)^ℑ = { a ∈ Δ^ℑ | ∀b: (a,b) ∈ r^ℑ → b ∈ c^ℑ }
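that definition translates almost verbatim into code. here is a sketch over a finite, hand-built interpretation (all names are invented); note the vacuous-truth effect: an individual with no r-fillers at all satisfies ∀r.c:

```python
# evaluating ∀r.c and ∃r.c over a finite interpretation, as a literal
# reading of the set-theoretic semantics above. the data is made up.

delta = {"f100", "engine1", "wheel1"}
concepts = {"american": {"engine1", "wheel1"}}
roles = {"haspart": {("f100", "engine1"), ("f100", "wheel1")}}

def all_values_from(r, c):
    """(∀r.c)^I: every r-filler of a belongs to c^I (vacuously true
    for individuals with no r-fillers at all)"""
    return {a for a in delta
            if all(b in concepts[c] for (x, b) in roles[r] if x == a)}

def some_values_from(r, c):
    """(∃r.c)^I: at least one r-filler of a belongs to c^I"""
    return {a for a in delta
            if any(b in concepts[c] for (x, b) in roles[r] if x == a)}
```

with this data, `some_values_from("haspart", "american")` contains only f100, while `all_values_from("haspart", "american")` also contains the two parts themselves, since they have no haspart fillers.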
as a little exercise, you could spell out the formal semantics of some of the other forms. this list of constructors constitutes a powerful means of expressing all sorts of concepts. description logic as a formalism has several variants with different computational characteristics. a particular variant is defined by the set of constructors that are allowed in it. suffice it to say that all of them are available in owl. so let's see a few concrete examples of what can be expressed in this language so far, staying on our automobile theme from last time:
dl expression | denoting concept
car ⊓ ¬red | all cars that are not red.
∃haspart.american | all objects that are at least in part made in america.
person ⊓ ∀owns.(hybrid ⊔ biodiesel) | climate change conscious people who don't own cars based exclusively on fossil fuels.
in the constructor table, i showed the owl manchester syntax right below the owl functional syntax. to get a feel, here's what the last expression above looks like in the manchester syntax:
person and owns only (hybrid or biodiesel)
if you are familiar with mathematical logic, you probably noticed the absence of variables. if this makes you uncomfortable, just think of concept expressions in dl as implicitly containing one free variable ranging over the domain of discourse. in other words, concept expressions are what you get as logical formulas in dl.
making statements: the tbox and the abox
so far so good. we have seen how to make complex class descriptions in terms of simpler ones. let's see how we can state facts (a.k.a. axioms). there are two fundamental kinds of axioms in dl: axioms expressing constraints purely within the conceptual model and axioms talking about individuals in the world being described. the former comprise the so-called tbox (terminological box) while the latter comprise the abox (assertion box). owl itself doesn't make that distinction, but reasoning algorithms use it and you will come across those terms in the literature and discussion groups. i already showed you the two main types of abox axioms, concept and role assertions, with the following semantics:
c(a) is true in ℑ if a^ℑ ∈ c^ℑ
r(a,b) is true in ℑ if (a^ℑ, b^ℑ) ∈ r^ℑ
another way to say the above is that the interpretation ℑ satisfies c(a) and that ℑ satisfies r(a,b). the concept and individual assignments that ℑ makes are consistent with those assertions. if an interpretation satisfies all axioms in an abox, it is a model for that abox. concept and role assertions are not the only possible kinds of statements in an abox, but they are the most important ones. other assertions allow you to say when different names should be interpreted as the same individual and when not. more on them below.
in the tbox, the axioms establish a priori facts about concepts and roles. two main types of axioms are used, inclusions and equalities:
c ⊑ d (inclusion or subsumption) is true in ℑ if c^ℑ ⊆ d^ℑ
c ≡ d (equality or definition) is true in ℑ if c^ℑ = d^ℑ
r ⊑ s (role subsumption) is true in ℑ if r^ℑ ⊆ s^ℑ
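under a fixed finite interpretation, checking these axioms amounts to a subset or equality test between the interpreted sets. a small sketch with invented concept names:

```python
# tbox axiom satisfaction in a fixed finite interpretation: inclusion
# is a subset test, equality an equality test. all data is invented.

concepts = {
    "sportscar": {"h1"},
    "car":       {"h1", "h2"},
    "vehicle":   {"h1", "h2", "bike1"},
}

def satisfies_inclusion(c, d):
    """c ⊑ d holds in I iff c^I ⊆ d^I"""
    return concepts[c] <= concepts[d]

def satisfies_equality(c, d):
    """c ≡ d holds in I iff c^I = d^I"""
    return concepts[c] == concepts[d]
```

keep in mind this checks one given interpretation; a reasoner must establish that an axiom holds in every model of the ontology.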
and similarly, an interpretation ℑ satisfies a tbox whenever it satisfies all axioms in it. note that we can also define property inheritance in description logic, not only class inheritance. an example would be hasson as a subrole of haschild: if somebody has a son, one can infer that they definitely have a child. even though we've just used atomic names here, a full concept expression can appear on either side of an inclusion or an equality axiom. for example, we can define a pedestrian as somebody who doesn't own a car:
pedestrian ≡ person ⊓ ∀owns.(¬car)
from this, a reasoner can already trivially infer that a pedestrian is a person. as another example, we can say that true sports cars must have no more than two doors:
sportscar ⊑ car ⊓ ≤ 2 haspart.door
the above axiom states that whenever something is known to be a sports car, it is definitely a car (so if somebody owns it, they can't be a pedestrian) and it can't have more than 2 doors. if you declare an individual as a sportscar and then proceed to assign 4 different doors to it:

haspart(mycar, frontleftdoor)
haspart(mycar, frontrightdoor)
haspart(mycar, backleftdoor)
haspart(mycar, backrightdoor)

a reasoner would complain about an inconsistency in your knowledge base; it will enforce the constraint (strictly speaking, only once the four doors are also declared mutually distinct; more on that under the unique name assumption below).
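the distinctness caveat can be made concrete. in this sketch (all names invented, and simplified to checking all fillers pairwise), the ≤ 2 bound is only reported as violated once the fillers are known to be pairwise different, mirroring how a reasoner behaves without the unique name assumption discussed later:

```python
# a sketch of why the four-door example needs distinctness: without the
# unique name assumption, a ≤ n cardinality bound is only violated once
# more than n fillers are known to be pairwise different.

doors = ["frontleftdoor", "frontrightdoor", "backleftdoor", "backrightdoor"]
different_from = set()   # pairs explicitly asserted to be distinct

def declare_all_different(names):
    """record that all the given individuals are pairwise distinct"""
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            different_from.add(frozenset((a, b)))

def max_cardinality_violated(fillers, n):
    """simplified: a ≤ n bound is violated only when the fillers are
    all known pairwise distinct and there are more than n of them"""
    pairwise_distinct = all(frozenset((a, b)) in different_from
                            for i, a in enumerate(fillers)
                            for b in fillers[i + 1:])
    return pairwise_distinct and len(fillers) > n
```

before `declare_all_different(doors)` is called, the four door names might co-refer, so no violation is reported; after it, the bound is clearly broken.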
even though operators like intersection and union could be defined for roles, this is not done in owl. there are other ways to specify role constraints at the conceptual level though. besides role subsumption, one can constrain the source and target of roles, or the domain and range of a property in owl terms. talking about "domains" and "ranges" is more familiar than "roles" and "fillers", and consistent with the view of owl properties as binary relations. owl provides the special axioms objectpropertydomain and objectpropertyrange.
however, one should keep in mind that such constraints can be expressed using the existing dl tools and are in fact interpreted in exactly that way by owl reasoners:

≥ 1 r.⊤ ⊑ c (domain of r is c)
⊤ ⊑ ∀r.c (range of r is c)
in english, the first axiom above says that anything with at least one role r filled by anything at all also belongs to the concept c. in other words, whenever you have r(x, ?), you can infer that x ∈ c. similarly, the second axiom says that any individual can only be a filler of role r if it belongs to the concept c. therefore, a statement of the form r(?, x) would allow a dl inference engine to conclude that x belongs to c. notice the pattern here that allows you to introduce a constraint that applies to everything: saying that the universe (⊤) is subsumed by a concept c is the same as saying that all individuals belong to c.
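the two inference patterns can be sketched as simple forward rules. all the role and concept names below are invented for illustration:

```python
# the domain/range pattern as forward inference: from owns(x, y) plus
# "domain of owns is person" and "range of owns is car", a reasoner
# concludes person(x) and car(y). all names are invented.

from collections import defaultdict

role_assertions = {"owns": {("tom", "h1"), ("betty", "h2")}}
domain_of = {"owns": "person"}   # ≥ 1 owns.⊤ ⊑ person
range_of = {"owns": "car"}       # ⊤ ⊑ ∀owns.car

def infer_types():
    inferred = defaultdict(set)
    for r, pairs in role_assertions.items():
        for x, y in pairs:
            inferred[x].add(domain_of[r])   # source gets the domain type
            inferred[y].add(range_of[r])    # target gets the range type
    return dict(inferred)
```

note that, unlike a database schema check, this is an inference: the domain axiom does not reject owns(tom, h1) when tom's type is unknown; it concludes that tom is a person.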
moreover, just like binary relations in classical set theory, roles in description logic can be classified semantically as symmetric, asymmetric etc. one can directly make such declarations about owl properties as tbox axioms and enrich the conceptual model this way. and this is again where the beauty of dl and owl shines: the logical apparatus that you learn in basic discrete math, something that you might otherwise use only for documentation purposes, is available in a simple declarative software modeling language. here are the options and a refresher on what they mean:
characteristic | syntax | meaning
functional | functionalobjectproperty(ismadeby) | only one value is permitted as a role filler for r; an object can have only one such property.
inverse | inverseobjectproperties(hasmade ismadeby) | this says that hasmade ≡ ismadeby⁻. the domain and range are reversed: ismadeby⁻ = { (a,b) | (b,a) ∈ hasmade }.
symmetric | symmetricobjectproperty(ismarriedto) | the relation goes both ways: (a,b) ∈ r ⇔ (b,a) ∈ r.
asymmetric | asymmetricobjectproperty(isparentof) | the relation cannot go both ways: (a,b) ∈ r ⇒ (b,a) ∉ r.
reflexive | reflexiveobjectproperty(feeds) | in a reflexive role, every individual is its own filler. for example, everybody feeds themselves.
irreflexive | irreflexiveobjectproperty(ismarriedto) | in an irreflexive role, no individual can be its own filler.
transitive | transitiveobjectproperty(ispartof) | in a transitive role, whenever (a,b) ∈ r and (b,c) ∈ r, we also have (a,c) ∈ r.

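the characteristics in the table are properties of the role's extension, and for a finite extension they can be checked directly from the definitions. a small sketch with a made-up ispartof relation:

```python
# checking role characteristics against a finite role extension,
# straight from the definitions in the table (toy, invented data).

ispartof = {("wheel", "car"), ("car", "fleet"), ("wheel", "fleet")}

def is_symmetric(r):
    """(a,b) ∈ r ⇔ (b,a) ∈ r"""
    return all((b, a) in r for (a, b) in r)

def is_transitive(r):
    """(a,b) ∈ r and (b,c) ∈ r imply (a,c) ∈ r"""
    return all((a, d) in r for (a, b) in r for (c, d) in r if b == c)

def is_irreflexive(r):
    """no individual is its own filler"""
    return all(a != b for (a, b) in r)
```

of course, a reasoner uses such declarations the other way around: a transitiveobjectproperty axiom licenses inferring the missing (a,c) pairs rather than checking for them.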
some of those role semantics can be expressed using the available machinery. for example, the fact that a role r is functional can be expressed as ⊤ ⊑ ≤ 1 r.⊤, but transitivity can't. another such "irreducible" construction available in owl is objectpropertychain, which allows you to express role composition. you can say that a chain of roles that indirectly connects two individuals establishes a relationship between them. a common example of this is the uncle relationship, which would be defined in owl like this:

subobjectpropertyof(objectpropertychain(hasfather hasbrother) hasuncle)
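the chain axiom above is relational composition: every pair reachable by hasfather followed by hasbrother must also be in hasuncle. a sketch with invented individuals:

```python
# role composition as in objectpropertychain: hasfather ∘ hasbrother
# is contained in hasuncle. the individuals below are made up.

hasfather = {("ann", "bob")}
hasbrother = {("bob", "carl")}

def compose(r, s):
    """r ∘ s = { (a, c) | (a, b) ∈ r and (b, c) ∈ s for some b }"""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

hasuncle = compose(hasfather, hasbrother)
```

with the data above, the single inferred pair is ann's uncle carl: her father bob has carl as a brother.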
to sum up, it is common for dl systems to separate the knowledge into a purely conceptual model, sort of like a schema definition, the tbox, and actual data which associates individuals with concepts and assigns them roles, the abox. reasoning tools tend to use different algorithms and optimization techniques depending on whether they are dealing exclusively with a tbox, an abox, or a mix of both. the gist of the formalism is the ability to describe complex concepts in terms of simpler ones and to describe individuals in terms of how they are classified and how they relate to other individuals. it is a logic language with no variables all right, but not a propositional one. i've advertised the ability to make nontrivial inferences about the accumulated knowledge, so let's take a look at those now.
reasoning with concepts and roles
there are a few core reasoning problems about the conceptual portion (tbox) of a dl knowledge base stemming from natural questions that one might ask. for example, is a given concept satisfiable (remember, when we say concept here, we may mean a possibly complex logical formula, a full description) in the sense that it is possible to find a model where that concept describes a nonempty set. another question is whether one concept subsumes another (in owl terms if a class is a subclass of another), which can be reduced to satisfiability because:
c ⊑ d if and only if c ⊓ ¬d is not satisfiable.
another question is whether two concepts (or concept formulas) describe the same set of individuals. again, this can be reduced to satisfiability by first observing that two concepts are equivalent if and only if both c ⊑ d and d ⊑ c are true. finally, note that two concepts c and d are disjoint if c ⊓ d is unsatisfiable. if you think about it a bit, you'll find that all of those reasoning tasks are reducible to one another. for example, suppose you have an algorithm for subsumption. you can then determine if c is unsatisfiable by checking if it is subsumed by ⊥.
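the subsumption-to-satisfiability reduction can be demonstrated by brute force for a small fragment. the sketch below is limited to role-free concept expressions over atomic concepts with ⊓, ⊔ and ¬; for those, a one-element domain suffices, so satisfiability reduces to trying every truth assignment to the atoms (all concept names are illustrative):

```python
# brute-force illustration of "c ⊑ d iff c ⊓ ¬d is unsatisfiable",
# restricted to role-free concept expressions. expressions are nested
# tuples such as ("and", "car", ("not", "red")); bare strings are atoms.

from itertools import product

def atoms(e):
    return {e} if isinstance(e, str) else set().union(*map(atoms, e[1:]))

def holds(e, v):
    """does the single individual satisfy e under assignment v?"""
    if isinstance(e, str):
        return v[e]
    op = e[0]
    if op == "not":
        return not holds(e[1], v)
    if op == "and":
        return holds(e[1], v) and holds(e[2], v)
    return holds(e[1], v) or holds(e[2], v)   # "or"

def satisfiable(e):
    names = sorted(atoms(e))
    return any(holds(e, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

def subsumed(c, d):
    """c ⊑ d iff c ⊓ ¬d has no model"""
    return not satisfiable(("and", c, ("not", d)))
```

once roles enter the picture this trick no longer works, which is exactly why real reasoners need tableau algorithms.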
the satisfiability question is more interesting to tool builders because that's how tableau algorithms tend to operate: they try to obtain a contradiction while building a model for a formula. and it is also sometimes a necessary condition for a whole ontology to be consistent. but to people, the subsumption question is often the more interesting one because it can be viewed as implication. if you think of concept expressions as logical formulas with one free variable, then concept subsumption is logical implication in the sense that whenever the subconcept formula is true of an individual, the superconcept formula is true as well.
another reason the subsumption question is interesting in practice is the ability of inference engines to list all named concepts subsumed by a given concept description, thus automatically constructing a conceptual hierarchy out of tbox constraints.
reasoning with individuals
unlike tbox reasoning, where we are dealing purely with conceptual constraints, in the abox we are stating facts about objects. something may be wrong with a tbox if a concept is unsatisfiable, which simply means that no individual can belong to it, i.e. that it's equivalent to the bottom concept. and there's nothing special about that; there aren't any other consequences. usually, when a contradiction is found in a logic language, it is devastating because it allows one to prove anything as a consequence. an unsatisfiable concept is a sort of contradiction, but it doesn't make an ontology useless because it doesn't prevent one from creating a model of the set of axioms. that is, it doesn't prevent one from coming up with a sensible interpretation. it's just that the sets corresponding to the unsatisfiable concepts will be empty.
when a tbox has an unsatisfiable concept, it is simply called incoherent. that's an undesirable property, and a knowledge engineer should strive to maintain coherent ontologies. in fact, studying the consequences of incoherence is a topic of its own.
a contradiction involving the abox however is a different story. it means that it's impossible to find a model for the axioms, i.e. there is no way to interpret them! and this is what defines an inconsistency in dl: an ontology (tbox+abox) is called inconsistent if there's no model for it. interestingly, concept satisfiability and consistency have been proven to be equivalent problems! an example of an inconsistency would be asserting that an individual belongs to two disjoint concepts:
pedestrian(tom)
car(h1)
owns(tom, h1)
here we've stated that tom is a pedestrian and that he owns the individual h1, which we've asserted to be a car. according to our definition of a pedestrian above, tom can only own things that are not cars. therefore, a reasoner would infer that h1 ∈ ¬car, and that's an inconsistency. note that the inference engine can't tell you exactly what you did wrong. if this were a real-world example, perhaps the statement pedestrian(tom) is at fault. but a reasoner won't complain about it because there's no problem with declaring tom a pedestrian per se.
while consistency is a natural question to ask, a more practical question is instance checking, which asks if a given individual belongs to a given concept. since a concept can be a complex logical formula, this essentially allows you to ask a logical question about an object. to check whether c(a) is entailed, one proves that adding its negation, ¬c(a), to the ontology as an axiom leads to an inconsistency. for example, to find if a given car model is an all-american green energy car, we could ask if it is an instance of the concept:

∀haspart.american ⊓ (hybrid ⊔ biodiesel)
even more fun is the ability to ask for all individuals that belong to a concept. this is known as the retrieval problem . it is akin to querying a database by specifying the desired data's characteristics via a logical expression. conversely, given an individual one may query for all the named concepts the individual belongs to. that is, we can ask for all the types that individual has been classified under.
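retrieval over a finite abox can be sketched as evaluating the concept expression against the asserted facts. be warned that this naive evaluator closes the world over the listed assertions, which a real owl reasoner would not do (see the next section); all the data below is invented:

```python
# instance retrieval as querying: evaluate a concept expression
# against abox data and return the matching individuals. expressions
# are nested tuples; this evaluator reads ∀ closed-world.

concepts = {"american": {"engine1"}, "hybrid": {"f100"}, "biodiesel": set()}
roles = {"haspart": {("f100", "engine1")}}
individuals = {"f100", "engine1"}

def retrieve(expr):
    kind = expr[0]
    if kind == "atom":
        return concepts[expr[1]]
    if kind == "and":
        return retrieve(expr[1]) & retrieve(expr[2])
    if kind == "or":
        return retrieve(expr[1]) | retrieve(expr[2])
    # "all": ∀r.c, read closed-world over the listed role assertions
    r, c = roles[expr[1]], retrieve(expr[2])
    return {a for a in individuals
            if all(b in c for (x, b) in r if x == a)}

# ∀haspart.american ⊓ (hybrid ⊔ biodiesel)
green_american = ("and", ("all", "haspart", ("atom", "american")),
                         ("or", ("atom", "hybrid"), ("atom", "biodiesel")))
```

with the data above, only f100 comes back: its sole listed part is american and it is a hybrid.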
what about data values
last time you learned that there are two kinds of owl properties that an individual can have: object properties and data properties. in description logic, data properties are introduced by extending the formalism with concrete domains and further allowing standard logical predicates (i.e. n-ary boolean functions) over those domains. an example of a concrete domain is the set of natural numbers with binary predicates for the comparison operators <, ≥ etc. to make the formalism work, certain restrictions are imposed on the available set of predicates for a given domain. however, we won't go into details here because owl doesn't allow the use of concrete datatype predicates in class definitions. thus, it is not possible to say that somebody is allowed to drink if their age > 21. so data values in owl are used more or less like individuals, except they can only appear as role fillers. when we cover rules later, we will see how to get around this owl limitation by using swrl (the semantic web rule language).
reasoning in an openended world
the economist john maynard keynes is famously quoted as saying "when the facts change, i change my mind. what do you do, sir?" so it goes for much of (sound) human reasoning. we are quick to draw conclusions, taking shortcuts and making assumptions, and when faced with new information we rapidly retract our deductions and change our reasoning. the alternative would be to very rarely commit ourselves to a conclusion, to say "i'm not sure" most of the time, to only infer things that are certain so we don't have the embarrassment of being wrong. what's the right attitude? that's the debate between monotonic and nonmonotonic reasoning.
in monotonic reasoning, when new axioms are added to the knowledge base, all existing inferences remain unchanged. in other words, knowledge can only grow, deductions are never retracted. if one never makes assumptions that are not explicitly stated, the reasoning will be monotonic. in nonmonotonic reasoning, it is possible for new information to cause retraction of previously drawn conclusions. this happens if extra assumptions are made during inference.
now, one can argue that nonmonotonic reasoning is more practical; that's how humans do it, after all. or one can argue that you don't want software to deliberately make mistakes by making the wrong assumptions. in software one cares about things like reusability, long-term maintenance, safety and context-independence. when you draw conclusions from a set of facts, you don't want to have to do bookkeeping about the context in which they were arrived at (did we know a at the time or not?). so the pioneers of the semantic web had the debate and went for monotonicity. since the global semantic web is an open-ended knowledge source, constantly growing and being refined, monotonicity is the way to go. good. however, that leads to what's arguably the most counterintuitive aspect of working with owl dl, especially if you're coming from a software background.
we've seen above the various logical statements that can be made in owl 2.0. now, it seems natural to assume that if you don't know whether a statement s is true, then you simply don't know; you can't just decide that it's false, right? well, that's what open-world semantics says as well, and that's what dl systems generally do: they make the open-world assumption (owa). lack of knowledge that s holds does not automatically mean ¬s. assuming otherwise is known as the closed-world assumption (cwa), which enables a reasoning style called negation-as-failure, where failing to deduce something entails its negation. using negation-as-failure obviously leads to nonmonotonic reasoning, because new facts will invalidate the "failure" part.
explained like this, the owa doesn't seem like such a large pill to swallow; it feels fairly natural. but there are surprises and sometimes frustrations when you have spent all your life in a technical environment that operates under the cwa, namely conventional database systems, in particular sql, as well as more traditional logical languages like prolog. adopting the owa is nothing short of a paradigm shift for the practicing programmer. let me give you at least one example why, and i promise we'll see more later on.
say you have in the knowledge base
allamerican ≡ ∀haspart.american
american(engine123)
haspart(f100, engine123)
is allamerican(f100) true? since the only part that f100 has is american, and since something is allamerican whenever all its parts are made in america, we'd expect allamerican(f100) to be true. but this inference can't be made, because under the open-world assumption nothing prevents a new piece of information from asserting haspart(f100, michelintires) some time in the future, i.e. that french-made tires are used on the car. in other words, information about the entity in question is incomplete. this is unlike the classic database world, where you'd do a query to list all parts of the entity, join that with the "madein" table listing where parts were made, and get the answer. the constraint that we've stated in the first axiom above will help detect an inconsistency if you assert both:

allamerican(f100)
haspart(f100, michelintires)
but the concept definition itself doesn't provide a sure way to retrieve all its instances. on the other hand, if you define:

american ≡ ∃haspart.american ⊔ madein:america
madein(engine123, america)
haspart(f100, engine123)

this is a much more constructive definition. it only requires something to be at least partly american. note how dl is capable of dealing with cyclic definitions. now, if you ask for everything american, you will get both engine123 and f100 in the result set.
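the nonmonotonicity lurking in the ∀-based definition can be demonstrated directly. in this sketch (all facts invented), a closed-world reading via negation-as-failure first concludes allamerican(f100) and then retracts it when a new fact arrives; a dl reasoner refuses the first conclusion precisely to stay monotonic:

```python
# why ∀-based retrieval is not open-world safe: a closed-world
# evaluation over the listed facts is retracted when new facts arrive.

american = {"engine123"}
haspart = {("f100", "engine123")}

def cwa_allamerican(x):
    """closed-world reading of (∀haspart.american)(x): every listed
    haspart filler of x is american"""
    return all(b in american for (a, b) in haspart if a == x)

before = cwa_allamerican("f100")        # holds under the cwa
haspart.add(("f100", "michelintires"))  # new information arrives
after = cwa_allamerican("f100")         # the conclusion is retracted
```

under the owa, `before` would never have been concluded in the first place, so nothing needs to be retracted.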
in general, problems with the owa arise when a query or an expected inference relies on the knowledge base somehow having exhaustive information about how entities relate to all other possible entities. there are a few tricks that one can use to "close" the world by explicitly adding information or additional constraints to an ontology with the purpose of forcing certain inferences:

- listing all individuals explicitly with the enumeration constructor {...}. that's a way to tell the reasoner that it has complete information about the members of a class.
- imposing precise cardinality constraints. for example, one could state that a product has no more than, say, 3 parts. then if all those 3 parts are explicitly listed, a reasoner knows no extra parts are possible and can decide if the product is allamerican or not.
- stating explicitly that something doesn't have a certain kind of filler. for example, one could state that (¬∃haspart.¬american)(f100).
- stating explicitly that something doesn't have a certain property value. owl allows negative property assertions with negativeobjectpropertyassertion or negativedatapropertyassertion. note that those are syntactic sugar: you can say the same thing using concept complement, {a} ⊑ ¬(∃p.{b}).
all those are valid means to "close the world" and they are used in practice. but one must also keep in mind that the reasoning algorithm is separate from the modeling language. nothing prevents you from applying nonmonotonic reasoning with negation-as-failure to dl models in limited and controlled contexts.
una: the unique name assumption
to complete our short account of the mathematical foundations of owl, we have to take a look at another consequential open-world aspect of dl reasoners: the unique name assumption (una), which owl does not make. the una states that distinct names necessarily refer to distinct entities. recall that names in owl are uris, that is, identifiers unique within the global namespace of all names in the semantic web. so we are saying here that several different identifiers, unique as they are, may actually identify the same thing. this is again something that we're not used to in classic logic systems or databases, where distinct identifiers refer to distinct entities.
unlike the owa, the una actually makes good sense in the context of description logic and it is a natural expectation that a knowledge engineer may rely on. however, owl does not make that assumption and this is in part due to the global nature of naming in owl. everybody can come up with a vocabulary and it would be nice to be able to state post factum when we are talking about the same thing even when we were using a different name for it. in fact there's an axiom for that, called an agreement axiom:
x ≈ y
the agreement says that x and y refer to the same entity, so that all facts about x are also true of y and vice versa. before you conclude that this is akin to variable assignment and that the uris of owl individuals are like variables in a programming language, note that this statement is symmetric: it goes in both directions! agreement is also something that can be automatically inferred by a reasoner, as well as a question that a user may ask of a knowledge base. to repeat: a reasoner is free to conclude that two different names, two different uris, are actually referring to the same real-world entity. this sort of inference, dabbling as it does with the sacred notion of identity, can lead to some rather unexpected results. consider the sensible constraint that every car has exactly one owner:
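the equivalence-like behavior of agreement (symmetry and transitivity falling out for free) is exactly what a union-find structure gives you, which is one common way such equalities are tracked internally. the sketch below is purely illustrative python, not owl api code, and the individual names are made up:

```python
# toy union-find tracking which names have been agreed equal.
parent = {}

def find(x):
    """follow parents to the canonical representative of x's group."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def agree(x, y):
    """assert x ≈ y; symmetry and transitivity come for free."""
    parent[find(x)] = find(y)

def same(x, y):
    return find(x) == find(y)

agree("morning_star", "venus")
agree("evening_star", "venus")
print(same("morning_star", "evening_star"))  # True, by transitivity
print(same("evening_star", "morning_star"))  # True, symmetric
```

note that once two names are agreed, there is no privileged "assignee": asking the question in either direction yields the same answer, which is the point made above about agreement not being variable assignment.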
car ⊑ (= 1 isownedby.person)
now, suppose also that at a certain point in time it is known, asserted in the knowledge base, that isownedby(h1, tom). later, the car identified with h1 is acquired by betty, so you assert isownedby(h1, betty) as well, yet you forget to remove the assertion about tom's ownership. or maybe tom and betty got married and she became a co-owner of the car. that should violate our sole-ownership constraint, right? wrong! for the reasoner, tom and betty are just names referring to an entity, and because names are not assumed to be unique, it happily concludes that tom ≈ betty. married or not, they may very well object.
fortunately, getting around that behavior is much easier than with the owa. there are a few direct ways to dissociate individuals, and here are the owl axioms available (standard accounts of dl don't have shorthand notations for these, but there are ways to encode the same knowledge):

owl:differentindividuals(i1, i2, ..., in) states that the listed individuals are all distinct.
owl:disjointclasses(c1, c2, ..., cn) states that each pair of classes ci, cj, i ≠ j, is disjoint, i.e. ci ⊓ cj = ⊥.
owl:disjointunionof(c, c1, c2, ..., cn) states that the class c is the union of c1, c2, ..., cn and, furthermore, that each pair ci, cj is disjoint, i.e. ci ⊓ cj = ⊥. in other words, c being the disjoint union of c1, c2, ..., cn is the same thing as saying that c1, c2, ..., cn forms a partition of c.
being an exhaustive list of the subclasses, the disjointunionof axiom is a bit like an enumeration, but for classes rather than for the individuals in a class. so in case you were wondering: when a concept is defined through an enumeration of its individuals, say c = {a, b, c}, that doesn't imply that the individuals in question, a, b and c, are different. if they are, you'd have to state it separately with owl:differentindividuals(a, b, c).
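continuing the toy merge idea from above: explicit distinctness is what turns a would-be merge into a detected inconsistency. again an illustrative sketch, not a reasoner api:

```python
# declared distinctness facts, as owl:differentindividuals would assert.
different = {frozenset({"tom", "betty"})}

def merge(x, y):
    """a reasoner about to identify two names must first check the
    declared distinctness facts; a clash makes the ontology
    inconsistent instead of producing x ≈ y."""
    if frozenset({x, y}) in different:
        raise ValueError(f"inconsistent: {x} and {y} are declared different")
    return {x, y}

try:
    merge("tom", "betty")
except ValueError as e:
    print(e)  # inconsistent: tom and betty are declared different
```

with the distinctness declared, the sole-ownership constraint from the car example now does what the modeler expected: it flags the two ownership assertions as contradictory.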
so in the tom and betty situation described above, we have several means to avoid the undesirable inference. of course, we could just declare that they are different. but we can also have tom belong to the concept male, declared as disjoint from the concept female.
there may be other indirect ways to refine a model once you discover such an unwanted inference. inference engines are often capable of giving an explanation of a certain inference, so you can figure out the logical steps that led to it and break the chain somewhere else. in our example, we may have a relationship ismarriedto(tom, betty) which may be declared in the tbox to be symmetric, which would imply ismarriedto(betty, tom). to fix the problem, we could declare the ismarriedto property as irreflexive, which would imply tom ≠ betty.
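that last repair can be sketched as two simple rule applications over a toy fact base (hypothetical python, not how a tableau reasoner actually works):

```python
# asserted relationship facts.
married = {("tom", "betty")}

# symmetry rule: ismarriedto(x, y) implies ismarriedto(y, x).
married |= {(y, x) for (x, y) in married}

# irreflexivity rule: ismarriedto(x, y) is only satisfiable if x and y
# denote different entities, so each pair forces a distinctness fact.
distinct_pairs = {frozenset(p) for p in married}

print(("betty", "tom") in married)                    # True
print(frozenset({"tom", "betty"}) in distinct_pairs)  # True
```

the derived distinctness fact is exactly what blocks the reasoner from ever concluding tom ≈ betty from the ownership constraint.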
finally, note that some tools may allow you to set a global parameter to force the una, which has the same effect as declaring all individuals to be different. so check your documentation.
conclusion
ok, if you've read and understood the above, then you already know almost all of owl. i deliberately stuck with the description logic terminology and syntax because i believe it forces one to stay in "math land" and think about owl from a mathematical logic viewpoint, rather than through the prism of the oo programmer, with all the baggage that this entails.
owl dl is a formalism that merges object-oriented modeling ideas into a mathematical logic, allowing you to encode highly structured knowledge and to make nontrivial inferences from it. one of the skills you'll need to develop while working with owl is the ability to formulate questions in terms of logical concept descriptions, and to get comfortable with the idea that class constraints are expressed as logical formulas and that subclassing is logical implication. don't forget that knowledge is open-ended. and by the way, reasoning in owl dl is sound, complete and decidable: it only makes true inferences, it makes all the inferences that follow from your statements, and you can't make it loop forever.
coming up
in the next and following installments, we'll dive into actual modeling and coding. as a piece of homework, i'd suggest you go through the very detailed protege tutorial . we will be building an application based on an owl model and using the standard owlapi .