Using Prolog, OWL, and HyperGraphDB
In this blog, we mix news and technical postings, ranging from nearly trivial tips that might be useful to other programmers, to rants about specific technologies, to more philosophical speculations on computing in general.
Prolog, OWL, and HyperGraphDB
One of the more attractive aspects of HyperGraphDB is that its model is so general, and its API so open all the way down to the storage layer, that many (meta)models can be implemented naturally and efficiently on top of it. Not only that, but those metamodels coexist and can form interesting computational synergies when implementing actual business applications.
By metamodel here, I mean whatever formalism or data structure one uses to model the application domain, e.g., RDF or UML. As I've pointed out elsewhere, there is a price to pay for this generality. For one, it's very hard to answer the questions, "What is the model of HyperGraphDB? Is it a graph? Is it a relational model? Is it object-oriented?" The answer is that it is whatever you want it to be.
Just as there is no one correct domain model for a given application, there is no one correct metamodel. The notion of multi-model or polyglot databases is now becoming somewhat of a buzzword. It naturally attracts attention because, just as you can't solve all computational problems with a single data structure, you can't implement all types of applications with a single data model. The design of HyperGraphDB recognized that very early. As a consequence, metamodels like FOL (first-order logic) as implemented in Prolog and DL (description logic) as realized in the OWL 2.0 standard can become happy teammates in tackling difficult knowledge representation and reasoning problems.
In this blog, we will demonstrate a simple integration between the TuProlog interpreter and the OWL HyperGraphDB module.
We will use Seco because it is easier to share sample code in a notebook. Besides, if you are playing with HyperGraphDB, Seco is a great tool. If you've seen some of the more recent notebook environments like Jupyter, the notion should be familiar. A tarball with all the code, both the notebook and the Maven project, can be found at the end of this blog.
Installing and Running Seco With Prolog and OWL
First, download Seco.
Then, unzip the tarball and start it with the run.sh (Mac or Linux) or run.cmd (Windows) script. You should see a "Welcome" notebook. Read it; it'll give you an intro to the tool.
Put the Prolog plugin JAR in SECO_HOME/lib and restart Seco. It should pick up the new language automatically. To test it:
- Open a new notebook.
- Right-click anywhere inside and select Set Default Language as Prolog.
You can now see the Prolog interpreter in action by typing, for example:
```prolog
father('Arya Stark', 'Eddard Stark').
father(computing, 'Alan Turing').
```
Evaluate the cell by hitting Shift + Enter. Because of the dot at the end, each statement will be added as a new fact. Then, you can query the facts by typing, for example, father(X, Y)?
Again, evaluate with Shift + Enter. Because of the question mark at the end, it will be treated as a query. The output should be a small interactive component that allows you to iterate through the possible solutions of the query. There should be two solutions, one for each declared father.
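Prolog in Seco is not limited to ground facts; you can define rules the same way and query them. Here is a small sketch (the parent rule is my own illustration, not from the original post, reading father(C, P) as "P is the father of C"):

```prolog
% Facts from before: father(C, P) means P is the father of C
father('Arya Stark', 'Eddard Stark').
father(computing, 'Alan Turing').

% Rule: P is a parent of C whenever P is the father of C
parent(P, C) :- father(C, P).
```

Evaluating the rule and then querying parent(Who, 'Arya Stark')? should bind Who to 'Eddard Stark' in the solution navigator.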
So far, so good. Now let's add OWL. The JAR you need can be found in the HyperGraphDB Maven repo:
You can simply put this JAR under SECO_HOME/lib and restart; for simplicity, let's do that.
Side Note: You can also add it to the runtime context classpath (for more, see here). The notion of a runtime context in Seco is somewhat analogous to an application in a J2EE container: it has its own class loader with its own runtime dependencies, etc. Just as with a Java web server, JARs in the lib directory are available to all runtime contexts, while JARs added to a runtime context are available only to that context. Seco creates a default context, so you always have one, but you can, of course, create others.
With the OWL module installed, we can load an ontology into the database. The sample ontology for this blog can be found here.
Save that file as owlsample-1.owl somewhere on your machine. To load it, open another notebook, this time without changing the default language (which will be BeanShell). Then, you can load the ontology with the following code, which you should copy and paste into a notebook cell and evaluate with Shift + Enter:
```java
import java.io.File;
import org.hypergraphdb.app.owl.*;
import org.semanticweb.owlapi.model.*;

File ontologyFile = new File("/home/borislav/temp/owlsample-1.owl");
HGDBOntologyManager manager =
    HGOntologyManagerFactory.getOntologyManager(niche.getLocation());
HGDBOntology ontology =
    manager.importOntology(IRI.create(ontologyFile), new HGDBImportConfig());
System.out.println(ontology.getAxioms());
```
The printout should spit out some axioms on your console. If that works, you have the Prolog and OWL modules running in a HyperGraphDB database instance. In this case, the database instance is the Seco niche (code and notebooks in Seco are automatically stored in a HyperGraphDB instance called the niche). To do it outside Seco, take a look at the annotated Java code in the sample project linked to in the last section.
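Outside Seco, the only Seco-specific piece in the snippet above is the niche variable, which is simply a HyperGraphDB instance. A minimal standalone sketch might look like the following, where the database and ontology paths are placeholders of my own and HGEnvironment.get is the standard call for opening or creating a HyperGraphDB instance:

```java
import java.io.File;
import org.hypergraphdb.HGEnvironment;
import org.hypergraphdb.HyperGraph;
import org.hypergraphdb.app.owl.*;
import org.semanticweb.owlapi.model.IRI;

public class LoadOntology {
    public static void main(String[] args) throws Exception {
        // Open (or create) a HyperGraphDB instance; this plays the role of Seco's niche.
        HyperGraph graph = HGEnvironment.get("/home/user/hgdb-owl-demo");
        try {
            HGDBOntologyManager manager =
                HGOntologyManagerFactory.getOntologyManager(graph.getLocation());
            HGDBOntology ontology = manager.importOntology(
                IRI.create(new File("/home/user/owlsample-1.owl")),
                new HGDBImportConfig());
            System.out.println(ontology.getAxioms());
        } finally {
            graph.close();
        }
    }
}
```

This mirrors the notebook code; the annotated Maven project linked below is the authoritative version.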
Prolog Reasoning Over OWL Axioms
So now, suppose we want to query OWL data from Prolog, just as any other Prolog predicate, with the ability to backtrack, etc. All we need to do is write a HyperGraphDB query expression and associate it with a Prolog predicate. For example, a query expression that returns all OWL object property assertions looks like this: hg.typePlus(OWLObjectPropertyAssertionAxiomHGDB.class). The reason we use typePlus (meaning atoms of this type and all of its subtypes) is that the concrete type of OWL object property axioms is HGDB-implementation dependent, and it's good to remain agnostic of that. Then, for the actual binding to a Prolog predicate, one needs to dig into the TuProlog implementation internals just a bit.
Here is how it looks:
```java
// The HGPrologLibrary is what integrates the TuProlog interpreter with HGDB.
import alice.tuprolog.hgdb.HGPrologLibrary;

// Here we associate a database instance with a Prolog interpreter instance;
// that association is the HGPrologLibrary.
HGPrologLibrary lib = HGPrologLibrary.attach(niche,
        thisContext.getEngine("prolog").getProlog());

// Each library contains a map between Prolog predicates (name/arity) and arbitrary
// HyperGraphDB queries. When the Prolog interpreter sees that predicate, it will
// invoke the HGDB query as a clause store.
lib.getClauseFactory().getPredicateMapping().put(
    "objectProperty/3",
    hg.typePlus(org.hypergraphdb.app.owl.model.axioms.OWLObjectPropertyAssertionAxiomHGDB.class));
```
Evaluate the above code in the BeanShell notebook.
With that in place, we can now perform a Prolog query to retrieve all OWL object property assertions from our ontology:
```prolog
objectProperty(Subject, Predicate, Object)?
```
Evaluating the above Prolog expression in the Prolog notebook should open up that solution navigator and display the object property triples one by one.
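The same mechanism generalizes to other axiom types: any predicate name/arity can be mapped to any HyperGraphDB query expression. As a sketch, one could map a second predicate for data property assertions; note that the axiom class name below is my assumption by analogy with the object property case, so check the actual class names in the OWL-HGDB module:

```java
// Hypothetical: map dataProperty/3 to OWL data property assertion axioms.
// The class name is assumed by analogy and not verified against the module.
lib.getClauseFactory().getPredicateMapping().put(
    "dataProperty/3",
    hg.typePlus(org.hypergraphdb.app.owl.model.axioms.OWLDataPropertyAssertionAxiomHGDB.class));
```

With that in place, dataProperty(Subject, Predicate, Value)? would backtrack through data property assertions in the same fashion.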
Note: We've been switching between the BeanShell and Prolog notebooks to evaluate code in two different languages, but you can also mix languages in the same notebook. The tarball of the sample project linked below contains a single notebook file called prologandowl.seco, which you can File > Import into Seco and evaluate without the copy-and-paste effort. In that notebook, cells have been individually configured for different languages.
A tighter integration between this trio of HyperGraphDB, Prolog, and OWL would include the following missing pieces:
- Ability to represent OWL expressions (class and property) in Prolog.
- Ability to assert and retract OWL axioms from Prolog.
- Ability to invoke an OWL reasoner from Prolog.
- Perhaps a new term type in Prolog representing an OWL entity.
I promise to report when that happens. The whole thing would make much more sense if an OWL reasoning implementation were included in the OWL-HGDB module, instead of the current RAM-limited approach of off-the-shelf reasoners like Pellet and HermiT.
Appendix: Annotated Non-Seco Version
You can find a sample standalone Java project that does exactly the above, consisting of a Maven pom.xml with the necessary dependencies, here.
The tarball also contains a Seco notebook with the above code snippets, which you can import to evaluate the cells and see the code in action. The OWL sample ontology is in there as well.
Published at DZone with permission of Borislav Iordanov, DZone MVB. See the original article here.