
An Evaluation of Knowledge Base Systems for Large OWL Datasets

Authors: Yuanbo Guo; Zhengxiang Pan; Jeff Heflin; Lehigh University, Bethlehem, PA, Dept. of Computer Science and Electrical Engineering.
Publisher: Ft. Belvoir: Defense Technical Information Center, January 2004.
Edition/Format: eBook : English
Database: WorldCat

Find a copy online

Links to this item: <http://handle.dtic.mil/100.2/ADA451855>

Details

Material Type: Internet resource
Document Type: Internet Resource
All Authors / Contributors: Yuanbo Guo; Zhengxiang Pan; Jeff Heflin; Lehigh University, Bethlehem, PA, Dept. of Computer Science and Electrical Engineering.
OCLC Number: 227894594
Notes: The original document contains color images.
Description: 28 p.

Abstract:

In this paper, we present our work on evaluating knowledge base systems with respect to use in large OWL applications. To this end, we have developed the Lehigh University Benchmark (LUBM). The benchmark is intended to evaluate knowledge base systems with respect to extensional queries over a large dataset that commits to a single realistic ontology. LUBM features an OWL ontology modeling the university domain, synthetic OWL data generation that can scale to an arbitrary size, fourteen test queries representing a variety of properties, and a set of performance metrics. We describe the components of the benchmark and some rationale for its design. Based on the benchmark, we have conducted an evaluation of four knowledge base systems (KBS). To our knowledge, no experiment has been done with the scale of data used here. The smallest dataset used consists of 15 OWL files totaling 8 MB, while the largest dataset consists of 999 files totaling 583 MB. We evaluated two memory-based systems (OWLJessKB and memory-based Sesame) and two systems with persistent storage (database-based Sesame and DLDB-OWL). We show the results of the experiment and discuss the performance of each system. In particular, we have concluded that existing systems need to place a greater emphasis on scalability.
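
The abstract above refers to extensional queries over generated OWL data. As a rough illustration only (the paper predates SPARQL and ran its queries through each system's native interface), the sketch below uses rdflib to load one generated data file and ask a LUBM-style question: which graduate students take a particular course. The file name, the univ-bench namespace URI, and the course URI are assumptions for illustration, not taken from this record.

    # Illustrative sketch only; not the paper's methodology. The file name,
    # namespace URI, and course URI below are assumed for the example.
    from rdflib import Graph

    g = Graph()
    # One file produced by the LUBM data generator (assumed name), in RDF/XML.
    g.parse("University0_0.owl", format="xml")

    # A LUBM-style extensional query: graduate students enrolled in one specific course.
    QUERY = """
    PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
    SELECT ?student WHERE {
        ?student a ub:GraduateStudent .
        ?student ub:takesCourse <http://www.Department0.University0.edu/GraduateCourse0> .
    }
    """

    for row in g.query(QUERY):
        print(row.student)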


Linked Data


<http://www.worldcat.org/oclc/227894594>
    library:oclcnum "227894594" ;
    owl:sameAs <info:oclcnum/227894594> ;
    rdf:type schema:Book ;
    schema:bookFormat schema:EBook ;
    schema:datePublished "2004" ;
    schema:datePublished "JAN 2004" ;
    schema:description "In this paper, we present our work on evaluating knowledge base systems with respect to use in large OWL applications. ..."@en ;  # abridged; the full abstract is given above
    schema:exampleOfWork <http://worldcat.org/entity/work/id/137539580> ;
    schema:inLanguage "en" ;
    schema:name "An Evaluation of Knowledge Base Systems for Large OWL Datasets"@en ;
    schema:numberOfPages "28" ;
    schema:url <http://handle.dtic.mil/100.2/ADA451855> .
    # Objects of library:placeOfPublication, schema:about, schema:contributor,
    # schema:publisher, and one further schema:url were not captured in this extract.

Content-negotiable representations
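
The record above is also published as linked data in several RDF serializations. As a minimal sketch, assuming the record URI still answers HTTP content negotiation with a Turtle representation (the formats actually offered may differ), the triples could be fetched and listed with requests and rdflib:

    # Sketch under the assumption that the record URI serves Turtle when asked;
    # adjust the Accept header if a different serialization is offered.
    import requests
    from rdflib import Graph, URIRef

    RECORD = "http://www.worldcat.org/oclc/227894594"

    resp = requests.get(RECORD, headers={"Accept": "text/turtle"}, timeout=30)
    resp.raise_for_status()

    g = Graph()
    g.parse(data=resp.text, format="turtle")

    # Print every predicate/object pair attached to the record URI.
    for p, o in g.predicate_objects(URIRef(RECORD)):
        print(p, o)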
