Background:
Knowledge Sharing and Acquisition was a topic I got interested in while working on the Office of the Future project at AT&T in the early 80's. Online services like CompuServe, The Source, and AOL were growing in popularity, and the Internet had moved to TCP/IP by the mid 80's, when AT&T broke up and I started a consulting company, DIMEX (Digital Information Management and Exchange).

I was a member of the User Requirements for Office Systems committee at the American National Standards Institute (ANSI) in the mid 80's, where we tried to establish standards for email, calendaring, contact, and information databases, in which intelligent agents would know how to organize information sent primarily by email. But nothing got approved, because companies thought standards would stifle the innovation they were developing in their research labs.

The development of the Hypertext Transfer Protocol (HTTP) by Tim Berners-Lee at CERN, and of the Mosaic web browser by Marc Andreessen at NCSA at UIUC, in the early 90's led to the explosive, unorganized growth of the World Wide Web.


The Semantic Web:
In 2001, Tim Berners-Lee and others laid out a vision for a Semantic Web, which would include a notion of meaning in data and services. Intelligent agents would exchange information and rules for how to interact with that information, with or without human intervention; appointments would be automatically scheduled; and automated agents would select and invoke services. Information would be easy to find without depending solely on keywords.

A three-level model was proposed:
3. Applications - Intelligent Agents
2. Services acting on requests from level 3
1. Data models and formats
Source: "Rethinking the Semantic Web", Peer to Peer at Stanford, 2005


Ontologies:
An ontology is an explicit specification of a conceptualization (Tom Gruber's definition). The term is borrowed from philosophy, where an Ontology is a systematic account of Existence.
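
To make the definition concrete, here's a minimal sketch of a tiny ontology, again in Python with rdflib. It states explicitly, in machine-readable form, the conceptualization that a Musician is a kind of Person who plays an Instrument. The ex: class and property names are invented for this example.

  # A tiny ontology: an explicit, machine-readable specification of a few
  # concepts and how they relate. The ex: names are hypothetical.
  from rdflib import Graph, Namespace
  from rdflib.namespace import OWL, RDF, RDFS

  EX = Namespace("http://example.org/ontology#")
  g = Graph()
  g.bind("ex", EX)

  # Classes: the concepts themselves.
  g.add((EX.Person, RDF.type, OWL.Class))
  g.add((EX.Musician, RDF.type, OWL.Class))
  g.add((EX.Instrument, RDF.type, OWL.Class))

  # Every Musician is a Person.
  g.add((EX.Musician, RDFS.subClassOf, EX.Person))

  # A property relating Musicians to Instruments.
  g.add((EX.plays, RDF.type, OWL.ObjectProperty))
  g.add((EX.plays, RDFS.domain, EX.Musician))
  g.add((EX.plays, RDFS.range, EX.Instrument))

  print(g.serialize(format="turtle"))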

In the last couple of years, as part of the Semantic Web activity, a number of different systems have been built. These systems help perform one of two tasks: (1) creating ontologies, and (2) annotating web pages with ontology-derived semantic tags. By and large, both classes of systems have focused on manual and semi-automatic tooling to improve the productivity of a human ontologist or annotator, rather than on fully automated methods.
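
As a very rough sketch of the second task, the Python fragment below annotates a snippet of text with semantic tags by matching it against labels drawn from an ontology. Real systems such as SemTag work from a large taxonomy like TAP and add disambiguation (deciding which "Paris" is meant); this toy exact-match lookup only illustrates the idea, and the labels and class names are invented.

  # A toy annotator: tag occurrences of known ontology labels in text.
  # Real systems (e.g. SemTag) use a large taxonomy and disambiguate
  # ambiguous labels; this sketch just does exact string matching.
  import re

  # Hypothetical label -> ontology-class table, a stand-in for a real KB.
  LABELS = {
      "Yo-Yo Ma": "tap:Musician",
      "CERN": "tap:Organization",
      "Geneva": "tap:City",
  }

  def annotate(text):
      """Return (start, end, label, ontology_class) for each label found."""
      tags = []
      for label, cls in LABELS.items():
          for m in re.finditer(re.escape(label), text):
              tags.append((m.start(), m.end(), label, cls))
      return sorted(tags)

  doc = "Yo-Yo Ma performed in Geneva, not far from CERN."
  for start, end, label, cls in annotate(doc):
      print(f"{start:3}-{end:3}  {label!r} -> {cls}")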

Glossary:
DAML - DARPA Agent Markup Language
DAML+OIL - a successor language to DAML and OIL that combines features of both.
DARPA - Defense Advanced Research Projects Agency
OIL - Ontology Inference Layer.
OWL - Web Ontology Language - supersedes DAML+OIL.
RDF - Resource Description Framework - a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata model using XML, but which has come to be used as a general method of modeling knowledge through a variety of syntax formats (see the example after this glossary).
TAP - (The Alpiri Project) - an RDF database of thousands of 'real world' things, organized into categories.
See www.openrdf.org/
Semantic Web - A project that intends to create a universal medium for information exchange by putting documents with computer-processable meaning (semantics) on the World Wide Web.
TAP KB - TAP Knowledge Base
tap.rdf - the TAP knowledge base distributed as an RDF file
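
To illustrate the "variety of syntax formats" mentioned in the RDF entry above, the sketch below uses Python's rdflib to read one statement written in Turtle and re-serialize the identical triple as RDF/XML. The ex: vocabulary is invented for the example.

  # One RDF triple, two syntaxes. rdflib parses Turtle and can emit the
  # same graph as RDF/XML (among other formats). ex: is a made-up namespace.
  from rdflib import Graph

  turtle_doc = """
  @prefix ex: <http://example.org/schema#> .
  <http://example.org/people/RDG> ex:memberOf <http://example.org/orgs/ANSI> .
  """

  g = Graph()
  g.parse(data=turtle_doc, format="turtle")

  print(g.serialize(format="xml"))    # the same triple, now as RDF/XML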

Links:
Knowledge Systems AI Lab at ksl.stanford.edu/
  TAP Project
  What is Ontology?
SemTag and Seeker: Bootstrapping the Semantic Web via Automated Semantic Annotation at IBM Almaden Research Center, 2003
SIMILE (Semantic Interoperability of Metadata and Information in unLike Environments) Project at MIT
Levels of Knowledge
The Open Directory Project (ODP)


last updated 26 July 2006