Re: CG: Re: Top level ontology

Sergei Nirenburg (sergei@crl.nmsu.edu)
Sun, 07 Dec 1997 18:33:19 -0700

John McCarthy writes:

> 1. Maybe polysemy can be divided into accidental polysemy and
> essential polysemy. The distinction between a river bank and a
> financial institution is accidental. Different languages won't have
> the same accidental polysemies. However, it seems to me that either
> concept may give rise to polysemies in a systematic way. For example,
> someone may refer to banking a boat meaning to approach the bank of
> the river and to bank money meaning putting it in the bank.

There is a vast amount of literature on polysemy and its various kinds, and even
computational treatments of it (only within the last ten years or so, a *very
incomplete* list of contributors would include such names as Cruse; Pustejovsky;
Wilks; Fass; Hirst; Lesk; Cowie and Guthrie; and our Mikrokosmos group:
Onyshkevych, Mahesh, Raskin, Viegas, Beale, Nirenburg). Word sense
disambiguation (WSD) has also attracted the interest of corpus-oriented
linguists (that is, statisticians). So, the problem is being studied
seriously. [To my knowledge, however, only the Mikrokosmos project is at the
same time a) devoted exclusively to natural language semantics and b) reliant on
an actual ontology.]

> AI will
> involve itself in an endless chase if it proposes to tie down all
> these polysemies in advance. I suppose they can be avoided for
> systems that don't have to tolerate human ad hoc invention of
> polysemies.

If one constrains oneself this way, there is no hope for automatic semantic
analysis of running text. Indeed, the tension between the static knowledge
sources (ontologies and lexicons) and the dynamically derived text meanings is
the crux of the entire matter. If one builds ontologies to support automatic
semantic analysis, one has to expect to have to build devices for exactly the
cases when, as you put it, humans "invent" ambiguities. Note that WordNet and
similar compendia have not been intended directly for computer processing. They
are used in NLP much like MRDs such as LDOCE are.
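The tension between a static sense inventory and dynamically invented senses can be sketched in a toy program. This is only an illustration, not anything from the Mikrokosmos project; the lexicon, cue words, and function names below are all hypothetical.

```python
# Toy illustration: a static lexicon enumerates known senses of "bank",
# but running text can use a sense the lexicon never listed.
# All names and data here are hypothetical, for illustration only.

LEXICON = {
    "bank": [
        {"sense": "bank-financial", "cues": {"money", "deposit", "loan"}},
        {"sense": "bank-river", "cues": {"river", "shore", "boat"}},
    ],
}

def disambiguate(word, context_words):
    """Pick the listed sense whose cue words best overlap the context.

    Returns None when no listed sense matches -- the case where a
    dynamically "invented" sense escapes the static inventory and some
    extra device would be needed.
    """
    best, best_score = None, 0
    for entry in LEXICON.get(word, []):
        score = len(entry["cues"] & set(context_words))
        if score > best_score:
            best, best_score = entry["sense"], score
    return best

print(disambiguate("bank", ["deposit", "the", "money"]))       # bank-financial
print(disambiguate("bank", ["boat", "near", "the", "river"]))  # bank-river
print(disambiguate("bank", ["bank", "the", "airplane"]))       # None: unlisted sense
```

The last call is the interesting one: a static knowledge source returns nothing for a use of "bank" it never anticipated, which is exactly where dynamic devices for deriving text meaning would have to take over.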

> 2. John Sowa writes at the end.
>
> What makes logic hard is that you cannot cheat. In order to
> translate anything else into logic, you must be explicit
> about every implicit detail.
>
> I'm thinking about how to use approximate concepts in logic so as
> not to need to be explicit about every implicit detail.

Let me just add, not polemically, that logic may be just fine if one makes an
effort to explain the meaning of the atomic elements (the vocabulary). This is,
it seems, what makes the enterprise messy and leads people to think of fuzzy
or approximate concepts.

Sergei