Selected Newsgroup Message
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: Case Based Reasoning - is this AI?...
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <85dvti$nad$1@mulga.cs.mu.OZ.AU>
Approved: ai@cs.mu.oz.au (Moderator comp.ai)
References: <858sl1$gpu$1@mulga.cs.mu.OZ.AU>
X-Date: Mon, 10 Jan 2000 10:26:29 -0200
X-Mod: ?
Organization: Intelliwise Research and Training
Followup-To: comp.ai
Newsgroups: comp.ai
Andrzej Lewandowski wrote in message <858sl1$gpu$1@mulga.cs.mu.OZ.AU>...
>For whatever reason I am thinking about using Case Based Reasoning to
>solve some problem. I spent some time studying CBR, and, yes, I know
>that there are books that cover CBR in depth. However, I am a bit
>astonished that the "mainstream" AI textbooks and monographs don't
>mention CBR either at all, or there is very (really, very...) little
>discussion. The most up-to-date one, by Nilsson, has only about two
>sentences about CBR.
>
>Question: why is CBR ignored in AI books? Is it a "bastard of AI", or
>is it not considered AI?
>
This is indeed an intriguing question. The fact is that CBR is
useful, with some projects using it as their central technique. But
I can try to offer an explanation for the lack of interest in CBR
shown by "mainstream" AI: it stems from the fact that CBR alone is
not enough.
First of all, I think this lack of interest is unfounded. CBR
(and its more general principle, Analogical Reasoning) should be
considered as one of the important aspects of any intelligent entity.
The big question is that in order to be intelligent, an entity
must be able to solve *new* problems, ones for which it does
not have a "ready-made" answer. This could easily be used as a point
*against* CBR, because the technique excels exactly at finding
ready-made solutions to current problems by looking in a database.
But the point is that the procedure rarely finds an exact match,
which means it must try to find *similar* solutions and *adapt*
(modify) them to suit the new circumstance (part of the
difficulty of CBR methods stems from the complexity of the strategies
used to find 'similarities'). This procedure is invaluable, and I offer
the thought that it happens frequently in human cognition (which
is very good at noticing similarities).
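To make the retrieve-and-adapt cycle concrete, here is a minimal
sketch in Python. The case data, the feature names, and the naive
similarity and adaptation rules are all invented for illustration;
a real CBR system would use much richer case representations and
similarity measures.

# Toy case base: each case is (problem features, known solution).
case_base = [
    ({"load_kg": 100, "distance_km": 5},  {"vehicle": "van",   "trips": 1}),
    ({"load_kg": 900, "distance_km": 50}, {"vehicle": "truck", "trips": 1}),
]

def similarity(p1, p2):
    # Naive similarity: negative sum of absolute feature differences.
    return -sum(abs(p1[k] - p2[k]) for k in p1)

def retrieve(problem):
    # Find the stored case whose problem is most similar to the new one.
    return max(case_base, key=lambda case: similarity(case[0], problem))

def adapt(old_problem, old_solution, new_problem):
    # Toy adaptation rule: scale the number of trips with the load.
    solution = dict(old_solution)
    ratio = new_problem["load_kg"] / old_problem["load_kg"]
    solution["trips"] = max(1, round(old_solution["trips"] * ratio))
    return solution

new_problem = {"load_kg": 300, "distance_km": 8}
old_problem, old_solution = retrieve(new_problem)
print(adapt(old_problem, old_solution, new_problem))

The interesting (and hard) part is, of course, the similarity and
adaptation steps, which this sketch reduces to toy rules.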
In fact, analogical reasoning may be seen as one of the cornerstones
of human thought. When faced with completely new problems, we tend
to act randomly, trying to find a solution by trial and error. There
are formal reasons to support this: there is no better method than
random exploration when one knows nothing about a particular domain.
But once we notice some degree of similarity among solutions and
problems, we tend to 'keep' in memory a generalized form of that
solution.
When faced with a completely new problem again, instead of using
only random methods, we bias our attempts toward those similar
remembered solutions, and this is frequently enough to suggest good
strategies for the problem at hand. This improves our performance over
purely random methods. It also augments our "database" of solutions
to problems, leading to an increase in our problem-solving
abilities as experience accumulates.
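That loop, in the same spirit as the sketch above (and reusing its
case_base, similarity, retrieve and adapt definitions), might look
roughly like this; random_exploration is just a placeholder for
whatever blind search the domain allows:

import random

def random_exploration(problem):
    # Placeholder for trial and error when no similar case is known.
    return {"vehicle": random.choice(["van", "truck"]),
            "trips": random.randint(1, 5)}

def solve(problem, min_similarity=-500):
    # Try a similar remembered case first; otherwise explore blindly.
    if case_base:
        old_problem, old_solution = retrieve(problem)
        if similarity(old_problem, problem) >= min_similarity:
            return adapt(old_problem, old_solution, problem)
    return random_exploration(problem)

def remember(problem, solution):
    # Keep the pair, enlarging the 'database' of experiences.
    case_base.append((problem, solution))

problem = {"load_kg": 1200, "distance_km": 60}
solution = solve(problem)
remember(problem, solution)   # next time this experience is available

Each solved problem feeds the case base, so the fallback to blind
exploration becomes rarer as experience accumulates.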
So CBR *alone* cannot be considered AI. But as part of a more
complex architecture, it may be seen as almost indispensable.
Hence, its absence from some AI textbooks is not fair.
Regards,
______________________________________________________
Sergio Navega
Intelliwise Research
http://www.intelliwise.com/snavega
---
[ comp.ai is moderated. To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]