Selected Newsgroup Messages
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Is knowledge knowable?
Date: 08 Apr 1999 00:00:00 GMT
Message-ID: <370caa47@news3.us.ibm.net>
References: <19990330175902.25132.00000172@ng66.aol.com>
<7ec9qg$8pi$1@nnrp1.dejanews.com> <923416807.836.95@news.remarQ.com>
<7eepg5$abp$1@nnrp1.dejanews.com> <7egpuc$mf$1@nnrp1.dejanews.com>
<370BF26A.FF894F6C@ix.netcom.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Phil Roberts, Jr. wrote in message <370BF26A.FF894F6C@ix.netcom.com>...
>
>
>cui_siling@my-dejanews.com wrote:
>>
>
>>
>> Knowledge is rules including natural laws disentangled by consciousness and
>> embedded in brain unconsciousness. We know, with intention, some of it only
>> when it appears consciously.
>>
>
>I would argue that knowledge results from 'the cognition of abstruse similarity
>and difference' and is about 99.9% ANAlogical. So I don't concur that knowledge
>is a matter of rules.
Nice thought, Phil. But putting it at 99.9% may be a little too much. I'm a
longtime fan of analogical methods, but even so I would be reluctant to
give them that much power. Let me, then, write about the aspect I think
analogical reasoning is responsible for.
A person who has some kind of knowledge may be classified, roughly, into one
of two categories: a novice, who has just received all the
necessary information, or an expert, who has had time to think about
the matter. Both can recite the "basics" of the specific knowledge.
Both may be able to report a significant analogy that brings
the important aspects of the subject to the surface. But only the expert will
have fluency when reasoning about the subject, to the point of being able
to say things about it that *he has never heard before*. And, equally
important, he/she doesn't have to think about this new way of seeing
things: it is just "there" in his/her mind, ready to be uttered.
Again very roughly, the transformation from novice to expert seems
to me to occur in a two-step process. The novice acquires the new
knowledge by reading, listening to a lecture, or whatever. At this
point, the novice will have someone else's "models" in his mind, including
some analogical situations devised by the lecturer/writer. This is the
stage where I'll put 99.9% of the analogical models.
But as this novice thinks about the matter, he/she will progressively
transform those initial analogies into "deeper" concepts, maybe even
using other analogies (but this time, his own), and certainly using snippets
of previous experiences as support. Understanding something as an expert
involves thinking about it in a way that fills the "terrain" where the
knowledge must be planted.
The image I usually have of this process is this. A novice uses a crane,
ropes and cables to hold up the 10th floor of his knowledge building.
After some time, he/she uses new bricks (or copies from other edifices he
has) to build the lower floors. This goes on until he gets to the ground.
Now he may get rid of the ropes and cables: his 10th floor will be
grounded *in his own developed knowledge*. Understanding is a highly
personal way of seeing things, and one can only be considered an expert
once a personal vision has been developed.
It is interesting to explore these aspects in terms of AI architectures.
For me, AI (symbolic AI in particular) developed methods to build solid
10th floors. But there's nothing below them, and any breeze will be enough
to bring them down. So how can we put some solidity into that building?
How can we make an AI system's knowledge resilient? The answer, IMHO,
is lurking in some of Piaget's ideas.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Is knowledge knowable?
Date: 09 Apr 1999 00:00:00 GMT
Message-ID: <370e0732@news3.us.ibm.net>
References: <19990330175902.25132.00000172@ng66.aol.com>
<7ec9qg$8pi$1@nnrp1.dejanews.com> <923416807.836.95@news.remarQ.com>
<7eepg5$abp$1@nnrp1.dejanews.com> <7egpuc$mf$1@nnrp1.dejanews.com>
<370BF26A.FF894F6C@ix.netcom.com> <370caa47@news3.us.ibm.net>
<370D4EB7.D0FBF17F@ix.netcom.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Phil Roberts, Jr. wrote in message <370D4EB7.D0FBF17F@ix.netcom.com>...
>
>
>Sergio Navega wrote:
>>
>
>> >
>> >I would argue that knowledge results from 'the cognition of abstruse similarity
>> >and difference' and is about 99.9% ANAlogical. So I don't concur that knowledge
>> >is a matter of rules.
>>
>> Nice thought, Phil.
>
>I'm a sucker for anyone who starts a response with 'Nice thought, Phil.' :)
>You just don't see a lot of that in these here parts these days.
>
I thought you would like it ;-)
>> But putting it at 99.9% may be a little too much. I'm a
>> longtime fan of analogical methods, but even so I would be reluctant to
>> give them that much power. Let me, then, write about the aspect I think
>> analogical reasoning is responsible for.
>>
>> A person who has some kind of knowledge may be classified, roughly, into one
>> of two categories: a novice, who has just received all the
>> necessary information, or an expert, who has had time to think about
>> the matter. Both can recite the "basics" of the specific knowledge.
>> Both may be able to report a significant analogy that brings
>> the important aspects of the subject to the surface. But only the expert will
>> have fluency when reasoning about the subject, to the point of being able
>> to say things about it that *he has never heard before*. And, equally
>> important, he/she doesn't have to think about this new way of seeing
>> things: it is just "there" in his/her mind, ready to be uttered.
>>
>
>Although an exaggeration, I would regard the novice's knowledge as
>more proximal to that of a parrot which has been taught to recite
>the Gettysburg Address. The parrot has all of the syntactical
>knowledge (if you can call it that) but none of the semantic
>knowledge, and therefore quite possibly no knowledge at all
>(Gettysburg Address wise that is). And I don't think this
>would require much in the way of analogizing, whether in
>the parrot or the novice. Rote learning could do much of
>the work at this stage.
>
Your example is interesting, but I see novices a little differently.
The parrot would just recite the phonological sequence it listens to;
it is a tape recorder. The novice would initially acquire an analogy
that allows him/her not only to recite it, but also to think a bit about
it. As an example, you explain to a child that electricity flows in
a wire just like water in a pipe. The child could then reason that
if we cut the wire, the flow of electricity will stop. This is a novice
thinking with a newly learned analogy.
An expert would use different "internal" models, which may consider,
for example, the effect of resistance on the flow of electrons. His
model is more "complex" and pre-thought-out, allowing him to answer
queries more quickly and precisely.
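Just to make the contrast concrete, here is a toy sketch in Python (the
numbers and names are mine, purely for illustration, not any real model
of cognition). The novice's water-pipe analogy only answers yes/no
questions about flow; the expert's model, which knows about resistance,
can also say *how much* current flows:

  # Toy contrast between a borrowed analogy and a quantitative model.
  # All names and values here are illustrative assumptions of mine.

  def novice_model(wire_cut):
      """Water-in-a-pipe analogy: current either flows or it doesn't."""
      return "no current" if wire_cut else "current flows"

  def expert_model(voltage, resistance, wire_cut):
      """Ohm's law (I = V / R): also answers 'how much?'."""
      if wire_cut:
          return 0.0
      return voltage / resistance   # current in amperes

  print(novice_model(wire_cut=False))              # current flows
  print(expert_model(9.0, 330.0, wire_cut=False))  # ~0.027 A through 330 ohms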
>> Again very roughly, the transformation from novice to expert seems
>> to me to occur in a two-step process. The novice acquires the new
>> knowledge by reading, listening to a lecture, or whatever. At this
>> point, the novice will have someone else's "models" in his mind, including
>> some analogical situations devised by the lecturer/writer. This is the
>> stage where I'll put 99.9% of the analogical models.
>>
>> But as this novice thinks about the matter, he/she will progressively
>> transform those initial analogies into "deeper" concepts, maybe even
>> using other analogies (but this time, his own), and certainly using snippets
>> of previous experiences as support. Understanding something as an expert
>> involves thinking about it in a way that fills the "terrain" where the
>> knowledge must be planted.
>>
>
>I don't think you've said much to challenge my contention that ALL
>of the knowledge, whether the flimsy, half-understood stuff of the
>novice or the "seeing" it all in familiar terms of the expert, isn't
>ALL explainable in terms of 'the cognition of abstruse
>similarity and difference' (my definition of reasoning).
>
I really like the phrase "cognition of abstruse similarity and
difference". But I find that it explains only the learning/perceptual
part (or the way knowledge gets inside one's mind). There's something
else that we call "thinking" that I don't see fitting this concept.
Obviously these processes (learning and thinking) are sometimes so
interwoven that the differences are almost unrecognizable.
But the "pure" thinking of an expert is more like a flow of
activations over pre-built associations. The novice doesn't seem to
have all the paths pre-built: he/she has to construct them along the
way. The novice must think it through; the expert just "knows" it.
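To say a little more precisely what I mean by "a flow of activations over
pre-built associations", here is a minimal sketch (the graph, weights and
decay factor are arbitrary assumptions of mine, not a claim about how
brains do it). The expert's associations are already in place, so
activating one concept makes the related ones light up with no deliberate
search:

  # Minimal spreading-activation sketch over a hand-built association graph.
  # The graph and parameters are illustrative assumptions only.
  from collections import defaultdict

  associations = {                      # the expert's pre-built links
      "voltage":    [("current", 0.9), ("battery", 0.6)],
      "current":    [("resistance", 0.8), ("heat", 0.4)],
      "resistance": [("ohms_law", 0.9)],
  }

  def spread(seed, decay=0.5, steps=3):
      """Propagate activation outward from a seed concept for a few steps."""
      activation = defaultdict(float)
      activation[seed] = 1.0
      frontier = {seed}
      for _ in range(steps):
          reached = set()
          for node in frontier:
              for neighbor, weight in associations.get(node, []):
                  gain = activation[node] * weight * decay
                  if gain > activation[neighbor]:
                      activation[neighbor] = gain
                      reached.add(neighbor)
          frontier = reached
      return dict(activation)

  print(spread("voltage"))   # related concepts receive activation automatically

A novice, in this picture, would have to build those edges on the fly
while thinking, which is exactly why he is slower and less fluent.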
>> The image I usually do of this process is this. A novice uses a crane
>> and ropes and cables to support the 10th floor of his knowledge building.
>> After some time, he/she uses new bricks (or copies from other edifices he
>> has) to build the lower floors.
>
>The copies are more analogies. I am using the term to include ALL
>comparisons, even ones in which the comparisons are literal duplicates.
>
I don't have anything strong against this position. I just want to add
that very often what we use as the "source" to copy from are patterns that
come from sensorimotor circuits. This is what I think constitutes the
"first floor". More on this follows.
>> This goes until he gets to the ground.
>> Now, he may get rid of the ropes and cables, his 10th floor will be
>> grounded *in his own developed knowledge*. Understanding is a highly
>> personal way of seeing things, and one can only be considered an expert
>> once a personal vision is developed.
>>
>
>But I would argue that every bit of what the individual "understands" is
>the result of his ability to compare this to that to this to that, etc.
>etc. and etc. When he is an expert, it is only because he has found
>a path by which to connect up his new knowledge to things he already
>understands, and in terms of things he already understands, which were
>themselves understood by comparing them to other things he already
>understands, etc.
>
I have few problems with this vision.
>> It is interesting to explore these aspects in terms of AI architectures.
>> For me, AI (symbolic AI in particular) developed methods to build solid
>> 10th floors. But there's nothing below them, and any breeze will be enough
>> to bring them down. So how can we put some solidity into that building?
>> How can we make an AI system's knowledge resilient? The answer, IMHO,
>> is lurking in some of Piaget's ideas.
>>
>
>Lost me here, Serge.
>
The way our abstract reasoning appears to work is sometimes seen as
completely disconnected from our sensory and motor interactions with
the world. After all, Stephen Hawking has a brilliant mind even though
he doesn't do much "interacting" with his environment. Traditional AI
people stopped their reasoning at this point and so concluded that if
one could model this abstract reasoning, one would build an artificial
mind. There are *two* important problems with this vision.
First, traditional AI people failed to perceive the *origin* of
those abstract concepts. These concepts are the result of
understanding built from other abstract concepts, which stem from
still other concepts, and so on. But it has to stop somewhere.
Where does it stop?
As an example, think about the word "justice". Could we define
it in a single phrase? I doubt it. What we use to "understand" its
meaning is a *lot* of other concepts and snippets of experience,
down to the level of sensory and motor sensations (imagine that
part of your concept of justice rests on the experience of your
mommy giving back the toy that a mean friend stole from you).
The process of constructing these concepts involves (among a handful
of other things) what you properly called "cognition of abstruse
similarity and difference". I call it perception of repetition
and perception of anomaly.
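A toy example of what I mean by that (entirely my own illustration,
nothing more): scan a stream of symbols, treat pairs that recur many
times as regularities worth keeping, and flag pairs never seen before as
anomalies worth attention.

  # Toy "perception of repetition and perception of anomaly" over a symbol
  # stream; thresholds and data are arbitrary, for illustration only.
  from collections import Counter

  def perceive(stream, repeat_threshold=3):
      pair_counts = Counter()
      regularities, anomalies = [], []
      for prev, curr in zip(stream, stream[1:]):
          pair = (prev, curr)
          if pair_counts and pair_counts[pair] == 0:   # never seen before
              anomalies.append(pair)
          pair_counts[pair] += 1
          if pair_counts[pair] == repeat_threshold:    # seen often enough
              regularities.append(pair)
      return regularities, anomalies

  reps, anoms = perceive(list("abcabcabcabxabcabc"))
  print("repetitions:", reps)   # pairs like ('a','b'), ('b','c') recur
  print("anomalies:", anoms)    # includes the unexpected ('b','x')

Of course this is absurdly simplistic, but it shows the flavor: repetition
builds concepts, anomaly signals where the current model fails.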
The second problem is the assumption that this process happens
only when one is developing concepts. I think it also happens when
one is thinking. So the initial attempts at AI failed at learning
*and* reasoning, primarily because they did not see
that mathematical, rule-like formalizations address only the
"top level". They didn't know that the lower level was *that*
important.
As an added observation, I think that even mathematical reasoning
uses such a process. The mathematician must compare, analogize,
simulate, test and hypothesize, and most of these processes are things
he also uses when dealing with real-world problems.
A mathematician is versed, in my opinion, in doing with abstract
concepts what a child does with sensory and motor experiences.
He may use a different part of his brain to do it, but the
basic mechanisms involved appear to me to be the same.
As for Piaget, he is perhaps the most important proponent of
sensorimotor experiences as the building blocks of cognition in
children. Although some of his ideas are being revised, the core
of what he thought is being continually confirmed by recent
neuroscientific evidence.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Searle is wrong
Date: 09 Apr 1999 00:00:00 GMT
Message-ID: <370e072d@news3.us.ibm.net>
References: <19990330175902.25132.00000172@ng66.aol.com>
<7ec9qg$8pi$1@nnrp1.dejanews.com> <7ediv6$4o9$1@usenet01.srv.cis.pitt.edu>
<370dd0e1.22913875@news.peninsula.hotkey.net.au>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Andrew Jennings wrote in message
<370dd0e1.22913875@news.peninsula.hotkey.net.au>...
>On 6 Apr 1999 18:15:02 GMT, andersw+@pitt.edu (Anders N Weinstein)
>
>>Wittgenstein discusses such cases in the Philosophical Investigations.
>>His point seems to be that there is no sharp line to be drawn between
>>the first act that is genuine understanding and the last one that
>>merely displays rote memorization.
>
>
>Quite a deal of the Philosophical Investigations is devoted to this
>problem of what it means to "understand something". Yes, there is
>discussion of this point that there is no sharp line between not
>understanding and "actually understanding". But I think there is a
>deeper point: that it is actually a convention, a "game" or a process
>that we all learn that we later call "understanding". That there is
>no formal way of defining "understanding". Or if you like, it is
>beyond language. Wittgenstein asks what we observe of a person
>understanding that leads us to conclude that he/she understood? It's
>not at all simple.
>
Nicely put.
>Searle would call the process of understanding as being part of "the
>background", which we might also call a human understanding. I suspect
>that Searle's "background" is at least semi-mystical, but
>Wittgenstein's is not.
>
I'm with you so far...
>The Chinese Room is not really directed at AI. It is directed at
>cognitive science. In fact we could probably say that for all intents
>and purposes it has destroyed (or soon will destroy) cognitive science
>completely. I'm not sure it's such a loss: what has it brought us?
>There is really no <necessary> relationship between AI and cognitive
>science.
>
...but now I'm not. I see (modern) cognitive science as the missing
link that can bring AI to good results. Back in 1956, when AI was
born, all they could think of was mathematically formalizing the
"rules" of thought. Although a large part of CogSci still preaches
the "rulification" of thinking, there's a lot of work that points to
other avenues to be explored. Searle's argument seems to be directed
at the "rule" guys, but it fails to address the other branches of CogSci.
>Only a fool would argue with whether it is possible to construct
>strong intelligences, and Searle is not a fool. What he rejects is the
>notion that constructing these intelligences will help us understand
>human intelligence. What have we learned about human intelligence by
>constructing Deep Thought and world beating chess machines? Almost
>nothing. Does that mean it's a waste of time? Absolutely not: it has
>created very powerful technology.
>
We really have not learned much about intelligence by constructing
Deep Blue. But the construction of artificial artifacts is essential
to show that one *really* understands the principles involved. Our
understanding of human intelligence will not be complete unless we
abstract its basic working principles and *replicate* them in another
kind of architecture (that, I think, is the essence of AI).
So if Deep Blue is a bad example, that only means they chose the
wrong way to go. Doing it the right way, the way pointed to by
modern cognitive science, we can have better results. At least,
I don't know of any better method, today, of going ahead.
>It seems to go forward we have to leave behind the notion that we will
>understand ourselves better by constructing non-human intelligences.
>
I'm somewhat puzzled by what you said. A bonobo ape is what I would call
a non-human intelligence, and yet studying it may give us a lot of
insight into our own mental structure. How could one argue that we
wouldn't better understand how whales swim by constructing
submarines?
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Searle is wrong
Date: 13 Apr 1999 00:00:00 GMT
Message-ID: <37132f9d@news3.us.ibm.net>
References: <19990330175902.25132.00000172@ng66.aol.com>
<7ec9qg$8pi$1@nnrp1.dejanews.com> <7ediv6$4o9$1@usenet01.srv.cis.pitt.edu>
<370dd0e1.22913875@news.peninsula.hotkey.net.au> <370e072d@news3.us.ibm.net>
<3712A61E.905F6DF8@rmit.edu.au>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Andrew Jennings wrote in message <3712A61E.905F6DF8@rmit.edu.au>...
>
>Sergio Navega wrote:
>>
>> ... Searle's argument seems to be directed
>> at the "rule" guys, but it fails to address the other branches of CogSci.
>
>It parodies the "rule" approach, but it is directed at the whole dream.
>
It may have been directed at the whole dream, but only the rule guys
cared. The connectionist guys seemed to leave the whole story aside.
Searle's "Chinese gym" is a piece that tries to address the connectionists,
but it is so contrived that it is easily dismissed by them.
>>
>>
>> So if Deep Blue is a bad example, that only means they chose the
>> wrong way to go. Doing it the right way, the way pointed to by
>> modern cognitive science, we can have better results. At least,
>> I don't know of any better method, today, of going ahead.
>>
>> >It seems to go forward we have to leave behind the notion that we will
>> >understand ourselves better by constructing non-human intelligences.
>> >
>>
>> I'm somewhat puzzled by what you said. A bonobo ape is what I would call
>> a non-human intelligence, and yet studying it may give us a lot of
>> insight into our own mental structure. How could one argue that we
>> wouldn't better understand how whales swim by constructing
>> submarines?
>>
>>
>
>Sorry to be so cryptic. Of course studying non-human intelligences gives us
>some insight into human intelligence.
>
>What I should have said was: the AI dream that simply constructing a
>non-human, computer-based intelligence will give us great insight into
>human intelligence seems to be flawed. The Chinese Room argues that we
>may be able to construct trivial mechanisms that exhibit human
>intelligence. It doesn't prove that we cannot construct strong machine
>intelligences: who can say what is possible and what is not possible?
>
This is a good point, that we're able to build mechanisms which appear
to exhibit intelligence. That brings us back not only to Searle's
considerations but also to the whole argument about the Turing test.
The problem, in my view, is that what we're assessing is not intelligence
but a mechanical reasoning ability. The subtle difference between these
concepts is what I find to be the root of the question. Intelligence does
not appear to be the ability to reason mechanically. Intelligence appears
to be the ability to perceive things in the environment and use those
perceptions to further refine the "perceptual mechanism", to the
point where the entity starts building an internal model of the world.
We should assess intelligence, among several other ways, by inspecting
the quality of the model created by the entity. Granted, this
assessment is not easy, which is one reason why peeking into someone
else's intelligence may be difficult.
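One crude way to picture this kind of assessment (a sketch with made-up
numbers, not a proposal for a real test): give the entity an environment
with a hidden regularity and score its internal model by how well it
predicts that environment, rather than by how well it pushes symbols
around.

  # Crude sketch: judge a model by prediction error against the environment.
  # The 'world', the agents and the scoring are illustrative assumptions.
  import random

  def environment(x):
      """The world: a hidden regularity the entity should capture."""
      return 3.0 * x + 1.0

  def model_quality(predict, trials=100):
      """Mean absolute prediction error (lower means a better internal model)."""
      errors = []
      for _ in range(trials):
          x = random.uniform(-10.0, 10.0)
          errors.append(abs(predict(x) - environment(x)))
      return sum(errors) / trials

  rote_model    = lambda x: 1.0            # recites an answer, models nothing
  learned_model = lambda x: 3.0 * x + 1.0  # has internalized the regularity

  print(model_quality(rote_model))     # large error
  print(model_quality(learned_model))  # essentially zero

Granted, real environments are nothing like a single equation, which is
part of why this assessment is so hard in practice.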
>Maybe what I'm also trying to say is that expecting computers to be
>intelligent is OK, but expecting them to be like people is not realistic.
>
I agree entirely, but perhaps for different reasons than yours.
Intelligent computers will not perform like humans because they will not
have the biological "problems" and drives we have. They will be
subject to different conditions. That will obviously give birth to
a different "kind" of intelligence, which will still be an intelligence,
given a wide enough definition of the term.
>The central argument here is Searle's critique of cognitive science in
>"The Rediscovery of the Mind". I think that it's possible to accept that
>argument and still pursue AI: it becomes a technological quest and not a
>branch of psychology.
>
Although I agree with your vision of AI as an engineering effort,
I really believe that we will learn a lot about human intelligence in our
attempts to build computer versions of it. The problem of intelligence and
cognition is very complex, and AI will offer ideas to help us
understand what sort of processes happen inside our skulls.
Regards,
Sergio Navega.