Selected Newsgroup Messages
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 10 Jan 2000 00:00:00 GMT
Message-ID: <387a458f_3@news3.prserv.net>
References: <946264351.6873.0.nnrp-07.9e989aee@news.demon.co.uk>
<84lsoh$c72$1@nntp9.atl.mindspring.net>
<84mavu$r5g@edrn.newsguy.com> <84o32n$c1k$1@nntp9.atl.mindspring.net>
<386F9FE0.FB9E851B@math.okstate.edu> <84pbun$61b$1@nntp9.atl.mindspring.net>
<3870DFF4.E1EADE98@aspi.net> <84qopb$4t0$1@nntp2.atl.mindspring.net>
<387112F1.381B4B4F@aspi.net> <84srks$8ul$1@nntp3.atl.mindspring.net>
<38722337.5E36701@robustai.net> <84tfl7$i4m$1@nntp8.atl.mindspring.net>
<G4Bc4.4862$G55.62101@news1.rdc1.ab.home.com> <3872EEEF.3E8018F2@netwood.net>
<rjEc4.297$XX8.182092288@news.telia.no> <38738C30.CDF95947@robustai.net>
<TuMc4.524$0s5.12119@news1.online.no> <38739BB1.152B08C2@robustai.net>
<yYYc4.106$fTc.170817024@news.telia.no> <387627F2.5C1198F1@robustai.net>
<38776A8D.B7D7797C@mgfairfax.rr.com> <3877CD65.4B881DEB@robustai.net>
<h0Rd4.3875$V2.40403@sea-read.news.verio.net> <387818A7.8B1D323D@robustai.net>
<4k4e4.3912$V2.41336@sea-read.news.verio.net>
<X56e4.1428$jt5.18830@news1.online.no>
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: abuse@prserv.net
X-Trace: 10 Jan 2000 20:48:15 GMT, 32.101.186.14
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
patrik bagge wrote in message ...
>>Yes, I'm sure there are things we learn from the language itself.
>>
>>None the less, even these concepts are grounded in composites of sense
>>data and/or the hardware itself.
>
>
>Interesting topic, has anyone chewed on it already?
>
>I mean, how much of our concepts/knowledge is
>grounded in reality perceived thru senses and
>how much is inherent ?
>
The answers to this question are decidedly important to the
AI endeavor of creating intelligences. Rationalists advocate
that much of our cognition is "pre-wired" in our brain;
hard-core empiricists counter that most of what we know comes
directly from experience. Both camps are, when taken in their
most radical positions, wrong, in my opinion.
I have no doubt that the fundamental aspects of the 'information
processing' strategy used by our brain are genetically specified.
DNA codes core aspects of the neurogenesis (growth) and migration
(transport) of neural tissue in the brains of fetuses. But the
number and strength of synaptic connections among these neurons
are molded later and are a direct function of experiences.
In this first stage, children's brains are mostly sensorily
grounded. Their "concepts" are built directly over perceptions
and sensation/action loops. Even in this tender age, children's
brains are able to generalize, to induce structure from
sequences of sensory experiences, in a preliminary demonstration
of their future potential.
But then, amazingly, another kind of magic occurs. They become
capable of thinking about 'unreal' objects, like the cookie that
was eaten by a 'fellow' child. They want that cookie, even knowing
that it does not exist anymore. When a child says 'She ate my
cookie!', she is uttering a phrase about an exemplar that has
ceased to exist, at least in its original form.
With time (and due to the beautiful abilities of the human brain)
the child starts to understand concepts such as 'the tooth fairy',
an entity that is thought (by the child) to exist, even though
it is unreal. She ascribes unearthly characteristics to that
entity, such as flying through the clouds on a pair of small
wings. She has never seen that, but she understands that it may
exist. She is doing a primitive form of analogical reasoning,
mirroring concepts from one instance to another, copying sensorily
grounded concepts (which came from other experiences) to assemble
a *new* concept. This new concept could logically exist, and it
appears to be the first step toward more complex concepts.
But then, again after some time, the child (now perhaps a
pre-teenager) starts to think about 'second order' concepts.
Concepts like 'justice' have no direct physical existence.
Such concepts are, again, assembled by 'piecing' together
lower level concepts and sensing/action loops.
But this time the result transcends the material world: it is
a concept with a structure that cannot be directly reduced to
sensory experiences (although one could find a multi-level
descending path to a bunch of low level concepts). These
concepts are very hard to define in fundamental terms; they
can only be defined in terms of other closely related concepts.
Such concepts are the ones defined *differently* by different
people: each person grounds his/her own definition of 'justice' in
particular, specific ways. The only thing really in common
between them is the symbolic convention 'j-u-s-t-i-c-e'.
Most of our daily language uses these abstract concepts,
and a very important technique to understand new exemplars
is analogical reasoning.
What one learns in college, for instance (other than beer
drinking), may be roughly divided into two categories: abilities
and skills (learned mostly through laboratory sessions) and
new abstract concepts (essentially through lectures, diagrams,
and discussions).
Thus, when we set out to build AIs, we'd be better off knowing
that they will have to ground their concepts in a solid and
supportive set of lower level concepts, down to the basic
sensory level. However, it is open to discussion whether we
really need to go that deep. Helen Keller could talk about colors
and scenery, even though she was deaf-blind. Maybe we could devise
a method to build a Helen Keller AI, able to talk about our world,
even while being completely blind to its real nature.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <387b33dc_2@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Jim Balter wrote in message <387A5009.E5ADA7F@sandpiper.net>...
>
>Yes, it seems that there is equivocation in the use of "blind" here.
>HK was visually blind, but that doesn't mean that she was "blind to
>its real nature" in a *conceptual* sense -- in fact, there is every
>indication that she had significant clarity in that regard.
>I believe, as I've said before, that the problem is in erroneous reification
>of sensation and experience. What is critical to "our world" is the
>*relationships* among its elements, and it is quite possible to tease
>out these relationships without having the specific sensory apparatus
>by which most of us discover them. People keep referring to the "real
>experience" as if the essence of, say, the experience of blue things is
>some sort of blue ink in our brains. But my claim is that the essence
>is the relationships among blue things and between blue things and other
>things, and that there is nothing else there *but* the relationships --
>no blue ink, no "blueness" as a *thing*. And thus it is possible,
>I believe, to obtain the *same* sensation via "analogous" means --
>"analogous" refers precisely to *relationships*; when we offer an analogy A
>to some B, we believe that A has some internal relationships
>that are the same as some internal relationships of B, and thus inferences
>about A may apply to B.
>
I largely agree with what you wrote, except for a minor detail.
I agree that analogical relationships are enough for most sensory
mappings among distinct modalities. But this is not entirely
adequate for all situations, and it may impair the performance
of the entity in certain conditions.
When I see a piece of shining metal under the sun, I'm capturing
visual details of that piece that could be assembled by the mapping
of, for instance, touch sensations. Thus, a blind man could, in
principle, form a "mental image" of the colors of that metal by
an analogical mapping from a similarly progressive sequence of
roughness, as sensed by his hand. This may well be enough to give
him a notion of what a dégradé (a gradient) is. As you said,
there's no blue neuron.
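Purely as an illustration (a toy sketch of my own, not anything from the literature), such an analogical mapping can be seen as transferring the *relative variation* of one modality onto the scale of another:

```python
def analogical_map(source, target_min, target_max):
    """Rescale a sequence from one sensory modality onto another's
    range, preserving only the relative variation (the relationships)."""
    lo, hi = min(source), max(source)
    span = (hi - lo) or 1.0  # guard against a perfectly flat input
    return [target_min + (x - lo) / span * (target_max - target_min)
            for x in source]

# Hypothetical roughness readings from a hand sweeping the metal
roughness = [0.1, 0.3, 0.7, 0.9]
# Re-expressed as a brightness gradient on a 0-255 visual scale
brightness = analogical_map(roughness, 0.0, 255.0)
```

The rescaled sequence keeps the same relative progression; the absolute "feel" of the source modality plays no role.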
But something is being lost: the exact binding of that dégradé
to other sensory aspects. Although one can copy most of the
progressiveness of one sensory modality to another (and even
adjust growth and variation rates), it is not possible to assemble
a *synchronous and bound* vision of an object, even if the "parts"
of this object (visual impression, touch impression, etc.) are
individually available. Much of our concept of "object" comes
from this synchronous binding.
A blind man's world view will definitely be different from a
sighted one's, although he is largely capable of using the *same*
linguistic symbols (words) to communicate. The fact that this
disability does not seem to affect language is not, in my opinion,
a suggestion that the blind man is able to fully compensate for
his lack of vision, but rather a clue that our language is very
weak when used to communicate purely sensory aspects.
This, obviously, leads us to think about the limit of this
situation, such as whether a computer with a single sense (say,
audition) is able to develop good communication skills (all other
problems of reasoning/perception being solved!).
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <387b33df_2@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
patrik bagge wrote in message ...
>As usual it's hard to find anything to disagree with
>in your text.
>
>A little comment on the development of a child's abilities.
>I have a 5 year old and have been trying to 'track' his
>mental development, but must confess that I have already
>lost track of many 'sources' of the abilities he possesses.
>
>As an example, this Christmas he bluntly revealed the
>identity of Santa, without any concern for Santa's feelings.
>This achievement commands a lot of cognitive
> / connectory capacity, since Santa was well masked
>and played his role with elegance.
>
>a guess:
>1) somebody disappeared
>2) recognition of body movement
>3) recognition of partial body features
>4) recognition of voice components
>(even if the voice was 'distorted')
>
How about all the alternatives at the same time?
One of the most intriguing aspects of our brain is the way
it is able to assemble meaningful structures from incomplete
and damaged parts. This is a strong version of the "whole
greater than the sum of its parts" idea. In fact, what
neuroscience has found over the last decades is that the
ability to build wholes through the coherent use of the
available parts matters more than the perception of the
parts themselves.
Our visual system, for instance, is impressively capable in
this regard. In the forest we're able to assemble the whole
image of a predator even if what we see is a collection of
images resulting from the movement of that predator behind
a bunch of leaves. The complexity of this visual scene, with
lots of moving fragments, including not only parts of the
predator's body but also leaves moved by the wind, is
astonishing. Recently an article in Nature showed that we're
able to distinguish whole objects composed of parts that
bear no relation to each other other than synchrony in
their temporal variations.
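This grouping-by-synchrony can be crudely sketched (purely illustrative; real neural binding is far richer) by clustering fragments whose temporal variations correlate:

```python
def correlation(a, b):
    """Pearson correlation of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def group_by_synchrony(signals, threshold=0.9):
    """Greedily group signal indices whose variations are synchronized."""
    groups = []
    for i, sig in enumerate(signals):
        for g in groups:
            if correlation(sig, signals[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Fragments 0 and 2 move in lockstep (one hidden 'object');
# fragment 1 (a leaf in the wind) varies on its own.
signals = [[0, 1, 2, 3, 2, 1],
           [5, 2, 7, 1, 6, 2],
           [1, 2, 3, 4, 3, 2]]
```

With these made-up signals, fragments 0 and 2 end up in one group and the leaf in another, even though nothing but the timing of their variations relates them.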
Thus, your kid is showing the kind of ingenuity that makes
brains special. I'm not shy in proposing that this kind of
ability is the cornerstone of intelligence. Will computers
ever be able to do that?
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <387b5ea6_2@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Patrik Bagge wrote in message ...
>
>>Thus, your kid is showing the kind of ingenuity that makes
>>brains special. I'm not shy in proposing that this kind of
>>ability is the cornerstone of intelligence. Will computers
>>ever be able to do that?
>
>
>I'm trying to keep my optimism here, don't want to
>hear such, especially from you !
>
Let me assure you that I'm an optimist too. My question, perhaps,
should have been written as "When will we finally understand
the essential steps to make our computers intelligent?".
I only know that it will be sooner than the pessimists think and
a bit later than the over-optimists hope.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 11 Jan 2000 00:00:00 GMT
Message-ID: <387b5ea4_2@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Daryl McCullough wrote in message <85fdit$26c9@edrn.newsguy.com>...
>Oliver Sparrow says...
>
>>Minds come blank, equipped with an architecture that is predisposed to
>>differentiate. Some forms of predisposition are very strong, others rather
>>weak.
>
>Why do you believe that minds come blank?
>
Why do you seem not to believe that minds come blank? ;-)
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 12 Jan 2000 00:00:00 GMT
Message-ID: <387cedd1_1@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Daryl McCullough wrote in message <85i5up$1avg@edrn.newsguy.com>...
>Sergio says...
>
>[stuff deleted]
>
>>So intelligence (and knowledge) is an accident, a detour,
>>something not stored in genes, but "accumulated" only in the
>>environment of the entity (books, videos, papyrus, chitchat,
>>etc).
>
>I'm not sure what you mean. It seems clear that there must
>be a genetic component to intelligence (otherwise we could
>train dogs to program computers).
>
I agree with you: there must be genetic components which
influence intelligence. But it is not a direct influence!
I think that genetics influences intelligence the way the
quality of the clay influences the sculpture.
A good artist with bad clay will produce an interesting,
but not brilliant, piece. A lousy artist with excellent
clay will do a regular job, never a masterpiece. Genetics,
in my view, is comparable to clay quality, and the
sculptor is the environment.
Well, it's time to refine a bit the terms we've been using.
When one says 'blank mind', I understand one to mean a brain
with no prior informational or specific content. I agree that
this is too much.
This does not mean that such a brain doesn't carry any specific
predisposition toward some kinds of information. After all,
we all have eyes, and the visual cortex is a clear (genetically
specified) specialization of our brain to handle that kind
of information.
Things get a bit complicated when one says that our brain
contains specializations to handle generative and symbolic
languages. We surely have some kind of special characteristic,
but my opinion is that it is *not specific* to language
the way our visual cortex is specific to vision.
Even the visual cortex is not *fully* specific to vision:
blind humans are known to reuse their visual cortex to help
with auditory and touch processing. Brains are extremely plastic.
>>Brain capacity to acquire all this stuff from its
>>environment is what appears to count as important.
>
>Well, the contrary view is that genetically our brains
>come equipped with a rough and ready rudimentary model
>of the world. Babies expect objects to have persistence,
>they expect objects with faces (eyes and mouths) to be
>more important in their lives than nonedible things without
>faces. In this view, the role of experience is to refine
>this rudimentary model, but experience doesn't create the
>model.
>
One of the commonly used examples of innate characteristics
is face recognition. Although some researchers cast doubt on
the real existence of this innate trait, let's assume for now
that we really do have a predisposition toward faces. This can
be traced to evolutionary advantages over a long period of time,
because babies without this predisposition would have difficulty
finding their parents, once lost.
Over a long period of time, this would select (favor)
the babies with good face recognition circuitry.
So in a sense, one may say that our brain is not completely
"blank", but carries some particularities that can be traced
to that kind of special circuitry.
The question gets muddier when one tries to explain
higher cognitive abilities, such as language and abstract
reasoning. One thing is to have a brain capable of
learning to process these things; another is to say that
we have *specific* circuits, evolutionarily selected
to process those abilities. It is this latter
sense that I question.
>>There are some innate traits in humans, but they act mostly as
>>general directions to be followed, and closely related to the
>>kind of sensory processing done by other mammals.
>
>Well, I consider human intelligence to be part of the continuum
>with animal intelligence.
>
Me too! The point is that because our brains appear to have
something special in terms of 'information processing abilities',
we're able to go farther than the other animals. For me, human
cognition differs from animal cognition *only* in the *depth*
of the information hierarchies processed. Bonobo apes demonstrate
interesting abilities to process language, but only up to a
point. They don't understand phrases with too many nested
clauses.
>>The environment (culture) may turn those traits upside down.
>
>No, I don't think so. Not completely.
>
Hum... Would you ever consider eating your enemies? Could
you think about having sex with your children? Or killing all
neighbors with 'strange faces'? These are common practices
in several animal societies and even in some more primitive
tribal groups of humans. However, they are not "allowed" in
our civilized world, and we can't even think about doing
such things. The reason is purely social conditioning.
>>And these traits cannot carry predispositions or "instructions" that
>>took less than a million years to develop. One of such not
>>innate traits appears to be language, the still hotly debated
>>subject within cognitive science.
>
>I really don't know enough to make an educated pronouncement as
>to whether language is innate or not, or how long it takes for
>characteristics to become innate, or what circumstances are necessary
>for the development of innate behavioral traits. From my meager
>knowledge of such things, I tend to strongly disagree with you
>on this subject, but I humbly admit the possibility that I am
>too ignorant for my opinion to be worth much.
>
I've been studying language origins for a while now, and I can
tell you that most cognitive scientists think the way you do
(unfortunately, even some respectable neuroscientists).
However, I have found no convincing reason so far to think
this way. I follow the minority of scientists (mostly the
connectionist guys) who find the other side of the chasm
reasonable. Besides there being no strong and sound reason
for innateness (every argument for it, until today, has been
convincingly rebutted), language innateness is not really
necessary (Chomsky and neo-Chomskyan arguments notwithstanding).
We're often amazed by the learning abilities of children, and
that "suggests" that we have something special in that regard.
Yes, we really do have something special: a brain capable
of impressive performance in learning regularities and
generalizing them, but not one specific to language. The key
to dismissing the Chomskyans is to understand that language is
*not* only syntactic regularity. It's a whole bunch of
semantic regularities (things that are much, much more
important to children than syntax), things that are
usually swept "under the linguistic rug" of purely
syntactic explanations.
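A minimal sketch of such domain-general regularity learning (my own toy example with made-up syllables, in the spirit of statistical-learning experiments): compute the transitional probability between adjacent syllables and cut the stream wherever that probability dips. Word-like units fall out of pure statistics, with nothing language-specific built in.

```python
from collections import Counter

def transition_probs(stream):
    """P(next | current) estimated from adjacent pairs in the stream."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

def segment(stream, threshold=0.7):
    """Cut the stream wherever transitional probability dips below
    the threshold, recovering word-like chunks."""
    probs = transition_probs(stream)
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs[(a, b)] < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Stream built from three made-up 'words': ba-bi, go-la, tu-ku
stream = ['ba', 'bi', 'go', 'la', 'tu', 'ku',
          'go', 'la', 'ba', 'bi', 'tu', 'ku',
          'ba', 'bi', 'go', 'la', 'tu', 'ku']
```

Within-word transitions here are certain (probability 1) while cross-word transitions are not, so every recovered chunk is one of the three 'words'.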
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 13 Jan 2000 00:00:00 GMT
Message-ID: <387e25bc_3@news3.prserv.net>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Daryl McCullough wrote in message <85knmp$i1v@edrn.newsguy.com>...
>Sergio says...
>
>>...there must be genetic components which
>>influence intelligence. But it is not a direct influence!
>>I think that genetics influence intelligence in the same
>>way that the quality of the clay influences the sculpture.
>
>I don't really think that analogy is apt, for several reasons.
>First of all, unlike clay (and unlike neural nets), the brain
>is not an undifferentiated mass---the brain has a complex
>structure, with definite regions that play different roles.
However, apart from the most general aspects, these roles are
interchangeable, given particular developmental conditions.
The cerebellum cannot be used for anything other than
coordinating movement, and the hippocampus will always be
associated with learning; these are obviously genetically
determined structures.
But things are much more plastic in the cortex, where one
area may "incorporate" another that is not being used in full.
>I know that these roles are not absolutely fixed---that people
>with brain damage to one area can learn to use another area
>in a different way. But I think that there is enough general
>similarity between different people's brains that we should
>conclude that different parts of the brain are genetically
>predisposed to serve different roles.
>
I agree with this; after all, there is certainly a genetic
determination that the auditory cortex is located in the
temporal lobe. But, like clay, it can be diverted to
other uses according to the "sculptor". A congenitally deaf
child will use parts of the auditory cortex to help
sensorimotor areas, in an effort to become more proficient
in sign language.
>Another thing wrong with the analogy is that the brain is
>not a passive recipient of knowledge---it actively seeks out
>knowledge. Babies *want* to learn. They don't need to wait
>for a sculptor to mold them.
>
I agree entirely, and this gives me the opportunity to
refine a bit my idea of "who" the "sculptor" is. My sculptor
is not only the environment. It is also the inner brain,
the brain stem, innate emotions. When I say 'brain' I
usually mean the cortex, the seat of thinking and
consciousness. This is the clay, which is molded by
environmental pressures and *also* by the internal drives
of the entity, such as emotions, desires, volitions, etc.
>Another difference is that while the clay doesn't care
>what form it takes, I think that babies *do* have a predisposition
>to want to learn certain things. As I said in another post,
>babies seem to be born interested in faces, and will study
>faces to learn how to act.
>
Babies are driven by emotional needs, which I agree are largely
genetically predisposed. Thinking is in the middle of this "war"
between desire and the environment. Brains must learn from both
fronts. During this learning, interesting things happen. I once
saw a baby smiling at a drawer, which did not resemble a face.
Somehow that activated emotional responses.
>>Things get a bit complicated when one says that our brain
>>contains specializations to handle generative and symbolic
>>languages. We surely have some kind of special characteristic,
>>but my opinion is that this is *not specific* to language,
>>the way our visual cortex is specific to treat vision.
>
>I wish I knew some way to get more evidence about this
>matter. It sure seems to me that a predisposition for
>language use is innate. *Every* child with a normally
>functioning brain learns to use language.
Every child with a normally functioning brain is able to
look at a drawing of a cube and notice, from the 2D line
drawing, the 3D perspective. This requires brain power,
perhaps as much as is necessary for language.
>As I pointed
>out in a post about deaf children in Nicaragua a few
>months ago, even when children are isolated (through
>deafness) from anyone who knows language, they will
>develop their own language. And not just the rudiments
>for saying "Food", "Water", "I'm mad". If they start
>with such a rudimentary language, children will invent
>enough extra structure to be able to tell stories.
>It seems to me to be pretty hard to explain this
>under the assumption that language is just another
>skill for children to learn, like arithmetic and
>riding a bicycle.
The Nicaraguan children you refer to were able to develop
their version of sign language because they had all
it takes to develop language: generic brain power, the
need (perhaps the urge) to communicate with each other,
and an environment in which to create the symbolic
conventions.
Why doesn't language appear among apes? Chomsky and friends
say because apes don't have the "language module".
But the brains of apes are interestingly similar to our
own. There are no fundamentally different physiological
characteristics that can be assigned to a language organ.
It is more reasonable to say that language is something
that, among other things, demands an *environment* and
an *urge* to communicate. Any of those Nicaraguan children,
if raised in isolation, would not develop language
spontaneously. Interaction, mutual reference, and purpose
are necessary. This is a more reasonable origin for language
than a specific module that cannot be justified in
evolutionary terms.
>>
>>Over a long period of time, this would select (priorize)
>>the babies with good face recognition circuitry.
>>So in a sense, one may say that our brain is not completely
>>"blank", but carry some particularities that can be traced
>>to that kind of special circuitry.
>
>Well, that's why I would say that brains *don't* start
>out blank.
>
But you have to agree that in this case one has a sound
evolutionary origin for the trait (even if this face recognition
circuitry is still under discussion).
But let me present one example you will like. It was recently
discovered that we (and closely related mammals) have what
are called mirror neurons. These neurons become active when
one makes a certain movement with the arm. Interestingly,
these neurons *also* activate when the subject *sees* someone
else making that same arm movement. Thus, these neurons can
serve as a substrate for understanding the motor commands
necessary to execute a given task, just by watching someone
do the task.
This appears to be linked to the "imitation" puzzle of newborn
infants. In the first weeks after birth, infants are able
to imitate tongue protrusion merely by seeing their parents
do it. This puzzled scientists for some time. The
interpretation of this phenomenon is important in our case.
The first interpretations were that the baby "consciously"
imitated the movement, because she recognized the protrusion
in the other and commanded her own tongue to do the same.
Obviously, this leads to all sorts of innatist theories.
The trouble with this interpretation is that it requires
the baby to be aware of the other (which means having a
notion of self, other, etc.). That's too much for a baby.
But now, with mirror neurons, one can propose a more
interesting hypothesis: that the baby doesn't have any kind
of control over this movement, and that it is performed by
an innate circuit which "mechanically" (and unconditionally)
replicates the movement. This is more reasonable, because
after some weeks babies lose this ability, as if the circuit
were "redirected" to another, non-automatic purpose.
>>The question muddles a bit when one is trying to explain
>>higher cognitive abilities, such as language and abstract
>>reasoning. One thing is to have a brain capable of
>>learning to process these things, other is saying that
>>we have *specific* circuits, evolutionarily selected in
>>order to process those abilities. It is this latter
>>sense that I question.
>
>I think that there are more possibilities than "learned"
>versus "hard-wired". A third option is that a brain
>can be hard-wired to learn specific sorts of things.
>I think that if you try to come up with a model of
>learning, such as neural nets, or perceptrons, or
>self-programming Turing machines, or whatever, you
>will find that there is no such thing as a general-purpose
>learner. Whatever design you pick, certain things will
>be easy to learn, and other things will be hard (if
>not impossible). What I think is not necessarily that
>humans are born with specific functions for language
>processing, but that maybe their brains are designed
>so that learning such functions comes more naturally
>than learning some other things.
>
I agree entirely! In particular, it's not possible to have
a general-purpose learning mechanism suitable for *all*
possible conditions present in the universe. Our brain
is surely adapted to the kind of particularities that
our world (Earth) has. But this particularization, I
suggest, does not go very deep. It affects only the
initial levels, those closer to sensory processing.
This was done, obviously, by the excruciatingly
slow process of evolution. Among all sorts of learning
mechanisms, only ours survived. But evolution is wise:
it cannot hardwire everything. So it concerned itself with
the initial levels, leaving *all* remaining levels
(language included) to be learned by the entity during
its life.
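The claim that any fixed learning design makes some things
easy and others hard (or impossible) has a classic concrete
instance. The sketch below is my own illustration, not
anything from this thread: the *same* learner, a single-layer
perceptron, is trained on two Boolean functions. It masters
AND, which is linearly separable, but no setting of its
weights can ever represent XOR.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Train weights w1, w2 and bias b on ((x1, x2), target) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            # Standard perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    # Return the learned decision function
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

and_errors = sum(train_perceptron(AND)(*x) != t for x, t in AND)
xor_errors = sum(train_perceptron(XOR)(*x) != t for x, t in XOR)
print(and_errors)  # 0: AND is learned perfectly
print(xor_errors)  # at least 1: XOR is beyond this learner's bias
```

The learner's "inductive bias" (here, linear separability)
decides in advance what it can and cannot acquire, whatever
the training data.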
>>>I really don't know enough to make an educated pronouncement as
>>>to whether language is innate or not, or how long it takes for
>>>characteristics to become innate, or what circumstances are necessary
>>>for the development of innate behavioral traits. From my meager
>>>knowledge of such things, I tend to strongly disagree with you
>>>on this subject, but I humbly admit the possibility that I am
>>>too ignorant for my opinion to be worth much.
>>
>>I've been studying language origins for a while now and I can
>>tell you that most of the cognitive scientists think the way
>>you do (unfortunately, even some respectable neuroscientists).
>
>>However, I have found no convincing reason so far to think
>>this way.
>
>Do you think that there is another model that explains
>everything as well? I have serious problems with the
>idea of general-purpose learning ability---I just don't
>think that there is such a thing.
>
Perhaps this is the most important part of our whole
discussion. Let's take a careful look. When we examine
'mind', we see specific modules to handle audition, others
to handle vision, others to handle planning, arithmetic,
factual memory, etc.
From this point of view, it is hard to find anything truly
general. For instance, audition is so different from vision
that it's not easy to come up with general mechanisms.
This is what leads us to think about specific circuitry for
each of these tasks, which pushes us to think about strong
genetic specialization. Now let's go under the hood: the
brain. It's all neurons, right? Although there are some
physiological differences between neurons of the auditory
cortex and the visual cortex, these differences are largely
a function of experience and self-organization. Reuse of
one kind of cortex by the other is widely documented in
the literature, and supports this idea.
So what we have in our hands is, roughly, this situation:
a) We have "mental modules" that behave and interact as if
implemented by specific circuitry. This suggests that the
kind of processing done by each module is specific and
"cognitively impenetrable".
b) At the same time, we know that the physical implementation
of the mind is the brain, which is constituted of neurons
with roughly similar operating principles.
I'm not saying here that we should think about silicon brains.
I'm saying that we should think about implementing the
*information processing principles* of the brain, and that
these principles should be the same, whether for vision,
audition, arithmetic computation, whatever.
What AI has been doing all these years is implementing
minds. What connectionists suggest is that we should be
implementing brains, in the functional sense.
If I sound like a connectionist here, let me dispel that
impression. They have their share of problems and I don't
follow them to the end. But I do follow their ideal of
looking for basic principles behind the 'visible behavior'
of the mind. And I cultivate the goal of finding mechanisms
(partly symbolic, partly connectionist) that are able to
replicate the kind of functional processing that the brain
does.
>>
>>Yes, we really have something special. It is a brain capable
>>of impressive performance in learning regularities and
>>generalizing them, but not specific to language. The key
>>to dismiss Chomskyans is to understand that language is
>>*not* only syntactic regularity. It's a whole bunch of
>>semantic regularities (things that are much, much more
>>important to children than syntax), things that are
>>usually put "under the linguistic rug" of the purely
>>syntactic explanations.
>
>I strongly disagree with your conclusion, but I'm trying
>hard not be that combination of ignorance and arrogance
>that so infests usenet. I think that you are right that
>language abilities are more than syntax. However, what
>I consider a strong possibility is that the ability to
>formulate sophisticated, general models of the world
>*requires* an organized way to structure information.
>I think that general purpose neural nets are simply
>inadequate for higher levels of thought. So syntactic
>abilities might very well be a key ingredient making
>all higher-level thought possible. Maybe the brain
>is specialized for model-forming, and language use
>is more or less trivial given that specialization.
>Maybe language ability simply *is* a reflection
>of our model-forming ability.
>
I have nothing against what you say here. In particular,
our ability to make models of the world and our ability
to structure and generalize information seem to be of
foremost importance to our cognition.
What I'm saying is that non-human primates also have similar
abilities, perhaps differing only in power from ours.
Given sufficient evidence that this is true, Chomsky
could, I guess, say that apes also have a language module.
I would say that there is no such thing, and that apes
differ from us mostly in potential, not in constitution.
There are algorithms that can run adequately on a PC/XT
with 640K of main memory. However, given the memory and
processing speed of a Pentium III, the very same algorithm
is able to go deeper in its information processing.
It turns out that this difference in "performance" is
what allows us to "cross" the symbolic barrier: we use
external symbols to represent our thoughts, and this is
of foremost importance when knowledge is transferred
across generations.
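The PC/XT vs. Pentium III analogy can be made concrete with
a toy sketch (my own illustration; the `deepest_reachable`
helper and the budget figures are invented for the example,
not taken from the post): one unchanged exhaustive-search
procedure, given a larger node budget, simply reaches
greater depth. The difference is capacity, not constitution.

```python
def deepest_reachable(branching, node_budget):
    """Depth a plain level-by-level enumeration of a tree with the
    given branching factor reaches before exhausting node_budget."""
    depth, frontier, used = 0, 1, 1  # start at the root
    while used + frontier * branching <= node_budget:
        frontier *= branching        # expand the next level
        used += frontier
        depth += 1
    return depth

# Same algorithm, two resource budgets (numbers are illustrative)
small = deepest_reachable(branching=2, node_budget=640)         # "PC/XT"
large = deepest_reachable(branching=2, node_budget=64_000_000)  # "Pentium III"
print(small, large)  # 8 24: identical procedure, much deeper reach
```

With a binary tree, the reachable depth grows with the
logarithm of the budget: a hundred-thousand-fold increase in
resources buys a threefold increase in depth, yet that extra
depth may be exactly what is needed to handle structures
(like external symbols) the smaller budget could not.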
Regards,
Sergio Navega
From: "Sergio Navega" <snavega@attglobal.net>
Subject: Re: arguments about humans and computers
Date: 13 Jan 2000 00:00:00 GMT
Message-ID: <387dc3f9_3@news3.prserv.net>
References: <38776A8D.B7D7797C@mgfairfax.rr.com> <sdol7s49diu9lo6aig7srpk06
<387b9736_4@news3.prserv.net> <85j7eu$5fe@ux.cs.niu.edu>
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: abuse@prserv.net
X-Trace: 13 Jan 2000 12:24:25 GMT, 32.101.186.81
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy,comp.ai.nat-lang,comp.ai.alife
Neil W Rickert wrote in message <85j7eu$5fe@ux.cs.niu.edu>...
>"Sergio Navega" <snavega@attglobal.net> writes:
>
>>So intelligence (and knowledge) is an accident, a detour,
>>something not stored in genes, but "accumulated" only in the
>>environment of the entity (books, videos, papyrus, chitchat,
>>etc). Brain capacity to acquire all this stuff from its
>>environment is what appears to count as important.
>
>We generally concur in our empiricist positions. But I think you
>have overstated things here. Intelligence may be an evolutionary
>accidents, but I think it goes to far to say it is "something not
>stored in genes." I can agree that knowledge is not stored in genes,
>but our ability to acquire knowledge surely is.
>
Maybe my first paragraph went too far in its implication.
I'm surely not suggesting that there are no genetic origins
for intelligence, as that would be demonstrably wrong. This
is an old war: that of establishing the "amount" of influence
of genetic predispositions on the individual, as opposed to
the influences of the environment.
What I was trying to say is that, even with substantial
genetic influences, intelligence is also a function of
environmental influences. However, the scope of this
assertion is a bit broader than it appears at first.
Usually, people confuse knowledge with intelligence. It is
obvious that knowledge improves with experience (at least,
it should...). But knowledge is not intelligence. I'm saying
more than that. I'm saying that one could live in a different
environment and, as a result, undergo *physical changes* in
one's brain that improve intelligence (regardless of
knowledge).
This is obviously related to plasticity. Although plasticity
is considered important during childhood and less so in
adulthood, I see this process also happening in the latter.
What it takes is just a "pushy" environment. Granted, this
environment must push the individual not only with
"intellectual challenges", but even more with "emotional"
pressures. I find reasonable the idea that a large part of
the improvement in intelligence is driven not by intellectual
forces, but predominantly by emotional challenges.
Regards,
Sergio Navega.