Selected Newsgroup Message
From: daryl@cogentex.com (Daryl McCullough)
Subject: Induction and Evolution
Date: 27 Jul 1999 00:00:00 GMT
Message-ID: <7nkaur$2g5v@edrn.newsguy.com>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
In a different thread, called "Neurons and consciousness", I suggested
to Sergio that the process of induction in humans was analogous to the
process of evolution in nature. In this thread, I want to explore that
analogy further.
Evolution through natural selection appears to be a purposeful,
directed process. It appears as though evolution is "noticing"
patterns in the environment, and then causing species to evolve
to become better suited to those patterns. But of course, there
is no conscious process at work in evolution. What is really
happening is this: nature is constantly "tinkering" with current
designs for creatures through the process of random mutation.
This tinkering is completely *independent* of what is good or
bad for survival. As a matter of fact, *most* mutations are
detrimental to the health of the species. However, the process
of natural selection culls the bad mutations and allows only
the rare good mutations to propagate. The net effect is as if
nature modified the genome to better fit the environment.
My hypothesis is that the act of "induction" in humans works
in a similar way. It may *appear* that humans "notice" patterns
and form concepts based on the patterns they notice, but I just
don't think it is possible that the underlying mechanism could
work this way. What I think is more likely is that brains are
constantly "tinkering" with their pattern-recognition mechanisms.
As with random mutation, I don't think that this tinkering is
influenced by any notion of what is a "useful" pattern, or
even what is a "common" pattern. However, after such tinkering,
if the changed recognition mechanism proves useful, then it
will be reinforced. If it does not prove to be useful, it
will eventually fade away and be replaced by further tinkering.
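To make this concrete, here is a toy sketch in Python (the TARGET
string and the fitness function are invented stand-ins for an
environment, not a claim about how selection really measures fitness):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stands in for what the environment rewards

    def fitness(genome):
        # how well the current design fits the environment
        return sum(g == t for g, t in zip(genome, TARGET))

    genome = [random.randint(0, 1) for _ in TARGET]
    while fitness(genome) < len(TARGET):
        # blind tinkering: each bit may flip, with no regard for usefulness
        mutant = [g ^ (random.random() < 0.1) for g in genome]
        # selection: useful tinkering is kept, the rest fades away
        if fitness(mutant) >= fitness(genome):
            genome = mutant
    print(genome)   # ends up matching TARGET, as if "designed" for it

Nothing in the mutation step knows about TARGET; the keep/discard
test alone produces the appearance of direction.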
Maybe it is a lack of imagination on my part, but it seems to
me that the *only* way that new concepts, new patterns, and new
abilities to recognize patterns can emerge is through tinkering
with what one already has.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Induction and Evolution
Date: 27 Jul 1999 00:00:00 GMT
Message-ID: <379df837@news3.us.ibm.net>
References: <7nkaur$2g5v@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7nkaur$2g5v@edrn.newsguy.com>...
>In a different thread, called "Neurons and consciousness", I suggested
>to Sergio that the process of induction in humans was analogous to the
>process of evolution in nature. In this thread, I want to explore that
>analogy further.
>
[snip]
Thanks for an interesting post, Daryl.
I agree with part of your pondering, but I find it lacking one
important component. Let me start with the agreement.
Evolution appears to operate on pretty much the principles you
outlined: a mindless source of mutations and the criterion of
fitness in a niche of the environment. Only that can explain the
huge diversity of organisms on Earth, each one adapted to a
specific set of requirements. It is important to note that there
isn't one "winner species", just winners among the (huge) number
of surviving and interdependent species.
Although this process could be thought to happen in our brain
(and Gerald Edelman, with his neural Darwinism, was one of its
precursors), I find something important lacking.
What I find missing in your description is an explanation of *why*
brains appear to perform better the *more* they know. This is a
special kind of system, because its performance improves the
more the system is loaded. Most computational systems of today
(such as databases) exhibit exactly the opposite behavior: they
become more sluggish the more information you put into them. They
take more time to find some relevant information; they have lots
of indexes to search. As their size grows, databases become less
able to tackle certain problems efficiently, the opposite of our brain.
Thus what is amazing about our brain is that we become *more*
productive and efficient the more we know. This does not seem
to be the result of aimless tinkering. Such a process would
always "start all over again", spending equal processing
energy on random alternatives. This is not efficient!
Our brain appears to use a different process: it seems to be the
result of *efficient* tinkering, exactly the kind that uses *again*
the methods that worked well in the *past*. Most of the time,
this is just what is necessary to solve the problem. This reminds
me of induction.
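A sketch of the difference (the key-search problem is an invented
stand-in): a solver that remembers which methods worked gets faster
the more it knows, while a restart-from-scratch solver pays the full
random-search price every time:

    import random

    KEYS = range(10_000)   # an invented space of candidate "methods"

    class ReusingSolver:
        """Tries methods that worked in the past before tinkering at random."""
        def __init__(self):
            self.known = {}                        # problem -> method that worked

        def solve(self, problem):
            if problem in self.known:              # reuse: one step, no search
                return 1
            tries = 0
            while True:                            # fall back to blind tinkering
                tries += 1
                if random.choice(KEYS) == problem:
                    self.known[problem] = problem  # remember for next time
                    return tries

    solver = ReusingSolver()
    print(solver.solve(42))   # thousands of tries, on average
    print(solver.solve(42))   # 1 try: the solver got *faster* by knowing more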
You started your message saying that you wanted to further explore
the evolution analogy. Analogical reasoning is the kind of thing
that appears to be involved in such mechanisms. We often reuse
things that worked well in the past, and very often this is enough
to solve our "problems" of the moment. Frequently, we have to
"adjust" one part or another of the analogy to suit the model
to new circumstances, but this is inductive at its core.
However, when this reuse doesn't work, when we're incapable of solving
the current problem with anything we know, then I agree, we start
tinkering with random mutations.
But guess what: once we find one of these "mutations" that is
valuable, we try to find a way to "understand" it, to fit it into
our models, in order to allow efficient resolution of future problems.
This may be a difficult task; it may take years in some cases. This
is the kind of "knowledge" that I ascribe to experts, as
compared to novices. It is the kind of difference that we find
between a recently graduated professional and one with years
of "road experience".
It is here, also, that I see the greatest difference in intelligence
among people. Those who are able to deeply "embody" the random
discoveries into intertwined and coherent structures are the ones
with greater intelligence (they will be more efficacious in the
future), while those who remain at the surface are not.
In this regard, a person who knows (by rote memorization) a lot
of facts *is not* considered intelligent (computer equivalent: CYC).
A person who "understands" a lot of facts can be considered more
intelligent (computer equivalent: ???).
Now guess what could happen if we duplicated such processes in
computers. The sky is the limit. Can't wait for this...
Regards,
Sergio Navega.
From: Henning Strandin <henning@travellab.com>
Subject: Re: Induction and Evolution
Date: 27 Jul 1999 00:00:00 GMT
Message-ID: <379E0C4A.D3E4EA00@travellab.com>
References: <7nkaur$2g5v@edrn.newsguy.com> <379df837@news3.us.ibm.net>
Organization: A Customer of Tele2
Newsgroups: comp.ai.philosophy
Sergio Navega wrote:
>
> What I find missing in your description is an explanation of *why*
> brains appear to perform better the *more* they know. This is a
> special kind of system, because its performance improves the
> more the system is loaded. Most computational systems of today
> (such as databases) exibit exactly the opposite behavior: they're
> more sluggish the more you put information in them. They take more
> time to find some relevant information, they have lots of indexes to
> search. As the size grows, databases become less able to efficiently
> tackle certain problems, the opposite of our brain.
(I have a feeling that you really should have a thorough understanding
of NNs to answer this, but I'll take my chances :-)
What exactly do you mean by saying that the performance improves? That
it solves problems faster or better? Or both? Assume that the amount of
memory that the brain has 'reserved' is fairly constant. It doesn't know
which parts of the brain contain useful information and which parts
don't, so searching for a match always takes the same amount of time.
But the more useful information it has stored, the more successful it
will be in finding a match. Also, the more ready-made "solution
fragments" it has stored, the easier it becomes to combine them into
something that works. This applies if you assume random recombination of
existing fragments. It seems to me that the lack of sluggishness can be
explained by assuming that there is no increase in searchable memory,
only in the useful information in those memory cells.
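This is easy to simulate (a sketch; the slot count and the encoding of
fragments as small integers are invented): the scan cost is fixed by
the size of the reserved memory, while the hit rate grows with the
number of useful slots:

    import random

    SLOTS = 1000          # fixed amount of "reserved" memory
    KINDS = 50            # kinds of problems / matching fragments

    def make_memory(useful):
        # 'useful' slots hold real fragments; the rest hold junk (None)
        mem = [random.randrange(KINDS) for _ in range(useful)]
        mem += [None] * (SLOTS - useful)
        random.shuffle(mem)
        return mem

    def solve(mem, problem):
        # scan every slot; the cost is SLOTS comparisons regardless of content
        return sum(slot == problem for slot in mem) > 0

    for useful in (10, 100, 1000):
        mem = make_memory(useful)
        hits = sum(solve(mem, random.randrange(KINDS)) for _ in range(1000))
        print(useful, hits / 1000)   # success rate rises; scan cost does not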
> Thus what is amazing in our brain is that we become *more*
> productive and efficient the more we know. This does not seem
> to be the result of aimless tinkering. Such a process would
> always "start it all over again", spending equal processing
> energy in random alternatives. This is not efficient!
If we assume that useful solution fragments (symbols for relations and
objects?) have a longer lifespan than useless ones, and that finding a
solution involves random recombination of these fragments, and that
several fragments can solidify into higher order fragments (which seems
consistent with the NN idea to me), this all makes very good sense.
<snip>
> However, when this reuse doesn't work, when we're incapable of solving
> the current problem with anything we know, then I agree, we start
> tinkering with random mutations.
I want to state again that I think that random recombination and
mutation are two different classes of change, and that they require
separate explanations. How does new memetic (or lower-level) material
get introduced? Or do higher-order solution fragments "dissolve" into
their lower-order parts when they don't work well enough, so that more
new combinations become possible? This would make "mutations"
unnecessary, I think (i.e. no new material is introduced).
The rest of your post seems to agree with this idea(?).
--
"The world will little note nor long remember what we say here"
- A. Lincoln
Henning Strandin (henning@travellab.com)
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Induction and Evolution
Date: 28 Jul 1999 00:00:00 GMT
Message-ID: <379f0fae@news3.us.ibm.net>
References: <7nkaur$2g5v@edrn.newsguy.com> <379df837@news3.us.ibm.net>
<379E0C4A.D3E4EA00@travellab.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Henning Strandin wrote in message <379E0C4A.D3E4EA00@travellab.com>...
>Sergio Navega wrote:
>>
>> What I find missing in your description is an explanation of *why*
>> brains appear to perform better the *more* they know. This is a
>> special kind of system, because its performance improves the
>> more the system is loaded. Most computational systems of today
>> (such as databases) exhibit exactly the opposite behavior: they
>> become more sluggish the more information you put into them. They
>> take more time to find some relevant information; they have lots
>> of indexes to search. As their size grows, databases become less
>> able to tackle certain problems efficiently, the opposite of our brain.
>
>(I have a feeling that you really should have a thorough understanding
>of NNs to answer this, but I'll take my chances :-)
>What exactly do you mean by saying that the performance improves? That
>it solves problems faster or better? Or both?
Often both. But sometimes, better may imply more time. You know the
case where somebody is able to devise a reasonable solution very fast,
while an expert may take a little bit longer and come up with an
extraordinary solution. In some cases, the expert takes that additional
time to check for things that didn't pay off, giving the impression
that the novice "knows more", because he/she answered first. But in
fact, the expert evaluated much more, and could have found deeper
answers.
>Assume that the amount of
>memory that the brain has 'reserved' is fairly constant. It doesn't know
>which parts of the brain contain useful information and which parts
>don't, so searching for a match always takes the same amount of time.
>But the more useful information it has stored, the more successful it
>will be in finding a match. Also, the more ready-made "solution
>fragments" it has stored, the easier it becomes to combine them into
>something that works. This applies if you assume random recombination of
>existing fragments. It seems to me that the lack of sluggishness can be
>explained by assuming that there is no increase in searchable memory,
>only in the useful information in those memory cells.
>
The point is that the brain *is not* a constant memory space. It
is extremely dynamic, reinforcing connections (and often growing new
ones) in real time. This does not happen with traditional
artificial NN architectures. Random recombination of existing
fragments will always impose a terrible price for even the
simplest thoughts. This is not what appears to happen with our brain. In
several behavioral tests where response time is measured, we often
find that some answers demand more time while others are very fast.
There is something that influences (positively) our answers to
problems based on previous knowledge. Randomness plays an important
part, but it feeds this "selective" mechanism, which gets it right
more often than a blind process would.
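A sketch of that "selective" mechanism (the strategy names and payoffs
are invented): the proposals stay random, but past success re-weights
the distribution, which is why some answers come fast and others still
take time:

    import random
    from collections import defaultdict

    payoff = defaultdict(lambda: 1.0)   # strategy -> accumulated reward (prior 1)

    def propose(strategies):
        # still random, but shaped by what worked before
        weights = [payoff[s] for s in strategies]
        return random.choices(strategies, weights=weights, k=1)[0]

    def reinforce(strategy, reward):
        payoff[strategy] += reward

    strategies = ["analogy", "decomposition", "brute force"]  # invented labels
    for _ in range(100):
        s = propose(strategies)
        reinforce(s, 1.0 if s == "analogy" else 0.0)  # pretend analogy pays off
    print(max(payoff, key=payoff.get))   # "analogy" now dominates future proposals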
>> Thus what is amazing about our brain is that we become *more*
>> productive and efficient the more we know. This does not seem
>> to be the result of aimless tinkering. Such a process would
>> always "start all over again", spending equal processing
>> energy on random alternatives. This is not efficient!
>
>If we assume that useful solution fragments (symbols for relations and
>objects?) have a longer lifespan than useless ones, and that finding a
>solution involves random recombination of these fragments, and that
>several fragments can solidify into higher order fragments (which seems
>consistent with the NN idea to me), this all makes very good sense.
>
I could agree that ANNs are closer to what we want from such a
mechanism than, say, a purely symbolic architecture. But they carry
their load of problems too. One historically difficult problem of NNs
is the effect of new training on old: often the old information
is "forgotten" and replaced by the new. This is not what happens
in our brains: we remember the old (replaced) stuff, and we even
*come back* to it if the new doesn't happen to be all that good.
There are other problems with NNs as well, such as the complexity
of learning and their fixed architecture.
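The forgetting effect can be caricatured in a few lines of plain
stochastic gradient descent on a shared weight vector (a sketch; the
two tasks are invented and incompatible by construction, whereas real
networks can forget even when they have the capacity for both):

    import numpy as np

    rng = np.random.default_rng(0)

    def sgd(w, X, y, lr=0.05, steps=2000):
        for _ in range(steps):
            i = rng.integers(len(X))
            w = w - lr * (w @ X[i] - y[i]) * X[i]
        return w

    def err(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    # Task A wants weights [1, 0]; task B wants [0, 1]; the weights are shared.
    X_a = rng.normal(size=(50, 2)); y_a = X_a[:, 0]
    X_b = rng.normal(size=(50, 2)); y_b = X_b[:, 1]

    w = sgd(np.zeros(2), X_a, y_a)
    print("after A, error on A:", err(w, X_a, y_a))   # near zero
    w = sgd(w, X_b, y_b)
    print("after B, error on A:", err(w, X_a, y_a))   # large: A was overwritten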
><snip>
>
>> However, when this reuse doesn't work, when we're incapable of solving
>> the current problem with anything we know, then I agree, we start
>> tinkering with random mutations.
>
>I want to state again that I think that random recombination and
>mutation are two different classes of change, and that they require
>separate explanations. How does new memetic (or lower-level) material
>get introduced? Or do higher-order solution fragments "dissolve" into
>their lower-order parts when they don't work well enough, so that more
>new combinations become possible? This would make "mutations"
>unnecessary, I think (i.e. no new material is introduced).
>
>The rest of your post seems to agree with this idea(?).
>
What we're calling mutations here is probably associated with neural
noise, a subject very often neglected by computational neuroscientists.
In my view, the effect of neural noise is such that most small variations
don't "survive" the test of other areas (in charge of verifying the
validity of the "idea"). This method of validating what is good and
what is not determines what grows into a conscious thought (emergent
from "intuition") and what is simply discarded. The big, big issue here
is that this "mechanism of criticism" is not fixed: it improves with
time, as new experiences are lived. Another very, very important aspect,
in my opinion, is that this mechanism is *perceptual*: it *recognizes*
good things; it is not good at directly generating new things.
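A sketch of this generate-and-test picture (the quality function is an
invented stand-in for whatever the validating areas measure): noise
does all the generating, and a critic that only *recognizes*
improvement does all the selecting:

    import random

    def quality(idea):
        # the "validity" that other brain areas would judge (invented toy measure)
        return -(idea - 3.0) ** 2

    class Critic:
        """Only recognizes improvements; it never generates anything itself."""
        def __init__(self):
            self.best_seen = float("-inf")

        def passes(self, value):
            if value > self.best_seen:
                self.best_seen = value   # the criterion tightens with experience
                return True
            return False

    idea, critic = 0.0, Critic()
    for _ in range(1000):
        variant = idea + random.gauss(0, 0.5)   # neural noise proposes variations
        if critic.passes(quality(variant)):     # most variants don't survive
            idea = variant                      # survivors become the new thought
    print(round(idea, 2))   # drifts toward 3.0 without anyone aiming there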
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: Induction and Evolution
Date: 28 Jul 1999 00:00:00 GMT
Message-ID: <7nn6vr$10vp@edrn.newsguy.com>
References: <7nkaur$2g5v@edrn.newsguy.com> <379df837@news3.us.ibm.net>
<379E0C4A.D3E4EA00@travellab.com> <379f0fae@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>The point is that the brain *is not* a constant memory space. It
>is extremely dynamic, reinforcing connections (and often growing new
>ones) in real time. This does not happen with traditional
>artificial NN architectures. Random recombination of existing
>fragments will always impose a terrible price for even the
>simplest thoughts.
Well, I don't know how our brains *actually* do it, but perhaps
the recombination can be made more efficient by indexing the
fragments by situation type. Then, when confronted with a new
situation, we would first try recombining the fragments that
were marked as relevant for this situation type.
Of course, this gets us back to the problem that we started
with: how do the generalizations of "situation type" arise
to start with? To be able to recognize a situation type and
use it for indexing purposes, the type must already exist
(or else, be composed out of pre-existing types). So the
indexing doesn't really solve the problem, but perhaps
it could be used to bootstrap a more general solution from
a partial solution.
That is, perhaps there are atomic pattern fragments that
are either innate or generated randomly. So these atomic
fragments are not influenced by experience. But then,
these fragments are used in two ways: (1) They can be
combined to make larger fragments, and (2) they can be
used as indices to make future recombination more efficient.
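Both uses fit in a few lines (a sketch; the situation labels and
atomic fragments are invented):

    import random
    from collections import defaultdict

    index = defaultdict(list)   # situation type -> fragments that helped there

    def record(situation, fragment):
        # use (2): a fragment becomes an index entry for its situation type
        index[situation].append(fragment)

    def recombine(situation, all_fragments, k=2):
        # use (1): combine fragments into a larger candidate solution,
        # trying the indexed pool first and falling back to blind search
        pool = index[situation] if index[situation] else all_fragments
        return tuple(random.sample(pool, min(k, len(pool))))

    atoms = ["grip", "twist", "pull", "push", "lift"]   # invented atomic fragments
    record("jar", "grip"); record("jar", "twist")
    print(recombine("jar", atoms))    # drawn from the two indexed fragments
    print(recombine("door", atoms))   # nothing indexed yet: blind recombination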
The point is that you can't explain purposeful behavior
in terms of purposeful behavior. Eventually, if intelligent
behavior is to be explained, it must be explained in terms
of *nonintelligent* behavior. So pattern generation and
recognition must be explained in terms that *don't* involve
pattern recognition or generation.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Induction and Evolution
Date: 28 Jul 1999 00:00:00 GMT
Message-ID: <379f4bb5@news3.us.ibm.net>
References: <7nkaur$2g5v@edrn.newsguy.com> <379df837@news3.us.ibm.net>
<379E0C4A.D3E4EA00@travellab.com> <379f0fae@news3.us.ibm.net>
<7nn6vr$10vp@edrn.newsguy.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7nn6vr$10vp@edrn.newsguy.com>...
>Sergio says...
>
>>The point is that the brain *is not* a constant memory space. It
>>is extremely dynamic, reinforcing connections (and often growing new
>>ones) in real time. This does not happen with traditional
>>artificial NN architectures. Random recombination of existing
>>fragments will always impose a terrible price for even the
>>simplest thoughts.
>
>Well, I don't know how our brains *actually* do it, but perhaps
>the recombination can be made more efficient by indexing the
>fragments by situation type. Then, when confronted with a new
>situation, we would first try recombining the fragments that
>were marked as relevant for this situation type.
>
That's indeed a good way to start. Cognitive scientists call this
indexing of similar fragments "categorization". It does not
seem limited to grouping "information"; it also collects
"strategies" (which may also mean "motor patterns", such as the
ones we use to grab a ball that is thrown at us). The act of
adjusting our hands to grab that ball involves *learning*
the correct "feedback loops" between our vision and the motor
commands to our limbs. Babies don't have that stuff ready "from
the factory". They have to learn it.
>Of course, this gets us back to the problem that we started
>with: how do the generalizations of "situation type" arise
>to start with? To be able to recognize a situation type and
>use it for indexing purposes, the type must already exist
>(or else, be composed out of pre-existing types). So the
>indexing doesn't really solve the problem, but perhaps
>it could be used to bootstrap a more general solution from
>a partial solution.
>
The doubt about pre-existing types vanishes the moment we
perceive that we *are* provided with innate comparison
mechanisms. It's only that these mechanisms operate at a very,
very low level of information, not the high level that some
nativists propose.
Language is, in my way of seeing it, not provided by these
innate mechanisms. But the initial levels of our auditory
processing, way before any phonological perceptual mechanism,
are present when babies are born. From that initial level,
babies start to *notice regularities* in the phonological
aspects of their parents' utterances. These regularities
are self-bootstrapping: they allow the baby to learn more and
more regularities. This will constitute the substrate
on which all spoken language comprehension of the adult
will stand.
Interestingly, this ability to absorb native languages
declines with age. An adult will have difficulty hearing
and uttering a foreign language with different phonemes.
Coincidentally or not, brain plasticity is at its peak
between 2 and 5 years of age. This is very suggestive.
>That is, perhaps there are atomic pattern fragments that
>are either innate or generated randomly. So these atomic
>fragments are not influenced by experience. But then,
>these fragments are used in two ways: (1) They can be
>combined to make larger fragments, and (2) they can be
>used as indices to make future recombination more efficient.
>
>The point is that you can't explain purposeful behavior
>in terms of purposeful behavior. Eventually, if intelligent
>behavior is to be explained, it must be explained in terms
>of *nonintelligent* behavior. So pattern generation and
>recognition must be explained in terms that *don't* involve
>pattern recognition or generation.
>
Pattern recognition is an intrinsic functional property of
our neural mechanism. Remember the Hebbian mechanism, one of
the (proposed) ideas of how neurons work? This is a mechanism
that is able to recognize certain patterns of spikes. It was
"devised" to be sensitive to those patterns. "Certain"
patterns may sound too specific, as if it gave our brain a
"special-purposeness".
Yes, that's it: in a sense our brain is specialized in the
recognition of a limited subset of the information content
that is "out there" in the universe. We could say that our
brain recognizes only some of the potentially infinite ways
to interpret physical signals, the ones that lie between
complete chaos (noise) and complete regularity (the tick
of X-rays from a binary star).
Why do we have two eyes instead of only one? Why did we develop
those funny curves in our ears? Why do we sense temperature and
pressure but not electromagnetic radiation? The answer to these
questions is the same as the answer to the question "Who designed
this innate comparison mechanism?". Charles Darwin, we're
proud of you.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Induction and Evolution
Date: 31 Jul 1999 00:00:00 GMT
Message-ID: <37a30df5@news3.us.ibm.net>
References: <7nkaur$2g5v@edrn.newsguy.com> <379E05CC.40B25A39@travellab.com>
<7nmk41$kim$1@nnrp1.deja.com> <7nn6ah$vnq@edrn.newsguy.com>
<7nrt05$5fr$1@nnrp1.deja.com> <7nsake$2qhp@edrn.newsguy.com>
<7nupi7$43i$1@nnrp1.deja.com>
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
jddescript_deja@my-deja.com wrote in message
<7nupi7$43i$1@nnrp1.deja.com>...
>[addressed to Daryl McCullough]
>I reread your introduction to this thread and believe I now know why we
>couldn't communicate before. You aren't asking what induction is, how
>it works, or how to learn and practice it, because you are probably
>happy with your answers to these questions. These are the questions
>that an understanding of Ayn Rand's theory of concepts would answer,
>and which I was trying for by example. I now hear you asking how you
>can program a machine with random mechanisms, like neural nets, to
>approximate human induction. I have to say the answer is you can't!
>This part of intelligence can't be programmed into a machine, and
>particularly not with random mechanisms. The reason is that the numbers
>are overwhelmingly against it. Only people who don't understand how to
>count cascading decision trees would think it possible. You hear people
>say that with enough monkeys typing long enough, any Shakespearean work
>can be produced. This blind faith in randomness [I'm not saying you
>have it, but as I rehear the question the implication might be there]
>is touching but totally wrong. The number of decisions, senses,
>feelings, ... pictures (pixels) that a human has processed by the age
>of 5, if counted properly, becomes a number like the number of hydrogen
>atoms in the visible universe and engulfs any possible computer in the
>foreseeable future. This is the process that determines a person's
>personality (human caring) and thus their inductive capabilities.
Oh, dear. I understand the core of your doubts but I'd like to tell
you that things don't work this way. For a simple way to see it,
try to recall all the apples you've seen in your life. You may
have the notion that all the images, tastes and touch impressions
of these apples are, one way or another, stored in your memory.
That would compose the picture you were talking about, that our
brain is a remarkable machine that couldn't be "simulated" even
by the most powerful computers we will ever design.
But if this idea were true, the memory capacity of our brain would
be exhausted in a matter of hours. There aren't enough molecules
in our brain to store the astonishing amount of data captured by
our senses.
What our brain does, then, is (inductively) categorize things.
We don't "store" the image of each apple we've seen in our lives;
we store the essential characteristics of "apples" in general and
only some particularities of very specific instances (such as the
one we ate during grandma's birthday, 3 weeks ago). With time, even
these specific instances vanish (or worse, are replaced by false
memories), with most of the associated neurochemical components
being reused to store new (and more important) things.
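In code, the storage argument is just the difference between keeping
every instance and keeping a running average (a sketch; the apple
features are invented):

    class Category:
        """Keeps a running prototype instead of storing every instance."""
        def __init__(self, dim):
            self.n = 0
            self.prototype = [0.0] * dim

        def observe(self, features):
            # incremental mean: the instance itself can be discarded afterwards
            self.n += 1
            self.prototype = [p + (f - p) / self.n
                              for p, f in zip(self.prototype, features)]

    apples = Category(3)   # invented features: (redness, size, sweetness)
    for apple in [(0.9, 0.5, 0.7), (0.8, 0.6, 0.6), (0.95, 0.4, 0.8)]:
        apples.observe(apple)
    print(apples.prototype)   # the "essential" apple; memory stays O(1)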
When we finally discover how to functionally duplicate this mechanism
in computers, they will be much, much more intelligent than we are.
Regards,
Sergio Navega.