Selected Newsgroup Message
From: "Sergio Navega" <snavega@ibm.net>
Subject: The Baby in a Vat
Date: 04 May 1999 00:00:00 GMT
Message-ID: <372eee6c@news3.us.ibm.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 4 May 1999 12:56:12 GMT, 200.229.242.148
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
A horde of mean aliens has kidnapped a pregnant woman. Mean as
they were, they decided to perform an awful experiment. With mommy
kept dormant, they surgically opened her womb and extracted the
brain of the foetus. They did it in so thoughtful a manner as to
preserve the life of the foetus and the brain. They put the brain
in a vat, in their ship, hidden in a secret place on Earth, close
to the city where the woman lived. They established a high speed
radio link between all the nerve fibers and the corresponding
sensory inputs and motor outputs of the foetus (vision, audition,
touch, proprioception, etc). They closed the womb with the foetus
inside, with a filling in place of the brain, and put the pregnant
woman back on Earth.
The baby was born and lived a normal life. Nobody ever suspected
that she had no brain, only that filling with enough electronics
to transmit back and forth every single impulse to and from her
little brain in the aliens' laboratory. The child developed, went
to school, graduated and then became a lawyer (the aliens,
devilish as they were, were horrified by what they saw!).
What the aliens didn't tell anybody was the technique they used.
The implants in the baby's body converted each signal received
into a set of symbols that represented the kind of impulse
received. For instance, when the baby touched a plastic ball,
multiple streams of symbols flowed through the communication
channel:
...;;AAACDDEAAA..AA..AADDDIIUUWWSSSASS....S...AASSEESSF..
.SSA...DDDASSSSS..DDDEIEIIII...IISAAAS.IIIOOWWI.....SSA..
Similar symbols flowed from the brain to the corresponding
motor outputs of the body of the baby. Peeking into those
symbols nobody would understand what the baby was doing.
But the brain of the baby did!
Now tell me, John Searle, tell me how it is possible that the
*whole cognition* of that baby was able to flow through a channel
in a purely syntactic form. Tell me *why* we would not obtain the
same result if, in place of that brain, we put a sufficiently
powerful computer, the epitome of a symbol processor. How can you
justify the claim that we don't extract cognition from syntax?
How is it that you don't accept semantics emerging from mere
syntax?
It is time to see that we are just that: machines specialized
in the extraction of semantics from arbitrary syntax provided
by our senses, "designed" by evolution.
Sergio Navega.
[Based on Daniel Dennett's "Brain in a Vat"]
From: rick@kana.stanford.edu
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <7gsr3f$fm9$1@nnrp1.deja.com>
References: <372eee6c@news3.us.ibm.net>
X-Http-Proxy: 1.1 x13.dejanews.com:80 (Squid/1.1.22) for client 198.243.73.168
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Thu May 06 19:38:24 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)
> Now tell me, John Searle, tell me how it is possible that the
> *whole cognition* of that baby was able to flow through a channel
> in a purely syntactic form. Tell me *why* we would not obtain the
> same result if, in place of that brain, we put a sufficiently
> powerful computer, the epitome of a symbol processor. How can you
> justify the claim that we don't extract cognition from syntax?
> How is it that you don't accept semantics emerging from mere
> syntax?
The Searle camp doesn't deny that part of cognition is symbolic,
just that it is not completely symbolic. Your scenario is only
partly symbolic. The "inner powerful computer" is not viable
according to Searle.
In some respects, you advocate the "homunculus" solution in a
perverse way. Instead of a man inside a man inside a man ...
you have a "sufficiently powerful computer".
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <37320591@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gsr3f$fm9$1@nnrp1.deja.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 21:11:45 GMT, 129.37.182.222
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
rick@kana.stanford.edu wrote in message <7gsr3f$fm9$1@nnrp1.deja.com>...
>
>> Now tell me, John Searle, tell me how it is possible that the
>> *whole cognition* of that baby was able to flow through a channel
>> in a purely syntactic form. Tell me *why* we would not obtain the
>> same result if, in place of that brain, we put a sufficiently
>> powerful computer, the epitome of a symbol processor. How can you
>> justify the claim that we don't extract cognition from syntax?
>> How is it that you don't accept semantics emerging from mere
>> syntax?
>
>The Searle camp doesn't deny that part of cognition is symbolic,
>just that it is not completely symbolic. Your scenario is only
>partly symbolic. The "inner powerful computer" is not viable
>according to Searle.
My scenario is almost entirely symbolic. The symbolic part is not
just on the "input side", the side that translates physical
quantities into symbolic entities (Pylyshyn's transducers).
>
>In some respects, you advocate the "homunculus" solution in a
>perverse way. Instead of a man inside a man inside a man ...
>you have a "sufficiently powerful computer".
>
I don't see it that way. There is just one computer; I need
nothing more than a single computer doing its traditional symbolic
shuffling. There's nothing else "behind" this level. You may argue
that we would need the presence of a "designer" who established
the syntactic ground in which this computer operates. I would say
that the "designer" is mother nature, and the design was (in fact,
still is) being refined by natural selection.
Regards,
Sergio Navega.
From: karr@best.com
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <7gqrhj$n8m$1@nnrp1.deja.com>
References: <372eee6c@news3.us.ibm.net>
X-Http-Proxy: 1.1 x14.dejanews.com:80 (Squid/1.1.22) for client 208.247.15.29
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Thu May 06 01:33:39 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 95; DigExt)
In article <372eee6c@news3.us.ibm.net>,
"Sergio Navega" <snavega@ibm.net> wrote:
> A horde of mean aliens has kidnapped a pregnant woman. Mean as
> they were, they decided to perform an awful experiment. With mommy
> kept dormant, they surgically opened her womb and extracted the
> brain of the foetus. They did it in so thoughtful a manner as to
> preserve the life of the foetus and the brain. They put the brain
> in a vat, in their ship, hidden in a secret place on Earth, close
> to the city where the woman lived. They established a high speed
> radio link between all the nerve fibers and the corresponding
> sensory inputs and motor outputs of the foetus (vision, audition,
> touch, proprioception, etc). They closed the womb with the foetus
> inside, with a filling in place of the brain, and put the pregnant
> woman back on Earth.
............
>
> It is time to see that we are just that: machines specialized
> in the extraction of semantics from arbitrary syntax provided
> by our senses, "designed" by evolution.
>
> Sergio Navega.
> [Based on Daniel Dennett's "Brain in a Vat"]
>
An assumption here is that one can encapsulate ALL the relevant
information that is communicated between the brain and nervous
system into a set of symbols. While this would be a task of
overwhelming complexity, let's suppose it's possible.
Even so, just how is it that "radio signals" (or, to be generous,
ANY possible system of electronic communication) would be able to
create the neurotransmitters and other physical entities needed by
the nerves and muscles to do their actual work? The assumption
seems to be that the "physical" stuff just doesn't matter--that
any physical system is arbitrarily replaceable by anything that
incorporates the same "syntax." This seems analogous, for example,
to saying that a blueprint of a house is equivalent to the house
itself, since it contains the same information.
Ron Karr
karr@best.com
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3731a629@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 14:24:41 GMT, 166.72.21.71
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
karr@best.com wrote in message <7gqrhj$n8m$1@nnrp1.deja.com>...
>In article <372eee6c@news3.us.ibm.net>,
> "Sergio Navega" <snavega@ibm.net> wrote:
>> A horde of mean aliens has kidnapped a pregnant woman. Mean as
>> they were, they decided to perform an awful experiment. With mommy
>> kept dormant, they surgically opened her womb and extracted the
>> brain of the foetus. They did it in so thoughtful a manner as to
>> preserve the life of the foetus and the brain. They put the brain
>> in a vat, in their ship, hidden in a secret place on Earth, close
>> to the city where the woman lived. They established a high speed
>> radio link between all the nerve fibers and the corresponding
>> sensory inputs and motor outputs of the foetus (vision, audition,
>> touch, proprioception, etc). They closed the womb with the foetus
>> inside, with a filling in place of the brain, and put the pregnant
>> woman back on Earth.
>
>............
>>
>> It is time to see that we are just that: machines specialized
>> in the extraction of semantics from arbitrary syntax provided
>> by our senses, "designed" by evolution.
>>
>> Sergio Navega.
>> [Based on Daniel Dennett's "Brain in a Vat"]
>>
>
>An assumption here is that one can encapsulate ALL the relevant
>information that is communicated between the brain and nervous
>system into a set of symbols. While this would be a task of
>overwhelming complexity, let's suppose it's possible.
>
>Even so, just how is it that "radio signals" (or, to be generous,
>ANY possible system of electronic communication) would be able to
>create the neurotransmitters and other physical entities needed by
>the nerves and muscles to do their actual work? The assumption
>seems to be that the "physical" stuff just doesn't matter--that
>any physical system is arbitrarily replaceable by anything that
>incorporates the same "syntax." This seems analogous, for example,
>to saying that a blueprint of a house is equivalent to the house
>itself, since it contains the same information.
>
>Ron Karr
>karr@best.com
>
The optic nerve that emerges from the eyes carries action
potentials. These are just pulses. The shape of those pulses and
their voltages don't seem to be relevant, only their presence and
timing. So what I have proposed is something along the lines of
this:
___|_____|||__||___|||||||____|__||_|||____||__|___||||_||__|||_||
.. A ... C . B .. F ... A .B . C ...B ..A .. D . B . C . B
The upper line shows the spikes from the optic nerve; the lower
line is the output of a "mysterious" spike-to-symbol converter.
The result is this:
..A...C.B..F...A.B.C...B..A..D.B.C.B
I say that from this sequence we can obtain dozens of relevant
pieces of information.
Obviously, the big problem here is to find out *what* coding
structure does not distort what is significant (should I
approximate three consecutive pulses by C or by AAA?). The big
question is then to understand what is relevant here, what is
meaningful. Is it the pulse rate? Is it the spike timing? Is it
the average over a specified time?
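To make this concrete, here is a minimal sketch in Python of one
such "spike to symbol" converter, assuming (just as one of the
candidate codings above) that what matters is the spike count
inside a fixed time window; the window size and the letter
alphabet are arbitrary choices of mine, not claims about the real
neural code:

def spikes_to_symbols(spikes, window=4):
    # spikes: a string like "___|_|||", where '|' marks an action
    # potential and '_' marks silence.
    symbols = []
    for start in range(0, len(spikes), window):
        count = spikes[start:start + window].count('|')
        # 0 spikes -> '.', 1 -> 'A', 2 -> 'B', 3 -> 'C', 4 -> 'D'
        symbols.append('.' if count == 0 else chr(ord('A') + count - 1))
    return ''.join(symbols)

print(spikes_to_symbols("___|_____|||__||___|||||||"))

Change the window or the mapping and you get a different symbol
stream from the very same spikes, which is precisely the question
above: which coding preserves what is significant?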
What I'm saying is that whatever it is, once we discover this
coding strategy it should be possible to concoct a symbolic
representation of it, to allow computers to process it as a bunch
of meaningless symbols. From those symbols we will extract the
*semantics*, only because the semantics is embedded in the pattern
of occurrences of these symbols. What I said in that baby in a vat
post is that our entire vision of the world seems to be the
semantics that we extract from these patterns of occurrence.
Put in the same room the following people: on one side, the
symbolicists Jerry Fodor and Zenon Pylyshyn; on the other side,
the connectionists Paul Smolensky and James McClelland. All of
them are bright and intelligent researchers. Which side would win
a fair discussion? This is my question. Both camps have strong
points; both raise points about the brain that have significant
importance. What I'm trying to find is a solution that allows
*both* sides to be satisfied. That solution, I'm thinking out
loud, is something that maintains the symbolic nature and its
benefits of productivity, systematicity, etc, but at the same time
is noise resistant and allows inductive and associative
capacities.
Regards,
Sergio Navega.
From: iadmontg@undergrad.math.uwaterloo.ca (Ian)
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3731dd5c.49344858@news.netcom.ca>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<3731a629@news3.us.ibm.net>
X-Complaints-To: abuse@netcom.ca
X-Trace: tor-nn1.netcom.ca 926011715 207.181.94.215 (Thu, 06 May 1999 13:28:35 EDT)
Organization: Netcom Canada
Reply-To: iadmontg@undergrad.math.uwaterloo.ca
NNTP-Posting-Date: Thu, 06 May 1999 13:28:35 EDT
Newsgroups: comp.ai.philosophy
On Thu, 6 May 1999 10:38:48 -0300, "Sergio Navega" <snavega@ibm.net>
wrote:
>What I'm saying is that whatever it is, once we discover this
>coding strategy it should be possible to concoct a symbolic
>representation of it, to allow computers to process it as a bunch
>of meaningless symbols. From those symbols we will extract the
>*semantics*, only because the semantics is embedded in the pattern
>of occurrences of these symbols. What I said in that baby in a vat
>post is that our entire vision of the world seems to be the
>semantics that we extract from these patterns of occurrence.
That is correct. Thanks to the uncertainty principle, all input to
the human nervous system can be "discretized" to some symbolic
representation without loss of useful information (you just pick a
level of accuracy for the discretization which puts the rounding error
below the barrier of quantum uncertainty). So yes, 100% of the input
and output between our brain and the world is perfectly equivalent to
a very complex set of symbols. We have no sensory method of access to
the world that cannot be perfectly represented as a symbol stream.
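(To make the point concrete, a toy sketch in Python; the noise
floor EPS is a made-up number, purely for illustration. Rounding
every reading to a grid much finer than the noise floor yields a
finite symbol per reading and loses nothing useful:

EPS = 0.01                        # hypothetical noise floor

def discretize(reading, step=EPS / 10):
    # Map a continuous reading to an integer symbol on a grid well
    # below the noise floor; the rounding error is < step/2 << EPS.
    return round(reading / step)

print([discretize(x) for x in (0.1234, 0.1239, 0.5000)])
# -> [123, 124, 500]

The same idea, with the grid set below quantum uncertainty instead
of a classical noise floor, is the discretization described
above.)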
>Put in the same room the following people: on one side, the
>symbolicists Jerry Fodor and Zenon Pylyshyn; on the other side,
>the connectionists Paul Smolensky and James McClelland. All of
>them are bright and intelligent researchers. Which side would win
>a fair discussion? This is my question. Both camps have strong
>points; both raise points about the brain that have significant
>importance. What I'm trying to find is a solution that allows
>*both* sides to be satisfied. That solution, I'm thinking out
>loud, is something that maintains the symbolic nature and its
>benefits of productivity, systematicity, etc, but at the same time
>is noise resistant and allows inductive and associative
>capacities.
The symbolic and non-symbolic camps don't necessarily disagree as
much as you think. Connectionist machines as actually realized
are, for example, equivalent to some symbol manipulator, even
though in theory they are not. Someone who holds that the brain is
describable as a connectionist system implicitly holds that it is
describable as a symbol-manipulation system. The difference is the
proper level of abstraction - if the brain "really" works in the
manner a connectionist system does, then it's useful to describe
it directly as one, rather than trying to describe it as a symbol
manipulator structured in such a way as to exactly implement a
connectionist machine. So they spend most of their time arguing
over what the brain is "most like" in some high-level sense, even
though at the lowest level of its operations it could technically
be described as either.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3731f47e@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<3731a629@news3.us.ibm.net> <3731dd5c.49344858@news.netcom.ca>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 19:58:54 GMT, 166.72.21.142
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Ian wrote in message <3731dd5c.49344858@news.netcom.ca>...
>On Thu, 6 May 1999 10:38:48 -0300, "Sergio Navega" <snavega@ibm.net>
>wrote:
>
>>What I'm saying is that whatever it is, once we discover this
>>coding strategy it should be possible to concoct a symbolic
>>representation of it, to allow computers to process it as a bunch
>>of meaningless symbols. From those symbols we will extract the
>>*semantics*, only because the semantics is embedded in the pattern
>>of occurrences of these symbols. What I said in that baby in a vat
>>post is that our entire vision of the world seems to be the
>>semantics that we extract from these patterns of occurrence.
>
>That is correct. Thanks to the uncertainty principle, all input to
>the human nervous system can be "discretized" to some symbolic
>representation without loss of useful information (you just pick a
>level of accuracy for the discretization which puts the rounding error
>below the barrier of quantum uncertainty). So yes, 100% of the input
>and output between our brain and the world is perfectly equivalent to
>a very complex set of symbols. We have no sensory method of access to
>the world that cannot be perfectly represented as a symbol stream.
>
Thanks, Ian, for your message; it is not easy to find someone who
agrees with such a viewpoint. I think I understood your reference
to the uncertainty principle, although I guess somebody out there
may say that the level of noise may occasionally be greater than
the established discretization barrier. So what appears as a
relevant question is: "what if, to allow the generation of a
steady symbol stream, this discretization barrier had to be
variable? How is this supposed to work?" This question could be
posed by those who want to start the analysis from a different
point. This different point would lead to the assumption that
symbolic methods will fail because of noise and, thus, what should
be used in these initial levels would have to be a process able to
discover statistical regularities in clusters of apparently
uncorrelated data. Such methods would naturally be able to
overcome the problems of noise introduced by that discretization.
Frequently here in c.a.p. this statistical method appears, and the
arguments are very strong. It is this doubt raised by the
proponents of statistical methods that leads me to raise yet
*another* question, trying to defend symbolic methods.
What if this discretization barrier is fixed and, when subject to
noisy inputs, it generates, yes, noise in the symbolic stream?
What could we do to process these "noisy" symbol streams? Are they
really worthless?
So far I haven't seen any approach that tries to take these noisy
symbols into consideration. Perhaps that is because it is not very
easy to figure out what "noisy symbols" really mean. Apparently,
everybody thinks that noise in symbols turns them into useless
sequences of rubbish. It is this idea that I'm trying to review.
The starting point is the similarity that we can detect in several
instances of symbols. Each stream may be totally different but may
keep structural similarities that can be exploited. By purely
symbolic methods.
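As an illustration of what "exploited by purely symbolic methods"
could mean, here is a minimal sketch in Python: compare two symbol
streams by their bigram profiles. This is only one arbitrary
choice of structural signature on my part, but it shows that a few
corrupted symbols perturb the comparison only locally instead of
turning the stream into rubbish:

from collections import Counter

def bigram_profile(stream):
    return Counter(stream[i:i + 2] for i in range(len(stream) - 1))

def similarity(s1, s2):
    p1, p2 = bigram_profile(s1), bigram_profile(s2)
    shared = sum((p1 & p2).values())          # bigrams in common
    return shared / max(sum(p1.values()), sum(p2.values()))

clean = "AABBAABBAABB"
noisy = "AABBAXBBAABB"                         # one corrupted symbol
print(similarity(clean, noisy))                # ~0.82, still similar
print(similarity(clean, "CDCDCDCDCDCD"))       # 0.0, nothing shared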
Regards,
Sergio Navega.
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <925985791.266.66@news.remarQ.com>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
X-Complaints-To: newsabuse@remarQ.com
X-Trace: 925985791.266.66 F4ET54IMA800DD12BC usenet1.supernews.com
Organization: Posted via RemarQ Communities, Inc.
X-MSMail-Priority: Normal
NNTP-Posting-Date: Thu, 06 May 1999 10:16:31 GMT
Newsgroups: comp.ai.philosophy
<karr@best.com> wrote in message news:7gqrhj$n8m$1@nnrp1.deja.com...
> An assumption here is that one can encapsulate ALL the relevant information
> that is communicated between the brain and nervous system into a set of
> symbols. While this would be a task of overwhelming complexity, let's suppose
> it's possible.
Umm... In a finite and well defined set of symbols.
Not only must the set of symbols be finite but the symbols must have semantic
content. The semantic content is set by the world/body/transmission signal
interface.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3731a626@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 14:24:38 GMT, 166.72.21.71
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Gary Forbis wrote in message <925985791.266.66@news.remarQ.com>...
><karr@best.com> wrote in message news:7gqrhj$n8m$1@nnrp1.deja.com...
>> An assumption here is that one can encapsulate ALL the relevant
>> information that is communicated between the brain and nervous
>> system into a set of symbols. While this would be a task of
>> overwhelming complexity, let's suppose it's possible.
>
>Umm... In a finite and well defined set of symbols.
>
>Not only must the set of symbols be finite but the symbols must
>have semantic content. The semantic content is set by the
>world/body/transmission signal interface.
>
>
I can agree with the finiteness of the set of symbols. But what do
you mean by these symbols needing semantic content? This is
exactly what I'm trying to rebut: that there is a level of syntax
(maybe we should call it by some word other than "syntax") whose
semantics is only its pattern of occurrence, and nothing more.
Where do you see the need for an external semantics here?
Regards,
Sergio Navega.
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <926017401.071.98@news.remarQ.com>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
X-Complaints-To: newsabuse@remarQ.com
X-Trace: 926017401.071.98 F4ET54IMA8052D12BC usenet1.supernews.com
Organization: Posted via RemarQ Communities, Inc.
X-MSMail-Priority: Normal
NNTP-Posting-Date: Thu, 06 May 1999 19:03:21 GMT
Newsgroups: comp.ai.philosophy
Sergio Navega <snavega@ibm.net> wrote in message
news:3731a626@news3.us.ibm.net...
>
> Gary Forbis wrote in message <925985791.266.66@news.remarQ.com>...
> ><karr@best.com> wrote in message news:7gqrhj$n8m$1@nnrp1.deja.com...
> >> An assumption here is that one can encapsulate ALL the relevant
> >> information that is communicated between the brain and nervous
> >> system into a set of symbols. While this would be a task of
> >> overwhelming complexity, let's suppose it's possible.
> >
> >Umm... In a finite and well defined set of symbols.
> >
> >Not only must the set of symbols be finite but the symbols must
> >have semantic content. The semantic content is set by the
> >world/body/transmission signal interface.
>
> I can agree with the finiteness of the set of symbols. But what do
> you mean by these symbols needing semantic content? This is
> exactly what I'm trying to rebut: that there is a level of syntax
> (maybe we should call it by some word other than "syntax") whose
> semantics is only its pattern of occurrence, and nothing more.
> Where do you see the need for an external semantics here?
Without claiming the symbols have the same semantic content every
time they are used, one would have a hard time claiming they were
symbols. One doesn't necessarily need to know what the symbols
mean when one starts, but one needs to know they have meaning.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3732058f@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 21:11:43 GMT, 129.37.182.222
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Gary Forbis wrote in message <926017401.071.98@news.remarQ.com>...
>Sergio Navega <snavega@ibm.net> wrote in message
>news:3731a626@news3.us.ibm.net...
>>
>> Gary Forbis wrote in message <925985791.266.66@news.remarQ.com>...
>> ><karr@best.com> wrote in message news:7gqrhj$n8m$1@nnrp1.deja.com...
>> >> An assumption here is that one can encapsulate ALL the relevant
>> >> information that is communicated between the brain and nervous
>> >> system into a set of symbols. While this would be a task of
>> >> overwhelming complexity, let's suppose it's possible.
>> >
>> >Umm... In a finite and well defined set of symbols.
>> >
>> >Not only must the set of symbols be finite but the symbols must
>> >have semantic content. The semantic content is set by the
>> >world/body/transmission signal interface.
>>
>> I can agree with the finiteness of the set of symbols. But what do
>> you mean by these symbols needing semantic content? This is
>> exactly what I'm trying to rebut: that there is a level of syntax
>> (maybe we should call it by some word other than "syntax") whose
>> semantics is only its pattern of occurrence, and nothing more.
>> Where do you see the need for an external semantics here?
>
>Without claiming the symbols have the same semantic content every
>time they are used, one would have a hard time claiming they were
>symbols. One doesn't necessarily need to know what the symbols
>mean when one starts, but one needs to know they have meaning.
>
What I'm proposing is that the symbols in this hypothesis do not
carry any intrinsic meaning. They mean nothing to the receiving
side, because the receiving side is "blank". They carry only
similarities with the surrounding exemplars. In other words, you
could have two entirely different sequences of symbols "meaning"
exactly the same thing, if they carry the same regularity, the
same law of formation, the same redundancies. The meaning is
within the shape of this stream of symbols.
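To illustrate, a small sketch in Python: rename each symbol by the
order of its first appearance. Two streams over entirely different
alphabets collapse to the same canonical shape whenever they share
the same law of formation (this canonical form is, of course, just
one possible formalization of "same shape"):

def canonical_shape(stream):
    # Rename symbols by order of first appearance: the result
    # depends only on the pattern of recurrence, not the alphabet.
    first_seen = {}
    for s in stream:
        first_seen.setdefault(s, len(first_seen))
    return tuple(first_seen[s] for s in stream)

print(canonical_shape("AABBA"))   # (0, 0, 1, 1, 0)
print(canonical_shape("XXRRX"))   # (0, 0, 1, 1, 0) -- same shape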
Regards,
Sergio Navega.
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <926034865.145.60@news.remarQ.com>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
X-Complaints-To: newsabuse@remarQ.com
X-Trace: 926034865.145.60 F4ET54IMA8161D12BC usenet1.supernews.com
Organization: Posted via RemarQ Communities, Inc.
X-MSMail-Priority: Normal
NNTP-Posting-Date: Thu, 06 May 1999 23:54:25 GMT
Newsgroups: comp.ai.philosophy
Sergio Navega <snavega@ibm.net> wrote in message
news:3732058f@news3.us.ibm.net...
> Gary Forbis wrote in message <926017401.071.98@news.remarQ.com>...
> >Without claiming the symbols have the same semantic content every
> >time they are used, one would have a hard time claiming they were
> >symbols. One doesn't necessarily need to know what the symbols
> >mean when one starts, but one needs to know they have meaning.
>
> What I'm proposing is that the symbols in this hypothesis do not
> carry any intrinsic meaning. They mean nothing to the receiving
> side, because the receiving side is "blank". They carry only
> similarities with the surrounding exemplars. In other words, you
> could have two entirely different sequences of symbols "meaning"
> exactly the same thing, if they carry the same regularity, the
> same law of formation, the same redundancies. The meaning is
> within the shape of this stream of symbols.
OK, I'm dense. How does one distinguish between what could be and what is?
For instance, do:
I sat on a cold chair.
and
Harry spat in the hot spring.
have the same regularity, law of formation, and redundancies?
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 07 May 1999 00:00:00 GMT
Message-ID: <3732ddeb@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 7 May 1999 12:34:51 GMT, 166.72.29.162
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Gary Forbis wrote in message <926034865.145.60@news.remarQ.com>...
>
>Sergio Navega <snavega@ibm.net> wrote in message
>news:3732058f@news3.us.ibm.net...
>> Gary Forbis wrote in message <926017401.071.98@news.remarQ.com>...
>> >Without claiming the symbols have the same semantic content every
>> >time they are used, one would have a hard time claiming they were
>> >symbols. One doesn't necessarily need to know what the symbols
>> >mean when one starts, but one needs to know they have meaning.
>>
>> What I'm proposing is that the symbols in this hypothesis do not
>> carry any intrinsic meaning. They mean nothing to the receiving
>> side, because the receiving side is "blank". They carry only
>> similarities with the surrounding exemplars. In other words, you
>> could have two entirely different sequences of symbols "meaning"
>> exactly the same thing, if they carry the same regularity, the
>> same law of formation, the same redundancies. The meaning is
>> within the shape of this stream of symbols.
>
>OK, I'm dense. How does one distinguish between what could be and what is?
>
>For instance, do:
> I sat on a cold chair.
>and
> Harry spat in the hot spring.
>have the same regularity, law of formation, and redundancies?
>
What you've proposed is a symbol stream situated at a very high
level in relation to what I was addressing. Below this level
there's a whole "iceberg body" that must already have been
understood by the entity in order for it to comprehend your
phrases.
Any regularity that we find in such exemplars will give us some
information about the surface structure of natural language
(syntax) and some levels of semantics (not very deep).
What I'm proposing is exactly the other way around: we start by
finding regularities at the low, bottom level of that iceberg and
build consecutively higher levels of regularities and rule
structures.
[use fixed fonts]:

            /\
           /  \   <--- Your example phrases
          /    \
         /      \
        /        \   <--- Level of intermediate rules (text below)
       /          \
      /            \
     /              \
    /                \   <--- Low level regularities (sensory)
   /__________________\
In this pyramid, language uses some regularities and laws of
formation that denote a structure of grammar. It is here that
Chomsky is at home: generative grammars. But this level cannot
survive by itself if it does not refer to the lower levels, which
contain other structures and regularities. Unfortunately, this
intermediate level is not easy to observe consciously, which adds
to the problems of investigation that we have. Below that level
there is a bunch of things close to sensory signals, and these
things ground the structures of this intermediate level.
As an attempt to give an example, I will try to develop what
appears to be below the concept "apple", following this pyramid
structure. On the left are the structures, at the right some
comments:

"apple"                         Tip of iceberg, highest level

[round, spherical]
[red, occasionally green]       Intermediate level, structures made
[smells this way XXXX]          of low level concepts
[tastes this way YYYY]
[also tastes like ZZZZ]

 _/_
/   \  [typical red spectrum]   Lowest level, structures
|   |  [YYYY patterns of taste] close to sensory signals
\___/  .....
Each level may use the very same mechanisms of tentative rule
formation, trying to group things that follow a certain
regularity. Is this architecture tenable? Could cognition use the
very same mechanisms at the lower level as well as at higher
levels? A sketch of the idea follows.
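One way to picture "the very same mechanism at every level" in
Python: each level just finds the most frequent adjacent pair of
tokens and gives it a new, higher-level name, and the output of
one level is the input of the next. The pair-merging rule is only
my illustrative stand-in for whatever the real grouping mechanism
is:

from collections import Counter

def one_level(tokens, new_name):
    pairs = Counter(zip(tokens, tokens[1:]))
    (a, b), _ = pairs.most_common(1)[0]        # the most regular pair
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(new_name)            # group it under a new token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

level0 = list("abcabcabd")                     # "sensory" tokens
level1 = one_level(level0, "P")                # groups the pair ('a','b')
level2 = one_level(level1, "Q")                # groups ('P','c') one level up
print(level1)                                  # ['P','c','P','c','P','d']
print(level2)                                  # ['Q','Q','P','d']

Whether cognition really stacks one grouping rule like this is
exactly the open question here; the sketch only shows that nothing
prevents the same rule from operating at every level.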
I see the problem this way: the mechanism may be the same at
low/high levels or it may be different.
If it is different, then we should find some differences between
the mechanisms proposed by linguists and those found by
neuroscientists.
If it is the same, then eventually we will find a *unique*
mechanism, able to explain *both* sides of the story
simultaneously.
So what is the nature of this mechanism?
Is it a statistical method, such as suggested by Bill Modlin?
Could be; the arguments are strong. It appears to be plausible at
the neuron level. There are several studies of the statistical
behavior of language (Charniak is an example), which suggests a
possible role for it at that high level.
Is it a rule-based, symbolic grouping method? Could be; there are
several works that treat language as rule structures. What I'm
attempting to see is whether these ideas also work at lower
levels. Because if they work, then we would have a unique
mechanism to explain it all.
Regards,
Sergio Navega.
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: The Baby in a Vat
Date: 07 May 1999 00:00:00 GMT
Message-ID: <926090259.936.94@news.remarQ.com>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
X-Complaints-To: newsabuse@remarQ.com
X-Trace: 926090259.936.94 F4ET54IMA8009D12BC usenet1.supernews.com
Organization: Posted via RemarQ Communities, Inc.
X-MSMail-Priority: Normal
NNTP-Posting-Date: Fri, 07 May 1999 15:17:39 GMT
Newsgroups: comp.ai.philosophy
Sergio Navega <snavega@ibm.net> wrote in message
news:3732ddeb@news3.us.ibm.net...
> Gary Forbis wrote in message <926034865.145.60@news.remarQ.com>...
> >
> >Sergio Navega <snavega@ibm.net> wrote in message
> >news:3732058f@news3.us.ibm.net...
> >> Gary Forbis wrote in message <926017401.071.98@news.remarQ.com>...
> >> >Without claiming the symbols have the same semantic content every
> >> >time they are used, one would have a hard time claiming they were
> >> >symbols. One doesn't necessarily need to know what the symbols
> >> >mean when one starts, but one needs to know they have meaning.
> >>
> >> What I'm proposing is that the symbols in this hypothesis do not
> >> carry any intrinsic meaning. They mean nothing to the receiving
> >> side, because the receiving side is "blank". They carry only
> >> similarities with the surrounding exemplars. In other words, you
> >> could have two entirely different sequences of symbols "meaning"
> >> exactly the same thing, if they carry the same regularity, the
> >> same law of formation, the same redundancies. The meaning is
> >> within the shape of this stream of symbols.
> >
> >OK, I'm dense. How does one distinguish between what could be
> >and what is?
> >
> >For instance, do:
> > I sat on a cold chair.
> >and
> > Harry spat in the hot spring.
> >have the same regularity, law of formation, and redundancies?
>
> What you've proposed is a symbol stream situated at a very high
> level in relation to what I was addressing. Below this level
> there's a whole "iceberg body" that must already have been
> understood by the entity in order for it to comprehend your
> phrases.
I've not made myself clear.
> Any regularity that we find in such exemplars will give us some
> information about the surface structure of natural language (syntax)
> and some levels of semantics (not very deep).
As I understand it, you are considering this situation at all levels.
I've used the example above in order to understand your position.
I could have written:
Do:
aababbc
and:
xxrxrry
have the same regularity, law of formation, and redundancies?
It seems to me you've snuck in semantics by even referring to symbols.
How do you look for regularities in low level signals unless you assume
the signals have content expressed as regularities?
Consider the code wheel where the letters are shifted the number
of letters equal to their position in the message. That is,
"abcde" gets translated to "aaaaa." You may not know if such a
code is being used.
Do:
abddfgi
and
xxrxrry
have the same regularity, law of formation, and redundancies?
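(A quick sketch in Python of the wheel as I read it, assuming the
shift is the zero-based position of the letter in the message,
which is what "abcde" -> "aaaaa" suggests:

def wheel(msg):
    # Shift each letter back by its zero-based position in the message.
    return ''.join(chr((ord(c) - ord('a') - i) % 26 + ord('a'))
                   for i, c in enumerate(msg))

print(wheel("abcde"))     # -> aaaaa
print(wheel("abddfgi"))   # -> aababbc

Note that this same wheel maps "abddfgi" to "aababbc", which has
exactly the shape of "xxrxrry": the regularity is there, but
hidden behind the code.)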
....
> As an attempt to give an example, I will try to develop what
> appears to be below the concept "apple", following this pyramid
> structure. On the left are the structures, at the right some
> comments:
>
> "apple"                         Tip of iceberg, highest level
>
> [round, spherical]
> [red, occasionally green]       Intermediate level, structures made
> [smells this way XXXX]          of low level concepts
> [tastes this way YYYY]
> [also tastes like ZZZZ]
>
>  _/_
> /   \  [typical red spectrum]   Lowest level, structures
> |   |  [YYYY patterns of taste] close to sensory signals
> \___/  .....
You see, when I read what you write it looks like you're saying the
symbols have no intrinsic meaning, yet it appears to me you can't
look for redundancies unless you assume there is a consistency of
meaning whenever the symbol appears. You may cover this under
your "law of formation" language. I may be misusing the word
"semantic content." I consider this equivalent to what I am proposing
you may mean by "law of formation." If this is correct then the symbols
are given their semantic content by the way they are formed.
Maybe by "intrinsic meaning" you mean "well defined and known a priori
meaning."
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 07 May 1999 00:00:00 GMT
Message-ID: <3733348b@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 7 May 1999 18:44:27 GMT, 129.37.182.251
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Gary Forbis wrote in message <926090259.936.94@news.remarQ.com>...
>Sergio Navega <snavega@ibm.net> wrote in message
>
>> Any regularity that we find in such exemplars will give us some
>> information about the surface structure of natural language (syntax)
>> and some levels of semantics (not very deep).
>
>As I understand it, you are considering this situation at all levels.
>I've used the example above in order to understand your position.
>I could have written:
>
>Do:
> aababbc
>and:
> xxrxrry
>have the same regularity, law of formation, and redundancies?
>
>It seems to me you've snuck in semantics by even referring to symbols.
>How do you look for regularities in low level signals unless you assume
>the signals have content expressed as regularities?
>
Thanks, Gary, for keeping this thread alive. What you asked is
probably among the key questions. I'll try to rephrase your question,
splitting it into two; please check if this is what you meant:
a) How do I know 'what to use' in order to detect regularities in
a low level signal?
b) How do I know that the meaningful content of a signal is embedded
in regularities in that signal?
Item a) I will try to answer later. I'll start with b).
I don't know how to justify this; I have only some clues.
I don't see any way other than assuming that we have chaos and we
have order. From chaos alone, we can't extract anything. From
order alone, we can't extract anything. But it seems that we *can*
extract something if chaos and order alternate in time. I
hypothesize that we are able to perceive things from this
chaos/order dance. This leads me to postulate the most basic and
fundamental principles of intelligence:
1) Perception of repetition
2) Perception of anomaly
Item 1) appears when we start noticing things that repeat in time.
The moment in which we perceive this repetition is important: it
grabs our attention. After some time, it is no longer important.
Item 2) appears when, suddenly, after a lot of repetitions, we
perceive something that is not repeating. Our attention is again
grabbed by this occurrence.
When we move to a new house, the first night will be strange. We
will be focused on each new "click" and "plick" that we hear.
After some days, those will be normal and we will not care.
>Consider the code wheel where the letters are shifted the number
>of letters equal to their position in the message. That is,
>"abcde" gets translated to "aaaaa." You may not know if such a
>code is being used.
>
>Do:
> abddfgi
>and
> xxrxrry
>have the same regularity, law of formation, and redundancies?
>
>....
>
That's exactly the kind of example I needed.
Let's turn our attention for a while to animal vision.
Lateral inhibition in the visual cortex columns happens to
assemble edge detectors. We have specialized areas able to detect
movement. We have specialized areas in the visual cortex to
process colors. All this is what we can call "innate feature
detectors": machinery that does a first-level processing of the
signal and transforms it into another signal, full of the
*important regularities*. Evolution "designed" these mechanisms
because they were important for our survival.
Back to your example: imagine that this special organism, in
charge of decoding the code wheel that you've proposed, had a lot
of innate, pre-wired and primitive mechanisms. Here's a simple
list of them:
a) Method that recognizes all letters of the alphabet
b) Method to get the alphabetic successor of a letter
c) Method to get the alphabetic predecessor of a letter
d) Method to compare two letters for equality
e) Method to compare two letters for inequality
f) Method to return the level of alphabetical proximity of two
letters
g) Method to count numbers
h) and others.
Somebody designed these primitive methods. As I said, in animals
and humans, the "designer" was nature. Our task is to devise a
mechanism that, using only these primitive operations, discovers
the code by a process of learning. So the code wheel is the
"world" of this computer, and its "senses" sense exemplars.
Initial exemplars are these:
Input    Output
abcde    aaaaa
adfig    acdfc
...      ...
The process starts by randomly selecting one of the primitive
operations and applying it to the exemplars furnished (our sensory
signal). Does that reveal anything regular? Remember that we're
looking for operations whose results recur.
After thousands of unsuccessful experiments, the program will note
that when it takes the predecessor of the second character, it
produces the correct character of the output. This occurrence
alone is not much, but as it *repeats* when using other exemplars,
the system starts to assign to that method a greater probability
of being right (the program kept one hypothesis and, using new
evidence, "tested" that hypothesis; the initial discovery was fed
by random exploration).
This same process happens with all the other characters of the
string. The whole computation is very demanding, and it must be
done by a bunch of parallel processes.
However, after some time you will end up with a lot of *probable*
methods, rules that have emerged because they worked well several
times. Then, when processing new exemplars, the random process of
selection will *pick more often* the methods that worked well in
the past (it will think "inductively", presuming that what worked
well in the past will work well now). And more: the methods that
worked well would be analogically reused to conceive the solution
of the next step, so by the third or fourth character the program
would have perceived the "law of formation" of the code wheel (it
knows about counting, and a parallel process would detect this
"regularity": that each character was "predecessed" a number of
times equal to its position).
Maybe this whole process I've described could have been done by a
neural network of suitable architecture (although I have some
doubts about this). The difference is that this method becomes
*progressively more efficient* over time. The "training" time of
this method may be comparable to that of an ANN, but only in the
initial stages. After that, this method will give more and more
privilege to the best methods, while ANNs will always have to
calculate everything from the beginning. This progressive efficacy
in solving a problem is one of the greatest advantages of
learning.
All this coincides with the way we appear to handle an unknown
problem for the first time. We struggle, fight and burn our
neurons but, after some time, we grasp some points that make us
more effective in solving the problem. After some time, we may
become an expert who solves the problem *immediately*, in the
blink of an eye. We have, at that time, cracked the "code wheel"
of the problem.
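For concreteness, here is a minimal sketch in Python of the story
above. It is not a full system: the only innate machinery used is
the "predecessor" primitive plus equality testing, hypotheses are
found by random exploration, and a hypothesis is reinforced every
time it repeats across exemplars (the exemplars and the trial
budget are, of course, made up):

import random
from collections import Counter

def pred(c, k):
    # Primitive: the alphabetic predecessor of c, taken k times.
    return chr((ord(c) - ord('a') - k) % 26 + ord('a'))

EXEMPLARS = [("abcde", "aaaaa"), ("adfig", "acdfc")]   # the "senses"

def learn(exemplars, trials=3000, rng=random.Random(1)):
    n = len(exemplars[0][0])
    evidence = [Counter() for _ in range(n)]
    for _ in range(trials):
        i = rng.randrange(n)         # probe a random position...
        k = rng.randrange(26)        # ...with a random hypothesis
        if all(pred(inp[i], k) == out[i] for inp, out in exemplars):
            evidence[i][k] += 1      # it worked again: reinforce it
    return [ev.most_common(1)[0][0] for ev in evidence]

print(learn(EXEMPLARS))              # -> [0, 1, 2, 3, 4]

A parallel process can now notice the higher-level regularity: the
number of "predecessions" equals the character's position -- the
law of formation of the code wheel.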
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: The Baby in a Vat
Date: 08 May 1999 00:00:00 GMT
Message-ID: <7h1rcu$6c@journal.concentric.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <3733348b@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net>
writes:
>Gary Forbis wrote in message <926090259.936.94@news.remarQ.com>...
>>Sergio Navega <snavega@ibm.net> wrote in message
>>
>>> Any regularity that we find in such exemplars will give us some
>>> information about the surface structure of natural language (syntax)
>>> and some levels of semantics (not very deep).
>>
>>As I understand it, you are considering this situation at all levels.
>>I've used the example above in order to understand your position.
>>I could have written:
>>
>>Do:
>> aababbc
>>and:
>> xxrxrry
>>have the same regularity, law of formation, and redundancies?
Sergio, allow me to interject a response to Gary...
To developed human perception there certainly seems to be a similarity in
pattern.
But the similarity is abstract, a matter of counting and rhythm. I don't
think we'd notice it at first, not at the early levels of analysis and
certainly not from two examples. Any algorithm that begins by looking
directly for patterns of that sort will fall apart under the weight of a
combinatorial explosion of possibilities. I think we need to build up to
recognizing patterns like this through many levels of more primitive
functions.
If those patterns recur frequently in the data we might notice the
recurring duplication of "a", "b", "x" and "r", and generate 4 new
signals triggered for these short runs. Call them A, B, X, and R.
Relaying the information through another layer of
mutually-inhibiting signal generators, the unpaired instances may
get new signals of their own, a', b', x', and r'.
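(A tiny sketch in Python of those first two layers -- details such
as writing the run-triggered signal as an uppercase twin, and
priming every unpaired instance, are my own simplifications:

def relabel(stream):
    out, i = [], 0
    while i < len(stream):
        if i + 1 < len(stream) and stream[i] == stream[i + 1]:
            out.append(stream[i].upper())    # "aa" triggers signal A
            i += 2
        else:
            out.append(stream[i] + "'")      # a lone "a" becomes a'
            i += 1
    return out

print(relabel("aababbc"))   # -> ['A', "b'", "a'", 'B', "c'"]
print(relabel("xxrxrry"))   # -> ['X', "r'", "x'", 'R', "y'"]

The next layer can then watch for clusters among these new
signals.)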
Then we might notice that {A,a',B,b',c} tend to show up at the same time,
and generate a new signal P for that cluster, probably triggered when 3 or
4 of the set appears in any order within a few moments of each other.
Similarly we might detect the presence of some subset of {X,x',R,r',y'} and
generate Q.
What happens next depends on the behavior of the data... Do P and Q
associate with something else? What about a,b,c,a',b',x',r',A,B,X, and R?
Do they form other groups? What can we predict about their occurrences?
If we do find associations for P or Q it may eventually turn out that it is
useful to distinguish them more carefully, for example to attend to the
ordering of their components and to distinguish the presence of all
elements in the right order. And at some level we might notice a
fine-grained structural similarity in the rhythm of the sequences.
But with noisy inputs we can't start out being so picky... we're looking
for broad regularities to provide context for a more detailed examination.
In any case, the question is ill-formed. A particular set of signals
doesn't have a specific "law of formation" in itself. It may fit any
number of patterns at once, and which ones we perceive depend heavily on
our experience with the signals involved and the context in which they are
presented.
>> It seems to me you've snuck in semantics by even referring to symbols.
>> How do you look for regularities in low level signals unless you assume
>> the signals have content expressed as regularities?
Calling the signals "symbols" is misleading. Perhaps "arbitrary
tokens" would be better. They are distinguishable events in an
alphabet dictated by the architecture of the system in which they
occur. The particular selection of signals observed at any given
time is influenced in some manner by unknown processes, some of
which may be external to the system.
If there are regularities in the signals, they reflect some regularity in
the unknown processes which caused them to appear... any particular
instance of any pattern may arise from chance combination among random
variables, but multiple recurrence of any identifiable pattern presumably
has a reason, somewhere. Noticing distinguishable recurring groupings
among the accessible signals is the only way of getting a handle on the
behavior of whatever may be causing them.
Regularities in patterns of signals are directly observable,
without need for assumptions as to the content or meaning of the
signals being observed. Meaning is derivative, not assumed. P
comes to mean that Q will probably follow and that there is a Z
somewhere in the neighborhood, because that is what we observe to
happen. That's all there is; there is no other, more fundamental
meaning of meaning to be discovered.
>> Consider the code wheel where the letters are shifted the number
>> of letters equal to their position in the message. That is,
>> "abcde" gets translated to "aaaaa." You may not know if such a
>> code is being used.
If our inputs were encrypted so that surface regularities were always
carefully obscured, we'd never learn anything. That's what encryption is
for: to hide meanings.
Fortunately Nature is indifferent rather than Machiavellian, and makes no
deliberate effort to systematically obscure the messages we receive through
our senses.
The code wheel spins, but lazily. It stays in the same place for
milliseconds or seconds at a time, and when it moves it affects many
channels in parallel so that we can detect redundancies in the simultaneous
changes in different signals. It tends to return to similar positions at
different times and to step through the same sequences over longer periods,
so that eventually we can unravel the pattern of its spinning, and learn to
predict when abcde will show up as aaaaa.
Bill Modlin
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: The Baby in a Vat
Date: 10 May 1999 00:00:00 GMT
Message-ID: <926343438.704.70@news.remarQ.com>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net>
X-Priority: 3
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2014.211
X-Complaints-To: newsabuse@remarQ.com
X-Trace: 926343438.704.70 F4ET54IMA8050D12BC usenet1.supernews.com
Organization: Posted via RemarQ Communities, Inc.
X-MSMail-Priority: Normal
NNTP-Posting-Date: Mon, 10 May 1999 13:37:18 GMT
Newsgroups: comp.ai.philosophy
I'm sorry for the deletions. I'm not trying to avoid anything,
just focus on particular points so I may understand.
<modlin@concentric.net> wrote in message
news:7h1rcu$6c@journal.concentric.net...
> In <3733348b@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net>
> writes:
> >Gary Forbis wrote in message <926090259.936.94@news.remarQ.com>...
> >> It seems to me you've snuck in semantics by even referring to
> >> symbols. How do you look for regularities in low level signals
> >> unless you assume the signals have content expressed as
> >> regularities?
>
> Calling the signals "symbols" is misleading. Perhaps "arbitrary
> tokens" would be better. They are distinguishable events in an
> alphabet dictated by the architecture of the system in which they
> occur. The particular selection of signals observed at any given
> time is influenced in some manner by unknown processes, some of
> which may be external to the system.
A linguist might be able to straighten this out for me.
Something I remember is that tracks in the woods are a sign of a
bear's passing but not a symbol of a bear's passing. I believe
semantics deals with the meaning of signs as well as the meaning
of symbols.
> If there are regularities in the signals, they reflect some regularity in
> the unknown processes which caused them to appear...
You see, this looks like semantics to me. By postulating that
regularities in the signals (and I hesitate to call them signals,
since this implies information is carried by variations in the
medium) reflect regularities in the unknown processes, it seems to
me meaning has been snuck in.
Can there be syntactical information without some level of semantic content?
Don't use constraints fall under semantic information? For instance,
"I are happy" is not grammatically correct. The syntactic error is due
to a part
of the semantic content of the words "I" and "are," in particular, the
quantifier portion of their semantic content.
OK, we want to say some detectable variation is a signal. Further we want
to say regularities are signs of regularities elsewhere and deal with them
symbolically. How do we do this without assuming the detectable variations
have semantic content?
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 10 May 1999 00:00:00 GMT
Message-ID: <373758b5@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <926343438.704.70@news.remarQ.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 10 May 1999 22:07:49 GMT, 166.72.29.253
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Gary Forbis wrote in message <926343438.704.70@news.remarQ.com>...
>
>Can there be syntactical information without some level of semantic content?
>Don't use constraints fall under semantic information? For instance,
>
>"I are happy" is not grammatically correct. The syntactic error is due
to a
>part
>of the semantic content of the words "I" and "are," in particular,
the
>quantifier portion of their semantic content.
>
"I are happy" is syntactically incorrect, but semantically correct.
It is the same as "<pronoun> <to be> <attribute>", which is valid
semantics.
"Colorless green ideas sleep furiously" is a famous phrase by Chomsky
that is syntactically correct but semantically incorrect.
I still don't understand what your main point is.
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: The Baby in a Vat
Date: 10 May 1999 00:00:00 GMT
Message-ID: <7h85q1$stc@journal.concentric.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <926343438.704.70@news.remarQ.com>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <926343438.704.70@news.remarQ.com>, Gary Forbis writes:
>I'm sorry for the deletions. I'm not trying to avoid anything, just focus
>on particular points so I may understand.
Perfectly reasonable.
> Forbis, previously:
>
>> It seems to me you've snuck in semantics by even referring to symbols.
>> How do you look for regularities in low level signals unless you
>> assume the signals have content expressed as regularities?
> Modlin:
>
>> Calling the signals "symbols" is misleading. Perhaps "arbitrary tokens"
>> would be better. They are distinguishable events in an alphabet dictated
>> by the architecture of the system in which they occur. The particular
>> selection of signals observed at any given time is influenced in some
>> manner by unknown processes, some of which may be external to the
>> system.
> Forbis:
> A linguist might be able to straighten this out for me.
>
> Something I remember is that tracks in the woods are a sign of a bear's
> passing but not a symbol of it. I believe semantics deals with the
> meaning of signs as well as the meaning of symbols.
Ok... I take the word loosely to refer to anything to do with "meanings"
and their mapping to other things. I don't know what a linguist would say.
> Modlin:
>
> If there are regularities in the signals, they reflect some regularity
> in the unknown processes which caused them to appear...
> Forbis:
>
> You see, this looks like semantics to me. By postulating that
> regularities in the signals (and I hesitate to call them signals, since
> this implies information is carried by variations in the media) reflect
> regularities in the unknown processes, it seems to me meaning has been
> snuck in.
Well, it is a basis for a discussion of the semantics of the signals.
It is an explanation of why we might be interested in regularities...
we care about them because they probably mean something: they tell us
something about something other than themselves, namely the unknown
processes that generated them. So working out algorithms to discover
regularities is a step toward discovering semantics.
But it isn't being "snuck in"; it is not part of the description of
the algorithms for finding regularities. We find regularities by looking
at the behavior of the signals themselves, without regard to any possible
meanings behind that behavior.
The point of the exercise is eventually to discover semantics, but we
aren't assuming we know any semantics to start with.
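To make that concrete, here is a toy sketch (Python, purely illustrative,
not my actual algorithm) of finding recurring combinations using nothing
but the tokens themselves; no meanings are consulted anywhere:

  from collections import Counter

  def recurring(stream, width=2, min_count=3):
      # Count every window of `width` adjacent tokens and keep the
      # ones that repeat.  Only token identity is used.
      windows = Counter(tuple(stream[i:i + width])
                        for i in range(len(stream) - width + 1))
      return {w: n for w, n in windows.items() if n >= min_count}

  print(recurring(list("aababbcaababbcaababbc")))

Whatever such a procedure finds is found syntactically; any semantics
comes later, from what the discovered regularities turn out to predict.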
> Forbis:
>
> Can there be syntactical information without some level of semantic
> content? Don't usage constraints fall under semantic information?
> For instance,
>
> "I are happy" is not grammatically correct. The syntactic error is due
> to a part of the semantic content of the words "I" and "are," in
> particular, the quantifier portion of their semantic content.
Hmm. I'm not sure of the technicalities here, but if you can tell that
this sequence is wrong just by looking at the tokens, without knowing
what any of it means, then I think it would be called a syntactic rule,
not a semantic one. But I'm not sure why it matters...?
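To illustrate the distinction, a toy check like the following (Python;
the pair table is invented purely for illustration) flags the sequence
using only the tokens, with no reference to what any of them mean:

  # A purely formal rule: reject token sequences containing a
  # forbidden adjacency.  Nothing here knows what the words mean.
  BAD_PAIRS = {("I", "are"), ("he", "am"), ("they", "is")}

  def syntactically_ok(tokens):
      return all(p not in BAD_PAIRS for p in zip(tokens, tokens[1:]))

  print(syntactically_ok("I are happy".split()))  # False
  print(syntactically_ok("I am happy".split()))   # True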
> OK, we want to say some detectable variation is a signal. Further we want
> to say regularities are signs of regularities elsewhere and deal with them
> symbolically. How do we do this without assuming the detectable
> variations have semantic content?
I'm not sure what is bothering you here. I think you are mixing levels...
the assumptions are part of our meta-discussion, but not part of the
mechanisms we are discussing.
We (or at least I) don't want to deal with anything "symbolically".
I just want my algorithms to discover regularities in the events to
which they have access.
I do hope and expect that this will lead to the emergence of symbol
processing and understanding and a lot of other things... but I'm
not designing anything at a symbolic processing level. That level
has to emerge on its own.
???
Somehow I don't think this got us a lot closer to understanding each
other.... care to try asking again?
Bill Modlin
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 11 May 1999 00:00:00 GMT
Message-ID: <37382c47@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <926343438.704.70@news.remarQ.com>
<7h85q1$stc@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 May 1999 13:10:31 GMT, 129.37.183.90
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7h85q1$stc@journal.concentric.net>...
>[big snip]
>We (or at least I) don't want to deal with anything "symbolically".
>I just want my algorithms to discover regularities in the events to
>which they have access.
>
>I do hope and expect that this will lead to the emergence of symbol
>processing and understanding and a lot of other things... but I'm
>not designing anything at a symbolic processing level. That level
>has to emerge on its own.
>
Bill, I find this thought reasonable; it really may be the way
symbolic cognition emerges naturally in our brain. But it can
also be wrong! Maybe there's something else needed to allow
this symbolic processing to appear by itself. What worries me
is that we may discover the need for this "something else" after
putting a lot of effort into another direction.
So my strategy is also to "think backwards": to see what happens
in the symbolic world and to try to find a connection between
this world and the primitive sensory processing side, just
for the sake of getting a glimpse. Finding anything relevant
in this "connection" may give us a clue about the nature of
this emergence.
Regards,
Sergio Navega.
From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Subject: Re: The Baby in a Vat
Date: 11 May 1999 00:00:00 GMT
Message-ID: <373ad56c.1712164@news.demon.co.uk>
Content-Transfer-Encoding: 7bit
X-NNTP-Posting-Host: chatham.demon.co.uk:158.152.25.87
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <926343438.704.70@news.remarQ.com>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@demon.net
X-Trace: news.demon.co.uk 926410868 nnrp-02:9659 NO-IDENT
chatham.demon.co.uk:158.152.25.87
MIME-Version: 1.0
Newsgroups: comp.ai.philosophy
"Gary Forbis" <forbis@accessone.com> wrote:
>Something I remember is that tracks in the woods are a sign of a bear's
>passing but not a symbol of it.
I had not heard that before: splendid. I have also not followed this
thread, having been put off by the title. This said, it seems to be
following one of the most fruitful issues that c.a.p addresses: how
does a system, as yet unformed as a system, partition itself so as to
be able to {do something}.
I suggest that the curly brackets may be where the important bodies
are buried. 'Doing something' is actually (conceptually) lots of
different things:
Assign partitions and centres of weight to external data.
Assign a key partition, which marks the external-internal divide.
Build a system of regularities which allows assignment, as above.
Build regularities to parse and use partitions and data events.
... plus a host of extras, analogous to software over and above the OS.
... some of which are hardwired or early, primitive imperatives.
Bill Modlin (whose mind I have already read incorrectly in public once
this month) seems to focus on the very first of these. The second is,
perhaps, a late arrival at the Cognition Ball, but an important one.
The third takes us to fascinating territory. It is fascinating insofar
as what is happening is a system building itself. That is, one starts
with simple regularities or with none, and the structure acts upon
itself to create order. The grammar of ordering includes - implicitly,
of course - instructions for modifying its own nature.
The bear prints become symbols, or anyway have the potential to take
on an agency within the system that transcends their origins as data.
They start as modulations of assorted systems of neural clumping that
in their turn flag what an observer would call patches and outlines
(etcetera), but they end as evocation of webs of association amongst
things assigned as centres of weight, behind partitions. Further, they
- these data or a spread of data, or internal parsing of what has been
established - alter the nature of these partitions.
Data thus change the grammar of ordering. The changed grammar alters
what is done with the data. Information systems (perhaps alone) have
this capacity for self-surgery, in that they alter what they are, and
thus alter the grounds on which auto-surgery is performed, and thus
change further...
And what is changed is twofold: not merely changes effected by data
that has been stashed away, from which an unchanged system now draws
new conclusions and action, but a change in *what the information
system is*. That is, the basic framework of description that an
external observer would need to employ is being acted upon when the
system performs auto-surgery.
Novel writers sometimes use the conceit of a character who is both in
the novel and also the author of it. James Branch Cabell went one
further, including the god who created the author, and wrote both about
the way in which the god came into existence as a result of
actions within the novel and about the god creating the author
who wrote the novel that created the god. (Jurgen, 1919.)
The fourth category does the same thing, but grounded in data and a
pre-established (if evolving) framework. This conceals what is
happening.
The remaining two categories (to which many more can be added, of
course) tend to generate much heat and little light. One can only
usefully discuss them, I suggest, when one has a good handle on how
natural intelligence performs tasks 1-4.
_______________________________
Oliver Sparrow
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 10 May 1999 00:00:00 GMT
Message-ID: <3736db27@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 10 May 1999 13:12:07 GMT, 200.229.240.116
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message <7h1rcu$6c@journal.concentric.net>...
>In <3733348b@news3.us.ibm.net>, "Sergio Navega" <snavega@ibm.net> writes:
>>Gary Forbis wrote in message <926090259.936.94@news.remarQ.com>...
>>>Sergio Navega <snavega@ibm.net> wrote in message
>>>
>>>> Any regularity that we find in such exemplars will give us some
>>>> information about the surface structure of natural language (syntax)
>>>> and some levels of semantics (not very deep).
>>>
>>>As I understand it, you are considering this situation at all levels.
>>>I've used the example above in order to understand your position.
>>>I could have written:
>>>
>>>Do:
>>> aababbc
>>>and:
>>> xxrxrry
>>>have the same regularity, law of formation, and redundancies?
>
>Sergio, allow me to interject a response to Gary...
>
>To developed human perception there certainly seems to be a similarity in
>pattern.
>
>But the similarity is abstract, a matter of counting and rhythm. I don't
>think we'd notice it at first, not at the early levels of analysis and
>certainly not from two examples. Any algorithm that begins by looking
>directly for patterns of that sort will fall apart under the weight of a
>combinatorial explosion of possibilities. I think we need to build up to
>recognizing patterns like this through many levels of more primitive
>functions.
>
>If those patterns recur frequently in the data we might notice the
>recurring duplication of "a", "b", "x" and "r", and generate 4 new
>signals triggered for these short runs. Call them A, B, X, and R.
>
>Relaying the information through another layer of mutually-inhibiting
>signal generators, the unpaired instances may get new signals of their own,
>a', b', x', and r'.
>
>Then we might notice that {A,a',B,b',c} tend to show up at the same time,
>and generate a new signal P for that cluster, probably triggered when 3 or
>4 of the set appears in any order within a few moments of each other.
>Similarly we might detect the presence of some subset of {X,x',R,r',y} and
>generate Q.
>
>What happens next depends on the behavior of the data... Do P and Q
>associate with something else? What about a,b,c,a',b',x',r',A,B,X, and R?
>Do they form other groups? What can we predict about their occurrences?
>[snip]
Bill, thanks for your intervention, it was on track with what I was
thinking. A relevant point now comes up. I know that you propose
to implement this process as a network of pre-built nodes with
modified Hebbian adjustment of weights. It's clear that your process
should work. But what I propose is different, although similar in
some principles.
I propose that each snippet of input that appears to repeat be
transformed into a node, and that the links among these nodes
(also the creation of these links to neighboring nodes)
be adjusted by some modified-Hebbian mechanism, as you suggest.
This is like a network of connected nodes in which *each node* is
made of *relevant parts* of the sensory input. The nodes are not
just a summing point; they have *internal structure*, and they
accumulate into "internal references", "symbols" just like your
a', x', etc., which are also nodes. The same process may be applied
over those accumulated nodes, *just as if they were input signals*,
and this can be done because they are not just nodes: they have
structure, they have data. Then we start to recognize similarity
in the accumulated similarities, exactly the kind of bootstrapping
process that we agree must happen.
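A rough sketch of what I have in mind, in Python (names and details
are only illustrative; the real mechanism would be much richer):

  from collections import Counter

  class Node:
      """A node that carries the snippet it stands for, so the same
      discovery process can later run over node outputs as well."""
      def __init__(self, snippet):
          self.snippet = snippet   # internal structure: the data itself
          self.links = {}          # neighbor -> weight, Hebbian-adjusted

  def nodes_from_stream(stream, width=3, min_count=2):
      # Every snippet of `width` tokens that repeats becomes a node.
      counts = Counter(tuple(stream[i:i + width])
                       for i in range(len(stream) - width + 1))
      return [Node(s) for s, n in counts.items() if n >= min_count]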
Why do I think of this structure instead of a pre-built network that
only adjusts weights? Because this appears to be significant if
we take into consideration the self-organizing and plastic
nature of our brain. I think we are misled into thinking that
our brain is born with a predetermined network structure. What
appears to happen is the creation and destruction of links
as experiences accumulate. Destruction of unused links, in
particular, is *very relevant*. This makes me think of a
different situation:
Our brain may behave just like a self-growing, "initially empty"
set of nodes and links. As experiences accumulate, the architecture
of the network is molded, developed, in terms of the number of
nodes and links among them.
We may be misled by the fact that we have a lot of neurons
initially. But in practice, what may be happening is the functional
equivalent of a "seed" that continually grows into a fully
branched tree. The "shape" of that tree (architectural organization)
is something that was built as a function of inputs. The nodes
are fruits, with content and structure.
This also allows a multitude of parallel, different processes to
run over those "nodes with structure", like that of spreading
activation (one stimulus "fires" one node that allows this
activation to traverse in several parallel fronts).
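For the spreading activation part, another toy sketch (illustrative
only), where `links` maps each node to its weighted neighbors:

  def spread(links, start, steps=2, decay=0.5):
      # One stimulus "fires" a node; activation then traverses the
      # weighted links in parallel fronts, decaying as it goes.
      act = {start: 1.0}
      frontier = [start]
      for _ in range(steps):
          nxt = {}
          for node in frontier:
              for nb, w in links.get(node, {}).items():
                  nxt[nb] = nxt.get(nb, 0.0) + act.get(node, 0.0) * w * decay
          for nb, a in nxt.items():
              act[nb] = act.get(nb, 0.0) + a
          frontier = list(nxt)
      return act

  print(spread({"ball": {"push": 0.8}, "push": {"arm": 0.6}}, "ball"))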
These ideas obviously need fleshing out, and that's why I urge
you to publish your texts so we can exchange more on this matter.
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: The Baby in a Vat
Date: 10 May 1999 00:00:00 GMT
Message-ID: <7h85ls$stc@journal.concentric.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <3736db27@news3.us.ibm.net>, "Sergio Navega" writes:
> Bill, thanks for your intervention, it was on track with what I was
> thinking. A relevant point now comes up. I know that you propose
> to implement this process as a network of pre-built nodes with
> modified Hebbian adjustment of weights. It's clear that your process
> should work. But what I propose is different, although similar in
> some principles.
>
> I propose that each snippet of input that appears to repeat be
> transformed into a node, and that the links among these nodes
> (also the creation of these links to neighboring nodes)
> be adjusted by some modified-Hebbian mechanism, as you suggest.
>
> This is like a network of connected nodes in which *each node* is
> made of *relevant parts* of the sensory input. The nodes are not
> just a summing point; they have *internal structure*, and they
> accumulate into "internal references", "symbols" just like your
> a', x', etc., which are also nodes. The same process may be applied
> over those accumulated nodes, *just as if they were input signals*,
> and this can be done because they are not just nodes: they have
> structure, they have data. Then we start to recognize similarity
> in the accumulated similarities, exactly the kind of bootstrapping
> process that we agree must happen.
>
Sergio, I'm having trouble seeing the difference that you
clearly think you are making...
You suggest that a snippet of recurrent input is transformed into a
node. I assume that a "snippet" is here some identifiable combination
or pattern of signals currently available?
Presumably this means that you have some mechanism for noticing
that some combination of signals repeats itself.
Then, after you notice this circumstance, you allocate a node which
has inputs from the parts of the combination constituting the pattern
which it will represent.
This new node will thereafter generate a new signal whenever it finds
the right input signals. Its new signal can then be noticed to be part
of still more combinations, and so on.
How is that different?
I said we allocate a bunch of nodes with initially undefined functions.
They each look for a recurring identifiable combination or pattern
of existing signals, and each starts generating a new signal to
represent that combination once they find it.
We both wind up with nodes that generate an output when some snippet
of input is observed, we both wind up recognizing further combinations
including the newly-created signals once they become available.
The only difference that I can see is that I'm explaining the algorithm
for finding recurrent snippets and building it into the function of
the node itself, while you are treating the "find a snippet" function
as a separate operation which you leave unspecified.
I like my way better. Since my nodes find their own snippets they
know how to adjust them as more evidence arrives... they can start out
with a rough definition and refine it over time. In your description,
with a separate process discovering the combinations, you have to get
it right from the start.
You emphasise the destruction of unused links, contrasting that with
adjustment of weights. You also mention creating new links.
I think you are inventing differences where none exist.
How is "destroying a link" different from setting the weight of a link
to an insignificant value?
How is "creating a link" different from increasing the weight of a
previously insignificant link to the point where it is effective?
"Creating links" and "destroying links" are just limit-cases of
weight adjustment.
When you work out your algorithms for creating and destroying links,
you'll find that they are nothing more or less than somehow keeping
track of the relationships between two nodes, and changing the
weight of the connections between them in accordance with your
observations. Which is just what weight adjustment is all about.
You said destroying links was most important. But I'm sure when
you think it over you will agree that there is no important difference
between disconnecting two nodes by physically destroying the link
and disconnecting them by setting the link weight to 0. If you
want to physically destroy the links, fine... just destroy
links that have been weight-0 for a long time. That's what really
happens in the brain... we start with perhaps 200,000 links per node,
and after a few years wind up with only an average of 20,000 or
so, keeping only the most active 10%. The rest atrophy from
disuse.
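Put as code, the identity is plain. A toy sketch (illustrative), with
`links` a dict from neighbor to weight:

  def hebb(links, neighbor, coactive, lr=0.1, decay=0.99):
      # Ordinary weight adjustment: reinforce on co-activity,
      # otherwise let the weight drift back toward zero.
      w = links.get(neighbor, 0.0) * decay
      links[neighbor] = w + lr if coactive else w

  def prune(links, eps=1e-3):
      # "Destroying a link" is just dropping a weight that has sat
      # near zero; "creating" one is hebb() raising a dormant weight
      # until it matters.
      for k in [k for k, w in links.items() if abs(w) < eps]:
          del links[k]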
I have an impression that you are bothered by the fact that in my
description we can only "create" (increase the weight of) links which
exist from the beginning, whereas you imagine that you can create
links arbitrarily, connecting any nodes as the need is perceived.
But the problem is that you can't perceive the need for a link
between two nodes without somehow bringing the signals from those
nodes together at some point: noticing the need for a link requires
the pre-existence of a link to monitor the relationship between the
two nodes.
In small nets, you can imagine that all nodes start out connected
to all other nodes.
But with trillions of nodes, full connection is impossible. You
simply can't have trillions-squared synapses. They won't fit.
So you have to make do with 200,000 to choose from for each node.
Bill Modlin
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 11 May 1999 00:00:00 GMT
Message-ID: <37382c55@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
<7h85ls$stc@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 11 May 1999 13:10:45 GMT, 129.37.183.90
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7h85ls$stc@journal.concentric.net>...
>In <3736db27@news3.us.ibm.net>, "Sergio Navega" writes:
>
>> Bill, thanks for your intervention, it was on track with what I was
>> thinking. A relevant point now comes up. I know that you propose
>> to implement this process as a network of pre-built nodes with
>> modified Hebbian adjustment of weights. It's clear that your process
>> should work. But what I propose is different, although similar in
>> some principles.
>>
>> I propose that each snippet of input that appears to repeat be
>> transformed into a node, and that the links among these nodes
>> (also the creation of these links to neighboring nodes)
>> be adjusted by some modified-Hebbian mechanism, as you suggest.
>>
>> This is like a network of connected nodes in which *each node* is
>> made of *relevant parts* of the sensory input. The nodes are not
>> just a summing point; they have *internal structure*, and they
>> accumulate into "internal references", "symbols" just like your
>> a', x', etc., which are also nodes. The same process may be applied
>> over those accumulated nodes, *just as if they were input signals*,
>> and this can be done because they are not just nodes: they have
>> structure, they have data. Then we start to recognize similarity
>> in the accumulated similarities, exactly the kind of bootstrapping
>> process that we agree must happen.
>>
>
>Sergio, I'm having trouble seeing the difference that you
>clearly think you are making...
>
Don't be troubled, I have the same feeling... but apparently there
is something different. What follows should be taken as my process
of "thinking out loud" (this is really a bad thing to do here
in c.a.p.; people often criticize points that are temporary, but
I like to use c.a.p. as a brainstorming mechanism).
>You suggest that a snippet of recurrent input is transformed into a
>node. I assume that a "snippet" is here some identifiable combination
>or pattern of signals currently available?
>
Exactly.
>Presumably this means that you have some mechanism for noticing
>that some combination of signals repeats itself.
>
You will not like this (:-) but, yes, there's something to notice
things that repeat. Needless to say, I'm assuming that these are
innate (sheesh).
>Then, after you notice this circumstance, you allocate a node which
>has inputs from the parts of the combination constituting the pattern
>which it will represent.
>
Like that, with some subtle changes regarding where the node takes
its information from. I'm trying to bring in here the figure of
short-term memory: something that accumulates a recent history of
inputs in a way that allows the discovery of temporally distributed
patterns.
>This new node will thereafter generate a new signal whenever it finds
>the right input signals. Its new signal can then be noticed to be part
>of still more combinations, and so on.
>
>How is that different?
>
I'm almost agreeing that the end result may be pretty much the same,
but the mechanism is more "symbolic" and less numeric. I'll try
to give an example later.
>I said we allocate a bunch of nodes with initially undefined functions.
>They each look for a recurring identifiable combination or pattern
>of existing signals, and each starts generating a new signal to
>represent that combination once they find it.
>
>We both wind up with nodes that generate an output when some snippet
>of input is observed, we both wind up recognizing further combinations
>including the newly-created signals once they become available.
>
Yes, but there's a difference in the way we look at nodes. I'm trying
to see nodes as small "feature detectors". They accumulate "typical"
entries which are compared with the current signal. They try to
generalize the entries to some extent, keeping only the common
part.
>The only difference that I can see is that I'm explaining the algorithm
>for finding recurrent snippets and building it into the function of
>the node itself, while you are treating the "find a snippet" function
>as a separate operation which you leave unspecified.
>
It is unspecified but it is easily specifiable in test runs. And
the presence of this function is biologically plausible (the
dreaded innate feature discovery mechanisms).
>I like my way better. Since my nodes find their own snippets they
>know how to adjust them as more evidence arrives... they can start out
>with a rough definition and refine it over time.
I have no doubt that your process is more general than mine and I'm
interested in discussing your approach to any level of detail that
you want. But I have a serious constraint, that of producing something
practical, of going to the "finish line", even if only to discover that
I should have done it differently. In fact, that's my main reason to
"jump over" some details: I want to cross the finish line, legless,
armless, just one eye left from the battle with the computer, but
with a better understanding of the whole problem. Then I may
return to the beginnings again.
>In your description,
>with a separate process discovering the combinations, you have to get
>it right from the start.
>
Indeed. But I think that there exist several sets of mechanisms, all
of them leading to useful results. It is more a question of finding
one set which allows good combinations. Maybe with my example this
will become clearer.
>You emphasise the destruction of unused links, contrasting that with
>adjustment of weights. You also mention creating new links.
>
>I think you are inventing differences where none exist.
>
>How is "destroying a link" different from setting the weight of a link
>to an insignificant value?
>
>How is "creating a link" different from increasing the weight of a
>previously insignificant link to the point where it is effective?
>
>"Creating links" and "destroying links" are just limit-cases of
>weight adjustment.
>
In terms of destroying links, our approaches are really similar.
I should have said previously that not only links seem to be destroyed
but also unused nodes. This allows one to reclaim the memory space
occupied by a node so it can be used for new and more significant ones.
The effect, in the long term, is the creation of an architecture that is
not a function of our initial specifications, but whose shape is strongly
directed by the experiences that the system has had.
This has the important feature of making the system optimal for solving
the kinds of problems it will face. The resultant architecture "freezes"
the kinds of operations that must be processed with maximum speed.
This is the result of an adaptation to the kinds of patterns
that the system is likely to encounter. And this is obviously more
important at the initial levels, those that have a *terrible*
problem of processing an enormous amount of data in a very
short time. Instead of always spending time looking for correlations,
the system will learn how to become more effective at extracting
the typical patterns.
>
>I have an impression that you are bothered by the fact that in my
>description we can only "create" (increase the weight of) links which
>exist from the beginning, whereas you imagine that you can create
>links arbitrarily, connecting any nodes as the need is perceived.
>
I'm seriously thinking about a way to solve this, obviously restricted
to local areas.
>But the problem is that you can't perceive the need for a link
>between two nodes without somehow bringing the signals from those
>nodes together at some point: noticing the need for a link requires
>the pre-existence of a link to monitor the relationship between the
>two nodes.
>
That's the big problem, but I'm trying to see if there is a way
to solve it. After all, in our brain dendritic spines grow between
neurons that fire often in tandem. There should be a biochemical
mechanism driving this process in our brain. What could be a
functionally similar process in our nodes?
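One candidate I'm toying with (pure speculation, sketched in Python):
allow a new link only between nodes that already share an active
neighbor, so that "noticing the need" rides on links that exist:

  def grow_links(links_of, active, seed_weight=0.01):
      # links_of: node -> {neighbor: weight}; active: nodes firing
      # together now.  Grow a weak candidate link a->b only if some
      # existing neighbor of a already connects to b (a two-hop path),
      # as a functional analogue of a dendritic spine growing between
      # neurons that often fire in tandem.
      grown = []
      for a in active:
          for b in active:
              if a != b and b not in links_of.get(a, {}):
                  if any(b in links_of.get(n, {}) for n in links_of.get(a, {})):
                      links_of.setdefault(a, {})[b] = seed_weight
                      grown.append((a, b))
      return grown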
>In small nets, you can imagine that all nodes start out connected
>to all other nodes.
>
>But with trillions of nodes, full connection is impossible. You
>simply can't have trillions-squared synapses. They won't fit.
>So you have to make do with 200,000 to choose from for each node.
>
I agree, this appears to be the greatest problem. But what amazes
me is that the brain is able to do that. Some neurons have only
100 connections, while others have 100,000. Somehow the brain
"knows" how to detect the need for a new link and then it grows
this link. I'm wondering how this could be replicated.
Now for a practical example that I'm trying to solve.
It is long but easy to follow and highly speculative.
A machine to discover arithmetic operations
-------------------------------------------
The goal is to build a program that is able to discover arithmetic
operations informed only by examples. All knowledge should be
derived by induction from examples and from interaction with a
knowledgeable operator.
Fundamental operations:
a) Recognition of equality
b) Recognition of inequality
c) Recognition of space to separate tokens
d) Memory of 3 to 7 previous entries
e) Ability to count from 1 to 4 (although it knows no digits)
Simulated operation:
This is a simulation of what the program should accomplish.
(Lines starting with '<' are the system's answers, lines
starting with // are comments, and all others are inputs
from the operator; ? denotes doubt, ! denotes positive
reinforcement, # denotes negative reinforcement.)
// entry of some tokens to memory
// definition of digits, system answers with doubt
// because it does not know entries
1
< 1 ?
2
< 2 ?
3
< 3 ?
4
< 4 ?
5
< 5 ?
6
< 6 ?
7
< 7 ?
8
< 8 ?
9
< 9 ?
// Other token definitions
succ
< succ ?
+
< + ?
=
< = ?
// Start of "teaching"
// system does not understand
1 succ 2
< 1 succ 2 ?
// system does not understand
2 succ 3
< 2 succ 3 ?
// pattern completion, reinforcement of good answer
1 succ
< 1 succ 2
!
// pattern completion with more certainty, reinforcement
2 succ
< 2 succ 3 !
!
// By the same process, teach all successors from 1 to 9
....
....
// Now start to teach operations
// system does not understand
1 = 1
< 1 = 1 ?
// system tries to see what remained constant from previous
// entries, accepts entry without complaining, reinforcement
// of that good answer
2 = 2
< 2 = 2
!
// Question pattern completion, reinforcement of good answer
3 =
< 3 = 3
!
// Teaching of "false", system does not understand
2 = 1 #
< 2 = 1 # ?
// System recognizes that what is redundant between this
// entry and previous is the inequality (comparison), receives
// positive reinforcement
3 = 1 #
< 3 = 1 # !
!
// time to try operation, system is confused
1 + 1 = 2
< 1 + 1 = 2 ?
// Again, confused
2 + 1 = 3
< 2 + 1 = 3 ?
// Again, confused, note that second operand is 1 in all cases
3 + 1 = 4
< 3 + 1 = 4 ?
// system recognizes that (among several other things discovered)
// what was constant in the last series of interactions was the
// fact that applying succ to the first will give the last element;
// this discovery is the result of random attempts and only the
// attempt that produced reasonable results remained
4 + 1 = 5
< 4 + 1 = 5 !
!
// big question as pattern completion: could it answer correctly?
5 + 1 =
< 5 + 1 = 6 !
!!
// the programmer now falls from his chair and smiles!
// now he spends the night teaching the rest of addition
// which consists of putting 2 as the second operand and
// making the system discover that the second term is the
// number of times that succ is applied over the first
// parameter
As can be seen, this system depends entirely on interaction.
It is interaction that gives the system the "path" to the knowledge.
The act of recognizing similarities is an auxiliary process in
turning the results of interactions into knowledge.
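To show how little machinery the simulation presupposes, here is a toy
Python version of just the memory-and-completion part (it only stores
reinforced lines and completes prefixes; the inductive step that
discovers succ hiding inside + is exactly the part left open):

  class TinyLearner:
      def __init__(self):
          self.memory = {}                     # tokens -> reinforcement

      def tell(self, line, reward=0):
          toks = tuple(line.split())
          self.memory[toks] = self.memory.get(toks, 0) + reward

      def complete(self, prefix):
          toks = tuple(prefix.split())
          best = max((m for m in self.memory
                      if len(m) > len(toks) and m[:len(toks)] == toks),
                     key=lambda m: self.memory[m], default=None)
          return " ".join(best) if best else prefix + " ?"

  t = TinyLearner()
  t.tell("1 succ 2"); t.tell("2 succ 3", reward=1)
  print(t.complete("2 succ"))                  # -> 2 succ 3
  print(t.complete("9 succ"))                  # -> 9 succ ?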
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 14 May 1999 00:00:00 GMT
Message-ID: <373c171b@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
<7h85ls$stc@journal.concentric.net> <373cf290.9173590@news.demon.co.uk>
<7hel13$p6b@journal.concentric.net> <373fdc0d.5704650@news.demon.co.uk>
<7hh0m0$pre@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 14 May 1999 12:29:15 GMT, 200.229.240.113
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7hh0m0$pre@journal.concentric.net>...
>[snip]
>It is unfortunate that the word "teacher" has been corrupted in the world
>of machine learning to often mean "trainer", but I would have thought you
>could keep it straight since just one post ago you were waxing eloquent
>about the marvelous order emergent from stringlet processing, with no
>trainer in sight.
>
>The role of a real teacher is to supply examples from which lessons may be
>learned, not to critique the results of the learning.
>
>Obviously there may be local artifacts that don't generalize; if there were
>not, we wouldn't need an algorithm to ferret out the ones that do. But the
>point of our discussion, or so I thought, was that certain algorithms can
>indeed successfully discriminate "good learning", which generalizes, from
>"bad learning", which does not.
>
I have some doubts regarding the correct role of the "teachers". Let's
first generalize this word to mean not only "the one who lectures us",
but also to include our world, when subject to our deliberate and
intentional querying (through experimental interaction). In this
regard, the world also teaches us things, showing the result of
our intended actions. This is important.
The role of the "teacher" is not only to provide relevant examples,
but also to act as arbiter in cases of doubt. One essential assumption
that I see implied in your previous paragraphs is that the system
is able to find good and bad artifacts, and so it is able to
generalize the good results.
What I'm posing as an additional concern is the fact that very often
we don't find only one good artifact, but *lots* of potentially
good artifacts. Then, the system must somehow decide which of the
"good" answers to keep, which ones should be chosen to be generalized.
I have reasons to believe that this is the root of the (often
unconscious) process of interaction, the way children do it.
I think that it is up to the "teacher" (world experiments included)
to not only furnish exemplars, but also to help us in that task
of "disambiguation".
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: The Baby in a Vat
Date: 14 May 1999 00:00:00 GMT
Message-ID: <7hh81j$rtg@journal.concentric.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
<7h85ls$stc@journal.concentric.net> <373cf290.9173590@news.demon.co.uk>
<7hel13$p6b@journal.concentric.net> <373fdc0d.5704650@news.demon.co.uk>
<7hh0m0$pre@journal.concentric.net> <373c171b@news3.us.ibm.net>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <373c171b@news3.us.ibm.net>, Sergio Navega writes:
>MODLIN:
> It is unfortunate that the word "teacher" has been corrupted in the world
> of machine learning to often mean "trainer", but I would have thought you
> could keep it straight since just one post ago you were waxing eloquent
> about the marvelous order emergent from stringlet processing, with no
> trainer in sight.
>
> The role of a real teacher is to supply examples from which lessons may
> be learned, not to critique the results of the learning.
>
> Obviously there may be local artifacts that don't generalize; if there
> were not, we wouldn't need an algorithm to ferret out the ones that do.
> But the point of our discussion, or so I thought, was that certain
> algorithms can indeed successfully discriminate "good learning", which
> generalizes, from "bad learning", which does not.
>NAVEGA:
> I have some doubts regarding the correct role of the "teachers". Let's
> first generalize this word to mean not only "the one who lectures us",
> but also to include our world, when subject to our deliberate and
> intentional querying (through experimental interaction). In this
> regard, the world also teaches us things, showing the result of
> our intended actions. This is important.
Certainly the world is our primary teacher. Whether we are doing any
"deliberate querying" or not. The world continuously shows us examples
of how things fit together, from which we learn categories of things
which fit together in predictable ways. Our explicit experimentation
is only a tiny part of this, coming into play only after we have
abstracted some aspects of the picture to the point where we can
postulate some explicit manipulable connection to be investigated.
A deliberate human teacher presents examples chosen to display a particular
relationship in an especially clear form, thus guiding the student to
discovery of the things the teacher wishes to teach. But the process by
which the student learns is the same, whether from carefully selected
examples provided by another person or from a hodge-podge of experience
provided by the world in general.
>NAVEGA:
> The role of the "teacher" is not only to provide relevant examples,
> but also to act as arbiter in cases of doubt. One essential assumption
> that I see implied in your previous paragraphs is that the system
> is able to find good and bad artifacts, and so it is able to
> generalize the good results.
I think perhaps you have things backward... the sole criterion for a
"good" result is that it does generalize. We don't find "good" results
first, and then generalize them. We find things that generalize, which
by definition are the "good" things.
>NAVEGA:
> What I'm posing as an additional concern is the fact that very often
> we don't find only one good artifact, but *lots* of potentially
> good artifacts. Then, the system must somehow decide which of the
> "good" answers to keep, which ones should be chosen to be generalized.
> I have reasons to believe that this is the root of the (often
> unconscious) process of interaction, the way children do it.
>
> I think that it is up to the "teacher" (world experiments included)
> to not only furnish exemplars, but also to help us in that task
> of "disambiguation".
This isn't an additional concern, it is the only concern. Furnishing
exemplars is the only reasonable way to help us in the task of
disambiguation. We find lots of artifacts in a small set of examples, the
good ones are those which can be found in a large set of examples. In
other words, the good ones are the ones that do in fact generalize. We
don't freely choose things to generalize, we look to see which actually do
generalize.
Such choices are always tentative, subject to revision as we see more
examples. From the examples available we make the best guess we can about
how to extrapolate the relationships observed to other cases, and use that
guess to guide us. But as we see more examples we adjust our understanding
to a better fit to the whole set. Sometimes a minor tweak or additional
qualification suffices, other times we have to resurrect one of those other
artifacts that we didn't focus on at first, and rearrange our whole
understanding in a major paradigm shift. But regardless of the scale,
learning is a matter of adjusting our models to fit the examples we see.
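The whole stance fits in a few lines of toy Python (illustrative): keep
whatever held in the examples seen so far, and call "good" only what
continues to hold as more arrive.

  def keep_good(candidates, seen, new, threshold=0.9):
      # Rules suggested by a small sample survive only if they still
      # hold on examples they were not derived from.
      suggested = [c for c in candidates if all(c(x) for x in seen)]
      return [c for c in suggested
              if sum(c(x) for x in new) / len(new) >= threshold]

  rules = [lambda x: x % 2 == 0,       # generalizes
           lambda x: x < 7]            # local artifact of the sample
  print(len(keep_good(rules, [0, 2, 4, 6], [8, 10, 12, 14])))  # -> 1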
Bill Modlin
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 14 May 1999 00:00:00 GMT
Message-ID: <373c6333@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
<7h85ls$stc@journal.concentric.net> <373cf290.9173590@news.demon.co.uk>
<7hel13$p6b@journal.concentric.net> <373fdc0d.5704650@news.demon.co.uk>
<7hh0m0$pre@journal.concentric.net> <373c171b@news3.us.ibm.net>
<7hh81j$rtg@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 14 May 1999 17:53:55 GMT, 129.37.182.252
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7hh81j$rtg@journal.concentric.net>...
>In <373c171b@news3.us.ibm.net>, Sergio Navega writes:
>
>>MODLIN:
>> It is unfortunate that the word "teacher" has been corrupted in the world
>> of machine learning to often mean "trainer", but I would have thought you
>> could keep it straight since just one post ago you were waxing eloquent
>> about the marvelous order emergent from stringlet processing, with no
>> trainer in sight.
>>
>> The role of a real teacher is to supply examples from which lessons may
>> be learned, not to critique the results of the learning.
>>
>> Obviously there may be local artifacts that don't generalize; if there
>> were not, we wouldn't need an algorithm to ferret out the ones that do.
>> But the point of our discussion, or so I thought, was that certain
>> algorithms can indeed successfully discriminate "good learning", which
>> generalizes, from "bad learning", which does not.
>
>
>>NAVEGA:
>> I have some doubts regarding the correct role of the "teachers". Let's
>> first generalize this word to mean not only "the one who lectures us",
>> but also to include our world, when subject to our deliberate and
>> intentional querying (through experimental interaction). In this
>> regard, the world also teaches us things, showing the result of
>> our intended actions. This is important.
>
>Certainly the world is our primary teacher. Whether we are doing any
>"deliberate querying" or not. The world continuously shows us examples
>of how things fit together, from which we learn categories of things
>which fit together in predictable ways. Our explicit experimentation
>is only a tiny part of this, coming into play only after we have
>abstracted some aspects of the picture to the point where we can
>postulate some explicit manipulable connection to be investigated.
>
I agree that prior to any act of interaction one must be capable
of acquiring and correlating the inputs received through the senses.
What I doubt is that our efforts of explicit experimentation
are responsible for only a tiny part of the whole process. My
doubt comes from the idea that there is a great number of high-level
regularities that make sense in a particular situation, and that
only interaction could reduce this number to something relatively
manageable.
>A deliberate human teacher presents examples chosen to display a particular
>relationship in an especially clear form, thus guiding the student to
>discovery of the things the teacher wishes to teach. But the process by
>which the student learns is the same, whether from carefully selected
>examples provided by another person or from a hodge-podge of experience
>provided by the world in general.
>
It is strange the way I agree with what you say but still find
something missing. The fundamental ability to learn is, obviously,
within the realm of each student. But the rhythm, the efficacy with
which this student learns, depends a great deal on the interaction
he or she is able to elicit. A human teacher will only ease the learning
of some aspect if he/she is able to perceive what the points of
doubt of that particular student are. This perception can only happen
if the student is able to *show* those doubts, which is what I'm
calling interacting with the teacher. The image that comes to
my mind when I think of this is a teacher lecturing arithmetic
to a child: in this case, it is easy to see the "interaction in
action".
>>NAVEGA:
>> The role of the "teacher" is not only to provide relevant examples,
>> but also to act as arbiter in cases of doubt. One essential assumption
>> that I see implied in your previous paragraphs is that the system
>> is able to find good and bad artifacts, and so it is able to
>> generalize the good results.
>
>I think perhaps you have things backward... the sole criterion for a
>"good" result is that it does generalize. We don't find "good" results
>first, and then generalize them. We find things that generalize, which
>by definition are the "good" things.
>
You're right, the criterion for "good" is generalization, but this
does not say much about the existence of a lot of possible ways to
generalize. I think that it is not usual to have just one optimum
candidate for the best solution, but a *set* of good candidates with
similar likelihood. Interaction appears to be the process which
the organism may use to reduce this number in a very short time.
>>NAVEGA:
>> What I'm posing as an additional concern is the fact that very often
>> we don't find only one good artifact, but *lots* of potentially
>> good artifacts. Then, the system must somehow decide which of the
>> "good" answers to keep, which ones should be chosen to be generalized.
>> I have reasons to believe that this is the root of the (often
>> unconscious) process of interaction, the way children do it.
>>
>> I think that it is up to the "teacher" (world experiments included)
>> to not only furnish exemplars, but also to help us in that task
>> of "disambiguation".
>
>This isn't an additional concern, it is the only concern. Furnishing
>exemplars is the only reasonable way to help us in the task of
>disambiguation. We find lots of artifacts in a small set of examples, the
>good ones are those which can be found in a large set of examples. In
>other words, the good ones are the ones that do in fact generalize. We
>don't freely choose things to generalize, we look to see which actually do
>generalize.
>
Yes, I agree, but we're talking here about "the good ones". What if
"the good ones" are hundreds of possibilities? I'm evaluating the
possibility that a *single* interaction proposed by the organism may
reduce this number to tens or less. Then the whole process would
be cumulatively more efficient. It is not that this would be
impossible from just an observer's position; I think it could be
done that way too. What I'm saying is that using interaction as part
of the "loop" may accelerate the learning of the meaningful
correlations to the point of allowing the organism the perception
of things that, otherwise, would take too much time. But this
is not the only advantage of interaction, in my viewpoint.
>Such choices are always tentative, subject to revision as we see more
>examples. From the examples available we make the best guess we can about
>how to extrapolate the relationships observed to other cases, and use that
>guess to guide us. But as we see more examples we adjust our understanding
>to a better fit to the whole set. Sometimes a minor tweak or additional
>qualification suffices, other times we have to resurrect one of those other
>artifacts that we didn't focus on at first, and rearrange our whole
>understanding in a major paradigm shift. But regardless of the scale,
>learning is a matter of adjusting our models to fit the examples we see.
>
I'm sure this process is meaningful. What I'm afraid of is that it's
not enough. What if there are lessons that can be learned only through
our efforts of interaction? By this I mean that maybe there are
important correlations to obtain from our motor attempts at
interaction. This learning will, obviously, be captured by the
very same mechanisms of covariant clusterization that you're
talking about. But this learning will be put under the same "umbrella"
as our motor actions, being inextricably associated with them and
in tandem with the results produced. This may be an important
item in the development of higher-level concepts, up to human
language.
For an example, consider our concept of the word
"push". It is very difficult to think about this word without
thinking of a full sensorimotor action. So what I think is that
behind this word there is not only the perception of regularities
associated with the vision of an object being pushed, but *also*
all the patterns that we used to drive our muscles in order to
produce that action. These patterns and their association with
the perceived visual regularities are the elements that I'm
proposing to ground the rest of cognition (well, not exactly me,
but Piaget and a bunch of other researchers who think that way).
It appears that the rest of our higher-level concepts emerge
from these basic actions.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 17 May 1999 00:00:00 GMT
Message-ID: <37403584@news3.us.ibm.net>
References: <3731a626@news3.us.ibm.net> <926017401.071.98@news.remarQ.com>
<3732058f@news3.us.ibm.net> <926034865.145.60@news.remarQ.com>
<3732ddeb@news3.us.ibm.net> <926090259.936.94@news.remarQ.com>
<3733348b@news3.us.ibm.net> <7h1rcu$6c@journal.concentric.net>
<3736db27@news3.us.ibm.net> <7h85ls$stc@journal.concentric.net>
<373cf290.9173590@news.demon.co.uk> <7hel13$p6b@journal.concentric.net>
<373fdc0d.5704650@news.demon.co.uk> <7hh0m0$pre@journal.concentric.net>
<3746cbbc.4954922@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 May 1999 15:28:04 GMT, 129.37.183.189
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Oliver Sparrow wrote in message <3746cbbc.4954922@news.demon.co.uk>...
>modlin@concentric.net wrote:
>
>>Nonsense. The muddy thinking common in this area is in dire need of a bit
>>of bleach. Let's find out which bits of the wash are colorfast.
>
>Oh Dear. Bad hair day.
>
>Behind the semantics of 'trainers' and 'teachers', embedded principles
>and implicit ordering, I wanted to draw attention to the view that
>regularities, when discovered and however firmly discovered, are not
>enough in themselves. Something, be it an external user, a market
>force, mutual competition, criteria - whatever - is needed to distinguish
>the useful regularities from the less useful ones. This is the
>'grounding' debate under a slightly new hat, of course.
Good point, Oliver! Let me see if this turns out to be what I was
saying about interaction. I was saying that perception of regularity
alone could be insufficient, because I think one will often have
lots of almost equally probable clusters. This could only be
solved by the establishment of contexts (which will filter out most
of the clusters, leaving possibly a few or just one) or
by interaction, where the entity proposes things to
the world in order to perceive some additional regularities provoked
by this action on the previously discovered clusters.
My guess is that the perception of this new set of regularities
will add a lot to the causal models that the organism devises,
allowing its performance to soar.
>
>This implies that the widget that finds regularities will need to
>calibrate those regularities for their usefulness, quite aside from
>their (for example) statistical properties. Strong statistics may
>conceal important data. The regular may be low on useful content.
Indeed. I would add the component of reinforcement/inhibition. One
regularity that is not strong in statistical terms may be raised
to the category of important if it is acquired under strong
reinforcement.
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: The Baby in a Vat
Date: 18 May 1999 00:00:00 GMT
Message-ID: <7hr44l$7v9@journal.concentric.net>
References: <3731a626@news3.us.ibm.net> <926017401.071.98@news.remarQ.com>
<3732058f@news3.us.ibm.net> <926034865.145.60@news.remarQ.com>
<3732ddeb@news3.us.ibm.net> <926090259.936.94@news.remarQ.com>
<3733348b@news3.us.ibm.net> <7h1rcu$6c@journal.concentric.net>
<3736db27@news3.us.ibm.net> <7h85ls$stc@journal.concentric.net>
<373cf290.9173590@news.demon.co.uk> <7hel13$p6b@journal.concentric.net>
<373fdc0d.5704650@news.demon.co.uk> <7hh0m0$pre@journal.concentric.net>
<3746cbbc.4954922@news.demon.co.uk>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <3746cbbc.4954922@news.demon.co.uk> Oliver Sparrow writes:
> Behind the semantics of 'trainers' and 'teachers', embedded principles
> and implicit ordering, I wanted to draw attention to the view that
> regularities, when discovered and however firmly discovered, are not
> enough in themselves. Something, be it an external user, a market
> force, mutual competition, criteria - whatever - is needed to define
> the useful regularities from the less useful ones.
An external influence is effective in distinguishing among regularities
only to the extent that it can be seen to be itself part of a detectable
regularity.
Nothing can be learned from a market force, competition, criterion, or
whatever, unless that "whatever" is observed to be correlated with
something else.
For a merely trainable system, the "something else" must be something with
immediate significance. A system is trained to react in certain ways by
stimulating its reward and punishment detectors at times correlated with
its actions.
A trainable system defines "useful" regularities as detectable correlations
between its own actions and perceived reward or punishment.
An intelligent system also learns from correlations. The difference is that
it learns from correlations among arbitrary signals, not just among those
with current significance.
An intelligent system defines "useful" regularities as those which are
informative in a general sense, not just those which are informative about
conditions of currently known significance.
Useful regularities extend to more cases than less useful ones: usefulness
is generality. A useful regularity is one which gives information about
situations beyond those from which it is derived. A useless regularity is
one which exists only in the data which defines it.
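To make the contrast concrete, here is a toy sketch in Python (my
illustration, with invented channel names; not a serious
implementation). Both learners see the same stream of "frames" of
active signals; the trainable one records only correlations that
touch the reward channel, while the intelligent one records
correlations among arbitrary pairs:

from collections import Counter
from itertools import combinations

def correlations(stream, reward_gated):
    """Count co-occurrences of active channels across frames."""
    counts = Counter()
    for frame in stream:                        # frame: set of active channels
        for a, b in combinations(sorted(frame), 2):
            if reward_gated and "reward" not in (a, b):
                continue                        # trainable: reward-linked pairs only
            counts[(a, b)] += 1
    return counts

stream = [{"bell", "food", "reward"},           # invented toy data
          {"bell", "light"},
          {"light", "dark_cloud"},
          {"dark_cloud", "rain"}]

print(correlations(stream, reward_gated=True))  # misses light/cloud/rain entirely
print(correlations(stream, reward_gated=False)) # picks up the indirect chain too

The gated learner can never discover the light/dark_cloud/rain
cluster, since no pair in it has current significance; the ungated
one finds it anyway.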
The evolutionary point of the exercise is that an open search for general
associations is often more effective at finding rewards and avoiding
punishments than search specifically directed toward those goals. Cues
which have no individual direct correlation with danger or food, and thus
would not be learned by a search guided by those goals, are discovered to
form clusters with other correlated things and ultimately some of those
clusters or their derivatives turn out to be useful in a direct way.
Some things can only be found indirectly.
As it happens, the same undirected search for pattern which on balance
helps us to survive also leads to discovery of other kinds of patterns and
the emergence of consciousness of self... but that's for a later
discussion, after we get past the sheer mechanics of the process.
>
> This is the 'grounding' debate under a slightly new hat, of course.
Not that I can see. All the relationships we find are well grounded in the
data from which they derive, and in turn provide grounding for further
abstractions derived from them. If you really mean something by this
throw-away remark about grounding, you'll have to explain yourself more
fully.
> This implies that the widget that finds regularities will need to
> calibrate those regularities for their usefulness, quite aside from
> their (for example) statistical properties. Strong statistics may
> conceal important data. The regular may be low on useful content.
But again: for purposes of intelligence, usefulness _is_ a statistical
property, a measure of how much information a regularity gives us about
relationships among elements of our experience.
Strong statistics and regularities allow us to explain a lot of variance
and factor it out, which then allows us to see whatever might have been
hidden under or behind it, or riding on it as modulation of a carrier.
The point is not to find something strong and stop, but to use what one
finds as a stepping-stone to more discoveries. A strong regularity is by
definition useful: it allows us to predict many things, allows us to know
what to expect as a result of that regularity, and in the process allows us
to notice when some other factor intervenes... which we could not do until
we first noticed the dominant patterns.
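As a small numeric illustration of this point (mine, not part of the
original exchange; the frequencies and amplitudes are invented):
fitting the dominant regularity and subtracting it exposes a weak
pattern riding on it as modulation of a carrier.

import numpy as np

t = np.arange(1000) / 1000.0                    # one second of samples
carrier = np.sin(2 * np.pi * 50 * t)            # strong, dominant regularity
hidden = 0.05 * np.sin(2 * np.pi * 3 * t)       # weak pattern riding on it
signal = carrier + hidden

# "Explain" the carrier by least squares against sin/cos at 50 Hz...
basis = np.column_stack([np.sin(2 * np.pi * 50 * t),
                         np.cos(2 * np.pi * 50 * t)])
coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
residual = signal - basis @ coef

# ...and the hidden 3 Hz component now dominates the residual spectrum.
spectrum = np.abs(np.fft.rfft(residual))
print("strongest residual frequency:", np.argmax(spectrum[1:]) + 1)   # 3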
> Assume that the stringlets are strung, that the visual field has
> sorted light from dark, that the Victorian stamps are now piled
> separately from Edwardian ones. The discriminant engine that is
> performing this task has found a way of segmenting the field. This may
> be a poor set of maps (as perceived by broader criteria), or it may be
> a poor way of discerning a good map, similarly perceived. The system
> cannot know anything of this as it stands. The figure of merit that it
> is using - let us say, of statistical explanatory power - defines what
> 'looks good'.
>
> Adding more data is likely to dig the system deeper into the mire if
> it has settled on a local optimum. There are, of course, more or less
> clever techniques for mapping the surface which the figure of merit
> defines. This can find a good optimum and has a reasonable chance of
> finding, if not proving, a global optimum if enough crunch is thrown
> at a reasonably simple structure. It will, however, be an optimum that
> is predicated upon the figure of merit. How optimal is this?
There is no more relevant figure of merit for information than that it be
highly informative, which measure subsumes both local significance and wide
applicability. The criterion is widespread rather than local regularity.
Adding more data gives us a wider view, allowing us to see local optima in
perspective, to see how to adjust the definitions we first derived to fit
local conditions to apply more broadly. Those functions which don't
generalize, which turn out to be local artifacts, simply fail to propagate
to more comprehensive levels.
Your comments are relevant to conventional statistical analysis in a fixed
space, but not to a recursive search through an expanding space in which
new dimensions are continually unfolding and previous ones are
simultaneously being collapsed.
> There has been a great deal of guff written about 'dancing on the edge
> of chaos'. Pattern recognition is, however, poised between lock-up
> upon strong but useless signals and unsolvable, over-complex shambles.
I don't think I've encountered that phrase... certainly I've never used it
myself. My first reaction was negative, but on reflection perhaps one
could mean it in a way that isn't just guff. Could you point me to some
instances of its use?
In any case, I must strongly disagree with your characterization of pattern
recognition, if it is intended to apply to the methods I've been attempting
to explain. That you can say this tells me that you have not yet heard me:
I am telling you about an approach which doesn't lock up, and which nibbles
away at complexity in bite-size chunks.
> Good pattern recognition probably combines many different ways of
> recognising, so as to find a solution that satisfies all and which
> keeps most 'on the edge' rather than in the pits of local optima.
> This may be mutual teaching. To achieve it, one needs the equivalent
> of an institutional structure within the pattern recognising engine.
> How is this democracy of perception to be managed, fused, kept
> flexible? These are, of course, technical questions which admit of
> technical solutions. Behind this, however, lies the guiding mind,
> designing these 'institutions'; or - if we are feeling undemocratic,
> setting at least some of the criteria and figures of merit.
>
> Could an untaught system come to generate its own institutions? Of
> course, we seem to have done it. But there have been 30 billion of us
> since H. sapiens arose, and about 250,000 years of institutional
> growth. And under this, around 500 million years of architectural
> development in cognitive systems of any complexity. The letters N and
> P suggest themselves.
>_______________________________
>
>Oliver Sparrow
You've mixed quite a few levels in this closing ramble, and I'm finding it
difficult to know how to respond... I'll just stab at a few of the bits.
Avoiding local optima is a difficulty only for processes defined to seek
local optima. I think I've covered that sufficiently.
"Behind this lies the guiding mind..." With all your talk of
emergence,
you seem not to have heard your own lectures. This is after all CAP where
our purpose is the explication of the emergence of mind from mechanism.
But when I tell you of a mechanism from which it will emerge, you counter
that I must first have a guiding mind for the mechanism to work?
Do you really think that we are endowed at conception with a guiding mind?
I find that quite unlikely. I think mind emerges from the processing of
experience.
Or perhaps you refer only to the collective mind of societal structure?
Could a single isolated human mind generate 500 million years worth of
structure? Hardly. Neither will my machine. Could a single mind
interacting with society decipher enough of its workings to participate in
it? It does seem to happen more often than not.
NP? Well, ordinary intelligence does generally permit a salesman to find
some sort of serviceable route, theory or no. I'll settle for that...
Bill Modlin
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 18 May 1999 00:00:00 GMT
Message-ID: <37416787@news3.us.ibm.net>
References: <3731a626@news3.us.ibm.net> <926017401.071.98@news.remarQ.com>
<3732058f@news3.us.ibm.net> <926034865.145.60@news.remarQ.com>
<3732ddeb@news3.us.ibm.net> <926090259.936.94@news.remarQ.com>
<3733348b@news3.us.ibm.net> <7h1rcu$6c@journal.concentric.net>
<3736db27@news3.us.ibm.net> <7h85ls$stc@journal.concentric.net>
<373cf290.9173590@news.demon.co.uk> <7hel13$p6b@journal.concentric.net>
<373fdc0d.5704650@news.demon.co.uk> <7hh0m0$pre@journal.concentric.net>
<3746cbbc.4954922@news.demon.co.uk> <7hr44l$7v9@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 May 1999 13:13:43 GMT, 166.72.21.120
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7hr44l$7v9@journal.concentric.net>...
>
>Useful regularities extend to more cases than less useful ones: usefulness
>is generality. A useful regularity is one which gives information about
>situations beyond those from which it is derived. A useless regularity is
>one which exists only in the data which defines it.
>
This is compelling, but it didn't quite satisfy me. What's being
proposed seems reasonable: an equation in which usefulness is
directly proportional to generality and nothing more. What is not
entirely clear is whether usefulness is really *only* a function of
generality. Maybe this is what Oliver is questioning: what effect
do reinforcements have on usefulness? (A single occurrence of a
certain experience may carry a very powerful reinforcement, enough
to allow its generalization, raising its usefulness.)
I have to admit that this issue is not easy and has not settled in
my mind. One may propose, for instance, that there are ways to
reinforce a certain aspect just by the "strength" with which the
information is delivered to the organism. That would make
reinforcement a new term in that equation, adding its effect
to that of generality.
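Putting the two candidate equations side by side, in back-of-envelope
form (my own toy formalization; the weight w is invented):

def usefulness_v1(generality):
    return generality                      # usefulness IS generality, nothing more

def usefulness_v2(generality, reinforcement, w=0.5):
    return generality + w * reinforcement  # reinforcement as an added term

# under v2, a weakly general but strongly reinforced experience
# can outrank a more general but unreinforced one:
print(usefulness_v1(0.3), usefulness_v2(0.3, 0.9))   # 0.3 0.75
print(usefulness_v1(0.5), usefulness_v2(0.5, 0.0))   # 0.5 0.5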
But one may also argue that the "strength" with which something
is perceived by an organism is a function of its prior experiences
(meaning previous experiences molded the "strength detector"),
which would then make the whole thing a function of
generality alone. So far it remains an open issue for me.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 13 May 1999 00:00:00 GMT
Message-ID: <373aebcc@news3.us.ibm.net>
References: <372eee6c@news3.us.ibm.net> <7gqrhj$n8m$1@nnrp1.deja.com>
<925985791.266.66@news.remarQ.com> <3731a626@news3.us.ibm.net>
<926017401.071.98@news.remarQ.com> <3732058f@news3.us.ibm.net>
<926034865.145.60@news.remarQ.com> <3732ddeb@news3.us.ibm.net>
<926090259.936.94@news.remarQ.com> <3733348b@news3.us.ibm.net>
<7h1rcu$6c@journal.concentric.net> <3736db27@news3.us.ibm.net>
<7h85ls$stc@journal.concentric.net> <373cf290.9173590@news.demon.co.uk>
<7hel13$p6b@journal.concentric.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 13 May 1999 15:12:12 GMT, 129.37.182.125
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
modlin@concentric.net wrote in message
<7hel13$p6b@journal.concentric.net>...
>[big snip]
>I've tried before to explain the mechanisms which lead to this
>functionality, but have gotten only non sequiturs in response: apparently
>it is too difficult to see the connection between the trees I describe and
>the forest of functionality which emerges.
>
Your method may be bottom-up, but I think sometimes you have to explain
it top-down.
>So let's start with the forest. Pick some aspect of stringlet processing.
>Perhaps then we can talk about how that is accomplished under my approach.
>
I don't know anything about stringlets; I'll wait for Oliver.
But I have quite a bunch of "forest topics" that I'd like
to see addressed. Let me start with two:
a) Memory
How is memory represented? Sometimes our memories
are driven ("fired") by external impulses. That presents no
problem. But how do we explain those memories that pop up in
our mind without any external cue (for example, the memory
that we left an unpaid bill yesterday)? It is as if we
had an internal "stimulus generator" which forces us to
reconsider some situations. How could this work in your
architecture?
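Just to make the question concrete (a toy of mine, with invented
salience numbers, and certainly not a claim about how brains do it):
picture each memory carrying a salience weight, with internal noise
occasionally pushing one over threshold with no external cue at all.

import random

memories = {"unpaid bill": 0.8, "old song": 0.3, "blue car": 0.1}  # invented

def spontaneous_recall(threshold=1.0):
    """Internal noise alone can push a salient item above threshold."""
    active = {m: s + random.random() for m, s in memories.items()}
    winner = max(active, key=active.get)
    return winner if active[winner] > threshold else None

for _ in range(5):
    print(spontaneous_recall())            # mostly None or "unpaid bill"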
b) Binding
This is one of the greatest problems in neuroscience and cognitive
science today. How do perceptions from several distinct areas
bind together? In particular, dynamic binding is a problem for
connectionist systems (symbolic systems seem to solve it more
easily). Dynamic binding is the kind used when subjects are asked
to identify lines that are both red and diagonal in a figure
containing dozens of horizontal, vertical, and diagonal lines
in assorted colors.
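To see why the symbolic case looks easy (a toy of mine): when each
line is an explicit tuple, color and orientation arrive already
bound, and conjunction search is a mere filter. The connectionist
difficulty is precisely that "red" and "diagonal" are signaled by
separate units with nothing tying them to the same line.

lines = [("red", "vertical"), ("green", "diagonal"),    # invented stimuli
         ("red", "diagonal"), ("blue", "horizontal")]

# binding comes for free: each tuple keeps its features attached
targets = [line for line in lines if line == ("red", "diagonal")]
print(targets)                                          # [('red', 'diagonal')]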
I have more, but these are enough to start.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <37320575@news3.us.ibm.net>
References: <3731ecdf.27075305@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 21:11:17 GMT, 129.37.182.222
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Robin Faichney wrote in message <3731ecdf.27075305@news.demon.co.uk>...
>In article <3731a62d@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>writes
>>Robin Faichney wrote in message <3731497e.3167460@news.demon.co.uk>...
>>>
>>>You've only extended the I/O channels. Cognition remains natural.
>>
>>The "semantics" that flow through the radio link when in the virtual
>>world could be sequences very similar in nature to the ones that
>>flow when in the natural world.
>
>Don't understand this concept of "semantics that flow". Please
>explain.
First, note that I used "semantics" in quotes. What I meant is
a flow of syntax that carries its own semantics. How is this
possible? How can there be syntax without a priori
conventions between the sender and receiver? Only when these
conventions *emerge* from the intelligent *interpretation* of the
signal.
Suppose you received the following signal from a radiotelescope
aiming at Andromeda galaxy:
| | | |
Each vertical bar is a tone, an audible burst that modulates the
incoming RF signal. Your ability to listen makes you recognize
a repeating sequence. What follows is this:
| || | || | || |
Based on what you've seen previously, you understand that the
two close pulses (||) appear to be relevant in some way. Next:
| || | | || | | | || | | | | || | | | | |
This one is harder. But we're intelligent, so after some
experimentation and comparison with previous messages, we
find that between each pair of (||) sequences there is a growing
number of isolated signals (1, 2, 3...). It is time
to ponder that maybe (||) acts here as a kind of "delimiter"
and that the previous message was probably counting. One more:
||| || ||| || ||| || |||
This could seem strange, but since (||) is being assumed to
be a delimiter, we could say that this message is
introducing a new symbol (|||). We don't know what this
symbol means. Then:
|||| || |||| || |||| || ||||
Another apparently easy message, introducing another
symbol (||||). Now comes one really difficult message:
| ||| | | |||| | | |
Gee. What does that mean? We can't figure it out from this
single occurrence. Later, we receive:
| | ||| | | | |||| | | | | |
Still don't have a clue. I'm confused, just like a child
who has grabbed a ball for the first time and is absorbed
in the examination of the object. Then:
| | | ||| | | | | |||| | | | | | | |
Wait a minute! There's a *similarity* among the last three
messages. They seem to be:
[something] ||| [something] |||| [something]
We're starting to form tentative *rules*, trying to see
regularities in this thing. To assemble that rule, we're using
not only the similarities we've found in the last three
messages, but also the fact that we've discovered that (|||)
and (||||) were something like delimiters, "symbols" of their
own. Now we go back to those examples (by recalling them
from memory or by being subjected again to the experience of
receiving these messages) and discover that the messages
can be translated to:
1 + 2 = 3
2 + 3 = 5
3 + 4 = 7
I have discovered a mapping between concepts only because
of the regularities of occurrence.
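In fact, once those guesses are made, the decoding can be
mechanized. A small Python sketch (mine; it assumes we have already
guessed that runs of single bars are counters and that ||| and ||||
stand for the two operators):

def tokenize(message):
    """Split a pulse message into run lengths, e.g. '| ||' -> [1, 2]."""
    return [len(run) for run in message.split()]

def interpret(message):
    """Collapse runs of single bars into counts; map ||| and |||| to symbols."""
    out, count = [], 0
    for run in tokenize(message):
        if run == 1:
            count += 1
        else:
            out.append(count)
            out.append({3: "+", 4: "="}[run])   # the guessed mapping
            count = 0
    out.append(count)
    return out

for msg in ["| ||| | | |||| | | |",
            "| | ||| | | | |||| | | | | |",
            "| | | ||| | | | | |||| | | | | | | |"]:
    print(interpret(msg))
# [1, '+', 2, '=', 3]   [2, '+', 3, '=', 5]   [3, '+', 4, '=', 7]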
Obviously, what our brain receives from nature is not a
contrived sequence of symbols like that. Nor are the things
we discover "translations" into internal symbols that
we already know. My whole example was just a possible
interpretation of what our communication with alien
intelligences could be like.
In the case of our brains, we don't have such concepts as
addition, subtraction, or symbolic delimiters. But we are born
with *some* innate concepts. We can compare colors
for similarity. We can detect movements, etc.
What I propose is that these innate and basic capabilities
are the foundation which we use to interpret the stream of
information that we receive from the world. And that by
using a process analogous to the one I've sketched (perception
of similarity, use of previously acquired notions, formation
of tentative rules, etc), we may construct higher-level
concepts that are used to support even higher concepts.
We start with sequences of similarities among pulses and
end up with the concept of an "apple".
The starting point, in this hypothesis, is the interpretation
of the "semantics" carried by this "syntax" of pulses.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 06 May 1999 00:00:00 GMT
Message-ID: <3731a62d@news3.us.ibm.net>
References: <3731497e.3167460@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 May 1999 14:24:45 GMT, 166.72.21.71
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Robin Faichney wrote in message <3731497e.3167460@news.demon.co.uk>...
>In article <372eee6c@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>writes
>>
>>Now tell me, John Searle, tell me how is it possible that the
>>*whole cognition* of that baby was able to flow through a channel
>>in a purely syntactic form.
>
>You've only extended the I/O channels. Cognition remains natural.
Your short comment is thought-provoking. But there are questions
that still need to be addressed.
1) Suppose that what feeds the brain of the baby is a sequence
of symbols that comes from a computer running a virtual environment.
The brain will, apparently, develop cognition related to this
virtual world. The brain will evolve to handle the specifics of
that environment, even if it is essentially different from ours.
We may suppose that the brain will act intelligently in this
contrived world.
2) Suppose that we put another computer in the place of that brain,
under the same virtual world conditions. It is not unwise to say
that, if this computer is built with a suitable AI program, it
will perform intelligently in the virtual environment.
3) Now put the previous computer not in a virtual world, but
in the real world. There's nothing to allow us to think that it
will not be intelligent. It will be naturally intelligent.
So what I'm suggesting here is that "natural cognition" is not
essentially different from contrived, "virtual cognition". The
"semantics" that flow through the radio link when in the virtual
world could be sequences very similar in nature to the ones that
flow when in the natural world. So what is the difference?
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 07 May 1999 00:00:00 GMT
Message-ID: <3732ddf0@news3.us.ibm.net>
References: <3732a5ba.2633073@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 7 May 1999 12:34:56 GMT, 166.72.29.162
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Robin Faichney wrote in message <3732a5ba.2633073@news.demon.co.uk>...
>In article <37320575@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>writes
>>
>>What I propose is that these innate and basic capabilities
>>are the foundation which we use to interpret the stream of
>>information that we receive from the world. And that by
>>using a process analogous to the one I've sketched (perception
>>of similarity, use of previously acquired notions, formation
>>of tentative rules, etc), we may construct higher-level
>>concepts that are used to support even higher concepts.
>>We start with sequences of similarities among pulses and
>>end up with the concept of an "apple".
>
>Now you've explained it, it's what I've taken for granted for about as
>long as I've thought about such things. But I still don't know how you
>get from that to "we might as well replace the baby's brain with a
>tiny, yet powerful computer".
>--
Well, it's not that tiny...
You see that the process I've been sketching does not depend on any
characteristic of the brain. The brain is highly parallel and the
computer is a single-processor, serial machine. However, the computer
may easily be put to work in a pseudo-parallel way through
multithreading. There's no fundamental difference that will prevent
the computer from acting like a brain. It is just a question of
knowing what to do with the data. Given enough "horsepower" and a
suitable program, we should be able to have the functional
equivalent of a brain.
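To illustrate the pseudo-parallel point (a toy of mine with random
weights, not a brain model; multithreading is one route, but even a
plain serial loop that time-slices the units gives the effect): as
long as every unit reads the previous state and writes the next, the
serial sweep is indistinguishable from one synchronous parallel
update.

import random

random.seed(1)                                    # reproducible toy
N = 8                                             # invented number of units
state = [random.random() for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def step(prev):
    """One synchronous update of all units, computed serially."""
    return [1.0 if sum(w * s for w, s in zip(row, prev)) > 0 else 0.0
            for row in weights]

for _ in range(5):
    state = step(state)                           # serial, but order-independent
print(state)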
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: The Baby in a Vat
Date: 07 May 1999 00:00:00 GMT
Message-ID: <373344e0@news3.us.ibm.net>
References: <3732e6e2.19301118@news.demon.co.uk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 7 May 1999 19:54:08 GMT, 129.37.183.39
Organization: Intelliwise Research and Training
Newsgroups: comp.ai.philosophy
Robin Faichney wrote in message <3732e6e2.19301118@news.demon.co.uk>...
>In article <3732ddf0@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>writes
>>You see that the process I've been sketching does not depend of any
>>characteristic of the brain. The brain is highly parallel and the
>>computer is a single processor, serial machine. However, the computer
>>may be easily put to work in a pseudo-parallel way through multithreading.
>>There's no fundamental difference that will prevent the computer
>>from acting like a brain. It is just a question of knowing what to
>>do with the data. Given enough "horsepower" and a suitable program,
>>we should be able to have the functional equivalent of a brain.
>
>OK, now I see what you're getting at, but you seem to be missing the
>main point of the Chinese Room, which was to divorce competence from
>understanding. The Room behaves as if it understood, yet does not. OK,
>so there's the Systems Reply, and I've argued here at some length about
>that in the past, but Searle would have absolutely no problem with a
>computer that was perfectly competent, because that's what the Chinese
>Room as he describes it is.
I guess the great problem is that Searle's vision of intelligence
is something like "an organism that knows all the answers".
I don't put things that way. The fact that somebody knows a
lot does not say very much about whether he or she is intelligent.
It only gives a parameter as to how close this person is to an
encyclopedia. The Chinese Room is just that: an interactive
encyclopedia.
Intelligence is the kind of behavior presented by those who
still don't know it all. Intelligence is related to the speed,
breadth, and discovery ability with which an entity learns, and
to how it reuses that knowledge to solve new problems. It is not
a static property; it is something that must be evaluated
dynamically.
Regards,
Sergio Navega.