Selected Newsgroup Message
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Letter to the editor
Date: 18 Jan 1999 00:00:00 GMT
Message-ID: <36a33f87.0@news3.ibm.net>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
hans.moravec@cmu.edu wrote in message <77uj8g$ui4$1@nnrp1.dejanews.com>...
>rickert@cs.niu.edu (Neil Rickert):
>> [Unstructured area mobile utility robots becoming widespread
>> in the next decade] might be. I expect that they will be
>> considered too mechanistic to count as intelligent.
>
>[snip]
>On a logarithmic and development-time scale, robot controllers
>today are more than halfway between the simplest nervous systems
>and human-scale brains. They covered that first half distance
>in about 30 years, and the development slope is actually steeper
>now than in those first 30 years.
>
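(To see what "more than halfway" on a logarithmic scale means,
a few lines of Python suffice. The MIPS figures below are my
own illustrative assumptions, in the spirit of Moravec's
published estimates, not numbers taken from his post.)

    import math

    # Assumed processing-power endpoints, in MIPS (illustrative only):
    simplest = 1e-4   # something like the simplest nervous systems
    human    = 1e8    # a human-scale brain
    today    = 3e2    # a hypothetical late-1990s robot controller

    # Fraction of the log-scale distance covered so far:
    fraction = (math.log10(today) - math.log10(simplest)) / \
               (math.log10(human) - math.log10(simplest))
    print(f"{fraction:.2f}")  # about 0.54, i.e. just past halfway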
Although I'm sympathetic to Moravec's position, I guess I also
concur with Rickert's vision. The way toward this better-than-us
electromechanical hominid may be smooth in terms of hardware
progression, but I'm afraid we're missing something *very*
important. Although I diverge from Neil's opinions on other
matters, I think he raised a pretty important point, and this
is the distinction between natural and artificial intelligence.
I'll invite him to talk about this, if he will.
Just to exemplify what I'm thinking about this subject, let's
compare today's robots with, for instance, cockroaches. During
their lives, these insects learn practically nothing at all;
they survive because they have been "designed" by evolution,
which means only the significant selections remained. So they
carry a great deal of "evolutionary knowledge". Given a
critical change in Earth's environmental conditions, the
cockroaches will probably be able to "find a way out"
(through successive generations, obviously).
When we come to analyze robots, we will find that *we*,
humans, are doing the part nature did for cockroaches:
we are the elements that are "designing" them, correcting
and adjusting their mechanisms and software, telling
them what to learn and how to store this information. This
may be a good strategy to produce human-equivalent pieces of
equipment, but I have doubts that it can give us *more* than
that (or even good exemplars of the former, for that matter).
To allow robots to "inherit the Earth" (gee, I'm not so
anxious for this :-) we've got to *get out* of the circuit.
And no, I'm not suggesting genetic mechanisms.
By this I mean devising software/hardware that *does not depend*
on us, that allows the robot to grow by itself, not only
in terms of hardware (self-repair or self-construction),
but also *in terms of learning*. The latter aspect, however,
is the essential one.
Keeping the robots' hardware dependent on us may even be
valuable (we could have some problems with them). But they
cannot depend on us for learning, if we want them to
be *more* than we are. And this is one of the fundamental
benefits of having them around (that is, having elements
able to contribute to our welfare with new and creative ideas).
How is this supposed to happen? For all I know today,
there's only one way: robots must learn from basic
sensorimotor patterns. I'm not concerned here with
adaptive movement and coordination of limbs in 3D
space. I know we're advancing pretty fast on that
aspect. I'm concerned with *cognition from real world
experiences*. We have huge theoretical problems here.
We don't have this link yet. For me, this is our *greatest*
weakness: we do not understand how sensorimotor knowledge
grows into higher level cognition. We do not know why we
humans have such an intellectualized and creative mind and
apes don't, even though they are *very* similar to us in
sensory and motor capabilities.
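A concrete, if toy, picture of what "sensorimotor knowledge"
could even look like to a program may help here. The sketch
below (my own construction in Python, not an established
algorithm, with all names and numbers invented) groups raw
(sensor, motor) episodes into discrete categories; the category
indices are the kind of token a higher cognitive level could
then reason over:

    import math, random

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    class ProtoSymbols:
        """Online nearest-centroid clustering of sensorimotor episodes."""
        def __init__(self, radius=0.8, rate=0.1):
            self.radius = radius  # unfamiliarity needed to found a new category
            self.rate = rate      # how fast a category drifts toward members
            self.centroids = []   # one centroid per emergent "proto-symbol"

        def observe(self, episode):
            if self.centroids:
                best = min(range(len(self.centroids)),
                           key=lambda i: dist(self.centroids[i], episode))
                if dist(self.centroids[best], episode) < self.radius:
                    # pull the winning centroid toward the new episode
                    self.centroids[best] = [
                        c + self.rate * (e - c)
                        for c, e in zip(self.centroids[best], episode)]
                    return best
            self.centroids.append(list(episode))
            return len(self.centroids) - 1

    random.seed(0)
    learner = ProtoSymbols()
    for _ in range(100):
        learner.observe([random.gauss(0, 0.2), random.gauss(1, 0.2)])  # "grasp"
        learner.observe([random.gauss(2, 0.2), random.gauss(0, 0.2)])  # "push"
    print(len(learner.centroids), "proto-symbols formed")  # typically 2

Nothing here bridges to *abstract* thought, of course; that
missing bridge is exactly my point.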
If I were asked to sum up this thought in a single phrase
(it's not perfect, I warn), I'd say: we don't know how to
transplant some of Jean Piaget's ideas to robots.
Unless we solve this problem, I think we can wait 20,
30, 100 or 200 years and we will still not have robots
the way we're imagining them.
Our main concern today should be how this process works;
let Intel, AMD and the others worry about the packing
density of ICs.
Regards,
Sergio Navega.
From: Hans Moravec <hpm@cmu.edu>
Subject: Re: Letter to the editor
Date: 18 Jan 1999 00:00:00 GMT
Message-ID: <36A3F631.41C6@cmu.edu>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net>
Organization: Robotics Institute, Carnegie Mellon University
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net>:
> When we come to analyze robots, we will find that *we*,
> humans, are doing the part nature did for cockroaches:
> we are the elements that are "designing" them, correcting
> and adjusting their mechanisms and software, telling
> them what to learn and how to store this information. This
> may be a good strategy to produce human-equivalent pieces of
> equipment, but I have doubts that it can give us *more* than
> that (or even good exemplars of the former, for that matter).
> To allow robots to "inherit the Earth" (gee, I'm not so
> anxious for this :-) we've got to *get out* of the circuit.
> And no, I'm not suggesting genetic mechanisms.
> By this I mean devising software/hardware that *does not depend*
> on us, that allows the robot to grow by itself, not only
> in terms of hardware (self-repair or self-construction),
> but also *in terms of learning*. The latter aspect, however,
> is the essential one.
But as the robots (through our agency) gain more and more
human capability, they can take over more and more of our
functions, eventually including the design and improvement
of robots, and the acquisition of scientific and technical
knowledge that underlies improving designs. So they
ultimately end up being their own builders and designers,
no longer needing us, by your scenario.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Letter to the editor
Date: 19 Jan 1999 00:00:00 GMT
Message-ID: <36a4af62.0@news3.ibm.net>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net> <36A3F631.41C6@cmu.edu>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Hans Moravec wrote in message <36A3F631.41C6@cmu.edu>...
>"Sergio Navega" <snavega@ibm.net>:
>> When we come to analyze robots, we will find that *we*,
>> humans, are doing the part nature did for cockroaches:
>> we are the elements that are "designing" them, correcting
>> and adjusting their mechanisms and software, telling
>> them what to learn and how to store this information. This
>> may be a good strategy to produce human-equivalent pieces of
>> equipment, but I have doubts that it can give us *more* than
>> that (or even good exemplars of the former, for that matter).
>> To allow robots to "inherit the Earth" (gee, I'm not so
>> anxious for this :-) we've got to *get out* of the circuit.
>> And no, I'm not suggesting genetic mechanisms.
>> By this I mean devising software/hardware that *does not depend*
>> on us, that allows the robot to grow by itself, not only
>> in terms of hardware (self-repair or self-construction),
>> but also *in terms of learning*. The latter aspect, however,
>> is the essential one.
>
>But as the robots (through our agency) gain more and more
>human capability, they can take over more and more of our
>functions, eventually including the design and improvement
>of robots, and the acquisition of scientific and technical
>knowledge that underlies improving designs. So they
>ultimately end up being their own builders and designers,
>no longer needing us, by your scenario.
I agree, this can really happen, but only if we supply
the missing link. I'm concerned with this link because
without it, in 20 or 40 years, we may not have an
intelligent robot; we may have an ape-like robot.
In my opinion, what we're missing now is a
bridge between our sensory and acting abilities in
space and our high-level cognitive abilities. Without
that link, I don't find the future we're envisioning
possible. Let me further clarify what I mean.
Take Rodney Brooks' Cog. It is a marvelous approach,
even if conceived around such uncommon ideas as intelligence
without representation. What will Cog be like in, say,
20 years? What is the possibility that Cog will "evolve"
into something human-like? I think that, without a concern
for the origin of high-level cognitive abilities, Cog will
have *the same* probability of becoming human as we had when
we were "sculpted" by nature: we are an *exception*, where
the rule is the apes, whales, giraffes, dogs and cats.
All these animals have sub-human intelligence,
although all of them definitely have some kind
of sensorimotor intelligence. It is much more
probable that Cog becomes an ape than a human.
That, obviously, will not reduce Cog's utility.
We can be freed of a lot of mechanical, dangerous
or tedious work with the proper use of Cog's offspring.
But to improve our mental achievements, to go
farther than we've gotten using our minds, Cogs must
be built with a clear concern for that "middle level":
where sensorimotor knowledge transforms into
abstract, symbolic and creative thought.
Regards,
Sergio Navega.
From: Hans Moravec <hpm@cmu.edu>
Subject: Re: Letter to the editor
Date: 19 Jan 1999 00:00:00 GMT
Message-ID: <36A5065F.41C6@cmu.edu>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net>
Organization: Robotics Institute, Carnegie Mellon University
Newsgroups: comp.ai.philosophy
Sergio Navega" <snavega@ibm.net>:
> But to improve our mental achievements, to go
> farther than we've gotten using our minds, Cogs must
> be built with a clear concern for that "middle level":
> where sensorimotor knowledge transforms into
> abstract, symbolic and creative thought.
There is an explicit scenario for that in the
transition between third and fourth generation
universal robots in chapter 4 of "Robot".
My overall approach is very un-Cog-like,
based instead on a steady, market-driven
accretion of functionality in mass-market
utility robots.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Letter to the editor
Date: 20 Jan 1999 00:00:00 GMT
Message-ID: <36a5edfd.0@news3.ibm.net>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net> <36A5065F.41C6@cmu.edu>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Hans Moravec wrote in message <36A5065F.41C6@cmu.edu>...
>Sergio Navega" <snavega@ibm.net>:
>> But to improve our mental achievements, to go
>> farther than we've gotten using our minds, Cogs must
>> be built with a clear concern for that "middle level":
>> where sensorimotor knowledge transforms into
>> abstract, symbolic and creative thought.
>
>There is an explicit scenario for that in the
>transition between third and fourth generation
>universal robots in chapter 4 of "Robot".
>My overall approach is very un-Cog-like,
>based instead on a steady, market-driven
>accretion of functionality in mass-market
>utility robots.
I haven't read Ch. 4 yet, so I owe it a try before
commenting more. I agree that the functionality of
future robots will be shaped by market forces (after
all, everything industrialized today is subject to
the same forces). But one can't put a functionality
into a design unless one knows how to do that.
That's my primary concern. Obviously, we will be
able to discover how to do it (I'm an optimist!).
Regards,
Sergio Navega.
From: Hans Moravec <hpm@cmu.edu>
Subject: Re: Letter to the editor
Date: 20 Jan 1999 00:00:00 GMT
Message-ID: <36A64CEE.41C6@cmu.edu>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net>
Organization: Robotics Institute, Carnegie Mellon University
Newsgroups: comp.ai.philosophy
Sergio Navega <snavega@ibm.net>:
> ... But one can't put a functionality
> into a design unless one knows how to do that ...
That's completely wrong! Lots of things are
discovered by accident, for instance when customers
abuse a product for something it wasn't intended
for and discover a neat new functionality in
hindsight, which can then be optimized in a
new round of product development.
Natural evolution certainly put a lot of
functionality into us without even having a mechanism
for knowing how to do it in advance.
Robots can get to where we are via short-sighted
but persistent tinkering on our part, just as we got
there through nature's blind but persistent tinkering.
That said, I think, in broad-brush terms, my chapter
4 scenario, which uses a physical/cultural/psychological
simulation of the world that the robot keeps updated
and tuned from experience as a glue between sense data
and reasoning abstractions, is a plausible approach.
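In crude toy terms (far simpler than the book's scenario, and
with invented numbers), such a glue is a model that predicts
what the senses should report and gets nudged whenever reality
disagrees:

    class WorldModel:
        def __init__(self, friction=0.5, rate=0.05):
            self.friction = friction  # one tunable belief about the world
            self.rate = rate          # how strongly mismatches correct it

        def predict_displacement(self, push_force):
            return push_force * (1.0 - self.friction)

        def reconcile(self, push_force, sensed_displacement):
            """Tune the model from the prediction/sensation gap."""
            error = sensed_displacement - self.predict_displacement(push_force)
            # larger-than-predicted displacement: friction was overestimated
            self.friction -= self.rate * error / max(push_force, 1e-9)
            return error

    model = WorldModel(friction=0.5)  # starts with a wrong belief
    for _ in range(200):
        sensed = 1.0 * (1.0 - 0.3)    # sensor stand-in; true friction is 0.3
        model.reconcile(1.0, sensed)
    print(f"believed friction: {model.friction:.3f}")  # approaches 0.300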
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Letter to the editor
Date: 21 Jan 1999 00:00:00 GMT
Message-ID: <36a71a1f.0@news3.ibm.net>
References: <77sven$4uf@ux.cs.niu.edu> <77trt6$bek$1@nnrp1.dejanews.com>
<77tsv0$5ob@ux.cs.niu.edu> <77uj8g$ui4$1@nnrp1.dejanews.com>
<36a33f87.0@news3.ibm.net> <36A64CEE.41C6@cmu.edu>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Hans Moravec wrote in message <36A64CEE.41C6@cmu.edu>...
>Sergio Navega <snavega@ibm.net>:
>> ... But one can't put a functionality
>> into a design unless one knows how to do that ...
>
>That's completely wrong! Lots of things are
>discovered by accident, for instance when customers
>abuse a product for something it wasn't intended
>for and discover a neat new functionality in
>hindsight, which can then be optimized in a
>new round of product development.
>
On a second reading, I guess you're right. But part
of my point still has value. One will not discover
anything by accident if one isn't prepared to
*recognize* that something different (and important)
happened. And one will not make good use of it unless
one has a way to propagate this discovery into adequate
causal models. And one needs intelligence to do that.
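To make "prepared to recognize" minimally concrete: under the
(simplistic) assumption that surprise can be proxied by
statistical deviation, even a few lines of Python can notice
that something different happened. This toy is my own
illustration; real perception is, of course, the hard part:

    class SurpriseDetector:
        """Flags readings that deviate sharply from all seen so far."""
        def __init__(self, threshold=3.0):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0              # running squared deviations (Welford)
            self.threshold = threshold # std deviations counted as "surprising"

        def observe(self, x):
            surprising = False
            if self.n > 1:
                std = (self.m2 / (self.n - 1)) ** 0.5
                if std > 0 and abs(x - self.mean) / std > self.threshold:
                    surprising = True
            # update the running statistics either way
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            return surprising

    detector = SurpriseDetector()
    readings = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 5.0]  # the last is the "fire"
    print([detector.observe(r) for r in readings])    # only the final is True

Recognizing the anomaly is only the first step; propagating it
into causal models is where intelligence comes in.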
I want to distinguish here the "difference" between
a man and a monkey. The former is able to "discover"
fire and make good use of it, including warming himself.
But the latter, even if exposed thousands of times
to the very same event, will not "perceive" it.
My concern, then, is that we have a great chance of
producing a "monkey-robot" unless we understand very
well what's behind this mechanism of perception and
the use of perception to develop knowledge. That's why
I believe we have a serious problem yet to solve: to
understand how that mechanism works, well enough for it
to make a difference in our robots.
This is not to say that nobody is doing this kind
of research. Cognitive psychologists and cognitive
neuroscientists are advancing very fast. But one
way or another, their goal is to understand the
human mind. I don't know of many AI researchers
concerned with that kind of understanding. And
the AI guys are the ones supposed to make intelligent
machines.
>Natural evolution certainly put a lot of
>functionality into us without even having a mechanism
>for knowing how to do it in advance.
>Robots can get to where we are via short-sighted
>but persistent tinkering on our part, just as we got
>there through nature's blind but persistent tinkering.
>
That's right, although I'm afraid nature had a terrible
advantage over us: it had a lot of time.
>That said, I think, in broad-brush terms, my chapter
>4 scenario, which uses a physical/cultural/psychological
>simulation of the world that the robot keeps updated
>and tuned from experience as a glue between sense data
>and reasoning abstractions, is a plausible approach.
And I agree, I'm also optimistic. My fundamental concern
is that we haven't made much progress on that "detail"
of understanding the essential principles of intelligence.
We don't know what it means to be intelligent, and we're
acting as if we do (just check, for instance, the
proceedings of AAAI 98, 97, or any other year). We're
acting as if we had a perfectly established ground,
but I don't think we have it. This is, in my opinion,
dangerous, although fixable.
Regards,
Sergio Navega.