Selected Newsgroup Message

Dejanews Thread

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 12 Feb 1999 00:00:00 GMT
Message-ID: <7a1brd$lbv$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36c42795@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>John Ownby wrote in message
>Ask *any* AI researcher what are the practical
>applications of AI in use today and they will limit themselves
>to reciting the old stuff about neural nets in credit checking or
>stock trade prediction, expert systems in hospitals and
>configuration/maintenance enterprises, or speech recognition
>packages, or natural language access to databases. With few
>different things, all acclaimed AI applications are old hat.
>I even doubt that they can really be considered intelligent.

Of course, this "old stuff" is actually relatively new; NNs
in credit checking are less than ten years old.  Expert systems
have only been on the desktop since the mid-'80s, and so forth.

You'd be hard-pressed to find another field that reinvents itself
so thoroughly that someone can claim that something is "old hat"
fewer than ten years after the basic technology is introduced.
Even in computer science, people are still running N-th generation
copies of CP/M atop bloated 8088 CPUs and communicating via the
protocols developed by the Arpanet in the late 1960s.

And of course you no longer consider them intelligent.  But as
soon as you give a definition, I can point out things that meet it.

>In my vision, intelligent software is the one that *improves*
>its performance automatically, the more it runs (that's exactly
>the opposite of what happens with most of the softwares
>classified as AI), just like a baby when growing to adult.

Oh, you mean, like neural networks in finance.

>Not only improvement with *fixed*, previously designed algorithms
>(like neural nets in finance). But an improvement in which the
>software, by itself, detects some ways to perform better, to
>solve problems with methods analogized from past situations,
>to develop new heuristics and test it in the "mind's eye".

Oh.  You mean like neural networks in finance.

>
>It should learn from the user, even without its awareness.

Oh.  You mean like neural networks in finance.

>It should perceive things that the user does and attempt to
>correlate with its current status (sort of "sensory perceptions").

Oh.  You mean like neural networks in finance.

>One AI software should, after some time, "know" how its user
>works and proceed doing things for the user.

Oh.  You mean like neural networks in finance.

> Reasonable things,
>because it should have learned what is "reasonable".

Oh.  You mean like neural networks in finance.

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <36c82325@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c2f260.0@seralph9> <9B71BD6813034408.BB015143F38755BC.44E8787AFF1D28B0@library-proxy.airnews.net> <36c42795@news3.us.ibm.net> <7a1brd$lbv$1@quine.mathcs.duq.edu>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7a1brd$lbv$1@quine.mathcs.duq.edu>...
>In article <36c42795@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>John Ownby wrote in message
>>Ask *any* AI researcher what are the practical
>>applications of AI in use today and they will limit themselves
>>to reciting the old stuff about neural nets in credit checking or
>>stock trade prediction, expert systems in hospitals and
>>configuration/maintenance enterprises, or speech recognition
>>packages, or natural language access to databases. With few
>>different things, all acclaimed AI applications are old hat.
>>I even doubt that they can really be considered intelligent.
>
>Of course, this "old stuff" is actually relatively new; NNs
>in credit checking are less than ten years old.  Expert systems
>on the desktop are new since the mid-80's, and so forth.
>

That's old stuff, compared with the rest of the computer
industry. Computers today are ubiquitous. They are applied in
an exponentially growing number of applications. They have
permeated every area of our society, thousands of different
fields. However, the successful AI applications in use are
restricted to 4 or 5 areas, which have not changed significantly
in the last decade. Even within each of these 5 areas, AI's
presence is far from ubiquitous. Few enterprises (on a global
scale) use it. I can't believe that intelligence is an area
lacking potential applications. How do we explain the restricted
use of current AI? I say that we don't have intelligent systems;
it's all "fool's gold".

Give the industry six months and computers will have evolved
in new and significant directions, with older applications
growing significantly in functionality and new ones
being introduced. AI is not part of that pattern.
It is not developing at the same pace. It is evolving in a
different, much slower stream. We are missing something very
important, and that is what's keeping AI on the floor.

>You'd be hard-pressed to find another field that reinvents itself
>so thoroughly that someone can claim that something is "old hat"
>fewer than ten years after the basic technology is introduced.
>Even in computer science, people are still running N-th generation
>copies of CP/M atop bloated 8088 CPUs and communicating via the
>protocols developed by the Arpanet in the late 1960s.
>

The only reason we still use those "inherited" oldies is
compatibility. If that were not an issue, we'd have thrown
them all in the garbage long ago. The practical side of AI,
by contrast, is still stuck with the same applications it had
a decade ago. I see no practical examples of machine
"intelligence" that go beyond "expert systems" and "neural
networks", things that, by themselves, I am reluctant to call
intelligent.

>
>And of course you no longer consider them intelligent.  But as
>soon as you give a definition, I can point stuff out to you.
>

I won't risk a definition, because with complex concepts
such as intelligence, justice, honor and compassion, one
errs by trying to define them rigidly. Suffice it to
say that I don't know of any system able to learn by itself,
discover new things *and* use the methods and heuristics it
discovered in a *totally new domain*, *by itself*.

Am I asking for something too difficult here? Any 5-year-old child
is able to do that. NO computer on Earth is able to do that
(obviously, it's not the computer's fault :-).

General machine intelligence does not exist today. What exists today
are generally intelligent designers (that's us) who "fool" the
laymen into thinking that a particular program is intelligent.
Rubbish.

>
>>In my vision, intelligent software is the one that *improves*
>>its performance automatically, the more it runs (that's exactly
>>the opposite of what happens with most of the softwares
>>classified as AI), just like a baby when growing to adult.
>
>Oh, you mean, like neural networks in finance.
>

In a certain way, NNs can do that, in a limited form. What's missing,
then? Should we compare their performance with that of an expert
human analyst? The human expert was not born as such. He or she
learned, absorbed information and grasped the essential aspects.

Why would an expert human analyst show better overall
performance? I say that the human expert uses his general
intelligence to find correlations that *transcend* the
mere numerical information processed by the NN. He is able to
compare and correlate several distinct areas, create new
concepts, experiment with "what if" situations and use them
in a cohesive manner to achieve his goal. Please don't
tell me that NNs can also do "what if" simulations: in that
case you always need a human at the keyboard to say what
to do, and I already know the human is intelligent.

An NN, the way it is used today, will never grow into
an expert in its "area of expertise", not even with a hundred
years of experience, because an NN (as constructed today)
does not have enough horsepower to do it and because we
do not (yet) know how to do it.

>>Not only improvement with *fixed*, previously designed algorithms
>>(like neural nets in finance). But an improvement in which the
>>software, by itself, detects some ways to perform better, to
>>solve problems with methods analogized from past situations,
>>to develop new heuristics and test it in the "mind's eye".
>
>Oh.  You mean like neural networks in finance.

You must know something about NNs in finance that I don't.
I have never heard of anything of the kind. NNs are generalizers
that depend heavily on human intervention to select the problem
to attack and to interpret the results. An NN is little more
than a tool that a human operator uses to find interesting things,
much like an eyeglass.
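
To make concrete what I mean by "a tool plus a human operator" and by
"fixed, previously designed algorithms", here is a minimal sketch in
Python. It is not a real trading system: the data, the two indicators
and the thresholds are all invented, and the "net" is a single sigmoid
unit. The only point is to show where the human has to step in.

import math, random

random.seed(1)

# Human decision #1: which inputs to feed the net (two invented
# indicators) and what the target means (a synthetic "up next
# period" label).
examples = []
for _ in range(200):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = 1 if x[0] - 0.5 * x[1] > 0 else 0
    examples.append((x, y))

# The fixed, previously designed algorithm: one sigmoid unit trained
# by plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(50):
    for x, y in examples:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Human decision #2: how to read the output (somebody must pick the
# threshold and decide what to do with the "signal").
p = 1.0 / (1.0 + math.exp(-(w[0] * 0.8 + w[1] * (-0.2) + b)))
print("signal %.2f -> %s" % (p, "buy" if p > 0.6 else "no action"))

All the fitting loop does is adjust the weights; deciding what goes
in and what the output means stays entirely with the human.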

>
>>
>>It should learn from the user, even without its awareness.
>
>Oh.  You mean like neural networks in finance.
>
>>It should perceive things that the user does and attempt to
>>correlate with its current status (sort of "sensory perceptions").
>
>Oh.  You mean like neural networks in finance.
>
>>One AI software should, after some time, "know" how its user
>>works and proceed doing things for the user.
>
>Oh.  You mean like neural networks in finance.
>
>> Reasonable things,
>>because it should have learned what is "reasonable".
>
>Oh.  You mean like neural networks in finance.
>

You definitely know something about NNs in finance that transcends
my knowledge. In my experience, NNs are cumbersome, difficult and
slow to train, and they occasionally present "false hits". Any
5000-node NN will bring a Pentium II to its knees.
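
Just to show where that impression comes from, a back-of-the-envelope
sketch in Python (every figure below is an assumption, not a
benchmark):

# Rough cost of one training epoch for a fully connected net of that
# size on a late-90s desktop machine.  Assumed figures only.
nodes = 5000                      # one fully connected hidden layer
weights = nodes * nodes           # ~25 million weights
flops_per_example = 6 * weights   # ~2 flops/weight for forward,
                                  # backward and update passes
examples_per_epoch = 10000
pentium2_flops = 2.0e8            # assume ~200 MFLOPS sustained

seconds = flops_per_example * examples_per_epoch / pentium2_flops
print("roughly %.1f hours per training epoch" % (seconds / 3600.0))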

Besides, at well under 10K nodes, any NN in use today is
significantly more stupid than the average "household" cockroach.

I may sound a little bitter in these comments, but my goal is just
to keep one attitude fresh: we do not yet know how this thing we
call intelligence works. In Kuhnian terms, we ought to be searching
for a paradigm shift. Given the current paradigms, I don't think we
will get AI merely by evolving today's techniques. I feel we're
missing something important, maybe something still to be discovered.

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <7a9ci2$dku$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c42795@news3.us.ibm.net> <7a1brd$lbv$1@quine.mathcs.duq.edu> <36c82325@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36c82325@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>>In my vision, intelligent software is the one that *improves*
>>>its performance automatically, the more it runs (that's exactly
>>>the opposite of what happens with most of the softwares
>>>classified as AI), just like a baby when growing to adult.
>>
>>Oh, you mean, like neural networks in finance.
>>
>
>In a certain way, NN can do that in a limited form. What's missing
>then? Should we compare its performance with an expert human
>analyst? The human expert was not born as such. He/She learned,
>absorbed information and grasped the essential aspects.
>
>Why would an expert human analyst present better overall
>performance?

The problem with this is that, bluntly, there are a lot of domains
where an expert human does *NOT* do better than neural networks;
these domains include some corners of the financial industry.  It
helps, of course, that financial "experts" are routinely outperformed
by throwing darts at pages of stock listings, and that the overall
performance of, for example, the mutual funds industry looks so much
like a Gaussian curve as to be frightening.
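
Just to make the dartboard point concrete, here is a quick toy
simulation in Python.  The returns are synthetic and the parameters
invented; the only thing that matters is the shape of the outcome
distribution.

import random

random.seed(0)
n_stocks, n_funds, picks = 500, 2000, 20
# Synthetic one-year returns: mean 8%, st. dev. 30% (made-up numbers).
stock_returns = [random.gauss(0.08, 0.30) for _ in range(n_stocks)]

# Each "fund" is a dartboard portfolio of 20 stocks picked at random.
fund_returns = []
for _ in range(n_funds):
    chosen = random.sample(range(n_stocks), picks)
    fund_returns.append(sum(stock_returns[i] for i in chosen) / picks)

mean = sum(fund_returns) / n_funds
sd = (sum((r - mean) ** 2 for r in fund_returns) / n_funds) ** 0.5

# Crude text histogram: the outcomes pile up symmetrically around the
# mean, i.e. they look suspiciously Gaussian.
for lo in (-2, -1, 0, 1):
    band = sum(1 for r in fund_returns if lo <= (r - mean) / sd < lo + 1)
    print("%+d..%+d sd  %s" % (lo, lo + 1, "#" * (band // 50)))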

If your metric for "intelligences" is just better-than-human performance,
we've had "intelligent" systems since the 70s.

>>>It should learn from the user, even without its awareness.
>>
>>Oh.  You mean like neural networks in finance.
>>
>>>It should perceive things that the user does and attempt to
>>>correlate with its current status (sort of "sensory perceptions").
>>
>>Oh.  You mean like neural networks in finance.
>>
>>>One AI software should, after some time, "know" how its user
>>>works and proceed doing things for the user.
>>
>>Oh.  You mean like neural networks in finance.
>>
>>> Reasonable things,
>>>because it should have learned what is "reasonable".
>>
>>Oh.  You mean like neural networks in finance.
>>
>
>You definitely know of something about NN in finance that transcends
>my knowledge. For me, NNs are cumbersome, difficult and slow to train
>and occasionally presents "false hits". Any 5000 node NN will put a
>Pentium II to its knees.

So?  This isn't an algorithmic problem -- this is simply a problem of
our theory outstripping the hardware.   You might as well complain
that computers can't play chess well on the basis that your old
Sinclair doesn't run fast enough. 

A typical human MBA's brain has about 10^12 nodes, all learning in
parallel, and 25 years of continuous stimulation in order to teach it.
Give me a Pentium capable of that kind of computing power and we'll
discuss the time it takes to train.
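
To put the mismatch in perspective, a crude sketch using the figure
above (the rates are guesses; only the order of magnitude matters):

# Blunt assumptions throughout; this is not a measurement.
nodes = 10 ** 12                  # "nodes" in the MBA's brain, as above
updates_per_node_per_sec = 100    # assumed average update rate
training_years = 25
seconds = training_years * 365 * 24 * 3600

total_updates = nodes * updates_per_node_per_sec * seconds
pentium_updates_per_sec = 2.0e8   # assume ~200 million updates/second

print("Pentium-years to match: %.0e"
      % (total_updates / pentium_updates_per_sec / (365 * 24 * 3600)))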

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <36c86212@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c42795@news3.us.ibm.net> <7a1brd$lbv$1@quine.mathcs.duq.edu> <36c82325@news3.us.ibm.net> <7a9ci2$dku$1@quine.mathcs.duq.edu>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7a9ci2$dku$1@quine.mathcs.duq.edu>...
>In article <36c82325@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>>>In my vision, intelligent software is the one that *improves*
>>>>its performance automatically, the more it runs (that's exactly
>>>>the opposite of what happens with most of the softwares
>>>>classified as AI), just like a baby when growing to adult.
>>>
>>>Oh, you mean, like neural networks in finance.
>>>
>>
>>In a certain way, NN can do that in a limited form. What's missing
>>then? Should we compare its performance with an expert human
>>analyst? The human expert was not born as such. He/She learned,
>>absorbed information and grasped the essential aspects.
>>
>>Why would an expert human analyst present better overall
>>performance?
>
>The problem with this is that, bluntly, there are a lot of domains
>where an expert human does *NOT* do better than neural networks;
>these domains include some corners of the financial industry.  It
>helps, of course, that financial "experts" are routinely outperformed
>by throwing darts at pages of stock listings, and that the overall
>performance of, for example, the mutual funds industry looks so much
>like a Gaussian curve as to be frightening.
>
>If your metric for "intelligences" is just better-than-human performance,
>we've had "intelligent" systems since the 70s.
>

I am constantly outperformed by my pocket calculator. I cannot say that
I'm less intelligent because of that. If neural nets get better results
than human experts on some financial predictions, what should I conclude?
That NNs are more intelligent than us (or even that they represent
some kind of intelligence)? My "metric" for intelligence does not
include this kind of behavior.

>>[snip]
>>You definitely know of something about NN in finance that transcends
>>my knowledge. For me, NNs are cumbersome, difficult and slow to train
>>and occasionally presents "false hits". Any 5000 node NN will put a
>>Pentium II to its knees.
>
>So?  This isn't an algorithmic problem -- this is simply a problem of
>our theory outstripping the hardware.   You might as well complain
>that computers can't play chess well on the basis that your old
>Sinclair doesn't run fast enough.

Oh, well, I knew we would get this far. I guess it's the same old
problem: Smolensky and McClelland vs. Fodor and Pylyshyn. The former
need more hardware, because the theory is fine; the latter say there's
enough hardware, we just need a little more theory. What if
neither side is right in its essential suppositions? Or better,
what if both camps are right in some of their suppositions?

What I'm arguing here is that we are not aware of the essential
points behind intelligent behavior. If Fodor and McClelland can't
understand each other's positions, and if both have *very* strong
arguments to justify their positions (each with a horde of
followers), then for me something new (a third vision) must be
lurking. I have the impression that we will make tremendous
progress once we *identify* this third vision, one that can
accommodate not only the points of the connectionists and the
symbolicists but also the ideas of those who treat the brain as a
dynamic, chaotic system (Kelso, Holland and others). I think
we still don't know what it is to be intelligent, and for that
reason I can't agree that NNs are the solution to our problems.

>
>A typical human MBA's brain has about 10^12 nodes, all learning in
>parallel and 25 years of continuous stimulation in order to teach it.

And yet this MBA's brain is not able to perform as well as a simple
financial NN with a tiny number of neurons. This tells me that the
NN is solving something *very* different in nature from what the
MBA's brain is prepared to solve. I'm after the brain's kind of
intelligence: the kind of perceptual intelligence that
makes a 5-year-old child a *challenge* to any AI architecture
of today.

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 15 Feb 1999 00:00:00 GMT
Message-ID: <7a9uuk$ebh$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c82325@news3.us.ibm.net> <7a9ci2$dku$1@quine.mathcs.duq.edu> <36c86212@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36c86212@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>Patrick Juola wrote in message <7a9ci2$dku$1@quine.mathcs.duq.edu>...
>>In article <36c82325@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>>>>In my vision, intelligent software is the one that *improves*
>>>>>its performance automatically, the more it runs (that's exactly
>>>>>the opposite of what happens with most of the softwares
>>>>>classified as AI), just like a baby when growing to adult.
>>>>
>>>>Oh, you mean, like neural networks in finance.
>>>>
>>>
>>>In a certain way, NN can do that in a limited form. What's missing
>>>then? Should we compare its performance with an expert human
>>>analyst? The human expert was not born as such. He/She learned,
>>>absorbed information and grasped the essential aspects.
>>>
>>>Why would an expert human analyst present better overall
>>>performance?
>>
>>The problem with this is that, bluntly, there are a lot of domains
>>where an expert human does *NOT* do better than neural networks;
>>these domains include some corners of the financial industry.  It
>>helps, of course, that financial "experts" are routinely outperformed
>>by throwing darts at pages of stock listings, and that the overall
>>performance of, for example, the mutual funds industry looks so much
>>like a Gaussian curve as to be frightening.
>>
>>If your metric for "intelligences" is just better-than-human performance,
>>we've had "intelligent" systems since the 70s.
>>
>
>
>I am constantly outperformed by my pocket calculator. I cannot say that
>I'm less intelligent because of that. If neural nets get better results
>than human experts on some financial predictions, what should I conclude?
>That NNs are more intelligent than us (or even that they represent
>some kind of intelligence)? My "metric" for intelligence does not
>include this kind of behavior.

So, simply being better than humans isn't necessarily a sign of
intelligence -- I can live with that.  However, note the
shifting-definition problem.

You first proposed that intelligent software should improve its performance
(implication : intelligence is the ability to learn from mistakes);
when I pointed out that neural networks do learn, you dropped back to
the position that learning is necessary-but-not-sufficient and that
there's something more that, specifically "present[s] better overall
performance."  (implication : intelligence is the ability to learn TO
A SPECIFIED LEVEL OF PERFORMANCE).

When I pointed out that neural networks can achieve human-level -- in
fact, superhuman-level -- performance on some tasks, you suggested
that, again, there's something missing.

Which, as I said, I can live with.

BUT WHAT IS IT THAT'S MISSING?

It's *not*, as you suggest, the ability to learn.  It's *not* the
ability to adapt to new requirements and do the ``reasonable'' thing;
AI systems now do that routinely.

>>>[snip]
>>>You definitely know of something about NN in finance that transcends
>>>my knowledge. For me, NNs are cumbersome, difficult and slow to train
>>>and occasionally presents "false hits". Any 5000 node NN will put a
>>>Pentium II to its knees.
>>
>>So?  This isn't an algorithmic problem -- this is simply a problem of
>>our theory outstripping the hardware.   You might as well complain
>>that computers can't play chess well on the basis that your old
>>Sinclair doesn't run fast enough.
>
>Oh, well, I knew we would get this far. I guess it's the same old
>problem, Smolensky,McClelland x Fodor,Pylyshyn. The formers need
>more hardware, because theory is ok, the latter say there's
>enough hardware, we just need a little bit more theory. What if
>none of them are right, in their essential suppositions? Or better,
>what if both camps are right, in some of their suppositions?

All of which are possible -- and in science's current state, we
simply don't know which, if any of them, are correct.

However, this sort of ignorance cuts both ways.  If Smolensky and
McClelland don't KNOW that more hardware will necessarily produce
intelligence, neither do their detractors KNOW that the problem
isn't simply a lack of hardware.   Stating that "we won't have
intelligence because systems can't...." or stating that "the
current paradigm cannot produce intelligent machines" without
giving principles and only giving examples of things that have
not yet been achieved, is both foolish and premature.

>I think
>we still don't know what it is to be intelligent and for that
>reason I can't agree that NNs are the solution to our problems.

But then why are you falling all over yourself to claim that
NNs are *NOT* the solution to our problems?

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 16 Feb 1999 00:00:00 GMT
Message-ID: <36c98582@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c82325@news3.us.ibm.net> <7a9ci2$dku$1@quine.mathcs.duq.edu> <36c86212@news3.us.ibm.net> <7a9uuk$ebh$1@quine.mathcs.duq.edu>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7a9uuk$ebh$1@quine.mathcs.duq.edu>...
>In article <36c86212@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>Patrick Juola wrote in message <7a9ci2$dku$1@quine.mathcs.duq.edu>...
>>>The problem with this is that, bluntly, there are a lot of domains
>>>where an expert human does *NOT* do better than neural networks;
>>>these domains include some corners of the financial industry.  It
>>>helps, of course, that financial "experts" are routinely outperformed
>>>by throwing darts at pages of stock listings, and that the overall
>>>performance of, for example, the mutual funds industry looks so much
>>>like a Gaussian curve as to be frightening.
>>>
>>>If your metric for "intelligences" is just better-than-human performance,
>>>we've had "intelligent" systems since the 70s.
>>>
>>
>>
>>I am constantly outperformed by my pocket calculator. I cannot say that
>>I'm less intelligent because of that. If neural nets get better results
>>than human experts on some financial predictions, what should I conclude?
>>That NNs are more intelligent than us (or even that they represent
>>some kind of intelligence)? My "metric" for intelligence does not
>>include this kind of behavior.
>
>So, simply being better than humans isn't necessarily a sign of
>intelligence --
>I can live with that.  However, note the shifting-definition problem.
>
>You first proposed that intelligent software should improve its performance
>(implication : intelligence is the ability to learn from mistakes);
>when I pointed out that neural networks do learn, you dropped back to
>the position that learning is necessary-but-not-sufficient and that
>there's something more that, specifically "present[s] better overall
>performance."  (implication : intelligence is the ability to learn TO
>A SPECIFIED LEVEL OF PERFORMANCE).
>

I don't have to specify a desired (future) level of performance just
to know that I'm improving; it suffices to do better than yesterday.
But I accept that the concept of intelligence is complex enough
that it cannot easily be summarized in a few necessary/sufficient
conditions.

>When I pointed out that neural networks can achieve human-level -- in
>fact, superhuman-level -- performance on some tasks, you suggest
>that, again, there's something missing.
>
>Which, as I said, I can live with.
>
>BUT WHAT IS IT THAT'S MISSING?
>
>It's *not*, as you suggest, the ability to learn.  It's *not* the
>ability to adapt to new requirements and do the ``reasonable'' thing;
>AI systems now do that routinely.
>

Obviously, finding exactly what is missing is most of the way to
discovering how to solve the whole problem. I can tell you that I
don't have that answer, only clues, which I'll try to share later in
this post.

>
>
>>>>[snip]
>>>>You definitely know of something about NN in finance that transcends
>>>>my knowledge. For me, NNs are cumbersome, difficult and slow to train
>>>>and occasionally presents "false hits". Any 5000 node NN will put a
>>>>Pentium II to its knees.
>>>
>>>So?  This isn't an algorithmic problem -- this is simply a problem of
>>>our theory outstripping the hardware.   You might as well complain
>>>that computers can't play chess well on the basis that your old
>>>Sinclair doesn't run fast enough.
>>
>>Oh, well, I knew we would get this far. I guess it's the same old
>>problem, Smolensky,McClelland x Fodor,Pylyshyn. The formers need
>>more hardware, because theory is ok, the latter say there's
>>enough hardware, we just need a little bit more theory. What if
>>none of them are right, in their essential suppositions? Or better,
>>what if both camps are right, in some of their suppositions?
>
>All of which are possible -- and in science's current state, we
>simply don't know which, if any of them, are correct.
>
>However, this sort of ignorance cuts both ways.  If Smolensky and
>McClelland don't KNOW that more hardware will necessarily produce
>intelligence, neither do their detractors KNOW that the problem
>isn't simply a lack of hardware.   Stating that "we won't have
>intelligence because systems can't...." or stating that "the
>current paradigm cannot produce intelligent machines" without
>giving principles and only giving examples of things that have
>not yet been achieved, is both foolish and premature.
>

I agree. And this is why I insist on looking for general principles
of intelligence, apart from any previously devised architecture.
In reality, I shouldn't have said that "NNs cannot be as intelligent
as humans". I should have started with "NNs may not be the best
starting point for finding out what it is to be intelligent".

>>I think
>>we still don't know what it is to be intelligent and for that
>>reason I can't agree that NNs are the solution to our problems.
>
>But then why are you falling all over yourself to claim that
>NNs are *NOT* the solution to our problems?
>

If I appeared to claim that NNs are not the solution, I want to
retract that now and say that they don't *seem* to be the solution.
As far as we know, it is an open issue. But I think I can give
you some reasons to justify my impression. Sorry if this gets too
long and rambling.

Since I started studying AI, I've seen a lot of approaches to
intelligence that made some sense in limited domains. The logic
approach, following the mathematician's point of view, is able
to give some good results. But it failed (or at least proposed
overly complex solutions) on "simple problems" like the frame
problem. Semantic networks, conceptual graphs, description
logics, etc., being similar, were in much the same situation.

Then I turned to the study of our brain as the only way
to get a clear view of an already working machine with that
kind of capacity. Let's set aside for a while Tversky and
Kahneman's arguments about the inadequacy of most humans at
rational thinking. Our brain is the best model of an intelligent
machine we have.

If our brain seems to be made of a network of nodes with some
evidence of Hebbian-like learning mechanisms, this could easily
push us toward solutions based on NNs. But there's another side
to it. Our brain may be the technical solution that nature
found to satisfy "deeper principles". What are they?

Whatever mechanism our brain uses, we have some kind of
pyramid: the initial levels handle noisy sensory inputs in a
statistical manner, very appropriate for NNs. But the higher levels
leave that behavior aside. The higher we climb in this architecture,
the more we find crisp, symbol-like structures, to the point of
suggesting the PSSH (the physical symbol system hypothesis) to
Newell and Simon. If they found it reasonable to propose this
hypothesis, and they were not stupid, then something very strong
must be lurking behind it. I don't want to accept their hypothesis;
I want to understand *why* they proposed it.

Language is essentially a symbolic and deterministic structure,
although with lots of statistically related elements; the success
of statistical NLP attests to this. This may suggest that language
sits somewhere in the "middle" of a mechanism in which
statistical regularities are important below and solid,
rule-like mechanisms sit above. This may sound like a hybrid
architecture, but I believe we should avoid thinking in terms of
architectures at this point.
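
As a throwaway illustration of the statistical end of that pyramid,
a few lines of Python over a made-up scrap of text (far too small to
mean anything, but enough to show regularities being picked up by
simple counting):

from collections import defaultdict

corpus = ("the market fell and the traders sold and "
          "the market fell again and the market recovered").split()

# Count which word follows which.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def most_likely_after(word):
    followers = bigrams[word]
    if not followers:
        return None
    return max(followers.items(), key=lambda kv: kv[1])[0]

print(most_likely_after("the"))     # -> "market" (3 times vs. 1 for "traders")
print(most_likely_after("market"))  # -> "fell"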

A mathematician, when reasoning, may use a lot of internal,
inductive and statistical heuristics, but the end result will
be a sharp, "straight to the point" equation that does not
resemble the lower levels. We *desire* that crisp, neat
level, that exactness. When we draw up the plans for a nuclear
submarine, we want them to be right down to the tiniest detail.

Now, you could say that this kind of behavior can appear naturally
in some neural net architectures. Yes, that is possible. The
big question is: will we understand its working principles?
Will we be aware of the mechanisms behind this statistical-to-symbolic
transformation? Will we be able to scientifically bend its workings
to better suit our purposes? Will we really understand it, or just
see the "magic" of its emergence?

Add another level of analysis. Our brain has been shown to be
functionally modular: we have areas specialized in phonological
processing, semantic coordination, visual cognition, etc.
Some think this is an innate disposition of things. I constantly
find indications that this is not so (I am a strong opponent
of the innateness of language). Studies of brain-damaged
patients, as you probably know better than I do, reveal
a strong adaptive capacity, with plasticity at its
maximum during childhood. The brain is an organism that adapts
itself, capable of reusing idle areas for another purpose
(as studies of tactile development in blind subjects reveal),
while maintaining a modular, interdependent organization.

Now let's suppose, just for a while, that this aspect of our
brain's development is not what we must copy in our attempts at
implementation. Let's suppose that this is the solution that
nature "found" to evolve a thinking mechanism based on biological
matter. What we should do is understand the basic goals
behind this machine in order to be able to implement it using
*another* kind of technology. And this is clear because computers
(digital electronics) are *essentially* different from biological
neurons. It is not wise to try to reproduce in computers something
that works with action potentials and synaptic plasticity.

So what I've set out to do is to find the mechanisms
involved and describe them *without sticking to any
specific implementation or architectural strategy*. I think
that we must understand those essential mechanisms perfectly
to be able to implement them in *any* kind of
device. Could a thermostat be more "intelligent"?
Yes, if we understood the essential points it must
obey.

Now it is time for me to speculate, and I hope you'll forgive
me if I say anything nonsensical. In fact, since I am betting on
these principles (and collecting evidence to support them),
I would very much appreciate any reasoned comments on
them.

The list that follows is my "shopping list" for intelligence.

a) Innate sensory transducers and innate feature detectors
Our eyes, ears, etc, were provided by nature to process
the initial levels of transduction of signals. Close to
this level, we have innate circuits that are able to
extract some information from raw data (examples are
cochlear hair cells). Here I base my ideas on the
current knowledge of neuroscience (several authors).

b) Perception of regularity, perception of anomaly
This is my most cherished principle. Most of the things we
perceive as intelligent have, at their root, some kind
of perception of regularity or of its absence
(something that was regular and then suddenly is not anymore).
If I start my day by checking my e-mail, then my computer
*should* learn this and activate my Internet connection as
soon as I boot it, because it *should* perceive that this is
happening often (a toy sketch of this idea follows the list).
This item is somewhat linked to the ideas of J. Gerard Wolff
on computing as compression.

c) Conceptualization, Categorization
Using principle b), the entity should perceive which
concepts can be grouped under the same category
(sensorimotor information is essential in helping this
process). Lots of false hits disappear with time (apples
and wheels are both examples of round objects, but that link
is not as strong as apples and oranges being examples of fruit).
This item is supported by the massive body of work of cognitive
psychologists (Lakoff, Thagard, Holyoak, Barsalou, etc.).

d) Spreading activation
A thought is the spreading of activations in parallel and
in a competitive manner (much like a darwinian environment).
This follows the speculations of William Calvin and Gerald
Edelman.

e) Maintenance of Causal structures
Causal models are derived from the structure of sensorimotor
patterns coupled with categorization of action concepts. The
rest is analogical reasoning (Holyoak, Gentner, etc).
For this item I like the studies of Pearl, Shafer and others,
but compare their requirements with what we learn from
neuroscience and cognitive psychology.

f) A source for randomness
Another of my favorites. There are two aspects to account for:
first, stochastic resonance, which is mostly an optimization
factor; the other is the effect on creativity.
I definitely doubt that any system would be able to present
creative behavior without a source of random signals. And
without some sort of creativity, I doubt that we can consider
something to be intelligent. This is the element that lets
the entity think "outside the box" and *discover* what no
other entity has discovered before. Discovery is, in my
opinion, the cradle of most of our knowledge.
I understand creativity after Boden, Sternberg, Weisberg
and others, and I see sources of randomness in neurons in
the work of some neurophysiologists (William Calvin is one
example).
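
To make item (b) a little more concrete, here is the toy sketch in
Python promised above. The event names and thresholds are invented;
the only point is "notice what usually follows what, act on it once
it is regular enough, and flag it when the regularity breaks":

from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def observe(prev_event, event):
    counts[prev_event][event] += 1

def habitual_next(prev_event, min_total=5, min_share=0.8):
    """The action that usually follows, or None if nothing is regular yet."""
    total = sum(counts[prev_event].values())
    if total < min_total:
        return None
    event, n = max(counts[prev_event].items(), key=lambda kv: kv[1])
    return event if n / float(total) >= min_share else None

def is_anomaly(prev_event, event):
    """True when an established regularity is broken."""
    usual = habitual_next(prev_event)
    return usual is not None and event != usual

# Nine mornings out of ten, checking mail follows the boot...
for _ in range(9):
    observe("boot", "check_mail")
observe("boot", "play_solitaire")

print(habitual_next("boot"))                 # -> check_mail
print(is_anomaly("boot", "play_solitaire"))  # -> True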

Today I firmly believe that the greater part of the answer
to the puzzle of intelligence lies in work *already done*
by a handful of scientists. As an exercise for myself, I have
listed below the lines of work most important in my view:

- Perceptual theories of cognition (the Gibsons)
- Dynamic and chaotic models of mind (Kelso, Holland, Thelen, Smith)
- Sensorimotor development on children (Piaget, Karmiloff-Smith)
- Innateness of Language (I'm closer to Elman's position)
- Neural coding by populations of neurons (Wolf Singer, Calvin)
- Neural darwinism (Edelman, Calvin)
- Importance of Inductive methods (Thagard, Holyoak, Nisbett)
- Analogy and Metaphor (Lakoff, Veale, Gentner, Johnson)
- Models of Creativity (Boden, Sternberg, Weisberg)
- Copycat models (Hofstadter, Mitchell, French)
- Models of Imagery (Kosslyn)
- Origins of Intelligence (Mithen, Donald, Dunbar, Bickerton)
- Implicit Learning (Berry, Dienes, Cleeremans, Reber)
- Cognition as Compression (Wolff)
- Connectionist models (McClelland, Miikkulainen, Shastri, Sirosh, Sejnowski)
- Foundations of symbolicist models (Newell's SOAR, Anderson ACT-R)
- Language Acquisition (Pinker, Elman, Bates)
- Stories and Scripts (Schank)
- Grounding, categorization, perception (Harnad, Barsalou)
- more (don't remember now)

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 16 Feb 1999 00:00:00 GMT
Message-ID: <7ac3t7$g4a$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c86212@news3.us.ibm.net> <7a9uuk$ebh$1@quine.mathcs.duq.edu> <36c98582@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>
>If I appeared to claim that NNs are not the solution, I want to
>retract now and say that they don't *seem* to be the solution.

Acknowledged and agreed.

>It is, as much as we know, an open issue. But I think I can give
>you some reasons to justify my impression. Sorry if it seems too
>long and rambling.
>
>Since I started studying AI, I've seen a lot of approaches to
>intelligence that made some sense in limited domains. The logic
>approach, following the mathematicians point of view, is able
>to give some good results. But it failed (or at least proposed
>too complex solutions) to "simple problems" like the frame
>problem. In much the same situation (because of similarity)
>were semantic networks, conceptual graphs, description
>logics, etc.
>
>Then, I turned to see the study of our brain as the only way
>to get a clear vision of an already working machine with that
>kind of capacity. Lets forget for a while Tversky & Kahneman's
>arguments for the inadequacy of most humans for rational
>thinking. Our brain is the best model of intelligent machine
>we have.

Well, right here you've hit a major stumbling block in the
definition of intelligence; a lot of philosophers, both
professional and otherwise, have claimed that the central core
of intelligence is some form of 'human-like' behavior.  Even more
have claimed that an important part of intelligence is producing
seemingly 'intelligent' behavior via human-like mechanisms.
This is why, for example, Deepest Blue will not be regarded as
'intelligent' irrespective of how good it gets at playing chess;
throwing more hardware at an exhaustive search of the chess tree
doesn't appear to be the approach that humans apply.

But this also suggests that the T/K data, and in particular
the *irrationality* they show can be regarded as key to
'intelligence' -- suggesting that intelligence is at least partially
a question of making the right mistakes.  I've seen it claimed in
print that 'intelligence' and 'rationality' are inherently incompatible....

But I digress somewhat.  If what you want is an intelligent, rational
machine -- a Vulcan in a beige plastic case -- that's fine with me.

>If our brain seems to be made of a network of nodes with some
>evidences of hebbian-like learning mechanisms, this could easily
>put us toward solutions based on NNs. But there's another side
>to it. Our brain may be the technical solution that nature
>found to satisfy "deeper principles". What are they?

[snip, some rearrangement]
>The list that follows is my "shopping list" for intelligence.
>
>a) Innate sensory transducers and innate feature detectors
>Our eyes, ears, etc, were provided by nature to process
>the initial levels of transduction of signals. Close to
>this level, we have innate circuits that are able to
>extract some information from raw data (examples are
>cochlear hair cells). Here I base my ideas on the
>current knowledge of neuroscience (several authors).

In principle, I like the idea of tying intelligence into
sensory perception.  Unfortunately, I haven't yet seen a version
that I like.  The main problem is that, bluntly, I have little
evidence for (or liking for) the principle that someone, e.g.,
blind or deaf from birth has their intelligence somehow
'damaged'; I think it's completely plausible that *any* form
of sensory input would qualify just as well.  If some mad
Frankensteinian scientist were to take a newborn baby and
cyborg it to a radio receiver instead of normal hearing,
I think that we would simply observe a child growing up and
'hearing' radio instead of sound waves. 

So the question then becomes what one regards as 'sensory input';
isn't the keyboard (or Ethernet card) just another 'sense' that
computers have that we as humans don't share?  Babies hear phonemes;
computers 'hear' keystrokes, and the requirement for sensory input
is met automatically by any information processing system.  If this
isn't the case, what's the fundamental difference between sound
and keystrokes?

>b) Perception of regularity, perception of anomaly
>This is my most cherished principle. Most of the things we
>perceive as being intelligent have, in its root, some kind
>of perception of occurring regularity or lack of regularity
>(something that was regular and then suddenly is not anymore).
>If I start my day checking my e-mail, then my computer
>*should* learn this and activate my Internet connection as
>soon as I boot it, because it *should* perceive that this is
>happening often.

... unless, of course, there's some overriding reason that the
computer shouldn't do that.  Just because something happens often
doesn't mean it's a good thing to do.  If every time I logged in the
first thing I did was to delete my .cshrc and swear fluently about
my idiocy, would that mean that the computer should start to
delete and swear for me?

But yes, this is reasonable.

>c) Conceptualization, Categorization
>Using principle b), the entity should perceive what are
>the concepts that can be grouped under the same category
>(sensorimotor information is essential in helping this
>process). Lots of false hits disappear with time (apples
>and wheels are examples of round objects, but this is not
>so strong as apples and oranges being examples of fruits).
>This item is supported by the massive works of cognitive
>psychologists (Lakoff, Thagard, Holyoak, Barsalou, etc).

Again, completely reasonable if you assume that the groupings
that humans learn are learnable/categorizable by the computer.
This may address the question of what constitutes 'senses'
I posed above -- there's a paper by Cottrell and Someone (or
maybe Someone and Cottrell) in CogSci-98 suggesting this.

>d) Spreading activation
>A thought is the spreading of activations in parallel and
>in a competitive manner (much like a darwinian environment).
>This follows the speculations of William Calvin and Gerald
>Edelman.

You're getting dangerously close to assuming a neural network-like
architecture here.

>e) Maintenance of Causal structures
>Causal models are derived from the structure of sensorimotor
>patterns coupled with categorization of action concepts. [...]
>
>f) A source for randomness
>Another one of my preferred. Two aspects to account: first,
>stochastic resonance, which is most an optimization factor.
>The other is the effect on creativity.
>I definitely doubt that any system would be able to present
>creative behavior without a source of random signals. [...]

My primary concern with this list is that, simply speaking, if
you had written it in 1990 it would have passed muster under
the title "The SubSymbolicist Manifesto"; the things you
require for the most part are not only present in the models
we group under the heading of NNs, but are usually advertised
as features. 

So why are you dissatisfied with NNs as a technical underpinning
for AI?

Earlier you wrote :
>Now you can say that this kind of behavior can appear naturally
>in some neural nets architectures. Yes, that is possible. The
>big question is: will we understand its working principles?
>Will we be aware of the mechanisms behind this statistic to
>symbolic transformation? Will we be able to scientifically
>bend its workings to better suit our purposes? Will we really
>understand it or just see the "magic" in its emergence?

Are you implicitly claiming that we need to understand a technology
for it to be useful -- or are you claiming that for something
to be AI, it must be both 'intelligent' and understandable?

The problem with this is that I find it very difficult to believe
that *any* mature technology will be understandable in the deeper
sense.   Just as an example, I am supposed to be an 'expert' in
operating systems  -- hell, I've taught the class several times --
but I don't presume to understand every line of code distributed
with Linux.  I understand the overall architecture, but a lot of
the implementation details rely on specialized knowledge that I
don't have, and the inner workings of the device drivers are
a complete black art to me because I don't understand the individual
devices in sufficient detail.  And I certainly don't understand the
functioning of the individual chips in any kind of a detailed way.
Why won't my machine run when overclocked by 150%?  If I *understood*
the chips in detail, I could tell you what fails -- as is, I simply
know it doesn't work.

On the other hand, the primary designer of the chip inside my machine
could probably tell me exactly what goes wrong with the chip, but
may have no idea of how Linux handles the directory structures.

Neural networks may, and I stress may, end up in the same situation.
There are at least three levels at which they can be studied -- one is
the analysis of the 'neurons' themselves as individuals and how they
can be tuned to various tasks.  A second is the study of the network
architecture, treating each neuron as some form of magic black box
that converts inputs to outputs and changes over time.  A third is
a study of the overall property of the network -- what is, for example,
the role of frequency in the learning of a general network.  You can
see the same sort of division in human scientists -- some scientists
will study the architecture and functioning of one particular aspect
of the brain in detail (what *do* the cells in the hippocampus look
like, anyway?), some will study the overall architecture of the brain
(how does the hippocampus connect with the rest of the brain?), and
some study ''intelligence'' as a system-wide effect (how do people
with damaged hippocampi learn -- and how is it different from normal
learning?)  And I don't really believe that there's any one person
who is simultaneously at the cutting-edge of all three questions
posed above.

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 17 Feb 1999 00:00:00 GMT
Message-ID: <36cad851@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c86212@news3.us.ibm.net> <7a9uuk$ebh$1@quine.mathcs.duq.edu> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu>
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7ac3t7$g4a$1@quine.mathcs.duq.edu>...
>In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>>
>>If I appeared to claim that NNs are not the solution, I want to
>>retract now and say that they don't *seem* to be the solution.
>
>Acknowledged and agreed.
>
>>It is, as much as we know, an open issue. But I think I can give
>>you some reasons to justify my impression. Sorry if it seems too
>>long and rambling.
>>
>>Since I started studying AI, I've seen a lot of approaches to
>>intelligence that made some sense in limited domains. The logic
>>approach, following the mathematicians point of view, is able
>>to give some good results. But it failed (or at least proposed
>>too complex solutions) to "simple problems" like the frame
>>problem. In much the same situation (because of similarity)
>>were semantic networks, conceptual graphs, description
>>logics, etc.
>>
>>Then, I turned to see the study of our brain as the only way
>>to get a clear vision of an already working machine with that
>>kind of capacity. Lets forget for a while Tversky & Kahneman's
>>arguments for the inadequacy of most humans for rational
>>thinking. Our brain is the best model of intelligent machine
>>we have.
>
>Well, right here you've hit a major stumbling block in the
>definition of intelligence; a lot of philosophers, both
>professional and otherwise, have claimed that the central core
>of intelligence is some form of 'human-like' behavior.  Even more
>have claimed that an important part of intelligence is producing
>seemingly 'intelligent' behavior via human-like mechanisms.
>This is why, for example, Deepest Blue will not be regarded as
>'intelligent' irrespective of how good it gets at playing chess;
>throwing more hardware at an exhaustive search of the chess tree
>doesn't appear to be the approach that humans apply.
>
>But this also suggests that the T/K data, and in particular
>the *irrationality* they show can be regarded as key to
>'intelligence' -- suggesting that intelligence is at least partially
>a question of making the right mistakes.  I've seen it claimed in
>print that 'intelligence' and 'rationality' are inherently incompatible....
>
>But I digress somewhat.  If what you want is an intelligent, rational
>machine -- a Vulcan in a beige plastic case -- that's fine with me.
>

What I want is an intelligence that can stand among the other kinds of
intelligence we have here on Earth. A dog, a dolphin, a chimpanzee all
have intelligence in varying degrees. If we take humans as the only
exemplar of intelligence, we will certainly limit our scope, perhaps
to the point of making the project impractical (as I said, computers
and biological machines are essentially different). But instead of
"extending" our concept of intelligence to accommodate T&K's view of
rationality, I prefer to discover the basic principles behind it
and then transpose them to a different architecture.

But I must acknowledge that animal intelligence solves a
different problem from the one that will have to be solved by the
first exemplars of mechanical intelligence. Biological intelligence
must solve the problem of survival; mechanical intelligence (in its
first generations) must be able to solve *our* problems, even without
having *our* vision of the world. That will require an enormous
capacity for empathy, which will be one of the problems we'll
have to face in the near future. But first, this thing must be
minimally intelligent.

>>If our brain seems to be made of a network of nodes with some
>>evidences of hebbian-like learning mechanisms, this could easily
>>put us toward solutions based on NNs. But there's another side
>>to it. Our brain may be the technical solution that nature
>>found to satisfy "deeper principles". What are they?
>
>[snip, some rearrangement]
>>The list that follows is my "shopping list" for intelligence.
>>
>>a) Innate sensory transducers and innate feature detectors
>>Our eyes, ears, etc, were provided by nature to process
>>the initial levels of transduction of signals. Close to
>>this level, we have innate circuits that are able to
>>extract some information from raw data (examples are
>>cochlear hair cells). Here I base my ideas on the
>>current knowledge of neuroscience (several authors).
>
>In principle, I like the idea of tying intelligence into
>sensory perception.  Unfortunately, I haven't yet seen a version
>that I like.  The main problem is that, bluntly, I have little
>evidence for (or liking for) the principle that someone, e.g.,
>blind or deaf from birth has their intelligence somehow
>'damaged'; I think it's completely plausible that *any* form
>of sensory input would qualify just as well.  If some mad
>Frankensteinian scientist were to take a newborn baby and
>cyborg it to a radio receiver instead of normal hearing,
>I think that we would simply observe a child growing up and
>'hearing' radio instead of sound waves.
>
>So the question then becomes what one regards as 'sensory input';
>isn't the keyboard (or Ethernet card) just another 'sense' that
>computers have that we as humans don't share?  Babies hear phonemes;
>computers 'hear' keystrokes, and the requirements for sensory input
>is met automatically by any information processing system.  If this
>isn't the case, what's the fundamental difference between sound
>and keystrokes?
>

The questions you pose are very interesting and I have a lot to say
about them. The difference between sound and keystrokes? From a
certain point of view, I don't see any difference. They can be seen,
as you said, as sensory inputs, but I want to add: *from different
worlds*.

Sensory inputs, in my vision, are the things that allow us to *model
our world*. An intelligent computer with only text input could be
intelligent, in my opinion. But the world it will be in touch with
is *not* our world: it is different! An intelligent computer
watching the traffic of messages on the Internet would construct
a model of that cyberworld, certainly without a clear vision of
the meaning of the words in e-mails and home pages, but with another
vision that we do not dream of. Eventually, we may be able to
communicate with such a machine and, obviously, it will be difficult
for us to "know" what this computer is "feeling" (more difficult
than knowing what our dogs are feeling, a mystery in its
own right). But if this machine is able to understand some of
our queries, I bet it could answer with things that make some
sense to us and even tell us something new.

Now a slightly different situation. If the computer had a complete
set of sensory inputs (much like a robot), we could expect to have
a machine modeling the same world as ours (it would understand what
color, shape, pitch, touch, etc., are). We should be able to
understand it much as we understand a dog. And if it is intelligent,
interacting with us will give the machine an opportunity to get
closer to our real needs. It will have a chance to model *us*: our
mental life, our needs and idiosyncrasies. I bet this robot
would be useful to us! But I don't want to stop here.

Now the last point, regarding your first observation. Helen Keller
is the most cited example of a human being who was "fed"
information through practically only one sense, touch. She was
able to write books and demonstrated general intelligence. She could
never have our vision of a forest: the range of green gradations,
the appearance of the different shapes of leaves, the combined
movement of branches driven by the wind. But I bet she could handle
those concepts in a pretty intelligent manner, and was even capable
of solving problems (within some limits) using abstractions derived
from words and primitive associations.

That's a *very* important result, because it says that we don't need
a machine with *full sensory capability*. We need an intelligent
computer and just enough contact with the world to provide
a *minimum substrate* of knowledge about that world. That's
important, but it is not the center of what I'm proposing.

Now I ask again for your patience and understanding in reading the
following speculation. I believe that we can model what this
minimum substrate of our world is and, in a somewhat "artificial"
manner, insert it into the machine so as to allow it to perform
like Helen Keller: the machine would know about colors, sounds,
and touch sensations in a limited fashion, through "anchors" to
artificially implanted patterns, insufficient for full human
capacity but *enough* to let it *grow abstract concepts from them*.
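
[Editorial aside: a minimal sketch of what such "anchors" might look
like in code. Each symbol is tied to a small hand-implanted feature
vector standing in for a sensory pattern the machine never actually
had, and an abstract judgement is computed over those anchors rather
than asserted as a bare logical fact. The feature names, values and
helper functions are invented for the example; this is not a claim
about how the proposal would really be built.]

    import math

    # Hypothetical "implanted" substrate: crude stand-ins for sensory
    # qualities (hue, brightness, warmth) the machine never perceived.
    ANCHORS = {
        "red":    (0.00, 0.6, 0.9),
        "orange": (0.08, 0.7, 0.8),
        "blue":   (0.60, 0.5, 0.1),
        "ice":    (0.55, 0.9, 0.0),
        "fire":   (0.05, 0.8, 1.0),
    }

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def similarity(a, b):
        """Cosine similarity between two anchored symbols."""
        va, vb = ANCHORS[a], ANCHORS[b]
        dot = sum(x * y for x, y in zip(va, vb))
        return dot / (norm(va) * norm(vb))

    def closest(symbol, candidates):
        """Answer an abstract question ("which is most like fire?")
        by consulting the anchors instead of explicit rules."""
        return max(candidates, key=lambda c: similarity(symbol, c))

    if __name__ == "__main__":
        # Never having seen fire or ice, the machine can still answer
        # that "red" goes with "fire" and "blue" goes with "ice".
        print(closest("fire", ["red", "blue"]))   # -> red
        print(closest("ice", ["red", "blue"]))    # -> blue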

I call this idea "Artificial Symbol Grounding", after Stevan Harnad's
"symbol grounding problem". If we read one of Doug Lenat's basic
axioms for the CYC system, we will find that he does not see any
shortcut to intelligence other than "lots of knowledge". I guess
he is right but, unfortunately, he started at the *wrong level*.

He is trying to model the high-level behavior with first-order
predicate calculus, and that will not work because it is not
grounded in the simpler, sensory aspects of our world. The frame
problem will eat CYC alive.

I have the impression that if we find a way to put this "artificial
grounding" into a computer, even without any sensory input other
than a keyboard, it will be able to present intelligent behavior:
never like a human's, but useful to us as a machine. Add to this a
limited ability to empathize and you have a *very, very* powerful
assistant for any intellectual task you may want. This assistant is
what I want for myself.

>>b) Perception of regularity, perception of anomaly
>>This is my most cherished principle. Most of the things we
>>perceive as being intelligent have, in its root, some kind
>>of perception of occurring regularity or lack of regularity
>>(something that was regular and then suddenly is not anymore).
>>If I start my day checking my e-mail, then my computer
>>*should* learn this and activate my Internet connection as
>>soon as I boot it, because it *should* perceive that this is
>>happening often.
>
>... unless, of course, there's some overriding reason that the
>computer shouldn't do that.  Just because something happens often
>doesn't mean it's a good thing to do.  If every time I log in the
>first thing I did was delete my .cshrc and swear fluently about
>my idiocy, does that mean that the computer should start to
>delete and swear for me?
>
>But yes, this is reasonable.
>

This is where learning is important. If we change our habits, or
if we perceive some problem with what the computer is doing, we
should be able to "tell it" and point it to a "simple explanation"
of why we don't want that done anymore. An intelligent computer
will come to recognize that this explanation may apply to *other*,
different situations (ones we may not even be aware of) and use its
knowledge there. Of course, the first days will be a mess, because
there's so much to tell the computer and it will infer lots of wrong
things. But that's what happens with a child: after some time, he or
she will not put a finger in the fire. An intelligent computer
should be seen as a learning child and, after some time, we should
expect to have a competent "artificial adult" working with us. Will
it be infallible? Not in this world. But I bet that, if it is
creative, it will discover what went wrong in situations where even
we don't know what to do.
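
[Editorial aside: a toy sketch of the "regularity plus correction"
idea above -- a habit is proposed once it has been observed often
enough, and a single explanation from the user (a reason tag)
suppresses every action carrying that tag, not only the one that
was corrected. Events, tags and the threshold are invented; the
code is only meant to make the mechanism concrete.]

    from collections import Counter

    class HabitLearner:
        def __init__(self, threshold=3):
            self.counts = Counter()      # (context, action) -> times seen
            self.blocked_tags = set()    # generalized veto reasons
            self.threshold = threshold

        def observe(self, context, action):
            self.counts[(context, action)] += 1

        def veto(self, reason_tag):
            """One "simple explanation" from the user, e.g. "costs_money"."""
            self.blocked_tags.add(reason_tag)

        def suggest(self, context, action_tags):
            """Frequent actions for this context not covered by any veto."""
            return [
                action
                for (ctx, action), n in self.counts.items()
                if ctx == context
                and n >= self.threshold
                and not (action_tags.get(action, set()) & self.blocked_tags)
            ]

    if __name__ == "__main__":
        tags = {"dial_internet": {"network"}, "order_pizza": {"costs_money"}}
        learner = HabitLearner()
        for _ in range(3):                        # three mornings in a row
            learner.observe("boot", "dial_internet")
            learner.observe("boot", "order_pizza")
        print(learner.suggest("boot", tags))      # both habits proposed
        learner.veto("costs_money")               # one explanation...
        print(learner.suggest("boot", tags))      # ...blocks every paid action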

>>c) Conceptualization, Categorization
>>Using principle b), the entity should perceive what are
>>the concepts that can be grouped under the same category
>>(sensorimotor information is essential in helping this
>>process). Lots of false hits disappear with time (apples
>>and wheels are examples of round objects, but this is not
>>so strong as apples and oranges being examples of fruits).
>>This item is supported by the massive works of cognitive
>>psychologists (Lakoff, Thagard, Holyoak, Barsalou, etc).
>
>Again, completely reasonable if you assume that the groupings
>that humans learn are learnable/categorizable by the computer.
>This may address the question of what constitutes 'senses'
>I posed above -- there's a paper by Cottrell and Someone (or
>maybe Someone and Cottrell) in CogSci-98 suggesting this.
>

I will look for that paper. But I think that the categorization
the machine does will be somewhat similar to ours *if* (and only
if) we are able to set the stage properly. By "set the stage" I
mean that idea of "artificial symbol grounding". Granted, this
idea is easier said than done. There is a still unknown aspect:
what elements we must put into that ground to allow the knowledge
edifice to be built on top of it. But I think we may have a clue,
if we follow the suggestion that our cognition stems from
sensorimotor patterns. Then the problem reduces to finding the
essential elements that these sensorimotor aspects contribute to
abstract concepts. One line I'm investigating, besides Piaget's
ideas, is children's acquisition of action verbs (David Bailey's
PhD thesis).

>>d) Spreading activation
>>A thought is the spreading of activations in parallel and
>>in a competitive manner (much like a darwinian environment).
>>This follows the speculations of William Calvin and Gerald
>>Edelman.
>
>You're getting dangerously close to assuming a neural network-like
>architecture here.
>

You're right, but I prefer to say that I'm dangerously close to a
connectionist architecture. This is one reason why I value the
modeling being done by Miikkulainen, Sirosh, Shastri, O'Reilly
and others. But this doesn't mean I'm far from symbolic ideas.
I'm leaving room for the appearance of a *third* method: a
connectionist or dynamic *symbol*, a symbol able to flow through
a network and, due to some "internal structure", activate similar
symbols, which in turn activate others. The result, when the thing
settles, is the response(s). Unfortunately, this is in the
"sci-fi" stage (so far).
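
[Editorial aside: a minimal sketch of the spreading-activation
picture just described. Activation injected at a cue symbol flows
along weighted links, decaying at each hop, and when the pattern
settles the most active symbols are the "response". The graph,
weights and decay factor are invented; no claim is made that this
is the "dynamic symbol" itself, which would need internal structure
this toy lacks.]

    def spread(graph, seeds, decay=0.5, rounds=10):
        """Push activation along weighted links until it settles."""
        activation = dict.fromkeys(graph, 0.0)
        activation.update(seeds)
        for _ in range(rounds):
            incoming = dict.fromkeys(graph, 0.0)
            for node, links in graph.items():
                for neighbour, weight in links.items():
                    incoming[neighbour] += activation[node] * weight * decay
            # a node keeps the larger of what it had and what it received
            activation = {n: max(activation[n], incoming[n]) for n in graph}
        return sorted(activation.items(), key=lambda kv: -kv[1])

    GRAPH = {                                  # invented associative links
        "fire":   {"heat": 0.9, "red": 0.6},
        "heat":   {"fire": 0.9, "summer": 0.7},
        "red":    {"fire": 0.6, "apple": 0.8},
        "summer": {"heat": 0.7},
        "apple":  {"red": 0.8, "fruit": 0.9},
        "fruit":  {"apple": 0.9},
    }

    if __name__ == "__main__":
        # Cueing "fire" makes "heat" strongly active, "red" moderately,
        # "apple" and "fruit" only weakly.
        for node, act in spread(GRAPH, {"fire": 1.0}):
            print("%-7s %.3f" % (node, act))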

>>e) Maintenance of Causal structures
>>Causal models are derived from the structure of sensorimotor
>>patterns coupled with categorization of action concepts. [...]
>>
>>f) A source for randomness
>>Another one of my preferred. Two aspects to account: first,
>>stochastic resonance, which is most an optimization factor.
>>The other is the effect on creativity.
>>I definitely doubt that any system would be able to present
>>creative behavior without a source of random signals. [...]
>
>My primary concern with this list is that, simply speaking, if
>you had written it in 1990 it would have passed muster under
>the title "The SubSymbolicist Manifesto";  The things you
>require for the most part are not only present in the models
>we group under the heading of NNs, but are usually advertised
>as features.
>
>So why are you dissatisfied with NNs as a technical underpinning
>for AI?
>

This is a very good question and one of the reasons I don't like
NNs is laughable. Do you know that one of Marvin Minsky's preferred
suspicions is that we have enough machine power today to do AI and
we just need the right software? I share Minsky's suspicion in this
regard, although I am not able to tell why. Using purely NN methods,
this thing will need a lot of computing power. Good for Intel,
bad for me.

>Earlier you wrote :
>>Now you can say that this kind of behavior can appear naturally
>>in some neural nets architectures. Yes, that is possible. The
>>big question is: will we understand its working principles?
>>Will we be aware of the mechanisms behind this statistic to
>>symbolic transformation? Will we be able to scientifically
>>bend its workings to better suit our purposes? Will we really
>>understand it or just see the "magic" in its emergence?
>
>Are you implicitly claiming that we need to understand a technology
>for it to be useful -- or are you claiming that for something
>to be AI, it must be both 'intelligent' and understandable?
>

What I claim is interestingly different. Let me use a story.
Imagine what went through the mind of our forebears when they
first imagined how they could travel by sea to another land.
They could have seen a large leaf of palm floating in a lake.
That palm had a long and weighty stem and could support the
weight of a mouse.

The man then thought of replicating this "boat",
augmenting the design proportionately to carry himself and
his family through the waters. The design, obviously, would
have failed constantly, because the analog of the "stem" he was
using (a large trunk) was heavy and kept the boat unstable.
His boat would have sunk repeatedly, to his surprise, because
he was just obeying what he thought was right in the model.
He couldn't perceive what was wrong because he failed to
perceive the essential working principles.

Understanding these principles is what was necessary to
transpose the design to a very different "implementation"
situation. I see computers with NNs as the equivalent of
the palm in our mind's lake.

>The problem with this is that I find it very difficult to believe
>that *any* mature technology will be understandable in the deeper
>sense.   Just as an example, I am supposed to be an 'expert' in
>operating systems  -- hell, I've taught the class several times --
>but I don't presume to understand every line of code distributed
>with Linux.  I understand the overall architecture, but a lot of
>the implementation details rely on specialized knowledge that I
>don't have, and the inner workings of the device drivers are
>a complete black art to me because I don't understand the individual
>devices in sufficient detail.  And I certainly don't understand the
>functioning of the individual chips in any kind of a detailed way.
>Why won't my machine run when overclocked by 150%?  If I *understood*
>the chips in detail, I could tell you what fails -- as is, I simply
>know it doesn't work.
>

I think that, after half an hour of conversation with a specialist,
you would be up to date on this subject. Hard to believe?

You certainly know about MULTICS, the 1969 MIT project for a
timesharing operating system. That was 30 years ago! I bet most
of Windows NT (and Linux, for that matter) uses very similar
principles (demand-paged memory, protection rings, scheduling, etc.).
In 30 years, few really innovative things have been added to that
fantastic conception. Most differences between current-day
systems and this "granddaddy" are in implementation aspects (and
sizes: each of MULTICS's expensive drum storage units had only 8 MB
of capacity).

I think that AI should be based on a set of principles such as
these. In 40 years from now, AI should have evolved a lot, but
the set of principles should be almost the same. Most AI scientists
seem to disagree about what these principles are. And we keep
seeing in the proceedings of AAAI a huge amount of deeper and
deeper work in what? Logic, knowledge representation, planning,
reasoning.

>On the other hand, the primary designer of the chip inside my machine
>could probably tell me exactly what goes wrong with the chip, but
>may have no idea of how Linux handles the directory structures.
>
>Neural networks may, and I stress may, end up in the same situation.
>There are at least three levels at which they can be studied -- one is
>the analysis of the 'neurons' themselves as individuals and how they
>can be tuned to various tasks.  A second is the study of the network
>architecture, treating each neuron as some form of magic black box
>that converts inputs to outputs and changes over time.  A third is
>a study of the overall property of the network -- what is, for example,
>the role of frequency in the learning of a general network.  You can
>see the same sort of division in human scientists -- some scientists
>will study the architecture and functioning of one particular aspect
>of the brain in detail (what *do* the cells in the hippocampus look
>like, anyway?), some will study the overall architecture of the brain
>(how does the hippocampus connect with the rest of the brain?), and
>some study ''intelligence'' as a system-wide effect (how do people
>with damaged hippocampi learn -- and how is it different from normal
>learning?)  And I don't really believe that there's any one person
>who is simultaneously at the cutting-edge of all three questions
>posed above.
>

I mostly concur with this vision, in particular with the nonexistence
of somebody who can be at the cutting edge of all these viewpoints.
However, I find it absolutely necessary to add a fourth kind of
person to this description: the person who is able to understand the
basic principles of each vision and join them into a coherent theory.
This should allow the changing of one vision (for example, changing
the implementation paradigm) without affecting the overall workings
of the system.

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 17 Feb 1999 00:00:00 GMT
Message-ID: <7afcat$jiq$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu> <36cad851@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

In article <36cad851@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>Patrick Juola wrote in message <7ac3t7$g4a$1@quine.mathcs.duq.edu>...
>>In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>

>>My primary concern with this list [of desired behaviors] is that,
>>simply speaking, if
>>you had written it in 1990 it would have passed muster under
>>the title "The SubSymbolicist Manifesto";  The things you
>>require for the most part are not only present in the models
>>we group under the heading of NNs, but are usually advertised
>>as features.
>>
>>So why are you dissatisfied with NNs as a technical underpinning
>>for AI?
>>
>
>This is a very good question and one of the reasons I don't like
>NNs is laughable. Do you know that one of Marvin Minsky's preferred
>suspicions is that we have enough machine power today to do AI and
>we just need the right software? I share Minsky's suspicion in this
>regard, although I am not able to tell why. Using purely NN methods,
>this thing will need a lot of computing power. Good for Intel,
>bad for me.

You're looking at far too early a view of NNs; the idea that a
system will somehow miraculously appear out of throwing lots of
hardware at a connectoplasm architecture is not generally believed.
But a *tuned* neural network -- by which read, a network with
an architecture designed for a particular task -- is generally
awe-inspiringly efficient.

Are you familiar with Ramin Nakisa's work on 'genetic connectionism'?
He's produced very small networks that will learn to identify the
various phonemes that comprise human speech, irrespective of
the speaker or the language, upon exposure to as little as about
sixty seconds of speech.  Of course, the architecture of this
network is a little unusual -- and he didn't simply jot down the
design.  In fact, he didn't design it at all; it arose at the end
of a lengthy process of genetic programming.

This, however, is approximately what humans have -- a very finely
tuned neural architecture that resulted from several billion years
of evolution.  Give me several billion years to evolve a NN and
I could probably come up with some very slick methods to solve
problems that didn't use a lot of hardware.
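
[Editorial aside: a toy sketch of the general idea only -- not
Nakisa's actual architecture, task or method. Instead of
hand-designing a network, a genetic search is left to find one;
here the weights of a tiny fixed 2-2-1 net are evolved to solve
XOR, and the population size, mutation rate and task are arbitrary
choices made for the illustration.]

    import math
    import random

    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    N_WEIGHTS = 9   # 4 input->hidden, 2 hidden biases, 2 hidden->out, 1 bias

    def forward(w, x):
        h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

    def fitness(w):
        return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

    def evolve(pop_size=60, generations=300, sigma=0.4):
        pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 4]            # keep the fittest quarter
            children = [[g + random.gauss(0, sigma)  # mutated copies
                         for g in random.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        for x, target in XOR:
            print(x, target, round(forward(best, x), 2))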

>>Earlier you wrote :
>>>Now you can say that this kind of behavior can appear naturally
>>>in some neural nets architectures. Yes, that is possible. The
>>>big question is: will we understand its working principles?
>>>Will we be aware of the mechanisms behind this statistic to
>>>symbolic transformation? Will we be able to scientifically
>>>bend its workings to better suit our purposes? Will we really
>>>understand it or just see the "magic" in its emergence?
>>
>>Are you implicitly claiming that we need to understand a technology
>>for it to be useful -- or are you claiming that for something
>>to be AI, it must be both 'intelligent' and understandable?
>>
>
>What I claim is interestingly different. Let me use a story.
>Imagine what went through the mind of our forebears when they
>first imagined how they could travel by sea to another land.
>They could have seen a large leaf of palm floating in a lake.
>That palm had a long and weighty stem and could support the
>weight of a mouse.
>
>The man then thought of replicating this "boat",
>augmenting the design proportionately to carry himself and
>his family through the waters. The design, obviously, would
>have failed constantly, because the analog of the "stem" he was
>using (a large trunk) was heavy and kept the boat unstable.
>His boat would have sunk repeatedly, to his surprise, because
>he was just obeying what he thought was right in the model.
>He couldn't perceive what was wrong because he failed to
>perceive the essential working principles.
>
>Understanding these principles is what was necessary to
>transpose the design to a very different "implementation"
>situation. I see computers with NNs as the equivalent of
>the palm in our mind's lake.

I don't believe that this is how the caveman designed the first
boat.  In fact, I suspect that this is exactly how he *DIDN'T*
design it; normal engineering methods involve a much higher proportion
of trial and error, and the idea that there are fundamental
principles behind the successful solution comes only after you
have several successful solutions to compare.

In the case of AI, we've got (broadly speaking) one system that
we all agree is 'intelligent' and a lot of questionable ones.

>I think that AI should be based on a set of principles such as
>[described upthread]. In 40 years from now, AI should have evolved a lot, but
>the set of principles should be almost the same. Most AI scientists
>seem to disagree about what these principles are. And we keep
>seeing in the proceedings of AAAI a huge amount of deeper and
>deeper work in what? Logic, knowledge representation, planning,
>reasoning.

And I don't think that the principles you would like to see are
sufficiently well-understood -- or sufficiently well-defined, for
that matter, to make this even a vaguely plausible wish.  Might
as well wish for a computer made out of orange peel.

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 18 Feb 1999 00:00:00 GMT
Message-ID: <36cc1e9b@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu> <36cad851@news3.us.ibm.net> <7afcat$jiq$1@quine.mathcs.duq.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Feb 1999 14:07:23 GMT, 166.72.21.242
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7afcat$jiq$1@quine.mathcs.duq.edu>...
>In article <36cad851@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>>Patrick Juola wrote in message <7ac3t7$g4a$1@quine.mathcs.duq.edu>...
>>>In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>
>>>My primary concern with this list [of desired behaviors] is that,
>>>simply speaking, if
>>>you had written it in 1990 it would have passed muster under
>>>the title "The SubSymbolicist Manifesto";  The things you
>>>require for the most part are not only present in the models
>>>we group under the heading of NNs, but are usually advertised
>>>as features.
>>>
>>>So why are you dissatisfied with NNs as a technical underpinning
>>>for AI?
>>>
>>
>>This is a very good question and one of the reasons I don't like
>>NNs is laughable. Do you know that one of Marvin Minsky's preferred
>>suspicions is that we have enough machine power today to do AI and
>>we just need the right software? I share Minsky's suspicion in this
>>regard, although I am not able to tell why. Using purely NN methods,
>>this thing will need a lot of computing power. Good for Intel,
>>bad for me.
>
>You're looking at far too early a view of NNs; the idea that a
>system will somehow miraculously appear out of throwing lots of
>hardware at a connectoplasm architecture is not generally believed.
>But a *tuned* neural network -- by which read, a network with
>an architecture designed for a particular task -- is generally
>awe-inspiringly efficient.
>

I guess I can agree with you here. This is what propels me to
seriously consider some connectionist approaches.

>Are you familiar with Ramin Nakisa's work on 'genetic connectionism'?
>He's produced very small networks that will learn to identify the
>various phonemes that comprise human speech, irrespective of
>the speaker or the language, upon exposure to as little as about
>sixty seconds of speech.  Of course, the architecture of this
>network is a little unusual -- and he didn't simply jot down the
>design.  In fact, he didn't design it at all; it arose at the end
>of a lengthy process of genetic programming.
>

I didn't know about Nakisa; I'll try to find something about his
work. It sounds like a good approach. I've seen other approaches to
self-developing networks (Ronen Segev's PhD thesis on
self-wiring NNs). It seems interesting, besides being somewhat
neurobiologically plausible.

>This, however, is approximately what humans have -- a very finely
>tuned neural architecture that resulted from several billion years
>of evolution.  Give me several billion years to evolve a NN and
>I could probably come up with some very slick methods to solve
>problems that didn't use a lot of hardware.
>

I understand what you say and I think it is correct. But let me
wander (again) a bit (by now you should be used to my wanderings).

If we were to analyze carefully the structure of a bird's wing, we
would find a lot of "apparently" unnecessary details. Most birds
use feathers to compose the wing, but bats have continuous
membranes instead of feathers. Why does nature sometimes seem to
take the hard way? I often think of the brain as a machine built to
solve a problem, but the way it solves that problem (the functional
principles) may be "hidden" behind the network of neurons, just as
the principles of flight are hidden behind the feathers of the wing.

Now back to our problem. I know of two fronts in trying to
understand how neurons work in the human brain. One is the
"classic" way of analyzing neurons individually, trying to
make sense of individual spikes or trains of spikes. But another,
relatively more recent, finds more usefulness in analyzing the
behavior of groups of neurons. In this vision, what is important
is not the spike trains, but the collective synchronization
of the population. This turns the analysis of the problem
upside down: now we are not interested in individual spikes,
but in global parameters such as the frequency and phase of
oscillations.

Both fronts are trying to explain phenomena in the brain,
and the game is almost even. This uncertainty carries a lot of
potential implications. What if the latter approach proves the
more likely one? Our concepts of connectionism will have to
change a bit (Lokendra Shastri's temporal synchrony apparently
goes in that direction).

I see population coding as a candidate paradigm to reconcile
connectionism and symbolicism, to satisfy Fodor and McClelland
simultaneously. Needless to say, this is *highly* speculative.
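
[Editorial aside: a very rough sketch of the "population" reading --
instead of watching any single unit's spikes, summarize how
coherently a whole group oscillates. The measure below is the
standard Kuramoto order parameter R (1 = perfect phase lock, near
0 = incoherent); the two invented populations and their noise
levels are illustrative, and none of this models Shastri's
proposal.]

    import cmath
    import math
    import random

    def order_parameter(phases):
        """Kuramoto order parameter: R = |mean of exp(i*phase)|."""
        return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

    if __name__ == "__main__":
        random.seed(1)
        # a population nearly locked in phase vs. one with random phases
        locked = [0.3 + random.gauss(0, 0.05) for _ in range(100)]
        scattered = [random.uniform(0, 2 * math.pi) for _ in range(100)]
        print("locked    R = %.2f" % order_parameter(locked))     # near 1
        print("scattered R = %.2f" % order_parameter(scattered))  # near 0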

>
>>>Earlier you wrote :
>>>>Now you can say that this kind of behavior can appear naturally
>>>>in some neural nets architectures. Yes, that is possible. The
>>>>big question is: will we understand its working principles?
>>>>Will we be aware of the mechanisms behind this statistic to
>>>>symbolic transformation? Will we be able to scientifically
>>>>bend its workings to better suit our purposes? Will we really
>>>>understand it or just see the "magic" in its emergence?
>>>
>>>Are you implicitly claiming that we need to understand a technology
>>>for it to be useful -- or are you claiming that for something
>>>to be AI, it must be both 'intelligent' and understandable?
>>>
>>
>>What I claim is interestingly different. Let me use a story.
>>Imagine what went through the mind of our forebears when they
>>first imagined how they could travel by sea to another land.
>>They could have seen a large leaf of palm floating in a lake.
>>That palm had a long and weighty stem and could support the
>>weight of a mouse.
>>
>>The man then thought of replicating this "boat",
>>augmenting the design proportionately to carry himself and
>>his family through the waters. The design, obviously, would
>>have failed constantly, because the analog of the "stem" he was
>>using (a large trunk) was heavy and kept the boat unstable.
>>His boat would have sunk repeatedly, to his surprise, because
>>he was just obeying what he thought was right in the model.
>>He couldn't perceive what was wrong because he failed to
>>perceive the essential working principles.
>>
>>Understanding these principles is what was necessary to
>>transpose the design to a very different "implementation"
>>situation. I see computers with NNs as the equivalent of
>>the palm in our mind's lake.
>
>I don't believe that this is how the caveman designed the first
>boat.  In fact, I suspect that this is exactly how he *DIDN'T*
>design it; normal engineering methods involve a much higher proportion
>of trial and error, and the idea that there are fundamental
>principles behind the successful solution comes only after you
>have several successful solutions to compare.
>

Yes, I agree. I deliberately framed that example in the way I think
it *didn't* happen. But that example seems to me to be the way
connectionism was researched when the field was in its infancy.

>In the case of AI, we've got (broadly speaking) one system that
>we all agree is 'intelligent' and a lot of questionable ones.
>
>>I think that AI should be based on a set of principles such as
>>[described upthread]. In 40 years from now, AI should have evolved a lot, but
>>the set of principles should be almost the same. Most AI scientists
>>seem to disagree about what these principles are. And we keep
>>seeing in the proceedings of AAAI a huge amount of deeper and
>>deeper work in what? Logic, knowledge representation, planning,
>>reasoning.
>
>And I don't think that the principles you would like to see are
>sufficiently well-understood -- or sufficiently well-defined, for
>that matter, to make this even a vaguely plausible wish.  Might
>as well wish for a computer made out of orange peel.
>

In fact, my main concern today is the refinement of these admittedly
vague principles. But instead of sticking to one line and deciding
whom I like and whom I dislike, I somewhat mix them all.

This puts me in a very funny situation: on one side I agree with
Fodor (systematicity, productivity, etc.), on the other I disagree
with him (innateness of language, strong modularity, domain
specificity). I agree with Jeff Elman (no to innateness) but disagree
with him (his defense of connectionism). I agree with Pinker
(rule-based past-tense formation) but disagree with him (language
innate by natural selection).

Regards,
Sergio Navega.

From: juola@mathcs.duq.edu (Patrick Juola)
Subject: Re: Hahahaha
Date: 17 Feb 1999 00:00:00 GMT
Message-ID: <7afb2g$jfa$1@quine.mathcs.duq.edu>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu> <36cad851@news3.us.ibm.net>
Organization: Duquesne University, Pittsburgh PA  USA
Newsgroups: comp.ai,comp.ai.philosophy

Wow.  Long and information packed post, Mr. Navega.  I'm going to
have to break it up into pieces to keep the discussion manageable....
[and lots of extensive editing done, obviously]

In article <36cad851@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net> wrote:
>Patrick Juola wrote in message <7ac3t7$g4a$1@quine.mathcs.duq.edu>...
>>In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>wrote:
>>>Since I started studying AI, I've seen a lot of approaches to
>>>intelligence that made some sense in limited domains. The logic
>>>approach, following the mathematicians point of view, is able
>>>to give some good results. But it failed (or at least proposed
>>>too complex solutions) to "simple problems" like the frame
>>>problem. In much the same situation (because of similarity)
>>>were semantic networks, conceptual graphs, description
>>>logics, etc.
>>>
>>>Then, I turned to see the study of our brain as the only way
>>>to get a clear vision of an already working machine with that
>>>kind of capacity. Lets forget for a while Tversky & Kahneman's
>>>arguments for the inadequacy of most humans for rational
>>>thinking. Our brain is the best model of intelligent machine
>>>we have.
>>
>>Well, right here you've hit a major stumbling block in the
>>definition of intelligence; a lot of philosophers, both
>>professional and otherwise, have claimed that the central core
>>of intelligence is some form of 'human-like' behavior.  Even more
>>have claimed that an important part of intelligence is producing
>>seemingly 'intelligent' behavior via human-like mechanisms.
>>This is why, for example, Deepest Blue will not be regarded as
>>'intelligent' irrespective of how good it gets at playing chess;
>>throwing more hardware at an exhaustive search of the chess tree
>>doesn't appear to be the approach that humans apply.
>>
>>But this also suggests that the T/K data, and in particular
>>the *irrationality* they show can be regarded as key to
>>'intelligence' -- suggesting that intelligence is at least partially
>>a question of making the right mistakes.  I've seen it claimed in
>>print that 'intelligence' and 'rationality' are inherently incompatible....
>>
>>But I digress somewhat.  If what you want is an intelligent, rational
>>machine -- a Vulcan in a beige plastic case -- that's fine with me.
>>
>
>
>What I want is an intelligence that can stand among the other kinds of
>intelligence we have here on Earth. A dog, a dolphin, a chimpanzee all
>have intelligences in varying degrees. If we take humans as the only
>exemplar of intelligence, we will certainly limit our scope, perhaps
>to the point of making it impractical (as I said, computers and
>biological machines are essentially different). But instead of
>"extending" our concept of intelligence to contain T&K's view of
>rationality, I prefer to discover the basic principles behind it
>and then transpose it to a different architecture.
>
>But I must acknowledge that animal intelligence tries to solve a
>different problem than the one that will have to be solved by the
>first exemplars of mechanical intelligence. Biological intelligence
>must solve the problem of survival, mechanical intelligence (first
>generations) must be able to solve *our* problems, even without
>having *our* vision of the world. That will require an enormous
>capacity of empathizing, which will be one of the problems we'll
>have to face in the near future. But first, this thing must be
>minimally intelligent.

Unfortunately, this approach seems to beg the question of what
the fundamental principles *are*.   Maybe "beg the question" isn't
the right phrase.  The problem is that we don't really know what
constitutes intelligence.  It's not clear, or at least it's
not uncontroversial, that what a dog, a dolphin, or a chimpanzee
does can be grouped under the rubric of "intelligence", especially
if what you want is a human-level intelligence --  does that
NECESSARILY imply a human-like intelligence as well?

This is the sort of question that all my training and belief --
brainwashing, if you like -- suggests can ONLY be answered empirically
and there's no way of reasoning from first principles to figure
out.  I justify this by pointing out that there's no way to reason
"from first principles" to figure out what the first principles *ARE*;
reasoning is a lousy way to discover axioms and principles.  Or
to put it another way, unless we can agree on whether (e.g.) the
T/K mistakes represent a weakness in reasoning, a symptom of an
essential aspect of 'intelligent' reasoning, or are merely an
epiphenomenon of how humans happen to implement intelligence, we'll
arrive at completely different, but equally plausible and persuasive,
views on what the "first principles" must be.

No, really, the elephant *IS* like a wall/rope/snake/tree.

The posts of 'spider' on the "learning from the environment" vs.
"knowledge freezing" is a good example of this.  Does learning new
things within a given-by-evolution-or-designer framework
constitute learning?  Are all the consequences of a given system
implicit in the axioms and inference rules, or does it still take
'intelligence' to determine the consequences? 

If it takes intelligence to learn within a given environmental or
formal framework, we've had intelligent systems since the 1960s.
If reasoning within a given system is not a mark of intelligence,
then on what evidence do we regard humans as intelligent?

(more later on the rest of the post)

        -kitten

From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Hahahaha
Date: 18 Feb 1999 00:00:00 GMT
Message-ID: <36cc1e90@news3.us.ibm.net>
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu> <36cad851@news3.us.ibm.net> <7afb2g$jfa$1@quine.mathcs.duq.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Feb 1999 14:07:12 GMT, 166.72.21.242
Organization: SilWis
Newsgroups: comp.ai,comp.ai.philosophy

Patrick Juola wrote in message <7afb2g$jfa$1@quine.mathcs.duq.edu>...
>Wow.  Long and information packed post, Mr. Navega.  I'm going to
>have to break it up into pieces to keep the discussion manageable....
>[and lots of extensive editing done, obviously]
>

Thanks.

>In article <36cad851@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>>Patrick Juola wrote in message <7ac3t7$g4a$1@quine.mathcs.duq.edu>...
>>>In article <36c98582@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
>>wrote:
>>>>Since I started studying AI, I've seen a lot of approaches to
>>>>intelligence that made some sense in limited domains. The logic
>>>>approach, following the mathematicians point of view, is able
>>>>to give some good results. But it failed (or at least proposed
>>>>too complex solutions) to "simple problems" like the frame
>>>>problem. In much the same situation (because of similarity)
>>>>were semantic networks, conceptual graphs, description
>>>>logics, etc.
>>>>
>>>>Then, I turned to see the study of our brain as the only way
>>>>to get a clear vision of an already working machine with that
>>>>kind of capacity. Lets forget for a while Tversky & Kahneman's
>>>>arguments for the inadequacy of most humans for rational
>>>>thinking. Our brain is the best model of intelligent machine
>>>>we have.
>>>
>>>Well, right here you've hit a major stumbling block in the
>>>definition of intelligence; a lot of philosophers, both
>>>professional and otherwise, have claimed that the central core
>>>of intelligence is some form of 'human-like' behavior.  Even more
>>>have claimed that an important part of intelligence is producing
>>>seemingly 'intelligent' behavior via human-like mechanisms.
>>>This is why, for example, Deepest Blue will not be regarded as
>>>'intelligent' irrespective of how good it gets at playing chess;
>>>throwing more hardware at an exhaustive search of the chess tree
>>>doesn't appear to be the approach that humans apply.
>>>
>>>But this also suggests that the T/K data, and in particular
>>>the *irrationality* they show can be regarded as key to
>>>'intelligence' -- suggesting that intelligence is at least partially
>>>a question of making the right mistakes.  I've seen it claimed in
>>>print that 'intelligence' and 'rationality' are inherently
incompatible....
>>>
>>>But I digress somewhat.  If what you want is an intelligent, rational
>>>machine -- a Vulcan in a beige plastic case -- that's fine with me.
>>>
>>
>>
>>What I want is an intelligence that can stand among the other kinds of
>>intelligence we have here on Earth. A dog, a dolphin, a chimpanzee all
>>have intelligences in varying degrees. If we take humans as the only
>>exemplar of intelligence, we will certainly limit our scope, perhaps
>>to the point of making it impractical (as I said, computers and
>>biological machines are essentially different). But instead of
>>"extending" our concept of intelligence to contain T&K's view of
>>rationality, I prefer to discover the basic principles behind it
>>and then transpose it to a different architecture.
>>
>>But I must acknowledge that animal intelligence tries to solve a
>>different problem than the one that will have to be solved by the
>>first exemplars of mechanical intelligence. Biological intelligence
>>must solve the problem of survival, mechanical intelligence (first
>>generations) must be able to solve *our* problems, even without
>>having *our* vision of the world. That will require an enormous
>>capacity of empathizing, which will be one of the problems we'll
>>have to face in the near future. But first, this thing must be
>>minimally intelligent.
>
>Unfortunately, this approach seems to beg the question of what
>the fundamental principles *are*.   Maybe "beg the question" isn't
>the right phrase.  The problem is that we don't really know what
>constitutes intelligence.  It's not clear, or at least it's
>not uncontroversial, that what a dog, a dolphin, or a chimpanzee
>does can be grouped under the rubric of "intelligence", especially
>if what you want is a human-level intelligence --  does that
>NECESSARILY imply a human-like intelligence as well?
>

Your point is valid and reminds me of the kind of "scale" I'm
trying to see. Let's take perception, for instance. A dog may be
trained to recognize a special kind of cup (color, shape, etc.).
This means that the dog's brain is able to visually group
all the primitive edges, shapes and smooth color areas that
comprise one cup into a single unified object. But it stops
there. A human can not only do that, but also assign a symbol to
it ("cup") and detect invariant properties, so as to classify a
*different* exemplar as a cup too. A dog cannot do that. A human
can even look at a bowl and say that it can be used, in special
circumstances, as a cup. For me, this means that our brain has,
in relation to the dog's, additional "hierarchical" levels (the
height of a pyramid) and also a greater capacity to correlate
sensory perceptions with abstract concepts such as "cupness".

It is not a smooth path, though, and probably this is (among
other things) because we have a strong capacity for symbolic
representation (read: language). I can play with "cup" in my
head in hypothetical situations and communicate my conclusions
so that any other human can also understand them. This ability
"upped the ante" for our species, to a standard no other species
can reach.
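
[Editorial aside: a toy illustration of the point about invariants.
Here "cupness" is carried by a few functional properties rather
than surface features, so a bowl passes the functional test even
though it fails the stricter prototype test, and a wheel fails
both. The objects, feature sets and the split into "invariant" vs.
"prototype" features are all invented for the example.]

    CUP_INVARIANTS = {"concave", "holds_liquid", "graspable"}
    CUP_PROTOTYPE = CUP_INVARIANTS | {"has_handle", "tall"}

    OBJECTS = {
        "red mug":   {"concave", "holds_liquid", "graspable",
                      "has_handle", "tall", "red"},
        "blue bowl": {"concave", "holds_liquid", "graspable",
                      "wide", "blue"},
        "wheel":     {"round", "graspable", "black"},
    }

    def usable_as_cup(features):
        return CUP_INVARIANTS <= features    # functional (invariant) test

    def typical_cup(features):
        return CUP_PROTOTYPE <= features     # stricter prototype test

    if __name__ == "__main__":
        for name, feats in OBJECTS.items():
            print("%-9s usable as cup: %-5s typical cup: %s"
                  % (name, usable_as_cup(feats), typical_cup(feats)))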

>This is the sort of question that all my training and belief --
>brainwashing, if you like -- suggests can ONLY be answered empirically
>and there's no way of reasoning from first principles to figure
>out.  I justify this by pointing out that there's no way to reason
>"from first principles" to figure out what the first principles *ARE*;
>reasoning is a lousy way to discover axioms and principles.  Or
>to put it another way, unless we can agree on whether (e.g.) the
>T/K mistakes represent a weakness in reasoning, a symptom of an
>essential aspect of 'intelligent' reasoning, or are merely an
>epiphenomenon of how humans happen to implement intelligence, we'll
>arrive at completely different, but equally plausible and persuasive,
>views on what the "first principles" must be.
>

Oh, I agree *entirely*. I'm sorry if I left the impression that I
wanted to propose a theoretical path of work as the solution. I'm an
empiricist (in all meanings of the word :-). I'd never propose doing
armchair conjectures to find out those principles. Doing that is,
as you pointed out, a dead-end road.

What I'm proposing is the use of empirical methods (say,
hypothesizing, implementation, experimentation, evaluation) to
further refine those basic principles (or to dismiss them and find
others). But what I think is important (and what I think is *not*
being done by some AI researchers) is that this cycle should only
descend into "deeper" valleys if it keeps a coherent overall
structure.

Just as an example, take the case of the logicist approach to AI.
Since the introduction of propositional calculus and first-order
predicate calculus into AI, what have they found? In 1969, McCarthy
& Hayes "discovered" the frame problem. Lots of mending was done:
non-monotonic reasoning, situation calculus, circumscription. For
me, all this means digging a deep well into solid rock, with no
hope of finding water. One should recognize that this line will
not advance AI to the desired human level, go back to the
principles, and see where it went wrong.

>No, really, the elephant *IS* like a wall/rope/snake/tree.
>

This is exactly why I consider *so many* antagonistic theories
about how we should solve the problem. Logicists say one thing;
I want to know why they thought it was good. Connectionists
say other things; I want to understand what drove them to believe
that. Statistical NLP got good results. Zadeh found fuzzy sets a
good answer. Why?

I don't think we will be able to find something that satisfies
all the principles of all these approaches simultaneously. But
I'm proceeding as if this unifying approach *really exists*.
Finding something close to all those principles would make it a
great candidate for the best answer and would probably reveal the
essential aspects that must be taken care of.

>The posts of 'spider' on the "learning from the environment" vs.
>"knowledge freezing" is a good example of this.  Does learning new
>things within a given-by-evolution-or-designer framework
>constitute learning?  Are all the consequences of a given system
>implicit in the axioms and inference rules, or does it still take
>'intelligence' to determine the consequences?
>
>If it takes intelligence to learn within a given environmental or
>formal framework, we've had intelligent systems since the 1960s.
>If reasoning within a given system is not a mark of intelligence,
>then on what evidence do we regard humans as intelligent?
>

Obviously, there's a great problem in defining intelligence solely
in terms of learning, and I myself have committed this sin more
than once. And although learning is part of intelligence (Massimo
Piattelli-Palmarini *notwithstanding*), it is obviously not enough.
A computer with a video camera could "learn" everything about
its environment, to the limit of its hard disk.

Regards,
Sergio Navega.

From: Jim Balter <jqb@sandpiper.net>
Subject: Re: Hahahaha
Date: 22 Feb 1999 00:00:00 GMT
Message-ID: <36D1102F.69C3D869@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <79tpgt$3im$1@oak.prod.itd.earthlink.net> <36c98582@news3.us.ibm.net> <7ac3t7$g4a$1@quine.mathcs.duq.edu> <36cad851@news3.us.ibm.net> <7afb2g$jfa$1@quine.mathcs.duq.edu> <36cc1e90@news3.us.ibm.net>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai,comp.ai.philosophy

Sergio Navega wrote:

> Patrick Juola wrote in message <7afb2g$jfa$1@quine.mathcs.duq.edu>...

[snip snip]

> >Unfortunately, this approach seems to beg the question of what
> >the fundamental principles *are*.   Maybe "beg the question" isn't
> >the right phrase.  The problem is that we don't really know what
> >constitutes intelligence.  It's not clear, or at least it's
> >not uncontroversial, that what a dog, a dolphin, or a chimpanzee
> >does can be grouped under the rubric of "intelligence", especially
> >if what you want is a human-level intelligence --  does that
> >NECESSARILY imply a human-like intelligence as well?

[snip snip snip]

> Obviously, there's a great problem in defining intelligence solely
> in terms of learning, and I myself have committed this sin more
> than once. And although learning is part of intelligence (Massimo
> Piattelli-Palmarini *notwithstanding*), it is obviously not enough.
> A computer with a video camera could "learn" everything about
> its environment, to the limit of its hard disk.

A while back (10/97, according to DejaNews) I gave the following
definition:

"versatility and resourcefulness in acquiring and applying information"

It ain't perfect, but I still think that it is more to the point than
the vast majority of drivel posted on the subject since then.  Looking
at it again, versatility and resourcefulness express breadth and depth,
respectively, and acquisition and application express the input and
output side.  Versatility and resourcefulness are capacities, while
acquisition and application are behaviors.  All other things being
equal, more of either versatility or resourcefulness in either acquiring
or applying information would, I think, generally be seen as indicating
more intelligence.  There may be other elements too, but I think these
are at least most of the major ones.

I don't think it is true that "we don't really know what constitutes
intelligence" -- rather, lexicography, the particular application
of intelligence that abstracts from the range of linguistic usage of a
word within our culture a short description that largely captures that
usage, is difficult, demanding, and widely misunderstood.  That it is
"not uncontroversial" as to whether such contingent issues as humanness
are an element of the *meaning* of "intelligence" shows just how
unintelligent humans can be -- it's like debating whether companies
that are distant from Seattle can really be considered to have "desktop
market share", or if they had Microsoft-level desktop market share,
would they have to be Microsoft-like.

--
<J Q B>

