Selected Newsgroup Message
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Just a thought....
Date: 23 Dec 1998 00:00:00 GMT
Message-ID: <36814a7a.0@news3.ibm.net>
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 23 Dec 1998 19:54:34 GMT, 166.72.29.114
Organization: SilWis
Newsgroups: comp.ai.philosophy
John Smith wrote in message ...
>As people are constantly trying to develop a computer that is intelligent,
>could it be that the reason they fail is that they are trying to make a
>computer that knows everything from the start? I would think a more logical
>approach is to give it an intelligence no higher than that of a new-born
>baby and the ability to learn. Then, even if it takes years, slowly teach it
>everything that a human of the same age would know/learn, and hopefully after
>a few years you really would have an intelligent computer. I mean, if you
>think about it, people are trying to cram what would take us years to learn
>into a computer in just hours.
>
John,
Although your position on this subject seems a bit naive, I
think you have expressed a valid concern.
Today we have a range of systems, from the initial "tabula
rasa" to the fully preloaded. Examples of the former are several
implementations using connectionist techniques (Miikkulainen's
DISCERN, Shastri's SHRUTI, etc.). Examples of the latter include
expert systems and Cycorp's CYC. Systems in between include
Soar and ACT-R.
So, if we have example systems across most of the range of
possibilities, why is it that we don't have *that* kind of AI
we're waiting for? Could it be because we're expecting too much
from those systems? Or could it be that the critics of AI are
right, and we will never have machines that think intelligently?
I'm an optimist. I think we have some amazing discoveries to make
before we find the "fundamental mechanics of intelligence".
But contrary to the AI critics, I am confident that we are able
to discover this mechanism.
What I am sure of is that this discovery will come from researchers
looking at multiple disciplines: AI, cognitive psychology,
and neuroscience. AI must be done by people with solid knowledge
of how biological brains work. It is the only reference we
have so far.
Regards,
Sergio Navega.
From: Bas de Kruyff <B.deKruyff@itude.com>
Subject: Re: Just a thought....
Date: 06 Jan 1999 00:00:00 GMT
Message-ID: <3693854A.E4F0A73E@itude.com>
Content-Transfer-Encoding: 7bit
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net> <36814a7a.0@news3.ibm.net>
X-Accept-Language: en,nl
X-Sender: "Bas de Kruyff" <@smtp.nl.net>
Content-Type: text/plain; charset=us-ascii
X-Complaints-To: abuse@nl.uu.net
X-Trace: koza.nl.net 915637465 16407 212.206.225.58 (6 Jan 1999 15:44:25 GMT)
Organization: Itude Consulting BV
Mime-Version: 1.0
NNTP-Posting-Date: 6 Jan 1999 15:44:25 GMT
Newsgroups: comp.ai.philosophy
Maybe a rather late reaction, but still......
Sergio Navega wrote:
> What I am sure of is that this discovery will come from researchers
> looking at multiple disciplines: AI, cognitive psychology,
> and neuroscience. AI must be done by people with solid knowledge
> of how biological brains work. It is the only reference we
> have so far.
>
Although I largely agree with this statement, you are implying here that AI
should focus on copying a biological brain. I am not a biologist, but I dispute
the idea that most, if not all, human brains work very effectively. And yes, I
consider myself a good example of this statement. I think that as soon as we
have a proper understanding of the human learning process, we should first
abstract this understanding so that we don't make machines that make human
mistakes. In other words: we should understand why we learn the way we do and
why we don't learn differently.
But how do people learn? It would be smart to first try to emulate the learning
process of a simpler creature, such as an ant or a mosquito, and then follow
the evolutionary steps once again. But how do you elicit the knowledge of an
ant.......
Interesting subject......
Cheers,
Bas.
---------------------------------------
B.C. de Kruyff MSc.
Itude Consulting BV tel. 030-6997020
Postbus 968              fax  030-6565788
3700 AZ Zeist
---------------------------------------
http://www.itude.com
---------------------------------------
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Just a thought....
Date: 08 Jan 1999 00:00:00 GMT
Message-ID: <3696516a.0@news3.ibm.net>
References: <xz5g2.51$pj.1364@nsw.nnrp.telstra.net> <36814a7a.0@news3.ibm.net>
<3693854A.E4F0A73E@itude.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 8 Jan 1999 18:41:46 GMT, 129.37.182.254
Organization: SilWis
Newsgroups: comp.ai.philosophy
Bas de Kruyff wrote in message <3693854A.E4F0A73E@itude.com>...
>Maybe a little late reaction, but still......
>
>Sergio Navega wrote:
>
>> What I am sure of is that this discovery will come from researchers
>> looking at multiple disciplines: AI, cognitive psychology,
>> and neuroscience. AI must be done by people with solid knowledge
>> of how biological brains work. It is the only reference we
>> have so far.
>>
>
>Although I largely agree with this statement, you are implying here that AI
>should focus on copying a biological brain. I am not a biologist, but I dispute
>the idea that most, if not all, human brains work very effectively. And yes, I
>consider myself a good example of this statement. I think that as soon as we
>have a proper understanding of the human learning process, we should first
>abstract this understanding so that we don't make machines that make human
>mistakes. In other words: we should understand why we learn the way we do and
>why we don't learn differently.
>
This is also a late reaction, but I find it necessary.
I didn't say we should copy human brains. I said that we'd be better
off understanding how our brain works, because that's the only clue
we have for figuring out the *fundamental* principles behind
intelligence. And establishing these principles is,
in my opinion, the most important (and often neglected) point of AI.
Our problem is not to understand how to devise a rational machine.
That is easy (in fact, it has been done since the late 1950s,
with GPS). What we've learned since then is that, to work efficiently
in our world and to understand our natural language, the "brain"
shouldn't be that "rational". Insisting on rational principles for
the brain is equivalent to trying to model our atmosphere without
using chaos theory: it works for one or two days.
This is not to say that our brain is the definitive model of
intelligence: we have *plenty* of problems (one of them is the recent
discovery of our vulnerability to implanted false memories). We can
correct these in our designs, but I believe that this will have to wait
until "version 2", because we've got to know how to solve the first
part of the problem first.
>But how do people learn? It would be smart to first try to emulate the learning
>process of a simpler creature, such as an ant or a mosquito, and then follow
>the evolutionary steps once again. But how do you elicit the knowledge of an
>ant.......
>
>Interesting subject......
>
>Cheers,
>Bas.
This is indeed one line of research, but I suspect that it will lead
us to a better understanding of what life is and how to "replicate" it
in computers. I prefer approaches that focus directly on intelligence
(a small subset of life).
Regards,
Sergio Navega.