Selected Newsgroup Message
From: rickert@cs.niu.edu (Neil Rickert)
Subject: An HLUT challenge.
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <7bmnkn$2kl@ux.cs.niu.edu>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
Earlier arguments have been over my claim that an HLUT (Humongous
Look Up Table) device cannot emulate human behavior.
I must say that the debate has been interesting. Those who opposed
my position have repeatedly asserted the obviousness of my purported
error. Yet they have failed to give any coherent argument to
demonstrate such an error, and have instead descended to ad hominems
(charges of idiocy, and such).
Clearly my challenge was far too difficult for them to answer. I
therefore shall raise what should be a rather simpler challenge.
The question that I now pose is this:
Can the behavior of an ordinary desktop computer be emulated by
an HLUT system?
I am inclined to suspect that the answer is "no, it cannot be
emulated". However, I eagerly await a clear and convincing
demonstration that I am mistaken in this opinion.
We do need to be clear. I am not asking whether a processor can be
emulated by an HLUT. I think there is no important disagreement over
that. My challenge is with respect to an ordinary desktop computer,
considering the external inputs into that computer.
To avoid confusion, I shall make the problem more precise.
We consider a computer TARGET. Let's say that it is a Pentium PC.
Its only external inputs are a standard ethernet card and a
keyboard. Inside it has rather typical hardware. I shall stipulate
that no data will ever be entered at the keyboard. (The keyboard is
only there so that the system won't hang during booting). The
question, then, is one of emulating the ethernet behavior. We assume
that the ethernet LAN is linked into the Internet.
We shall also suppose that there is another computer, HLUT, of
unspecified design. Those who would answer my challenge get to
design that system in their responses. It might be the fastest
imaginable future supercomputer.
The HLUT system is connected to the same ethernet, so that it can see
all of the packets to and from the TARGET system, and can detect all
network collisions. HLUT also has on its disks (a) a complete copy
of the initial contents of the disk drive of TARGET, including any
program code that it will run, (b) the manufacturer's specifications
of all of the internal components of TARGET, (c) a complete copy of
the BIOS ROM code that TARGET uses to boot itself.
The HLUT system also has a very high speed ATM (or other) network
card. The problem for the HLUT system is to predict the ethernet
output that will come from the TARGET system, and to transmit its
predicted output via ATM interface before the TARGET system actually
sends that output to its ethernet interface. The ATM interface is
allowed to be on a private network, not gatewayed into the Internet,
such that the TARGET system has no way of accessing the predictions
by HLUT.
I will stipulate that, for the duration of the test (say 1 year),
there are no disk drive crashes or other hardware failures and there
are no power outages. The challenge is only that HLUT predict the
outputs, in the correct sequence, before TARGET produces its
outputs. There is no requirement that HLUT gives exact timing as to
when the outputs will appear.
We can divide the challenge into two subproblems:
   1: Is there a general algorithm that can be run on HLUT, using
      the information about TARGET and the ethernet input, such
      that it could successfully predict the output of TARGET.

   2: Failing 1, can a persuasive case be made that once humans
      have examined the programming of TARGET, they could then
      design a program to run on HLUT to predict its output.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <36def9a9@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 4 Mar 1999 21:22:49 GMT, 129.37.182.16
Organization: SilWis
Newsgroups: comp.ai.philosophy
Neil Rickert wrote in message <7bmnkn$2kl@ux.cs.niu.edu>...
>Earlier arguments have been over my claim that an HLUT (Humongous
>Look Up Table) device cannot emulate human behavior.
>
>I must say that the debate has been interesting. Those who opposed
>my position have repeatedly asserted the obviousness of my purported
>error. Yet they have failed to give any coherent argument to
>demonstrate such an error, and have instead descended to ad hominems
>(charges of idiocy, and such).
>
>Clearly my challenge was far too difficult for them to answer. I
>therefore shall raise what should be a rather simpler challenge.
>
>The question that I now pose is this:
>
> Can the behavior of an ordinary desktop computer be emulated by
> an HLUT system?
>
[snip]
A brilliant and thoughtful thought experiment; I suspect it will do
some damage to the opponents' arguments. I'm curious to read the sort
of reasoning that everybody will offer about this.
At first, I became intrigued trying to understand why you relaxed
the condition of precise timing of the HLUT emulator. I thought
that demanding the same timing would put a severe constraint that
would make it easier to accept your conclusion.
On second thought, I saw that this was not necessary. Even without
being forced to present the same timing, I would agree that the
HLUT computer will not be able to present the same responses in
a 1 year period. I suspect that, depending on the load of accesses
that comes through the Ethernet cable, a couple of minutes would
be enough to raise the difference between the systems to a
more than noticeable level.
And if one is allowed to "prepare" what goes through the ethernet,
I would say that a few transactions would be enough to make both
systems, from that moment to eternity, diverge completely.
Regards,
Sergio Navega.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36DFA339.734E866F@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Sergio Navega wrote:
>
> Neil Rickert wrote in message <7bmnkn$2kl@ux.cs.niu.edu>...
> >Earlier arguments have been over my claim that an HLUT (Humongous
> >Look Up Table) device cannot emulate human behavior.
> >
> >I must say that the debate has been interesting. Those who opposed
> >my position have repeatedly asserted the obviousness of my purported
> >error. Yet they have failed to give any coherent argument to
> >demonstrate such an error, and have instead descended to ad hominems
> >(charges of idiocy, and such).
> >
> >Clearly my challenge was far too difficult for them to answer. I
> >therefore shall raise what should be a rather simpler challenge.
> >
> >The question that I now pose is this:
> >
> >   Can the behavior of an ordinary desktop computer be emulated by
> >   an HLUT system?
> >
>
> [snip]
>
> A brilliant and thoughtful thought experiment; I suspect it will do
> some damage to the opponents' arguments.
That's probably because you don't understand the nature of a HLUT
and the nature of the arguments.
> I'm curious to read the sort
> of reasoning that everybody will do about this.
It's nothing that hasn't been said before.
> At first, I became intrigued trying to understand why you relaxed
> the condition of precise timing of the HLUT emulator. I thought
> that demanding the same timing would put a severe constraint that
> would make it easier to accept your conclusion.
>
> On second thought, I saw that this was not necessary. Even without
> being forced to present the same timing, I would agree that the
> HLUT computer will not be able to present the same responses in
> a 1 year period.
But it can *by definition*. You just don't understand what a HLUT is.
> I suspect that, depending on the load of accesses
> that comes through the Ethernet cable, a couple of minutes would
> be enough to raise the difference between the systems to a
> more than noticeable level.
"the systems"? A HLUT is a hypothetical construct. The HLUT in
question is that HLUT which produces the same results as the
target, not some other HLUT.
The claim is and has always been that there is, in the atemporal
theoretical mathematical sense, a HLUT that reproduces any given
observable behavior. There has never been any claim that some
person can build or produce such a thing "ex ante". Only the
God of Hypotheticals can do so.
This same strawman of "constructing" or "programming" a HLUT
keeps getting raised over and over again -- am I not justified
in feeling like you're acting like idiots?
> And if one is allowed to "prepare" what goes through the ethernet,
> I would say that a few transactions would be enough to make both
> systems, from that moment to eternity, diverge completely.
The relevant HLUT is one that doesn't diverge, not one that does.
There is no "ex ante" HLUT. Sheesh.
--
<J Q B>
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36dfeab7@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 Mar 1999 14:31:19 GMT, 166.72.21.221
Organization: SilWis
Newsgroups: comp.ai.philosophy
Jim Balter wrote in message <36DFA339.734E866F@sandpiper.net>...
>Sergio Navega wrote:
>>
>> A brilliant and thoughtful thought experiment; I suspect it will do
>> some damage to the opponents' arguments.
>
>That's probably because you don't understand the nature of a HLUT
>and the nature of the arguments.
>
Or else because there's something in this argument that's deeper
and was not addressed by you all.
>> I'm curious to read the sort
>> of reasoning that everybody will do about this.
>
>It's nothing that hasn't been said before.
>
Probably yes. These threads are very redundant.
>> At first, I became intrigued trying to understand why you relaxed
>> the condition of precise timing of the HLUT emulator. I thought
>> that demanding the same timing would put a severe constraint that
>> would make it easier to accept your conclusion.
>>
>> On second thought, I saw that this was not necessary. Even without
>> being forced to present the same timing, I would agree that the
>> HLUT computer will not be able to present the same responses in
>> a 1 year period.
>
>But it can *by definition*. You just don't understand what a HLUT is.
>
This is important. If that specific HLUT you mention is able to
present the same responses as the system it's emulating, then
what you're trying to assert is that *there is such a HLUT even
in our imagination*. I have no problem with that, but let me ask
you to keep this result and read on.
>[snip]
>> And if one is allowed to "prepare" what goes through the ethernet,
>> I would say that a few transactions would be enough to make both
>> systems, from that moment to eternity, diverge completely.
>
>The relevant HLUT is one that doesn't diverge, not one that does.
>There is no "ex ante" HLUT. Sheesh.
>
Jim, I understand what you mean. I may even be
able to agree with your ponderings. I just want to
express that what you're saying *is not* addressing the
fundamental point that Neil had proposed. I'll humbly
try to clarify.
I agree, there is no way to come up with an "ex ante" HLUT.
I would say that everybody here in this discussion will
agree that we *cannot* find an "ex ante" HLUT for a system
(computer or human being) immersed in this complex world.
That's obviously not the problem.
Now what I think you and others are proposing is that
a HLUT can be constructed by recording, with a
sufficiently fine-grained resolution, all aspects of
the input/output of that system.
If this sounds too "practical" or infeasible, let's say
that there is a mathematical, imaginary HLUT able to
represent all the action/reaction pairs taken by that
system over a period of 1 year or a lifetime. There should
exist such a HLUT. I do not dispute this.
The claim at stake, and the one I agree with entirely, is that
this recorded HLUT *cannot*, *by any means*, be "played back"
in such a way as to present intelligent behavior comparable
to the system it is emulating. There isn't any practical
condition that we're able to enforce such that the playback
(or "execution") of that HLUT will present comparable
behavior.
This HLUT is not much different from a very high resolution,
multitrack videocassette. A recorder of past actions.
To say that the HLUT is capable of reproducing equivalent
intelligent behavior assumes that we're able to "rewind"
the universe (backward time travel) and, even worse, that
the universe is totally deterministic (will run exactly
the same way as "before"). I believe those conditions to
be far more mind-boggling than the existence of that HLUT
in the first place.
Regards,
Sergio Navega.
From: modlin@concentric.net
Subject: Re: An HLUT challenge.
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <7bnbaa$ovs@journal.concentric.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu>
Organization: Concentric Internet Services
Reply-To: modlin@concentric.net
Newsgroups: comp.ai.philosophy
In <7bmnkn$2kl@ux.cs.niu.edu>, rickert@cs.niu.edu (Neil Rickert) writes:
[challenge to emulate a computer with an HLUT snipped]
>We can divide the challenge into two subproblems:
>
>   1: Is there a general algorithm that can be run on HLUT, using
>      the information about TARGET and the ethernet input, such
>      that it could successfully predict the output of TARGET.
>
>   2: Failing 1, can a persuasive case be made that once humans
>      have examined the programming of TARGET, they could then
>      design a program to run on HLUT to predict its output.
>
>
Neil, this clearly illustrates that you have not grasped the HLUT
concept.
NOBODY claims that there is such an algorithm, or that humans could
design such a program.
The whole of the HLUT thought experiment is to assume that a
complete list of all those predictions exists. Nobody designs
it, there is no algorithm to produce it, everyone knows that
it is impossible in practice. But assume for the sake of
discussion that this impossible thing exists.
GIVEN a complete list of all the responses that a computer will
make to all of its possible inputs, then the HLUT will mimic the
computer externally in every detail, and the question is then
the philosophical one of whether it could be said to be
"really" computing as it looks up all these answers.
Your challenge is quite beside the point.
Bill Modlin
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 04 Mar 1999 00:00:00 GMT
Message-ID: <7bnldu$4bb@ux.cs.niu.edu>
References: <7bmnkn$2kl@ux.cs.niu.edu> <7bnbaa$ovs@journal.concentric.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
modlin@concentric.net writes:
>In <7bmnkn$2kl@ux.cs.niu.edu>, rickert@cs.niu.edu (Neil Rickert) writes:
>[challenge to emulate a computer with an HLUT snipped]
>>We can divide the challenge into two subproblems:
>>   1: Is there a general algorithm that can be run on HLUT, using
>>      the information about TARGET and the ethernet input, such
>>      that it could successfully predict the output of TARGET.
>>   2: Failing 1, can a persuasive case be made that once humans
>>      have examined the programming of TARGET, they could then
>>      design a program to run on HLUT to predict its output.
>Neil, this clearly illustrates that you have not grasped the HLUT
>concept.
>NOBODY claims that there is such an algorithm, or that humans could
>design such a program.
In that case, there could be no basis for the HLUT claim. Any proof
of the validity of the HLUT claim would, in effect, constitute an
algorithm for generating the contents of the table, although perhaps
not an algorithm that could actually be implemented on practical
computers. (Recall that I did not require that the HLUT computer be
a practical machine). So let's just take the HLUT claim for what it
is -- a piece of ideology which some people fervently believe, but
which nobody can justify.
>The whole of the HLUT thought experiment is to assume that a
>complete list of all those predictions exists. Nobody designs
>it, there is no algorithm to produce it, everyone knows that
>it is impossible in practice. But assume for the sake of
>discussion that this impossible thing exists.
Then, for the sake of our discussion, why not just assume that
miracles and magic exist. Perhaps we can do it all with a crystal
ball, and we don't need anything as cumbersome as an HLUT.
You are making the HLUT argument into an absurdity. Unless it could
be demonstrated that an HLUT could exist in principle, it would have
no relevance to the questions for which it is raised. If you are not
satisfied with my 1: and 2: above, I can offer a third alternative:
3: At least give a clear and convincing proof of the existence
of a program to run on HLUT.
>GIVEN a complete list of all the responses that a computer will
>make to all of its possible inputs, then the HLUT will mimic the
>computer externally in every detail, and the question is then
>the philosophical one of whether it could be said to be
>"really" computing as it looks up all these answers.
You would have to first establish that there could be such a complete
(finite) list.
>Your challenge is quite beside the point.
In other words, you are clueless on this, yet ideologically
committed, and serving up only empty rhetoric.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36DFA927.22FEFBD4@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <7bmnkn$2kl@ux.cs.niu.edu> <7bnbaa$ovs@journal.concentric.net>
<7bnldu$4bb@ux.cs.niu.edu>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Neil Rickert wrote:
>
> modlin@concentric.net writes:
>
> >In <7bmnkn$2kl@ux.cs.niu.edu>, rickert@cs.niu.edu (Neil Rickert) writes:
>
> >[challenge to emulate a computer with an HLUT snipped]
>
> >>We can divide the challenge into two subproblems:
>
> >>   1: Is there a general algorithm that can be run on HLUT, using
> >>      the information about TARGET and the ethernet input, such
> >>      that it could successfully predict the output of TARGET.
>
> >>   2: Failing 1, can a persuasive case be made that once humans
> >>      have examined the programming of TARGET, they could then
> >>      design a program to run on HLUT to predict its output.
>
> >Neil, this clearly illustrates that you have not grasped the HLUT
> >concept.
>
> >NOBODY claims that there is such an algorithm, or that humans could
> >design such a program.
>
> In that case,
"In that case"? "In that case"? Where the hell have you
been?
My God, what a jackass you are.
> there could be no basis for the HLUT claim. Any proof
> of the validity of the HLUT claim would, in effect, constitute an
> algorithm for generating the contents of the table,
"NOBODY claims there is such an algorithm". You just don't
get it. That *isn't* the "HLUT claim". It's a fucking
*existence proof*.
> In other words, you are clueless on this, yet ideologically
> committed, and serving up only empty rhetoric.
Talk about clueless!
--
<J Q B>
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp1nk$5m2@ux.cs.niu.edu>
References: <7bnldu$4bb@ux.cs.niu.edu> <36DFA927.22FEFBD4@sandpiper.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
Jim Balter <jqb@sandpiper.net> writes:
>> there could be no basis for the HLUT claim. Any proof
>> of the validity of the HLUT claim would, in effect, constitute an
>> algorithm for generating the contents of the table,
>"NOBODY claims there is such an algorithm". You just don't
>get it. That *isn't* the "HLUT claim". It's a fucking
>*existence proof*.
There is no existence proof, whether fucking or otherwise. There is
just an incredible ideological commitment to this among some very
confused people.
Behavior is something real that occurs in the world. At best you
could have an existence proof for an HLUT that matches a suitably
constrained formal specification of behavior. But that would utterly
beg the question of whether behavior is formally specifiable within
those constraints, or even whether behavior is formally specifiable
at all.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36DF9C9B.63FDD6A0@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Neil Rickert wrote:
>
> Earlier arguments have been over my claim that an HLUT (Humongous
> Look Up Table) device cannot emulate human behavior.
>
> I must say that the debate has been interesting. Those who opposed
> my position have repeatedly asserted the obviousness of my purported
> error. Yet they have failed to give any coherent argument to
> demonstrate such an error,
That's a lie.
> and have instead descended to ad hominems
> (charges of idiocy, and such).
You started the ad hominems and continue with them into this message.
I didn't say you're an idiot, I said you're *acting* like one.
> Clearly my challenge was far too difficult for them to answer.
That is the sort of ad hominem crap you have littered this discussion
with all along.
> I am inclined to suspect that the answer is "no, it cannot be
> emulated". However, I eagerly await a clear and convincing
> demonstration that I am mistaken in this opinion.
You have repeatedly failed to accept straightforward demonstrations;
why should things change?
>   1: Is there a general algorithm that can be run on HLUT, using
>      the information about TARGET and the ethernet input, such
>      that it could successfully predict the output of TARGET.
The general HLUT algorithm is:
forever: (state, output) = table[state, input]
For any particular finite stream of outputs from TARGET,
there exists a HLUT that will produce the output with
no inputs at all. That satisfies the requirement,
since *in principle* some HLUT could produce the required
prediction for any given instance of TARGET. But no doubt you want
to take the same HLUT and test it again. Well, if there
is a deterministic relationship between the inputs and
the outputs, then that means there is an abstract
(infinite) mapping. Given a finite period (1 year)
and a finite time density of inputs, there are only finitely
many (n) input streams. Each of these, ISi, has a corresponding
(by the deterministic relationship) output stream, OSi.
There is, by definition, a finite HLUT that maps
(i = 1..n) ISi -> OSi.
If there is not a deterministic relationship, then there is
a nondeterministic mapping such that, for each
ISi there are several possible OSij, each of which occurs
REPj times. A HLUT which, for each ISi, produced the OSij
with maximum REPj would have maximal probability of matching
TARGET's outputs. Over a series of trials, it will do at least
as well as any other predictor.
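To make that construction concrete, here is a toy sketch in Python;
the enumeration of observed (input stream, output stream) runs is
assumed to be given by hypothesis, not computed, and all names are
illustrative:

    # Build a finite table mapping each input stream ISi to the output
    # stream OSij observed most often (the maximum-REPj rule); for a
    # deterministic TARGET each ISi has exactly one OSi anyway.
    from collections import Counter

    def build_hlut(observed_runs):
        tallies = {}
        for ins, outs in observed_runs:
            tallies.setdefault(tuple(ins), Counter())[tuple(outs)] += 1
        return {ins: c.most_common(1)[0][0] for ins, c in tallies.items()}

    def hlut_predict(hlut, input_stream):
        # Pure lookup: no computation over the inputs, just retrieval.
        return hlut[tuple(input_stream)]

    # Toy usage with made-up streams:
    runs = [(("a", "b"), ("x",)), (("a", "b"), ("x",)), (("a", "b"), ("y",))]
    table = build_hlut(runs)
    assert hlut_predict(table, ("a", "b")) == ("x",)  # majority output wins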
>   2: Failing 1, can a persuasive case be made that once humans
>      have examined the programming of TARGET, they could then
>      design a program to run on HLUT to predict its output.
No, for the umpteenth time, no one can design a HLUT.
Once again, you have paid no attention to what has been said.
--
<J Q B>
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp138$5jj@ux.cs.niu.edu>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
Jim Balter <jqb@sandpiper.net> writes:
>Neil Rickert wrote:
>> Earlier arguments have been over my claim that an HLUT (Humongous
>> Look Up Table) device cannot emulate human behavior.
>> I must say that the debate has been interesting. Those who opposed
>> my position have repeatedly asserted the obviousness of my purported
>> error. Yet they have failed to give any coherent argument to
>> demonstrate such an error,
>That's a lie.
There is no lie. Your side of the argument has made many claims, but
has failed to provide justifying arguments.
>> and have instead descended to ad hominems
>> (charges of idiocy, and such).
>You started the ad hominems and continue with them into this message.
>I didn't say you're an idiot, I said you're *acting* like one.
>> Clearly my challenge was far too difficult for them to answer.
>That is the sort of ad hominem crap you have littered this discussion
>with all along.
It is more like gentle teasing.
>> I am inclined to suspect that the answer is "no, it cannot be
>> emulated". However, I eagerly await a clear and convincing
>> demonstration that I am mistaken in this opinion.
>You have repeatedly failed to accept straightforward demonstrations;
>why should things change?
Just provide the straightforward demonstrations (if you can).
>>   1: Is there a general algorithm that can be run on HLUT, using
>>      the information about TARGET and the ethernet input, such
>>      that it could successfully predict the output of TARGET.
>The general HLUT algorithm is:
> forever: (state, output) = table[state, input]
Finally, the beginnings of an argument.
>For any particular finite stream of outputs from TARGET,
>...
> ... Well, if there
            ^^^^^^^^
>is a deterministic relationship between the inputs and
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>the outputs, then that means there is an abstract
 ^^^^^^^^^^^

Aha. A crucial assumption which has not previously been mentioned in
all of the purported arguments given. Now if you would kindly oblige
us by establishing a basis for this assumption, perhaps we can make
progress.
>If there is not a deterministic relationship, then there is
>a nondeterministic mapping such that, for each
>ISi there are several possible OSij, each of which occurs
>REPj times. A HLUT which, for each ISi, produced the OSij
>with maximum REPj would have maximal probability of matching
>TARGET's outputs. Over a series of trials, it will do at least
>as well as any other predictor.
Perhaps there is virtually no repetition. In that case, your HLUT
may not do better than a random number generator.
Evidently you cannot deal with the non-deterministic case. At best
you can claim that no other predictor will do better. However, the
original system is a perfect emulator of its own behavior. So the
original system does do better at emulating itself than could any
HLUT.
So, we finally see that the HLUT argument is based on the assumption
of a deterministic relation between inputs and outputs.
I now await your justification for making such an assumption.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36e01caf@news3.us.ibm.net>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 Mar 1999 18:04:31 GMT, 166.72.21.224
Organization: SilWis
Newsgroups: comp.ai.philosophy
Neil Rickert wrote in message <7bp138$5jj@ux.cs.niu.edu>...
>Jim Balter <jqb@sandpiper.net> writes:
>
>>For any particular finite stream of outputs from TARGET,
>>...
>
>> ... Well, if there
>            ^^^^^^^^
>>is a deterministic relationship between the inputs and
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>the outputs, then that means there is an abstract
> ^^^^^^^^^^^
>
>Aha. A crucial assumption which has not previously been mentioned in
>all of the purported arguments given. Now if you would kindly oblige
>us by establishing a basis for this assumption, perhaps we can make
>progress.
>
If you allow me, I would like to say Aha! too. I suggest that Jim
and Daryl rethink their whole argumentations without assuming the
necessity of determinism. I'm inclined to think that they may end
agreeing with Neil, at least in the problem presented at the
beginning of this thread.
Regards,
Sergio Navega.
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp74a$67p@ux.cs.niu.edu>
References: <7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> writes:
>Neil Rickert wrote in message <7bp138$5jj@ux.cs.niu.edu>...
>>Jim Balter <jqb@sandpiper.net> writes:
>>>For any particular finite stream of outputs from TARGET,
>>>...
>>> ... Well, if there
>>            ^^^^^^^^
>>>is a deterministic relationship between the inputs and
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>the outputs, then that means there is an abstract
>> ^^^^^^^^^^^
>>Aha. A crucial assumption which has not previously been mentioned in
>>all of the purported arguments given. Now if you would kindly oblige
>>us by establishing a basis for this assumption, perhaps we can make
>>progress.
>If you allow me, I would like to say Aha! too. I suggest that Jim
>and Daryl rethink their whole argumentations without assuming the
>necessity of determinism. I'm inclined to think that they may end
>agreeing with Neil, at least in the problem presented at the
>beginning of this thread.
I would claim that:
(a) Human behavior is not deterministic (in the sense in which Jim
defined the term);
 (b) Human behavior does considerably better than what you would get
     by making a random selection from a deterministically chosen
     set of alternatives;
(c) The characteristics given in (a) and (b) have a great deal to
do with what we mean by "intelligent."
Incidentally, my HLUT challenge was intended to make the point that the
behavior of a desktop computer need not be deterministic (in the
sense of Jim's definition).
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36E78CEC.5933BA72@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
<7bp74a$67p@ux.cs.niu.edu>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Neil Rickert wrote:
>
> "Sergio Navega" <snavega@ibm.net> writes:
> >Neil Rickert wrote in message <7bp138$5jj@ux.cs.niu.edu>...
> >>Jim Balter <jqb@sandpiper.net> writes:
>
> >>>For any particular finite stream of outputs from TARGET,
> >>>...
>
> >>> ... Well, if there
> >>            ^^^^^^^^
> >>>is a deterministic relationship between the inputs and
> >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>>the outputs, then that means there is an abstract
> >> ^^^^^^^^^^^
>
> >>Aha. A crucial assumption which has not previously been mentioned in
> >>all of the purported arguments given. Now if you would kindly oblige
> >>us by establishing a basis for this assumption, perhaps we can make
> >>progress.
>
> >If you allow me, I would like to say Aha! too. I suggest that Jim
> >and Daryl rethink their whole argumentations without assuming the
> >necessity of determinism. I'm inclined to think that they may end
> >agreeing with Neil, at least in the problem presented at the
> >beginning of this thread.
>
> I would claim that:
>
> (a) Human behavior is not deterministic (in the sense in which Jim
> defined the term);
I doubt that you understood it, since of course human behavior is
not deterministic -- we possess no deterministic model of human
behavior.
But perhaps some day we will, in which case human behavior will be
deterministic. However, given the apparent chaotic nature of the
brain, it is unlikely that any deterministic model of the brain
can be developed (which is not to say that it is impossible in theory,
although QM may well rule *that* out).
> (b) Human behavior does considerably better than what you would get
>     by making a random selection from a deterministically chosen
>     set of alternatives;
Not if the "deterministically chosen", whatever that means, set of
alternatives always contained exactly one member, namely the best
(whatever *that* means) one.
> (c) The characteristics given in (a) and (b) have a great deal to
> do with what we mean by "intelligent."
Certainly "doing better than random" has a great deal to do with
what we mean by "intelligent", but that's rather vacuous.
> Incidentally, my HLUT challenge was intended to make the point that the
> behavior of a desktop computer need not be deterministic (in the
> sense of Jim's definition).
A system is non-deterministic if we have no predictive model of it;
it isn't an intrinsic attribute above the quantum level.
You are in an essentialist rut on this issue.
--
<J Q B>
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp7ld$dgj@edrn.newsguy.com>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>If you allow me, I would like to say Aha! too. I suggest that Jim
>and Daryl rethink their whole argumentations without assuming the
>necessity of determinism. I'm inclined to think that they may end
>agreeing with Neil, at least in the problem presented at the
>beginning of this thread.
Determinism is a red herring. It is completely irrelevant to the
argument. If (for whatever reason) you want nondeterministic
behavior, make one of the inputs be a random number generator.
Besides, there is no observable difference between a deterministic
system and a nondeterministic system. Perhaps you (and Neil) are
thinking of determinism in terms of the *last* input:
1. Q: What is your name?
2. A: Fred Sanders
3. Q: What is your name?
4. A: I already told you, my name is Fred Sanders.
If an AI program responded exactly the same to the second
question as to the first, then it would definitely not
be behaving much like a human. But, as has been explained
many times, the input to the HLUT is the entire *sequence*
of inputs received up to the present time, not just the
last one. So there is no problem in making the response
to the sequence: <"What is your name?", "What is your name">
be different from the response to the sequence
<"What is your name?">.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bpcb5$6nq@ux.cs.niu.edu>
References: <36e01caf@news3.us.ibm.net> <7bp7ld$dgj@edrn.newsguy.com>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
daryl@cogentex.com (Daryl McCullough) writes:
>>If you allow me, I would like to say Aha! too. I suggest that Jim
>>and Daryl rethink their whole argumentations without assuming the
>>necessity of determinism. I'm inclined to think that they may end
>>agreeing with Neil, at least in the problem presented at the
>>beginning of this thread.
>Determinism is a red herring. It is completely irrelevant to the
>argument. If (for whatever reason) you want nondeterministic
>behavior, make one of the inputs be a random number generator.
No, that is wrong. The claim has been that an HLUT can emulate a
system. Since the HLUT is deterministic, this is relevant. For an
HLUT cannot emulate a non-deterministic system.
>Perhaps you (and Neil) are
>thinking of determinism in terms of the *last* input:
I am certainly not doing that.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36E789ED.65A8F38B@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <36e01caf@news3.us.ibm.net> <7bp7ld$dgj@edrn.newsguy.com>
<7bpcb5$6nq@ux.cs.niu.edu>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Neil Rickert wrote:
>
> daryl@cogentex.com (Daryl McCullough) writes:
>
> >>If you allow me, I would like to say Aha! too. I suggest that Jim
> >>and Daryl rethink their whole argumentations without assuming the
> >>necessity of determinism. I'm inclined to think that they may end
> >>agreeing with Neil, at least in the problem presented at the
> >>beginning of this thread.
>
> >Determinism is a red herring. It is completely irrelevant to the
> >argument. If (for whatever reason) you want nondeterministic
> >behavior, make one of the inputs be a random number generator.
>
> No, that is wrong. The claim has been that an HLUT can emulate a
> system. Since the HLUT is deterministic, this is relevant. For an
> HLUT cannot emulate a non-deterministic system.
The inputs to a HLUT need not be (and presumably are not) deterministic.
And in terms of reproducing a device that is non-deterministic in the
sense that we have no deterministic model of it, this is not an issue
because the existence of a HLUT (the set of such HLUTs is not empty)
that duplicates the actual behavior of that device is in no way
dependent upon *our* being able to determine its behavior. Set-wise
existence, which is the form of the HLUT claim, is *atemporal* --
determinism has no applicability.
There is also a meaning of "[non-]deterministic" that appears in
connection with machine models, e.g., deterministic and
nondeterministic TMs. But a non-deterministic TM is just one that
allows multiple successor states. Any non-deterministic TM
computes the same function as some deterministic TM. Or, in other
words, any non-deterministic TM can be emulated by some deterministic
TM.
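The idea behind that equivalence can be sketched in a few lines of
Python; for brevity this uses a finite automaton rather than a full TM,
but the trick is the same: a deterministic machine tracks the set of
all states the nondeterministic one could be in. The machine below is
invented for illustration:

    # delta maps (state, symbol) to a *set* of successors, i.e. the
    # transition relation of a nondeterministic machine.
    def run_deterministically(delta, start, accepting, inputs):
        current = {start}          # every state the machine could occupy
        for sym in inputs:
            current = {n for s in current for n in delta.get((s, sym), set())}
        return bool(current & accepting)

    # Toy machine: on 'a', state 0 may stay at 0 or move to 1;
    # state 1 reaches the accepting state 2 on 'b'.
    delta = {(0, "a"): {0, 1}, (1, "b"): {2}}
    print(run_deterministically(delta, 0, {2}, "aab"))  # True: a branch accepts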
Whatever Neil means by "a non-deterministic system", if he even has
a well-formed notion in mind, his claim is false. And it can't be
rescued by accusing me of ideological commitments or other such ad
hominem trash. The validity of claims simply isn't subject to such
petty concerns.
--
<J Q B>
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <7cjdsk$9gk@ux.cs.niu.edu>
References: <7bpcb5$6nq@ux.cs.niu.edu> <36E789ED.65A8F38B@sandpiper.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
Jim Balter <jqb@sandpiper.net> writes:
>Neil Rickert wrote:
>> No, that is wrong. The claim has been that an HLUT can emulate a
>> system. Since the HLUT is deterministic, this is relevant. For an
>> HLUT cannot emulate a non-deterministic system.
>The inputs to a HLUT need not be (and presumably are not) deterministic.
Completely irrelevant to the points that were being discussed.
>And in terms of reproducing a device that is non-deterministic in the
>sense that we have no deterministic model of it, this is not an issue
>because the existence of a HLUT (the set of such HLUTs is not empty)
>that duplicates the actual behavior of that device is in no way
>dependent upon *our* being able to determine its behavior. Set-wise
>existence, which is the form of the HLUT claim, is *atemporal* --
>determinism has no applicability.
Notice how Balter has dishonestly attempted to distort the question.
It was never a question about "*our* being able to determine its
behavior."
>There is also a meaning of "[non-]deterministic" that appears in
>connection with machine models, e.g., deterministic and
>nondeterministic TMs. ....
This is another pointless misdirection.
>Whatever Neil means by "a non-deterministic system", if he even has
>a well-formed notion in mind, his claim is false.
Clearly Balter is not interested in what I meant, for he has worked
real hard at obfuscating it.
>And it can't be
>rescued by accusing me of ideological commitments or other such ad
>hominem trash.
I will let the record speak for itself.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36e0464e@news3.us.ibm.net>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
<7bp7ld$dgj@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 Mar 1999 21:02:06 GMT, 166.72.29.20
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7bp7ld$dgj@edrn.newsguy.com>...
>Sergio says...
>
>>If you allow me, I would like to say Aha! too. I suggest that Jim
>>and Daryl rethink their whole argumentations without assuming the
>>necessity of determinism. I'm inclined to think that they may end
>>agreeing with Neil, at least in the problem presented at the
>>beginning of this thread.
>
>Determinism is a red herring. It is completely irrelevant to the
>argument. If (for whatever reason) you want nondeterministic
>behavior, make one of the inputs be a random number generator.
>
No, I don't want a nondeterministic HLUT. What I'm saying is that
the world in which this HLUT was generated is nondeterministic.
If you don't want to talk about the construction of the HLUT,
ok, let's forget how we made it. Let's just think of that mathematical
concept of a HLUT that is a reliable copy of every single if/then
condition of the life of one intelligent being, down to the
femtosecond level.
This HLUT will not present intelligent behavior, because this
HLUT *does not capture the invariant properties* of that world.
It captures what happened during one "instance" of that world,
and that instance will never repeat again, because of
indeterminacy. The HLUT may receive as input a known vector,
but its response will be something that may not be appropriate
for that "run".
>Besides, there is no observable difference between a deterministic
>system and a nondeterministic system. Perhaps you (and Neil) are
>thinking of determinism in terms of the *last* input:
>
> 1. Q: What is your name?
> 2. A: Fred Sanders
>
> 3. Q: What is your name?
> 4. A: I already told you, my name is Fred Sanders.
>
>If an AI program responded exactly the same to the second
>question as to the first, then it would definitely not
>be behaving much like a human. But, as has been explained
>many times, the input to the HLUT is the entire *sequence*
>of inputs received up to the present time, not just the
>last one. So there is no problem in making the response
>to the sequence: <"What is your name?", "What is your name">
>be different from the response to the sequence
><"What is your name?">.
>
Let me propose a different dialog. Questions 1 and 2 are
the same you wrote. Here are my replacements:
3. Q: What is your name?
4. A: Freddy.
I hope you're not afraid of highly hypothetical situations,
because that's what I'll do now (please, try to find the
valuable gist in the middle of the absurd I'm about to write;
this whole story of HLUTs is so out-of-mind that I'm not
ashamed to propose this).
Suppose that this conversation was taking place in a subway.
The noise level was really high, in a manner as to leave human
voices almost inaudible. Suppose, further, that Fred was not
sure that his phrase 2 had been fully understood. But this
"feeling" of Fred was really on the edge. In fact, it was
so in the edge that what made Fred decide that he wasn't
being heard was a *single* molecule of nitrogen vibrating
because of sound that made a difference in a cluster of vibrating
molecules that made a difference in his tympanum that made a
difference in his auditory circuit that made a difference in
his cortex that was exactly in the position of deciding for
that issue.
In another "instance" of this world, this nitrogen molecule,
due to indeterminacy, may make that difference not meaningful.
If this seems absurdly unlikely, think that every atom and
every molecule of us (including the firing rate of our neurons)
and of this universe is being driven by similar situations.
A HLUT does not have the "capacity" to *abstract* the invariant
(or highly constant) aspects of this universe, and thus will
always answer with fixed things that, due to context, may even
be inappropriate.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bpkmq$6d5@edrn.newsguy.com>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
<7bp7ld$dgj@edrn.newsguy.com> <36e0464e@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>If you don't want to talk about the construction of the HLUT,
>ok, lets forget how we made it. Lets just think of that mathematical
>concept of a HLUT that is a reliable copy of every single if/then
>condition of the life of one intelligent being, down to the
>femtosecond level.
>
>This HLUT will not present intelligent behavior, because this
>HLUT *does not capture the invariant properties* of that world.
>It captures what happened during one "instance" of that world,
>and that instance will never repeat again.
No, the HLUT by assumption has an appropriate response for all
*possible* instances of the world.
[stuff deleted]
>A HLUT does not have the "capacity" to *abstract* the invariant
>(or highly constant) aspects of this universe, and thus will
>always answer with fixed things that, due to context, may even
>be inappropriate.
A different context is a different input, and the HLUT will
produce a different output.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: houlepn@my-dejanews.com
Subject: Re: An HLUT challenge.
Date: 06 Mar 1999 00:00:00 GMT
Message-ID: <7bq3ke$sgb$1@nnrp1.dejanews.com>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
<7bp7ld$dgj@edrn.newsguy.com> <36e0464e@news3.us.ibm.net>
X-Http-Proxy: 1.0 x8.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Sat Mar 06 02:24:23 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.05 [en] (Win95; U)
"Sergio Navega" <snavega@ibm.net> wrote:
<snip>
[S. NAVEGA]
> Let me propose a different dialog. Questions 1 and 2 are
> the same you wrote. Here's my replacements:
>
> 3. Q: What is your name?
> 4. A: Freddy.
>
> I hope you're not afraid of highly hypothetical situations,
> because that's what I'll do now (please, try to find the
> valuable gist in the middle of the absurd I'm about to write;
> this whole story of HLUTs is so out-of-mind that I'm not
> ashamed to propose this).
>
> Suppose that this conversation was taking place in a subway.
> The noise level was really high, in a manner as to leave human
> voices almost inaudible. Suppose, further, that Fred was not
> sure that his phrase 2 had been fully understood. But this
> "feeling" of Fred was really on the edge. In fact, it was
> so in the edge that what made Fred decide that he wasn't
> being heard was a *single* molecule of nitrogen vibrating
> because of sound that made a difference in a cluster of vibrating
> molecules that made a difference in his tympanum that made a
> difference in his auditory circuit that made a difference in
> his cortex that was exactly in the position of deciding for
> that issue.
>
> In another "instance" of this world, this nitrogen molecule,
> due to indeterminacy, may make that difference not meaningful.
>
> If this seems absurdly unlikely, think that every atom and
> every molecule of us (including the firing rate of our neurons)
> and of this universe is being driven by similar situations.
>
> A HLUT does not have the "capacity" to *abstract* the invariant
> (or highly constant) aspects of this universe, and thus will
> always answer with fixed things that, due to context, may even
> be inappropriate.
[PNH]
I would like to propose another way of looking at this problem
because I don't think what you say contradicts Daryl McCullough's
viewpoint.
Let's consider some finite time period of length T in the life of
Freddy in which he grows, learns, sleeps, etc. Consider now the
state vector of the universe |U_i> at the beginning of this time
period. This could be factored into |environment_i> x |Freddy_i>.
How will Freddy answer question Q at time i + t (t < T)? Use the
following algorithm:
1) Apply the unitary time evolution operator to |U_i> to get |U_i+t>
2) Factor |U_i+t> into |environment_i+t> x |Freddy_i+t> (this tricky
step might require more intelligence than Freddy actually possesses ;)
3) Identify some possible state of the environment that encodes question
Q at time t: |environment_Q>. This is the input state vector.
4) Identify some possible state of Freddy that encodes answer A at time
t+dt : |Freddy_A>. (Ignore states like |Freddy_just_died> or
|Freddy_has_been_turned_into_a_cow_due_to_some_weird_quantum_fluctuation>)
5) Let Freddy ponder the question for a time interval dt of a few
seconds. That is: compute |U_i+t+dt> and get the probability
of Freddy's answer being A as the coefficient of the output state
vector |Freddy_A> in the normalized ket:
<environment_Q+dt|U_i+t+dt>.
Use this algorithm to compute the probability p(Q,A) for all possible
distinguishable Q/A pairs at all time intervals dt in the time
period (i,i+T). Call the resulting matrix "Freddy's FAQ" or view it
as Freddy's quantum HLUT.
To implement (in thought!) this HLUT, follow Daryl's suggestion and make
one of its inputs a random number generator used to give answer A to
question Q with probability p(Q,A).
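For concreteness, the arithmetic of step 5 can be sketched numerically
in Python; the 2-dimensional state space and the rotation used as the
time-evolution operator below are of course invented stand-ins for the
real |U> and unitary:

    import numpy as np

    U_i = np.array([1.0, 0.0], dtype=complex)        # toy |U_i>
    theta = 0.3                                      # invented parameter
    evolve = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]], dtype=complex)

    freddy_A = np.array([0.0, 1.0], dtype=complex)   # state encoding answer A

    U_final = evolve @ U_i                           # unitary time evolution
    p_QA = abs(np.vdot(freddy_A, U_final)) ** 2      # probability of answer A
    print(p_QA)                                      # sin^2(0.3), about 0.087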
Note 1) The formal problem of Freddy becoming entangled with the
environment during the time interval in which it ponders the question
(and thus blurring the distinction between |Fred> and |environment>)
could be minimized by having him isolated in a dull static room and
interacting with his environment through a terminal for the whole
period T considered in the definition of the HLUT (just as in the
original Turing proposal). The only important entanglement would
then mainly be between Freddy and the informational content of the
question asked (And this is essential if we wish Freddy to answer at all!)
Note 2) We do not have to worry about Freddy's particular past
histories (including everything that could possibly happen to him in
his room) as these are all contained in |U_i> (and the unitary operator).
Note 3) This is a thought experiment intended to provide an existence
proof in the mathematical sense. For instance, we will never know |U>,
but we can assume it exists. I think it is agreed that HLUTs will
never be implemented. (This is why they are called HLUTs rather than
LUTs, I guess?)
Regards,
Pierre-Normand Houle
-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 06 Mar 1999 00:00:00 GMT
Message-ID: <36e139ff@news3.us.ibm.net>
References: <7bmnkn$2kl@ux.cs.niu.edu> <36DF9C9B.63FDD6A0@sandpiper.net>
<7bp138$5jj@ux.cs.niu.edu> <36e01caf@news3.us.ibm.net>
<7bp7ld$dgj@edrn.newsguy.com> <36e0464e@news3.us.ibm.net>
<7bq3ke$sgb$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 6 Mar 1999 14:21:51 GMT, 166.72.21.142
Organization: SilWis
Newsgroups: comp.ai.philosophy
houlepn@my-dejanews.com wrote in message
<7bq3ke$sgb$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
><snip>
>
>[S. NAVEGA]
>> Let me propose a different dialog. Questions 1 and 2 are
>> the same you wrote. Here's my replacements:
>>
>> 3. Q: What is your name?
>> 4. A: Freddy.
>>[snip]
>[PNH]
>I would like to propose another way of looking at this problem
>because I don't think what you say contradicts Daryl McCullough's
>viewpoint.
>
>Let's consider some finite time period of length T in the life of
>Freddy in which he grows, learns, sleeps, etc. Consider now the
>state vector of the universe |U_i> at the beginning of this time
>period. This could be factored into |environment_i> x |Freddy_i>.
>How will Freddy answer question Q at time i + t (t < T)? Use the
>following algorithm:
>
Pierre-Normand, would you read the answer I gave to Daryl? I think
my point there applies equally well to your comments.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp2lm$492@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>Now what I think you and others are proposing is that
>a HLUT can be constructed by recording, with a
>sufficiently fine-grained resolution, all aspects of
>the input/output of that system.
No. Nobody is suggesting any method for constructing
an HLUT whatsoever.
>If this sounds too "practical" or infeasible, let's say
>that there is a mathematical, imaginary HLUT able to
>represent all the action/reaction pairs taken by that
>system over a period of 1 year or a lifetime. There should
>exist such a HLUT. I do not dispute this.
>The claim at stake, and the one I agree with entirely, is that
>this recorded HLUT *cannot*, *by any means*, be "played back"
>in such a way as to present intelligent behavior comparable
>to the system it is emulating. There isn't any practical
>condition that we're able to enforce such that the playback
>(or "execution") of that HLUT will present comparable
>behavior.
Everybody agrees that implementing and executing such an
HLUT is not practical.
>This HLUT is not much different from a very high resolution,
>multitrack videocassette. A recorder of past actions.
Not quite. It is a table of pairs <input_history,next_output>.
The input history can be thought of as a very high resolution
multitrack videocassette (if we limit the HLUT to just audiovisual
inputs). The HLUT, by definition, has a next_output for *every*
possible input_history; that is, for every possible videocassette.
If the videocassette is digital, and we only consider videocassettes
of length less than 100 years, then there are only finitely many
possible videocassettes.
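As a back-of-envelope check of that finiteness (with invented
digitization numbers): if a history is a bit string of length at most
N, there are 2^(N+1) - 1 of them in all, as the Python below shows:

    def num_histories(max_bits):
        # sum of 2^k for k = 0 .. max_bits: every bit string up to that length
        return 2 ** (max_bits + 1) - 1

    print(num_histories(8))  # 511 histories of at most 8 bits
    # At, say, 10^6 bits/second for under 100 years, max_bits is about
    # 3.2e15, so the count has on the order of 10^15 digits:
    # humongous, but finite.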
>To say that the HLUT is capable of reproducing equivalent
>intelligent behavior assumes that we're able to "rewind"
>the universe (backward time travel) and, even worse, that
>the universe is totally deterministic (will run exactly
>the same way as "before").
No, it doesn't assume any such thing. The HLUT makes no
assumptions about the universe being deterministic. The
HLUT itself is deterministic, although that can easily
be changed by allowing one of its inputs be from a
random number generator.
The HLUT is simply a huge if-then-else table, each line
being of the form:
If [such and such happens] then [do this]
Else if [this other thing happens] then [do that]
.
.
.
Such a table doesn't presuppose that the world is
deterministic.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36e02628@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 5 Mar 1999 18:44:56 GMT, 166.72.29.222
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7bp2lm$492@edrn.newsguy.com>...
>Sergio says...
>
>>Now what I think you and others are proposing is that
>>a HLUT can be constructed by recording, with a
>>sufficiently fine-grained resolution, all aspects of
>>the input/output of that system.
>
>No. Nobody is suggesting any method for constructing
>an HLUT whatsoever.
>
So I'm correct in saying that you're thinking about the
existence of a mathematical entity, a HLUT, conceived by,
let's say for the sake of argument, a "Goedel-free" god
outside of this universe, which would present the same
behavior as the entity being analyzed. That's what
I'm perceiving as problematic: this is equivalent to
having a recording of all entries and playing them back.
And that's what I'm inclined to say is not possible.
>>If this sounds too "practical" or infeasible, let's say
>>that there is a mathematical, imaginary HLUT able to
>>represent all the action/reaction pairs taken by that
>>system over a period of 1 year or a lifetime. There should
>>exist such a HLUT. I do not dispute this.
>
>>The claim at stake, and the one I agree with entirely, is that
>>this recorded HLUT *cannot*, *by any means*, be "played back"
>>in such a way as to present intelligent behavior comparable
>>to the system it is emulating. There isn't any practical
>>condition that we're able to enforce such that the playback
>>(or "execution") of that HLUT will present comparable
>>behavior.
>
>Everybody agrees that implementing and executing such an
>HLUT is not practical.
>
Now the question is not one of practicality. It is one of impossibility.
And the root of the problem, by my current understanding of
it, is the indeterminacy of the universe.
>>This HLUT is not much different from a very high resolution,
>>multitrack videocassette. A recorder of past actions.
>
>Not quite. It is a table of pairs <input_history,next_output>.
>The input history can be thought of as a very high resolution
>multitrack videocassette (if we limit the HLUT to just audiovisual
>inputs). The HLUT, by definition, has a next_output for *every*
>possible input_history; that is, for every possible videocassette.
>If the videocassette is digital, and we only consider videocassettes
>of length less than 100 years, then there are only finitely many
>possible videocassettes.
>
I may agree with the finiteness you propose. What I don't agree
with is that this HLUT will present the same behavior when subjected to
a "second run". More on this follows.
>>To say that the HLUT is capable of reproducing equivalent
>>intelligent behavior assumes that we're able to "rewind"
>>the universe (backward time travel) and, even worse, that
>>the universe is totally deterministic (will run exactly
>>the same way as "before").
>
>No, it doesn't assume any such thing. The HLUT makes no
>assumptions about the universe being deterministic. The
>HLUT itself is deterministic, although that can easily
>be changed by allowing one of its inputs to be from a
>random number generator.
>
>The HLUT is simply a huge if-then-else table, each line
>being of the form:
>
> If [such and such happens] then [do this]
> Else if [this other thing happens] then [do that]
> .
> .
> .
>
>Such a table doesn't presuppose that the world is
>deterministic.
>
Daryl, tell me if you agree with this. To consider this
HLUT different from a recording such as the one I
proposed, we must find a way of putting this HLUT
into the world and exercising its I/O with
context-dependent entries. If we can't do this, then this
HLUT is, in practice, just a recorder.
I see you don't want to treat the HLUT as a simple
recorder. Then, if we put this HLUT into the world,
it is expected to behave intelligently. That's
the problem I'm starting to be aware of. The world is not
deterministic. That means that *eventually* one of the
if/then rules that the HLUT has *may not* work
the way expected. This deviation will be cumulative,
and I expect that within a few minutes of living in this
world that HLUT will behave like a random number
generator (imagine one controlling a strong robot :-).
If you ask me to formalize this, I'll say I can't do it.
But I can propose a thought experiment that may help
clarify my point. Let's forget HLUTs for a while.
Suppose that, by an act of "magic", when you were born
that same "Goedel-free" god I mentioned earlier made a
"quark by quark" description of your entire baby body,
brain included. Life continued and now you're X years old.
Now suppose that this god rewound the universe to the
exact position, down again to the quark, that it had
during your birth. Suppose now that, without altering
the indeterminacy principle and the laws of physics we're
used to, you have been "instantiated" again in that universe.
Everything is the same as before, even the baby you.
What is the probability that you would develop into
the exact (or even close) condition you were in at X
years old in your previous life? I'd say zilch.
Unless the universe is deterministic, which doesn't
seem to be true.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bp9qr$hif@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>Suppose that, by an act of "magic", when you were born
>that same "Goedel-free" god I mentioned earlier made a
>"quark by quark" description of your entire baby body,
>brain included. Life continued and now you're X years old.
>
>Now suppose that this god rewound the universe to the
>exact position, down again to the quark, that it had
>during your birth. Suppose now that, without altering
>the indeterminacy principle and the laws of physics we're
>used to, you have been "instantiated" again in that universe.
>Everything is the same as before, even the baby you.
>
>What is the probability that you would develop into
>the exact (or even close) condition you were in at X
>years old in your previous life? I'd say zilch.
>Unless the universe is deterministic, which doesn't
>seem to be true.
Right. The world is not deterministic. I don't understand
what that has to do with anything. The HLUT doesn't
assume that the world is deterministic.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <36e0464c@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7bp9qr$hif@edrn.newsguy.com>...
>Sergio says...
>
>>Suppose that, by an act of "magic", when you were born
>>that same "Goedel-free" god I mentioned earlier made a
>>"quark by quark" description of your entire baby body,
>>brain included. Life continued and now you're X years old.
>>
>>Now suppose that this god rewound the universe to the
>>exact position, down again to the quark, that it had
>>during your birth. Suppose now that, without altering
>>the indeterminacy principle and the laws of physics we're
>>used to, you have been "instantiated" again in that universe.
>>Everything is the same as before, even the baby you.
>>
>>What is the probability that you would develop into
>>the exact (or even close) condition you were in at X
>>years old in your previous life? I'd say zilch.
>>Unless the universe is deterministic, which doesn't
>>seem to be true.
>
>Right. The world is not deterministic. I don't understand
>what that has to do with anything. The HLUT doesn't
>assume that the world is deterministic.
>
Let's go again, one step at a time. You obviously agree
that there is an imaginary HLUT whose entries are I/O
pairs (if/then conditions) representing the "intelligent"
acts throughout the life of one specific human being.
Let's say that there is such a thing.
Take this HLUT. Put it to work in the world (executing its
if/then conditions), under the exact same initial conditions
as that human being. What I'm inclined to believe is that, after
a short time, it will not present behavior that can be
considered intelligent, let alone reproduce the behavior
of that original human being.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 05 Mar 1999 00:00:00 GMT
Message-ID: <7bpkbo$5r2@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>Let's go again, one step at a time. You obviously agree
>that there is an imaginary HLUT whose entries are I/O
>pairs (if/then conditions) representing the "intelligent"
>acts throughout the life of one specific human being.
More than that; for each *possible* history of inputs
for a human being, the HLUT gives an action that is an
intelligent response. So it contains much more information
than the history of i/o pairs for any one human's lifetime.
>Take this HLUT. Put it to work in the world (executing its
>if/then conditions), under the exact same initial conditions
>as that human being. What I'm inclined to believe is that, after
>a short time, it will not present behavior that can be
>considered intelligent, let alone reproduce the behavior
>of that original human being.
Let me try to clarify the meaning of the HLUT a little
further. Let's suppose that the person we are trying to
reproduce is called "Sergio". By assumption, for every
history H of inputs that Sergio could possibly receive,
there is an action A that Sergio could plausibly have
made in response to history H such that <H,A> is an
entry in the HLUT. ("Plausibly" in this case
means both that it is intelligent, and that it is in
keeping with Sergio's personality, abilities, etc.)
Now, when this HLUT runs, what it is constantly doing is using
its sensors to figure out what history H it is
in, and then outputs the corresponding A. By the time it
outputs A, the history has changed (more inputs have
arrived) and the history is now not H but H'. So the HLUT
finds the corresponding A' and outputs *that*. And so on.
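In a toy Python sketch, the whole "execution" of the HLUT is just
this loop (sense() and act() are invented stand-ins for real sensors
and effectors, and the entries are made up):

    # Sketch of the HLUT run loop: the history grows with each input,
    # and each step is a fresh lookup keyed on the ENTIRE history so far.
    inputs = iter(["sunny", "invitation"])   # an invented input stream

    def sense():
        return next(inputs)

    def act(output):
        print("HLUT does:", output)

    hlut = {                                 # invented entries
        ("sunny",): "smile",
        ("sunny", "invitation"): "walk to the park",
    }

    history = []
    for _ in range(2):
        history.append(sense())        # history H becomes H'
        act(hlut[tuple(history)])      # look up and emit the paired A'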
What you seem to be saying is that the HLUT will fail
to behave intelligently, and more specifically will fail
to behave like Sergio after some period of time. What does
that mean? To me, to fail to behave like Sergio is to perform
some action A which Sergio would never have performed in
similar circumstances. But by assumption, the output A
*is* an action that Sergio would plausibly have performed
in such circumstances! So by definition, the HLUT cannot
fail to behave like Sergio (at least to the extent that
the HLUT's "circumstances" are determined by the history
of inputs it has received).
So I don't understand what you mean.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 06 Mar 1999 00:00:00 GMT
Message-ID: <36e139fd@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Things are getting interesting...
Daryl McCullough wrote in message <7bpkbo$5r2@edrn.newsguy.com>...
>Sergio says...
>
>>Let's go again, one step at a time. You obviously agree
>>that there is an imaginary HLUT whose entries are I/O
>>pairs (if/then conditions) representing the "intelligent"
>>acts throughout the life of one specific human being.
>
>More than that; for each *possible* history of inputs
>for a human being, the HLUT gives an action that is an
>intelligent response. So it contains much more information
>than the history of i/o pairs for any one human's lifetime.
>
This is equivalent to thinking that the universe is deterministic
and that the HLUT can be completely addressed by the inputs
it receives. I'll develop these points below.
>>Take this HLUT. Put it to work in the world (executing its
>>if/then conditions), under the exact same initial conditions
>>as that human being. What I'm inclined to believe is that, after
>>a short time, it will not present behavior that can be
>>considered intelligent, let alone reproduce the behavior
>>of that original human being.
>
>Let me try to clarify the meaning of the HLUT a little
>further. Let's suppose that the person we are trying to
>reproduce is called "Sergio". By assumption, for every
>history H of inputs that Sergio could possibly receive,
>there is an action A that Sergio could plausibly have
>made in response to history H such that <H,A> is an
>entry in the HLUT. ("Plausibly" in this case
>means both that it is intelligent, and that it is in
>keeping with Sergio's personality, abilities, etc.)
>
>Now, when this HLUT runs, what it is constantly doing
>is using its sensors to figure out what history H it is
>in, and then outputs the corresponding A. By the time it
>outputs A, the history has changed (more inputs have
>arrived) and the history is now not H but H'. So the HLUT
>finds the corresponding A' and outputs *that*. And so on.
>
That's not only unlikely, it is *impossible*.
The root of the question is that the inputs H give the
HLUT only a humongously minimal picture of the world
at that moment.
Suppose that the HLUT received input H, the world is in a
condition W (meaning the world has a determinate set of
atom positions, velocities, spins, etc) and the HLUT answers with
A. Now suppose that it receives H1, to which it answers A1 and
so on. What you're saying is that if we run this HLUT
*again*, and supposing that we restart the universe from the
exact same conditions, the answer A1 would be always appropriate.
That will not work, because the world may have evolved differently
and action A1, appropriate in the previous instance, may not
be appropriate now. And to know that this is not appropriate
now, you'd have to know *all* the state of the universe to
decide. The past history of H is not enough! The world
goes by itself, never repeating! The past history is useless!
(detail: only for the HLUT, not for an intelligent brain).
This would only be possible if the HLUT had access to the
*COMPLETE* state of the universe (W). But the HLUT "sees" the
universe through an extremely infinitesimal window (H). You're
trying to make the HLUT perform as if it had complete access
to the evolution of the status of the world (all atoms, all
velocities, all spins of electrons, etc). That's not possible,
even in our wildest dreams. I'll try to suggest a more
intuitive thought experiment later.
>What you seem to be saying is that the HLUT will fail
>to behave intelligently, and more specifically will fail
>to behave like Sergio after some period of time. What does
>that mean? To me, to fail to behave like Sergio is to perform
>some action A which Sergio would never have performed in
>similar circumstances. But by assumption, the output A
>*is* an action that Sergio would plausibly have performed
>in such circumstances! So by definition, the HLUT cannot
>fail to behave like Sergio (at least to the extent that
>the HLUT's "circumstances" are determined by the history
>of inputs it has received).
>
>So I don't understand what you mean.
>
Again, this only demonstrates that HLUTs are static recordings
and that playing them back ("executing" them) will not
reproduce the same behaviors in this world we're in.
Daryl, do you understand that what you're proposing would only
be possible if the universe were deterministic? If you
put this HLUT to run again, in the same universe, it will
not present the same behavior, because the universe is
nondeterministic.
If this HLUT, by definition, will behave like Sergio and if
this HLUT is able to behave like Sergio if it runs again, then
that means that this HLUT has, as entries, all possible
conditions of all possible atoms in the universe in all
possible positions, velocities, interactions with each other,
etc. Is that right? Do you think that this would work?
I claim that *even* in this situation, *it will not work*!!!
Even if constructed this way, the HLUT will not be able to
do it right. Because to do that, the entries H we would have to
provide to the HLUT would have to include *the entire current
status of the universe at that moment*, so that it can be
used as an address inside the HLUT to come up with the *right*
answer A.
But, by definition, we are receiving *just* a vector H, with an
absurdly limited vision of this universe (besides, the sensors
that provide H have an accuracy very far from the quantum
positions and attributes of the atoms).
So this is impossible. And the effects on the potential answers
for behavior are dramatic.
Now let me try to find a more "down to earth" example. See if
you agree with me here:
Take a coin, put it in the palm of your hand, face up. Now
gently throw it in the air in such a way as to make it give
exactly one complete turn, falling face up again in your
hand. I guess that everybody is able to do this in a very
predictable manner.
Now do the same thing, but this time, make it give two turns
and fall again face up. Things get a little bit more difficult
but I guess that after some training, one can be confident in
getting this right.
Do the same thing again with three turns, then four. There is
a certain number where no human will be able to control that
with precision. Don't give up yet.
Build a *highly accurate* robot hand which is able to work in
a sealed, very controlled environment. Make it turn the coin
10 times. Now 20, then 50, and then 500 times. I think it is
not very difficult to believe that there is a number of turns "n"
above which *no mechanism*, no matter how precise, would give a
face-up coin with probability *better than chance*. Why? Because
we've hit the atomic and quantum limits of our world (the
coin is losing atoms of nickel when it turns in the air!)
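To see the loss of control numerically, here is a little Monte Carlo
sketch in Python (the noise figure is invented; only the trend
matters): with a tiny proportional error in the impulse, the face-up
probability drifts toward 1/2 as the number of turns grows.

    # Monte Carlo sketch: a hand tries to give a coin exactly n full
    # turns, with a tiny relative error in the impulse (sigma invented).
    import random

    def face_up_probability(n_turns, sigma=0.001, trials=10000):
        ups = 0
        for _ in range(trials):
            # actual number of half-turns, with proportional noise
            half_turns = 2 * n_turns * (1 + random.gauss(0, sigma))
            if round(half_turns) % 2 == 0:   # even half-turns: face up
                ups += 1
        return ups / trials

    for n in (1, 10, 50, 500, 5000):
        print(n, face_up_probability(n))

For small n the coin lands face up almost every time; once the
accumulated uncertainty reaches about a half-turn, the result is chance.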
Is this example too contrived? Not at all! Everything we're
living right now is the result of interactions at this level.
This is not an exception, it is the rule!
How's the weather there today? I bet that if it's raining you would
be reluctant to go out. On the other hand, if it's a sunny day, you will
finish reading this message and you may consider going for a walk
in the park. Your behavior is affected by the way the weather is
now, right?
You're proposing that a HLUT can capture this behavior, which
means, capture all atmospheric variations since you were born and
predict what the weather will be today. You're also assuming that
if we "start this world again", we will obtain the *same* weather
we're having now.
If I had to summarize this whole post in a single phrase, I'd say
that a HLUT doesn't "know" that sunny days are good for a walk
in the park!
Regards,
Sergio Navega.
P.S: By the way, you may be wondering how our brain is able to
perform reasonably in such an unpredictable world. I'd recall
what Neil said elsewhere: that's *exactly* what intelligence is
about!
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: An HLUT challenge.
Date: 06 Mar 1999 00:00:00 GMT
Message-ID: <7brsgr$shl$1@remarQ.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
Organization: Posted via RemarQ, http://www.remarQ.com - Discussions start here!
Newsgroups: comp.ai.philosophy
Sergio Navega wrote in message <36e139fd@news3.us.ibm.net>...
...
>Suppose that the HLUT received input H, the world is in a
>condition W (meaning the world has a determinate set of
>atom positions, velocities, spins, etc) and the HLUT answers with
>A. Now suppose that it receives H1, to which it answers A1 and
>so on. What you're saying is that if we run this HLUT
>*again*, and supposing that we restart the universe from the
>exact same conditions, the answer A1 would be always appropriate.
OK, I understand this.
>That will not work, because the world may have evolved differently
>and action A1, appropriate in the previous instance, may not
>be appropriate now.
But I don't understand this.
>And to know that this is not appropriate
>now, you'd have to know *all* the state of the universe to
>decide. The past history of H is not enough! The world
>goes by itself, never repeating! The past history is useless!
>(detail: only for the HLUT, not for an intelligent brain).
And I disagree with this.
Here's why...
The human brain doesn't have access to the world except through its
senses, that is, its inputs. We don't know anything about the world
that we were not born with or did not learn.
The HLUT is just a set of input stream/output stream pairs. Since
the brain is able to produce appropriate output based upon state and
input, then, provided there is a finite number of possible inputs and
outputs, so should the HLUT.
What you appear to be proposing is that the human brain can produce
different and appropriate output based upon differences in the universe
to which it does not have access.
>This would only be possible if the HLUT had access to the
>*COMPLETE* state of the universe (W). But the HLUT "sees" the
>universe through an extremely infinitesimal window (H). You're
>trying to make the HLUT perform as if it had complete access
>to the evolution of the status of the world (all atoms, all
>velocities, all spins of electrons, etc). That's not possible,
>even in our most wild dreams.
How does this differ from the human brain? Do you suppose it
has complete access to W?
>I'll try to suggest a more intuitive thought experiment later.
...
a nice example of how quantum effects have global consequences
deleted.
>How's the weather there today? I bet that if it's raining you would
>be reluctant to go out. On the other hand, if it's a sunny day, you will
>finish reading this message and you may consider going for a walk
>in the park. Your behavior is affected by the way the weather is
>now, right?
>
>You're proposing that a HLUT can capture this behavior, which
>means, capture all atmospheric variations since you were born and
>predict what the weather will be today. You're also assuming that
>if we "start this world again", we will obtain the *same* weather
>we're having now.
I don't think the brain predicts weather so I don't know why the HLUT
would have to in order to produce the appropriate behavior.
>If I had to summarize this whole post in a single phrase, I'd say
>that a HLUT doesn't "know" that sunny days are good for a walk
>in the park!
I don't know how human brains know these things either if one
proposes personal history is insufficient to make such determinations.
>P.S: By the way, you may be wondering how our brain is able to
>perform reasonably in such an unpredictable world. I'd recall
>what Neil said elsewhere: that's *exactly* what intelligence is
>about!
The HLUT shouldn't be subject to any greater unpredictability than
the human brain. You've convinced me the HLUT could be intelligent
even if it doesn't have phenomenal existence but only noumenal existence.
(No one's saying an HLUT could actually exist, this is a thought
experiment.)
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e3ee91@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7brsgr$shl$1@remarQ.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Gary, thanks for the comments; they will give me the opportunity to
readdress the arguments. I have rearranged your comments a bit in
order to ease my responses.
Gary Forbis wrote in message <7brsgr$shl$1@remarQ.com>...
>
>The human brain doesn't have access to the world except through its
>senses, that is, its inputs. We don't know anything about the world
>that we were not born with or did not learn.
>
Agreed. And according to the specification of the thought experiment,
the HLUT does not have access to the world other than by a similar
method. What I claimed is that a HLUT, to present intelligent
behavior, would have to have access to all the states of the universe
(all positions of atoms, all spins, all velocities, etc). More on
this follows.
>The HLUT is just a set of input stream/output stream pairs. Since
>the brain is able to produce appropriate output based upon state and
>input, then, provided there is a finite number of possible inputs and
>outputs, so should the HLUT.
>
The first question here is, can we find a HLUT which is equivalent
to the behaviors presented by an intelligent brain? Yes, I do not
dispute that; one way of coming up with one is to record all the
I/O pairs of that person's life.
Now, can we put *this specific HLUT* to run in the world and obtain
comparable intelligent behavior? My answer to this is NO!
>What you appear to be proposing is that the human brain can produce
>different and appropriate output based upon differences in the universe
>to which it does not have access.
If you mean different and more appropriate than the HLUT, yes, that's
what I'm saying: the brain is able to produce intelligent behavior
*even* without knowing the full state of the universe. The HLUT,
under the same conditions, can't. The brain can work just fine without
knowing the positions, spins and velocities of all matter in the
universe. The HLUT can't.
>
>I don't think the brain predicts weather so I don't know why the HLUT
>would have to in order to produce the appropriate behavior.
>
Gary, take a look through your window. What do you see? Maybe some clouds,
maybe the sun. Could you consider that good weather? Let's suppose yes.
So, if invited, you'd go for a walk in the park.
Now feed that same image that entered your vision into a HLUT programmed
to behave like you. The HLUT will take each pixel (at whatever resolution),
add that to the complete history of inputs you have had, and use that as
an address into the big, humongous table. This will return an entry (or
entries) giving the set of behavioral answers equivalent, for example,
to the act of walking toward the park.
Now, like a god, do the following: put a very small, but visible, "leg"
in one of the clouds of your sky. That would not interfere with your
human decision to go for a walk. So, neither should it for the
HLUT. But to allow that, the HLUT would have to map a *different* set of
pixels (which means, a different HLUT input address) to the *same*
behavior (even if stored as another entry). Do you see what
we're getting at?
How many changes can I devise in that picture in order to have what
can be considered a sunny day? All of them would have to be represented
as *individual entries* in the HLUT. That could mean a lot, lot more
space, but still feasible. Is that so?
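Just to make the addressing problem concrete, here is a toy Python
sketch (images shrunk to six invented "pixels"): changing one pixel
gives a different key, so the table needs a whole separate entry for
what a brain would treat as one and the same situation.

    # Toy sketch of HLUT addressing: the input frame IS the address.
    # A one-pixel change yields a different key, hence a separate entry.
    sunny_sky      = (1, 1, 1, 0, 1, 1)   # invented 6-"pixel" frame
    sunny_with_leg = (1, 1, 1, 1, 1, 1)   # same sky, one cloud retouched

    hlut = {
        sunny_sky:      "walk to the park",
        sunny_with_leg: "walk to the park",   # stored AGAIN, separately
    }

    print(len(hlut))               # 2 entries for 1 "situation"
    print(hlut[sunny_with_leg])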
Do you agree that your behavior would be different if you'd broken your
leg yesterday? Even with a sunny day, you would be reluctant to go for
a walk. Unless, of course, the day before yesterday you met a
gorgeous "dream" blonde who set a date with you in the park today.
That would make you go for a walk even if it were raining. But wait,
10 years ago you had a similar experience with a blonde who was part
of a group of thieves, and you promised yourself never to go after that
kind of "mermaid song" again. But it must be said that you were
victimized 10 years ago because the group chose to attack you, and not
the guy who passed 1 second earlier. And that happened because the
traffic signal on 5th Avenue went "nuts" for a "complete" 300
milliseconds and made you lose those precious seconds that could have
prevented you from being there at the time of the robbery. I must add
that the traffic signal lost those 0.3 seconds because a fly got into
the middle of the mechanism. That fly, by the way, was not meant to be
there, because a butterfly was crossing and then...
You see, your behavior today regarding that walk in the park is
a *direct* function of that butterfly 10 years ago. This is
not part of your vector H of input experiences. Sounds crazy?
A HLUT can, obviously, store all these conditions, provided that
this HLUT models the *entire universe*, atom by atom, spin by spin,
velocity by velocity. So you may ask, what is it that our brain
does that a HLUT cannot?
The essential aspect is that a brain can understand the invariant
aspects of the experiences it is subject to.
By doing that, a brain will not care about the shape of the clouds
when deciding whether a day is appropriate for a walk or not. The
brain *reduces* the complexity of the universe; a HLUT does not.
Without reducing the complexity (which could be translated as
categorizing the universe, grouping similar experiences under the
same umbrella, using induction to
come up with generalizing principles), one would not be able to
present intelligent behavior.
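The contrast can be sketched in Python like this (the "invariant" here
is just average brightness, invented for the example): the brain-like
test collapses endless distinct frames into one category, while the
table must store every frame separately.

    # Toy contrast: exact-match lookup vs. an invariant-based judgment.
    # "Bright enough means sunny" is an invented stand-in invariant.
    def looks_sunny(frame, threshold=0.5):
        # reduces any frame to one invariant feature: average brightness
        return sum(frame) / len(frame) > threshold

    hlut = {(1, 1, 1, 0): "go for a walk"}    # knows only this exact frame

    new_frame = (1, 1, 0, 1)                  # never seen before
    print(looks_sunny(new_frame))             # True: generalizes at once
    print(hlut.get(new_frame, "no entry"))    # the table is stuck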
>I don't know how human brains know these things either if one
>proposes personal history is insufficient to make such determinations.
Because what the brain stores is not personal histories. The brain
stores personal invariants, things that help it classify new
experiences according to previous ones. Part of this is done through
perception. When you try to define things (a cup, for example), you
present the essential, fixed, "important" aspects of cups you have seen.
A HLUT is unable to come up with such a definition. Given a generic
cup in the world (which means a new situation in an unpredictable
environment), you know what you can do (the behaviors you can
express), given your desires. A HLUT can't do that without knowing
the full universe.
Regards,
Sergio Navega.
From: houlepn@my-dejanews.com
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7bvj56$5vv$1@nnrp1.dejanews.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> wrote:
> Things are getting interesting...
>
> Daryl McCullough wrote in message <7bpkbo$5r2@edrn.newsguy.com>...
> >Sergio says...
> >
> >> Let's go again, one step at a time. You obviously agree
> >> that there is an imaginary HLUT whose entries are I/O
> >> pairs (if/then conditions) representing the "intelligent"
> >> acts throughout the life of one specific human being.
> >
> > More than that; for each *possible* history of inputs
> > for a human being, the HLUT gives an action that is an
> > intelligent response. So it contains much more information
> > than the history of i/o pairs for any one human's lifetime.
> >
>
> This is equivalent to thinking that the universe is deterministic
> and that the HLUT can be completely addressed by the inputs
> it receives. I'll develop these points below.
>
> >> Take this HLUT. Put it to work in the world (executing its
> >> if/then conditions), under the exact same initial conditions
> >> as that human being. What I'm inclined to believe is that, after
> >> a short time, it will not present behavior that can be
> >> considered intelligent, let alone reproduce the behavior
> >> of that original human being.
> >
> > Let me try to clarify the meaning of the HLUT a little
> > further. Let's suppose that the person we are trying to
> > reproduce is called "Sergio". By assumption, for every
> > history H of inputs that Sergio could possibly receive,
> > there is an action A that Sergio could plausibly have
> > made in response to history H such that <H,A> is an
> > entry in the HLUT. ("Plausibly" in this case
> > means both that it is intelligent, and that it is in
> > keeping with Sergio's personality, abilities, etc.)
> >
> > Now, when this HLUT runs, what it is constantly doing
> > is using its sensors to figure out what history H it is
> > in, and then outputs the corresponding A. By the time it
> > outputs A, the history has changed (more inputs have
> > arrived) and the history is now not H but H'. So the HLUT
> > finds the corresponding A' and outputs *that*. And so on.
> >
>
> That's not only unlikely, it is *impossible*.
Ok. I read your response and it seems it would be a valid counter
to Daryl's argument if his HLUT (with a finite alphabet input)
was meant to reproduce the exact behavior of the actual 'Sergio'
but he just claims to define 'Sergio like' capabilities. I'll try to
indicate how this is relevant in what follows...
> The root of the question is that the inputs H give the
> HLUT only a humongously minimal picture of the world
> at that moment.
Indeed, but that just reduces the size of the HLUT and its sensitivity,
not necessarily its 'intelligence' and ability to produce 'Sergio like'
behavior. Note that the real Sergio's mind also has access to a
humongously small amount of information about the world and would
often be expected to exhibit the same behavior in similar circumstances.
For instance, in your example below, he might choose not to have a walk
in the park if the weather is bad irrespective of whether this is due
to some butterfly flapping its wings in Honolulu one week earlier or to
some cockroach falling from a counter top in Stockholm. Neil's
objection seems rather to be about the very discriminative power of
Sergio's brain and sensory system due to the particular circumstances of
their development. This objection is interesting but I will first attempt
to address your own objections.
> Suppose that the HLUT received input H, the world is in a
> condition W (meaning the world has a determinate set of
> atom positions, velocities, spins, etc) and the HLUT answers with
> A. Now suppose that it receives H1, to which it answers A1 and
> so on. What you're saying is that if we run this HLUT
> *again*, and supposing that we restart the universe from the
> exact same conditions, the answer A1 would be always appropriate.
The universe might follow a different history but then the HLUT will
be provided with a different input. Answer A1 is only given if the
universe follows history H1.
> That will not work, because the world may have evolved differently
Then its history will not be H1.
> and action A1, appropriate in the previous instance, may not
> be appropriate now. And to know that this is not appropriate
> now, you'd have to know *all* the state of the universe to
> decide. The past history of H is not enough! The world
> goes by itself, never repeating! The past history is useless!
> (detail: only for the HLUT, not for an intelligent brain).
Why? Both have a limited access to the state of the universe.
Note that although Sergio's answers might be affected by infinitesimal
perturbations (due to his brain being subjected to internal and external
non-linear dynamic processes) he still only has access to its past history
as a basis to make intelligent answers. The presence of more
bifurcations in phase space does not necessarily lead to a larger
HLUT (as in the butterfly/cockroach example) if the only goal of the
HLUT is to emulate Sergio like capabilities and not reproduce the exact
answers the actual Sergio would have given in the exact same circumstances.
[snip]
> But, by definition, we are receiving *just* a vector H, with an
> absurdly limited vision of this universe (besides, the sensors
> that provide H have an accuracy very far from the quantum
> positions and attributes of the atoms).
It seems to me this vector H is no more limited than a vector
representing all of Sergio's synaptic connection strengths (for
instance). They both are absurdly limited representations of the
universe. The current state of your brain and what you read on
your computer screen nevertheless provide enough context for you
to formulate intelligent answers.
[snip]
> Regards,
> Sergio Navega.
>
> P.S: By the way, you may be wondering how our brain is able to
> perform reasonably in such an unpredictable world. I'd recall
> what Neil said elsewhere: that's *exactly* what intelligence is
> about!
I think that Neil's main objection is that a static discrete alphabet
is inappropriate to define human behavior. It is certainly true that
the human nervous system does not have a complete immutable alphabet
for I/Os specified by its DNA. This, however, does not appear to rule
out the possibility of mapping Sergio's intelligence to a HLUT (in
principle, not in practice) if we either 1) restrict ourselves to some
period of time in which the high level alphabet does not change appreciably
(Sergio does not learn a new language) or 2) if we choose as an alphabet
a description of the I/Os at a lower more elementary level possibly ahead
of Sergio's sensory organs. Finally, I would say Neil's objection seems
to be directed more to the HLUT's apparent lack of long term biological
plasticity and thus ability to exhibit general intelligence in the long
run. I don't believe the quantum mechanical HLUT I presented earlier
suffers from this flaw, but upon request I am willing to update it so as
to match more explicitly Neil's challenge (of which I just became aware
in browsing earlier posts of this thread with dejanews).
Regards,
Pierre-Normand Houle
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e3ee95@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Pierre, sorry, I have again started my answers in another post
and developed my arguments there. If you don't mind, take a look
at my answer to Gary. I promise that next time I'll start with
your post. However, there are some points I'd like to comment on here.
houlepn@my-dejanews.com wrote in message
<7bvj56$5vv$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>> The root of the question is that the inputs H give the
>> HLUT only a humongously minimal picture of the world
>> at that moment.
>
>Indeed, but that just reduces the size of the HLUT and its sensitivity,
>not necessarily its 'intelligence' and ability to produce 'Sergio like'
>behavior. Note that the real Sergio's mind also has access to a
>humongously small amount of information about the world and would
>often be expected to exhibit the same behavior in similar circumstances.
>For instance, in your example below, he might choose not to have a walk
>in the park if the weather is bad irrespective of whether this is due
>to some butterfly flapping its wings in Honolulu one week earlier or to
>some cockroach falling from a counter top in Stockholm.
What I'd like to discuss is when he *chooses* to go for a walk in the
park *in spite* of bad weather! This argument could be easily brushed
aside by some sort of indeterminacy of emotions or unpredictability
of human behavior.
But I'd like to stick with a very rationalist analysis here, for the
sake of my arguments: he may have chosen to go for a walk because he
noticed that all his prior walks to the park on a sunny day had some
sort of problem (robbery, too crowded, skin burns, etc), and his
only previous experience of walking in the rain was very
pleasant. How is a HLUT supposed to decide about this?
>Neil's
>objection seems rather to be about the very discriminative power of
>Sergio's brain and sensory system due to the particular circumstances of
>their development. This objection is interesting but I will first attempt
>to address your own objections.
>
My starting point for the HLUT argument is now the same as Neil's
(as a matter of fact, things "settled" in my mind when I read a response
Neil gave to someone else). From that common starting point, Neil
develops his thoughts in another direction. I'm still on the purely
HLUT aspect. Another thing I find important in Neil's vision is that
the HLUT story, no matter how impractical, convoluted and mind-boggling,
demonstrates one of the essential points behind intelligent behavior.
In my answer to Gary (to which I refer you), I give another
hypothetical situation which makes clear the following points:
a) I agree that there is a HLUT able to represent all the intelligent
action/reactions of any intelligent being.
b) I do *not* agree that such a HLUT, if made to run in the world, will
present intelligent behavior (let alone model a specific person), unless...
c)...the HLUT stores *all physical parameters* from all atoms and ions in
the universe.
But I must confess, Pierre, that I'm not satisfied with even these
conditions. I think they are too "weak" (meaning that even with all
those conditions, I'd doubt that a HLUT would be intelligent). So let me
propose the following situation:
Take Einstein's HLUT. It is a recording of all input/output of all
experiences and behaviors lived by Einstein. Put this HLUT into a
hypothetical "computer", able to receive I/O pairs. Show this
computer the experience of a metal ball bouncing when thrown
at a wall. Now do the same thing with a nylon ball (something
Einstein didn't know, *but it looks like a rubber or wood ball*).
The ball bounces differently. Ask Einstein's HLUT to explain the
phenomenon. It won't.
>> Suppose that the HLUT received input H, the world is in a
>> condition W (meaning the world has a determinate set of
>> atom positions, velocities, spins, etc) and the HLUT answers with
>> A. Now suppose that it receives H1, to which it answers A1 and
>> so on. What you're saying is that if we run this HLUT
>> *again*, and supposing that we restart the universe from the
>> exact same conditions, the answer A1 would be always appropriate.
>
>The universe might follow a different history but then the HLUT will
>be provided with a different input. Answer A1 is only given if the
>universe follows history H1.
>
Pierre, do you see that if we provide a different set of input
histories, then we're talking about a different HLUT? You may
propose that we can meld together all these possible HLUTs,
giving the definitive big HLUT. I'd say that even this won't
be enough to provide intelligent behavior.
What's different in a period of 10 years starting 1990 and 10 years
starting 1980 and starting 1960 and 1860 and 1512? Everything and
nothing.
Everything:
There isn't a single frame of visual pixels equal in all those periods
(let's say, the probability is humongously close to zero). This means
that we don't have sets of H vectors equal among these years. This
means we'd have to have *different* HLUTs for each situation, and
that means that this group of HLUTs will not be complete, because
the *future* 10 years will have another set, and 1000 years later
even another one...
Nothing:
Nothing is different, it's all the same, the same law of gravitation,
the same day/night cycle, the same atmospheric composition, the same
boiling point of water. None of those HLUTs were able to present
*any* kind of behavior that was dependent on any of these constant
characteristics. Yet, this is what I think is missing in a HLUT:
perception of what is regular.
>> That will not work, because the world may have evolved differently
>
>Then its history will not be H1.
>
Then, to present the same answer A1 given a history H1', it will have
to be *another HLUT*, unless we devise an HLUT that can have all
possible HLUTs, and this means stuffing *all* the universe (atoms,
velocities, spins, etc) in a HLUT.
>> and action A1, appropriate in the previous instance, may not
>> be appropriate now. And to know that this is not appropriate
>> now, you'd have to know *all* the state of the universe to
>> decide. The past history of H is not enough! The world
>> goes by itself, never repeating! The past history is useless!
>> (detail: only for the HLUT, not for an intelligent brain).
>
>Why? Both have a limited access to the state of the universe.
>Note that although Sergio's answers might be affected by infinitesimal
>perturbations (due to his brain being subjected to internal and external
>non-linear dynamic processes) he still only has access to its past history
>as a basis to make intelligent answers. The presence of more
>bifurcations in phase space does not necessarily lead to a larger
>HLUT (as in the butterfly/cockroach example) if the only goal of the
>HLUT is to emulate Sergio like capabilities and not reproduce the exact
>answers the actual Sergio would have given in the exact same circumstances.
>
Such a HLUT cannot reproduce Sergio's behavior; more than that, it cannot
do anything that is intelligent.
The point here is the opposite of what we're thinking. I'm not disputing
that there could exist a HLUT large enough to store all the information
necessary. The question is almost the opposite: how do we devise a
mechanism that stores as little as possible in order to allow it to *predict*
future outcomes.
The thing here is not the past. A videocassette does that. The question
is, given a *new* input vector Hn, how is the HLUT supposed to come
up with intelligent behavior, if it does not have any corresponding
entry?
A HLUT cannot do that. So a HLUT cannot behave intelligently, let alone
behave like Sergio (I'm not sure if this is good or bad :-).
>[snip]
>
>> Regards,
>> Sergio Navega.
>>
>> P.S: By the way, you may be wondering how our brain is able to
>> perform reasonably in such an unpredictable world. I'd recall
>> what Neil said elsewhere: that's *exactly* what intelligence is
>> about!
>
>I think that Neil's main objection is that a static discrete alphabet
>is inappropriate to define human behavior. It is certainly true that
>the humans nervous system does not have a complete immutable alphabet
>for I/Os specified by its DNA. This however does not appear to rule
>out all the possibility to map Sergio's intelligence to a HLUT (in
>principle, not in practice) if we either 1) restrict ourselves to some
>period of time in which the high level alphabet does not change appreciably
>(Sergio does not learn a new language) or 2) if we choose as an alphabet
>a description of the I/Os at a lower more elementary level possibly ahead
>of Sergio's sensory organs. Finally, I would say Neil's objection seems
>to be directed more to the HLUT's apparent lack of long term biological
>plasticity and thus ability to exhibit general intelligence in the long
>run. I don't believe the quantum mechanical HLUT I presented earlier
>suffer from this flaw but upon request I am willing to update it so as
>to match more explicitly Neil's challenge (of which I just became aware
>in browsing earlier posts of this thread with dejanews).
>
Pierre, when you entered your office this morning and recognized your
computer, your desk, your chair, etc, do you believe that a HLUT could
also have done that (just recognizing), even knowing that *not a single
object* was seen by you exactly as you had seen it in the last 5 years?
There isn't a single entry in such a HLUT able to recognize the cup
you had coffee in yesterday, unless such a HLUT was able to predict the
movement of that ant that we found today in your cup. And you know,
the movement of that ant may have something to do with that cockroach
from Stockholm, a hundred years ago...
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7c0u4t$3vt@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>What I'd like to discuss is when he *chooses* to go for a walk in the
>park *in spite* of bad weather! This argument could be easily brushed
>aside by some sort of indeterminacy of emotions or unpredictability
>of human behavior.
>But I'd like to stick with a very rationalist analysis here, for the
>sake of my arguments: he may have chosen to go for a walk because he
>noticed that all his prior walks to the park on a sunny day had some
>sort of problem (robbery, too crowded, skin burns, etc), and his
>only previous experience of walking in the rain was very
>pleasant. How is a HLUT supposed to decide about this?
The bad experiences (robbery, crowds, skin burns, etc.) are
all captured by the input history. By hypothesis, the HLUT
has, for each possible input history, a Sergio-like output.
>a) I agree that there is a HLUT able to represent all the intelligent
>action/reactions of any intelligent being.
>
>b) I do *not* agree that such a HLUT, if made to run in the world, will
>present intelligent behavior (let alone model a specific person), unless...
>
>c)...the HLUT stores *all physical parameters* from all atoms and ions in
>the universe.
I just don't understand where you are coming from, Sergio. The HLUT
by definition makes the same output that Sergio would make, in the
same circumstances. It has nothing to do with modeling atoms and
ions (except to the extent that Sergio himself models atoms and
ions).
Consider some robot that is programmed to behave like Sergio
(whether or not it is implemented by an HLUT). To do this,
the robot repeatedly does the following:
1. Receive inputs, and from those inputs try to determine
as well as possible what the situation is.
2. Make an output that Sergio would do in such a situation.
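As a toy Python sketch (every piece here is an invented stand-in,
just to name the two steps):

    # Sketch of the two-step robot loop; all functions are stand-ins.
    def situation_from(inputs):       # step 1: estimate the situation
        return "sunny day" if "sun" in inputs else "unknown"

    def sergio_would_do(situation):   # step 2: a Sergio-like response
        return {"sunny day": "go for a walk"}.get(situation, "ask around")

    for inputs in (["sun", "birdsong"], ["fog"]):
        print(sergio_would_do(situation_from(inputs)))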
If the robot fails to behave intelligently (or Sergio-like), then
it must be because one of these two steps fails. Either the robot's
inputs are inadequate to determine what the actual situation is,
or else, its program relies on a mistaken idea about what Sergio
would have done.
In the case of the HLUT, step 2 is, by assumption, correct.
The HLUT never makes an output that Sergio wouldn't have made
(given the same input history), and never fails to make an
output that Sergio would make.
You talk about what happens if we restart the world (and the
HLUT) from scratch. Well, if in this run, the HLUT receives
different inputs, then it will make different outputs, but
they will *still* be outputs that Sergio would make, given
the same inputs.
>Take Einstein's HLUT. It is a recording of all input/output of all
>experiences and behaviors lived by Einstein.
That isn't what the HLUT is. Einstein's HLUT records, not only
the *actual* inputs and outputs for Einstein, but the set of
answers to all possible counterfactual questions, such as:
"What would Einstein do if he saw a flying saucer?"
"What would Einstein do if a beautiful 18 year old woman
had tried to seduce him while he worked on Special Relativity?"
"What would Einstein do if he won the New Jersey lottery?"
>Put this HLUT into a hypothetical "computer", able to receive
>I/O pairs. Show this computer the experience of a metal ball
>bouncing when thrown at a wall. Now do the same thing with a
>nylon ball (something Einstein didn't know, *but it looks like
>a rubber or wood ball*). The ball bounces differently. Ask
>Einstein's HLUT to explain the phenomenon.
By assumption the HLUT will say the same thing as Einstein would
have. "Hmm, it looks like some sort of ball. It doesn't bounce
like a wooden ball would. I'm not sure what it's made of."
(except in German).
>>The universe might follow a different history but then the HLUT will
>>be provided with a different input. Answer A1 is only given if the
>>universe follows history H1.
>>
>
>Pierre, do you see that if we provide a different set of input
>histories, then we're talking about a different HLUT?
No! By definition, the HLUT has an entry for every *possible*
input history.
>What's different in a period of 10 years starting 1990 and 10 years
>starting 1980 and starting 1960 and 1860 and 1512?
A person born in 1960 or 1512 learns about his world
through his inputs, what he sees, hears, smells, etc.
Therefore, people in those two different years have
different input histories. Therefore, they have different
outputs in the HLUT. The HLUT contains, by assumption,
the set of all possible input histories (bounded by
resolution and by length---we only consider histories
less than 150 years long). It would contain appropriate
outputs for someone in 1512 as well as for someone in
1960.
>Everything:
>There isn't a single frame of visual pixels equal in all those periods
>(let's say, the probability is humongously close to zero). This means
>that we don't have sets of H vectors equal among these years. This
>means we'd have to have *different* HLUTs for each situation,
No. By definition, one HLUT contains all possible input histories!
It is not necessary to have more than one, and this one covers every
possible time frame, past or future.
>that means that this group of HLUTs will not be complete, because
>the *future* 10 years will have another set, and 1000 years later
>even another one...
The HLUT is complete when it has enumerated all possible input
histories that a human can receive in one lifetime. If the length
of a history is limited (to 150 years, say) and the resolution is
limited, then there are only finitely many such histories.
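On a toy scale, the enumeration looks like this in Python (a
two-symbol alphabet and histories up to length 3, both invented;
note the table automatically covers histories nobody ever lived):

    # Toy-scale enumeration of ALL possible input histories over a
    # tiny alphabet with bounded length -- hence a finite table.
    from itertools import product

    alphabet = ("rain", "sun")     # invented two-symbol alphabet
    max_len = 3

    hlut = {}
    for length in range(1, max_len + 1):
        for history in product(alphabet, repeat=length):
            # invented rule standing in for what Sergio would plausibly do
            hlut[history] = "walk" if history[-1] == "sun" else "stay in"

    print(len(hlut))                      # 2 + 4 + 8 = 14 entries
    print(hlut[("rain", "rain", "sun")])  # covered, though never lived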
>>> That will not work, because the world may have evolved differently
>>
>>Then its history will not be H1.
>>
>
>Then, to present the same answer A1 given a history H1', it will have
>to be *another HLUT*, unless we devise an HLUT that can have all
>possible HLUTs, and this means stuffing *all* the universe (atoms,
>velocities, spins, etc) in a HLUT.
No, it only needs to have an appropriate response for all possible
*input* histories. Those histories are finite resolution, they don't
need to be concerned about locations and velocities of distant atoms.
>>Why? Both have a limited access to the state of the universe.
>>Note that although Sergio's answers might be affected by infinitesimal
>>perturbations (due to his brain being subjected to internal and external
>>non-linear dynamic processes) he still only has access to its past history
>>as a basis to make intelligent answers. The presence of more
>>bifurcations in phase space does not necessarily lead to a larger
>>HLUT (as in the butterfly/cockroach example) if the only goal of the
>>HLUT is to emulate Sergio like capabilities and not reproduce the exact
>>answers the actual Sergio would have given in the exact same circumstances.
>>
>
>Such a HLUT cannot reproduce Sergio's behavior; more than that, it cannot
>do anything that is intelligent.
By assumption, the HLUT does exactly what Sergio would.
>The point here is the opposite of what we're thinking. I'm not arguing
>that there couldn't exist a HLUT large enough to store all the information
>necessary. The question is almost the opposite: how we devise a
>mechanism that stores as little as possible in order to allow it to *predict*
>future outcomes.
It doesn't need to predict future outcomes any better than Sergio can.
>The thing here is not the past. A videocassette does that. The question
>is, given a *new* input vector Hn, how is the HLUT supposed to come
>up with an intelligent behavior, if it does not have any corresponding
>entry?
By assumption, the HLUT already contains entries for all *possible*
input vectors!
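To fix ideas, a minimal sketch (Python, with a two-letter alphabet
and a tiny length bound standing in for quantized senses and a
150-year lifetime; all the names here are mine, purely illustrative):

    from itertools import product

    ALPHABET = "ab"   # stands in for quantized sensory inputs (assumed)
    MAX_LEN = 3       # stands in for the bound on a lifetime (assumed)

    def sergio_like_output(history):
        # Placeholder for "what Sergio would do given this history".
        return "output-for-" + "".join(history)

    # The HLUT: one entry per *possible* history, not per actual one.
    hlut = {
        h: sergio_like_output(h)
        for n in range(MAX_LEN + 1)
        for h in product(ALPHABET, repeat=n)
    }

    # Within the bounds there is no such thing as a missing entry:
    print(hlut[("a", "b", "a")])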
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e437a5@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c0u4t$3vt@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c0u4t$3vt@edrn.newsguy.com>...
>Sergio says...
>
>>What I'd like to discuss is when he *chooses* to go for a walk in the
>>park *in spite* of bad weather! This argument could be easily left
>>aside by some sort of indeterminacy of emotions or unpredictability
>>of human behavior.
>
>>But I'd like to stick with a very rationalist analysis here, for the
>>sake of my arguments: he may have chosen to go for a walk because he
>>noticed that all his prior walks to a park on a sunny day had some
>>sort of problem (robbery, too crowded, skin burns, etc), and his
>>only one previous experience of walking in the rain was very
>>pleasant. How is a HLUT supposed to decide about this?
>
>The bad experiences (robbery, crowds, skin burns, etc.) are
>all captured by the input history. By hypothesis, the HLUT
>has, for each possible input history, a Sergio-like output.
>
Daryl, OK, that means that this HLUT is, by assumption, equivalent
to the behaviors I *had*. But set this HLUT to run *again*. Will
the fact that I had skin burns previously be enough to
counterbalance my wish to go to the park? And what if I was told that
a bag full of money was there in the park? You will say that this
new experience will be part of another HLUT and that HLUT *plus*
the previous could have been my (Sergio's) HLUT. Then I'd say that
there is another set of experiences and we would find another
HLUT to add to the previous. The end of this process will be
when we settle on a HLUT that contains the *whole* set of
experiences one could be able to experience, and this (remember
my astronomy example) demands knowledge of all the positions,
velocities,... of the universe.
>>a) I agree that there is a HLUT able to represent all the intelligent
>>action/reactions of any intelligent being.
>>
>>b) I do *not* agree that such a HLUT, if made to run in the world, will
>>present intelligent behavior (let alone model a specific person), unless...
>>
>>c)...the HLUT stores *all physical parameters* from all atoms and ions in
>>the universe.
>
>I just don't understand where you are coming from, Sergio. The HLUT
>by definition makes the same output that Sergio would make, in the
>same circumstances. It has nothing to do with modeling atoms and
>ions (except to the extent that Sergio himself models atoms and
>ions).
>
Geez, Daryl, I wonder if you read my answers to Gary and Pierre.
I described some situations where I made clear why things go to
the ion level :-)
>Consider some robot that is programmed to behave like Sergio
>(whether or not it is implemented by an HLUT). To do this,
>the robot repeatedly does the following:
>
> 1. Receive inputs, and from those inputs try to determine
> as well as possible what the situation is.
> 2. Make an output that Sergio would do in such a situation.
>
>If the robot fails to behave intelligently (or Sergio-like), then
>it must be because one of these two steps fails. Either the robot's
>inputs are inadequate to determine what the actual situation is,
>or else, its program relies on a mistaken idea about what Sergio
>would have done.
>
Ok, so far so good, I can agree that such a robot may exist (although
I wouldn't be happy to present it to my wife :-)
>In the case of the HLUT, step 2 is, by assumption, correct.
>The HLUT never makes an output that Sergio wouldn't have made
>(given the same input history), and never fails to make an
>output that Sergio would make.
>
>You talk about what happens if we restart the world (and the
>HLUT) from scratch. Well, if in this run, the HLUT receives
>different inputs, then it will make different outputs, but
>they will *still* be outputs that Sergio would make, given
>the same inputs.
>
No, that's wrong. At the end of this post I'll propose a
different experiment that may shed some light on this.
>>Take Einstein's HLUT. It is a recording of all input/output of all
>>experiences and behaviors lived by Einstein.
>
>That isn't what the HLUT is. Einstein's HLUT records, not only
>the *actual* inputs and outputs for Einstein, but the set of
>answers to all possible counterfactual questions, such as:
> "What would Einstein do if he saw a flying saucer?"
> "What would Einstein do if a beautiful 18 year old woman
> had tried to seduce him while he worked on Special Relativity?"
> "What would Einstein do if he won the New Jersey lottery?"
>
Right, I agree that such a HLUT is mathematically conceivable. What
I say is that this only works if the universe is deterministic and
finite. But neither condition is reasonable, given what we know
about physics.
>
>>Put this HLUT into a hypothetical "computer", able to receive
>>I/O pairs. Show this computer the experience of a metal ball
>>bouncing when thrown to a wall. Now do the same thing with a
>>nylon ball (something Einstein didn't know, *but it looks like
>>a rubber or wood ball*). The ball bounces differently. Ask
>>Einstein's HLUT to explain the phenomena.
>
>By assumption the HLUT will say the same thing as Einstein would
>have. "Hmm, it looks like some sort of ball. It doesn't bounce
>like a wooden ball would. I'm not sure what it's made of."
>(except in German).
>
By assumption, this HLUT would have to have all the possible
outcomes of all events that can occur in this universe. You can
only get that if you have a way to know the position, velocity,
etc, of all the particles in that universe. But the one we're
in seems to be nondeterministic, so this is not possible, as
far as we know from physics today.
>>>The universe might follow a different history but then the HLUT will
>>>be provided with a different input. Answer A1 is only given if the
>>>universe follows history H1.
>>>
>>
>>Pierre, do you see that if we provide a different set of input
>>histories, then we're talking about a different HLUT?
>
>No! By definition, the HLUT has an entry for every *possible*
>input history.
Then you melded them together into a single one. But you cannot stop
there, for I can find another set of experiences. Is this finite?
It is as finite as the universe is, because the whole universe
is able to influence me (no, this is not mystical; see my
example about a newspaper article about astronomy).
>
>>What's different in a period of 10 years starting 1990 and 10 years
>>starting 1980 and starting 1960 and 1860 and 1512?
>
>A person born in 1960 or 1512 learns about his world
>through his inputs, what he sees, hears, smells, etc.
>Therefore, people in those two different years have
>different input histories. Therefore, they have different
>outputs in the HLUT. The HLUT contains, by assumption,
>the set of all possible input histories (bounded by
>resolution and by length---we only consider histories
>less than 150 years long). It would contain appropriate
>outputs for someone in 1512 as well as for someone in
>1960.
NO! It would contain appropriate answers only for one
person living exactly that life. *One second* later,
that HLUT will not be able to account for the intelligence
of that man (think of all the events in that man's
life where 1 second is enough to dramatically alter
the outcome). Come on, Daryl!
>
>>Everything:
>>There isn't a single frame of visual pixels equal in all those periods
>>(let's say, the probability is humongously close to zero). This means
>>that we don't have sets of H vectors equal among these years. This
>>means we'd have to have *different* HLUTs for each situation,
>
>No. By definition, one HLUT contains all possible input histories!
>It is not necessary to have more than one, and this one covers every
>possible time frame, past or future.
>
We seem to be running after each other's tails here. So let me
propose another thought experiment and see if you agree.
Let's forget about HLUTs for a moment. Let's think only about brains.
Suppose those aliens you referred to in another post did a different
kind of experiment: they made a copy of me, down to the quark level.
This copy of me took no time to develop and it was placed by my
side (in the same room). My questions would be:
a) Is that copy also a Sergio?
b) Could the copy be called intelligent?
c) How long until we see differences in behavior?
a) No. It is not Sergio, because that copy, one millisecond later,
is already having thoughts that do not occur in the original one.
Why is that? Because of quantum variations that compound and cause
differences in the spike rates of the neurons. The initial thoughts
may be very similar, but they will drift apart from one another in
a fast and *extremely cumulative* manner. In a few seconds, the
copy is not Sergio anymore.
b) I think the copy can be called intelligent (as long as
the original one can also be seen this way :-)
c) I would not be surprised if one week later we were
visibly different (I, for one, would not stay quiet seeing
someone else kissing my wife).
How can you propose that a HLUT can be equivalent to a human if
even a perfect copy cannot be so?
And what if we lock the original Sergio in jail and make the copy
assume his life? Do you think that after 1 year this copy Sergio
would be exactly where I'd be if I had the chance of living
that same life? No, I don't believe so, because *each run* is different,
things never repeat and personal experiences cannot be duplicated,
even if the perceptual apparatus, personal histories and memories
are the same.
And why does this happen? Remember the story of throwing the coin in
such a way as to get heads? That's the root of the question
about this world.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c3ga7$fdo@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c0u4t$3vt@edrn.newsguy.com> <36e437a5@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>>The bad experiences (robbery, crowds, skin burns, etc.) are
>>all captured by the input history. By hypothesis, the HLUT
>>has, for each possible input history, a Sergio-like output.
>>
>
>
>Daryl, OK, that means that this HLUT is, by assumption, equivalent
>to the behaviors I *had*.
No. For each possible input history, the HLUT lists a Sergio-like
output consistent with that history. So it's not just the experiences
you actually had, but all histories you *might* have had, but didn't.
All input sequences you might have in the future. They are all there
in the HLUT. For each such input history, there is an appropriate
output.
>But set this HLUT to run *again*. Will
>the fact that I had skin burns previously be enough to
>counterbalance my wish to go to the park? And what if I was told that
>a bag full of money was there in the park? You will say that this
>new experience will be part of another HLUT and that HLUT *plus*
>the previous could have been my (Sergio's) HLUT.
There is never a need for more than one HLUT, since, by
hypothesis, it covers every possible input history that
you could ever experience.
>Then I'd say that there is another set of experiences and
>we would find another HLUT to add to the previous. The end
>of this process will be when we settle on a HLUT that
>contains the *whole* set of experiences one could be able to
>experience,
Exactly. That's what the HLUT contains.
>and this (remember my astronomy example) demands knowledge
>of all the positions, velocities,... of the universe.
No, it does not.
>>I just don't understand where you are coming from, Sergio. The HLUT
>>by definition makes the same output that Sergio would make, in the
>>same circumstances. It has nothing to do with modeling atoms and
>>ions (except to the extent that Sergio himself models atoms and
>>ions).
>>
>
>Geez, Daryl, I wonder if you read my answers to Gary and Pierre.
Yes, but they didn't make sense to me. If facts about ions affect
you at all, then they affect you through your sensory inputs.
The HLUT already (by hypothesis) covers all possible histories
of sensory inputs, and so already includes whatever sensations
you might experience due to the behavior of ions. It isn't
necessary for the HLUT to know *how* your sensory inputs
are produced, it only needs to know what your possible
sensory inputs are, and what your responses to them are.
>>You talk about what happens if we restart the world (and the
>>HLUT) from scratch. Well, if in this run, the HLUT receives
>>different inputs, then it will make different outputs, but
>>they will *still* be outputs that Sergio would make, given
>>the same inputs.
>>
>
>No, that's wrong.
By definition of the HLUT, it has, for each possible
input history that Sergio could possibly experience, a
corresponding output that Sergio would make if he experienced
that input history.
>>That isn't what the HLUT is. Einstein's HLUT records, not only
>>the *actual* inputs and outputs for Einstein, but the set of
>>answers to all possible counterfactual questions, such as:
>> "What would Einstein do if he saw a flying saucer?"
>> "What would Einstein do if a beautiful 18 year old woman
>> had tried to seduce him while he worked on Special Relativity?"
>> "What would Einstein do if he won the New Jersey lottery?"
>>
>
>Right, I agree that such a HLUT is mathematically conceivable. What
>I say is that this only works if the universe is deterministic and
>finite.
I know you said that, but you're wrong. It has nothing to do
with whether the universe is deterministic; it only has to do with
the fact that the set of possible experiences that Einstein
could ever have is limited by (1) his sense organs, (2) their
resolution, (3) his lifespan.
>>>Put this HLUT into a hypothetical "computer", able to receive
>>>I/O pairs. Show this computer the experience of a metal ball
>>>bouncing when thrown to a wall. Now do the same thing with a
>>>nylon ball (something Einstein didn't know, *but it looks like
>>>a rubber or wood ball*). The ball bounces differently. Ask
>>>Einstein's HLUT to explain the phenomena.
>>
>>By assumption the HLUT will say the same thing as Einstein would
>>have. "Hmm, it looks like some sort of ball. It doesn't bounce
>>like a wooden ball would. I'm not sure what it's made of."
>>(except in German).
>>
>
>By assumption, this HLUT would have to have all the possible
>outcomes of all events that can occur in this universe.
No, that's not true. The HLUT only needs to have a list of
all possible input histories that Einstein could experience.
>>>Pierre, do you see that if we provide a different set of input
>>>histories, then we're talking about a different HLUT?
>>
>>No! By definition, the HLUT has an entry for every *possible*
>>input history.
>
>Then you melded them together into a single one.
That was the definition of the HLUT! I haven't "melded them together".
There was always only one HLUT, which listed each possible input
history.
>But you cannot stop there, for I can find another set of
>experiences.
No, you can't. If our senses are limited to finite resolution,
and our lives are limited to a finite lifespan, then the number
of distinguishable experiences in one lifetime is finite.
>Is this finite?
Yes.
>It is as finite as the universe is,
The universe *is* finite (at least the subset of the universe
that could possibly affect you in one lifetime). But that's
beside the point, because it is *not* necessary to consider
supernova explosions, it is only necessary to consider your
*reading* about supernova explosions in the newspaper, or
your seeing the light from a supernova. If the HLUT has an
entry for all possible newspaper articles you could ever
read (and that is a finite number) and all visual images
that you could ever see (and that is a finite number), then
your case of a supernova explosion is already covered.
>>A person born in 1960 or 1512 learns about his world
>>through his inputs, what he sees, hears, smells, etc.
>>Therefore, people in those two different years have
>>different input histories. Therefore, they have different
>>outputs in the HLUT. The HLUT contains, by assumption,
>>the set of all possible input histories (bounded by
>>resolution and by length---we only consider histories
>>less than 150 years long). It would contain appropriate
>>outputs for someone in 1512 as well as for someone in
>>1960.
>
>NO!
Yes!
>It would contain appropriate answers only for one
>person living exactly that life. *One second* later,
>that HLUT will not be able to account for the intelligence
>of that man (think of all the events in that man's
>life where 1 second is enough to dramatically alter
>the outcome). Come on, Daryl!
If the one second alters anything, then that is a *different*
input history, Sergio. The HLUT would then produce a *different*
output.
>>No. By definition, one HLUT contains all possible input histories!
>>It is not necessary to have more than one, and this one covers every
>>possible time frame, past or future.
>
>We seem to be running after each other's tails here. So let me
>propose another thought experiment and see if you agree.
>
>Let's forget about HLUTs for a moment. Let's think only about brains.
>
>Suppose those aliens you referred to in another post did a different
>kind of experiment: they made a copy of me, down to the quark level.
>This copy of me took no time to develop and it was placed by my
>side (in the same room). My questions would be:
>
>a) Is that copy also a Sergio?
Soon afterwards, the two copies will diverge. It doesn't
matter which one is called "Sergio", but they won't be the
same.
>b) Could the copy be called intelligent?
Yes.
>c) How long until we see differences in behavior?
Almost immediately.
[Your answers deleted, since they agree with mine]
>How can you propose that a HLUT can be equivalent to a human if
>even a perfect copy cannot be so?
There is a range of behavior that someone would consider
intelligent, and there is a range of behavior that someone
would consider Sergio-like. The claim is that the HLUT will
produce behavior that is in this range.
>And what if we lock the original Sergio in jail and make the copy
>assume his life? Do you think that after 1 year this copy Sergio
>would be exactly where I'd be if I had the chance of living
>that same life? No, I don't believe so, because *each run* is different,
>things never repeat and personal experiences cannot be duplicated,
>even if the perceptual apparatus, personal histories and memories
>are the same.
>
>And why does this happen? Remember the story of throwing the coin in
>such a way as to get heads? That's the root of the question
>about this world.
I agree with what you say, but it doesn't argue against the
possibility of the HLUT. The HLUT produces intelligent, Sergio-like
behavior. If Sergio is nondeterministic, that means that more
than one behavior counts as Sergio-like.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: houlepn@ibm.net
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c22no$au3$1@nnrp1.dejanews.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> wrote:
> Pierre, sorry, I have again started my answers in another post
> and developed my arguments there. If you don't mind, take a look
> at my answer to Gary. I promise that next time I'll start with
> your post. However, there are some points I'd like to comment here.
No problem Sergio. I have read your answers to Gary and Daryl and
I will take them into account while formulating my answers here.
Note that in what follows I am not arguing that humans are finite
state machines but rather that you have not shown they cannot be
modeled by these.
> >"Sergio Navega" <snavega@ibm.net> wrote:
> >
> >> The root of the question is that inputs H inform the
> >> HLUT of only a humongously minimal situation of the world
> >> in that moment.
> >
> > Indeed, but that just reduces the size of the HLUT and its sensibility,
> > not necessarily its 'intelligence' and ability to produce 'Sergio like'
> > behavior. Note that the real Sergio's mind also has access to a
> > humongously small amount of information about the world and would
> > often be expected to exhibit the same behavior in similar circumstances.
> > For instance, in your example below, he might choose not to have a walk
> > in the park if the weather is bad irrespective of whether this is due
> > to some butterfly flapping its wings in Honolulu one week earlier or to
> > some cockroach falling from a counter top in Stockholm.
>
> What I'd like to discuss is when he *chooses* to go for a walk in the
> park *in spite* of bad weather! This argument could be easily left
> aside by some sort of indeterminacy of emotions or unpredictability
> of human behavior.
>
> But I'd like to stick with a very rationalist analysis here, for the
> sake of my arguments: he may have chosen to go for a walk because he
> noticed that all his prior walks to a park on a sunny day had some
> sort of problem (robbery, too crowded, skin burns, etc), and his
> only one previous experience of walking in the rain was very
> pleasant. How is a HLUT supposed to decide about this?
The HLUT is a different representation of a finite state machine.
(Implementing a HLUT, far from impossible, is just as trivial as
providing the actual finite state machine it represents!) So your
question really is "How is a finite state machine supposed to
decide about this?" and the answer is: (exactly as humans do) use
sensory inputs of finite discriminatory power, imperfect memories and
limited computational (symbolic, ANN or whatever) ability to provide
an intelligent answer. You are effectively arguing that a HLUT is not
equivalent to Sergio in a way Sergio is not equivalent to Sergio_prime
because if history were to repeat itself the outcome would be different.
This is true, but although Sergio_prime might behave differently in
superficially similar circumstances, that does not make him stupid!
Same thing for the HLUT/finite state machine that models Sergio's
brain.
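To see the equivalence concretely, a toy sketch (Python; the machine
is deliberately trivial, but any finite state machine would do):

    from itertools import product

    # A two-state machine: it flips state on input "1" and outputs
    # its final state.
    def fsm_run(history, state=0):
        for symbol in history:
            if symbol == "1":
                state = 1 - state
        return state

    # Unroll the machine into a table over all histories up to a bound.
    BOUND = 4
    lut = {
        h: fsm_run(h)
        for n in range(BOUND + 1)
        for h in product("01", repeat=n)
    }

    # Within the bound, table and machine are indistinguishable:
    assert all(lut[h] == fsm_run(h) for h in lut)

The table is just the machine's behavior written out longhand.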
[snip]
> In my answer to Gary (to which I refer you), I give another
> hypothetical situation which makes clear the following points:
>
> a) I agree that there is a HLUT able to represent all the intelligent
> action/reactions of any intelligent being.
>
> b) I do *not* agree that such a HLUT, if made to run in the world, will
> present intelligent behavior (let alone model a specific person), unless...
>
> c)...the HLUT stores *all physical parameters* from all atoms and ions in
> the universe.
Do you agree that Sergio_prime is intelligent even though he does not
store all the physical parameters of Sergio's universe? Why would a
finite state machine be less intelligent than Sergio_prime?
> But I must confess, Pierre, that I'm not satisfied with even these
> conditions. I think they are too "weak" (meaning that even with all
> those conditions, I'd doubt that a HLUT would be intelligent). So let me
> propose the following situation:
>
> Take Einstein's HLUT. It is a recording of all input/output of all
> experiences and behaviors lived by Einstein. Put this HLUT into a
> hypothetical "computer", able to receive I/O pairs.
Then I would say this Einstein is a SLUT (Single history LUT). Much
worse than a finite state machine. Do you agree DeepBlue can only be
modeled by a HLUT with many more states than there are photons in the
observable universe? No surprise! How would it have beaten Kasparov
if its SLUT consisted of the listing of just one single chess game? On the
other hand, adding stochastic elements to DeepBlue (as the real networked
multiprocessor computer probably has) would probably make some of its
moves dependent on Honolulan butterflies, but would that make DeepBlue
any more intelligent? (I am not arguing against the possible usefulness
of stochastic processes in some real implementation of intelligence.)
> >> [S. Navega]
> >> Suppose that the HLUT received input H, the world is in a
> >> condition W (meaning the world has a determinate set of
> >> atom positions, velocities, spins, etc) and the HLUT answers with
> >> A. Now suppose that it receives H1, to which it answers A1 and
> >> so on. What you're saying is that if we run this HLUT
> >> *again*, and supposing that we restart the universe from the
> >> exact same conditions, the answer A1 would always be appropriate.
> >
> > The universe might follow a different history but then the HLUT will
> > be provided with a different input. Answer A1 is only given if the
> > universe follows history H1.
>
> Pierre, do you see that if we provide a different set of input
> histories, then we're talking about a different HLUT? You may
> propose that we can meld together all these possible HLUTs,
Yes. It takes many SLUTs to be worth a HLUT.
> giving the definitive big HLUT. I'd say that even this won't
> be enough to provide intelligent behavior.
Ah! We are finally talking about true non-trivial finite state
machines!
> What's different in a period of 10 years starting 1990 and 10 years
> starting 1980 and starting 1960 and 1860 and 1512? Everything and
> nothing.
The Y2K bug ;)
> Everything:
> There isn't a single frame of visual pixels equal in all those periods
> (let's say, the probability is humongously close to zero). This means
> that we don't have sets of H vectors equal among these years. This
> means we'd have to have *different* HLUTs for each situation, and
> that means that this group of HLUTs will not be complete, because
> the *future* 10 years will have another set, and 1000 years later
> even another one...
I don't see any problem. In this situation, building a larger HLUT
is just the formal equivalent of providing the corresponding finite state
machine with more battery time.
> Nothing:
> Nothing is different, it's all the same, the same law of gravitation,
> the same day/night cycle, the same atmospheric composition, the same
> boiling point of water. All those HLUTs were not able to present
> *any* kind of behavior that was dependent on any of these constant
> characteristics. Yet, this is what I think is missing in a HLUT:
> perception of what is regular.
A HLUT cannot perceive anything new, although a finite state machine
can. How is that possible? This paradox is due to a category mistake.
The HLUT is the set of all the possible evolutions of that finite state
machine (in a finite time). Asking the HLUT to perceive something new
is like asking Sergio to answer questions asked to him before he was
born, after he died or hidden in black envelopes.
> >> That will not work, because the world may have evolved differently
> >
> >Then its history will not be H1.
>
> Then, to present the same answer A1 given a history H1', it will have
> to be *another HLUT*, unless we devise an HLUT that can have all
> possible HLUTs, and this means stuffing *all* the universe (atoms,
> velocities, spins, etc) in a HLUT.
Not any more than DeepBlue needs this to beat Kasparov. Granted, it
might answer differently on two separate runs. Daryl has already
shown how to model stochasticity in the HLUT. The point is that
HLUTs need no more go awry in special circumstances than known
finite state machines do. This is because HLUTs are just formal
descriptions of real finite state machines. (DeepBlue is not as
intelligent as Kasparov outside of the domain of chess, but isn't
it much more flexible (immune to foreign butterflies) and intelligent
than the H(S)LUTs you are portraying in your examples?)
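One way to picture the stochastic version (my own sketch, with toy
entries; not necessarily the exact construction Daryl gave): store a
*set* of acceptable outputs per history and pick among them.

    import random

    # Each history maps to the whole range of acceptable responses.
    stochastic_lut = {
        ("rainy day",): ["stay home", "walk in the park anyway"],
        ("sunny day",): ["walk in the park", "stay home and read"],
    }

    def respond(history):
        # Any choice within the stored range counts as Sergio-like.
        return random.choice(stochastic_lut[history])

    print(respond(("rainy day",)))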
[snip]
> The point here is the opposite of what we're thinking. I'm not arguing
> that there couldn't exist a HLUT large enough to store all the information
> necessary. The question is almost the opposite: how we devise a
> mechanism that stores as little as possible in order to allow it to *predict*
> future outcomes.
Can't this mechanism be a finite state machine? Then it would indeed
*be*, formally, a HLUT. You can argue that finite state machines cannot
be intelligent, but this would require a different argument than the
one you have presented so far, I believe.
> The thing here is not the past. A videocassette does that.
Sure, any videocassette will fit in a SLUT ;)
> The question
> is, given a *new* input vector Hn, how is the HLUT supposed to come
> up with an intelligent behavior, if it does not have any corresponding
> entry?
But it will have one, thanks to its ability to digitize inputs.
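(A toy sketch of what digitizing buys, with an invented bucket
count: infinitely many raw inputs collapse onto finitely many table
keys, so a "new" input is never literally new.)

    def quantize(raw, levels=8):
        # Map a raw reading in [0.0, 1.0) onto one of `levels` buckets.
        return min(int(raw * levels), levels - 1)

    table = {bucket: "response-%d" % bucket for bucket in range(8)}

    # Two readings never seen before still land on existing entries:
    print(table[quantize(0.31415)])
    print(table[quantize(0.31999)])   # same bucket, same entry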
[snip]
> Pierre, when you entered your office this morning and recognized your
> computer, your desk, your chair, etc, do you believe that a HLUT could
> also have done that (just recognizing), even knowing that *not a single
> object* was seen by you exactly as you saw them in the last 5 years?
Sergio, can't speech recognition software recognize words it has
never heard exactly the same way before?
> There isn't a single entry in such a HLUT able to recognize the cup
> you had coffee in yesterday, unless such a HLUT was able to predict the
> movement of that ant that we found today in your cup. And you know,
Why would a finite state machine need to predict its next input
when it has an answer ready for every possible one? Human ability
to anticipate the future (and DeepBlue's ability to anticipate
Kasparov's subtle traps) is to be found at a higher level than the
simple state-to-state transition process.
> the movement of that ant may have something to do with that cockroach
> from Stockholm, a hundred years ago...
And that ant might have bitten the foot of Kasparov's great-grandmother,
but neither Kasparov nor DeepBlue minds a bit. They are much too busy
thinking about the next move ;)
Regards,
Pierre-Normand Houle
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36e53362@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
To Pierre-Normand and Daryl:
Ok, guys, I finally managed to understand what you're proposing,
and it is obvious. In fact, it is so obvious that I thought it
would be a waste of Internet time to discuss that sort of thing.
That explains, partially, why I took so much time to find out
what you all were proposing. I forgot I was on comp.ai."philosophy".
I *agree* that if you have a HLUT containing all possible entries
that can be sensed by a human during a finite amount of time,
and couple that with all possible reactions (which means,
all kinds of outputs), then this HLUT is able to present the
behavior of that modeled human in all situations of his life
(whether today, past or future, it's all there!). And it is,
obviously, finite in size, although really big (considering
just vision, *one second* of the life of that human would
require on the order of (10^10)! (ten to the tenth,
factorial) entries in the table).
Hence, what you all are proposing is something as mathematical
(and in this instance, as useless) as a definition without
consequences. The HLUT you're referring to contains all possible
answers to all behaviors. Yes, you're right that it does not
have to care about the atoms, velocities, etc. of the universe;
such knowledge would be worthless for the HLUT. It contains
only the I/O pairs and history. It assumes that whoever built it
was able to put in its entries every possible interaction and
outcome of that interaction.
So it is capable of *everything*, in terms of behavioral response.
It could be devised by a god-like entity outside of the universe,
able to predict all muscular movements done by a human during a
tennis game and deal with every possible input (visual, tactile,
etc) resulting from the ball being hit by any kind of opponent.
It's all there in the HLUT.
So what? It will work, by definition. It is the same thing as
saying that George is a man (dead, living or yet to be born) who
was able to get heads *every time* he flipped
a coin in his life. And he did that once a minute. Such a
guy exists, *by definition*. What's the deal here? Do we learn
anything new about probability?
This HLUT tells us nothing about intelligence, because it knows
everything beforehand. Intelligence is something that discovers
the answers to the problems *without* having a ready-made answer.
That HLUT knows all answers, by definition. So I would think it
inappropriate to call it intelligent!
HLUTs such as the one you're proposing are useless for anybody
trying to find out what intelligence is. Such a HLUT is comparable,
in terms of intelligence, to a door, if what we want to
represent is the memory of a single bit.
I hope both of you understand that I have serious difficulty
thinking without strictly practical implications. I wouldn't
have much of a career if I were a theoretical mathematician.
On the other hand, perhaps this is my greatest advantage ;-)
So when I was insisting on my vision of the HLUT (let me call it
t-HLUT, or tiny-HLUT), I was thinking of a much more "down to earth"
thing. And my line of thought was: starting from the conclusion that
t-HLUTs are not able to present intelligent behavior, what would we
have to devise in order to *make it work*?
The first thing would be to think about grouping entries in such
a way that certain groups of stimuli retrieve the same kind of
reaction. Then, to think about what could be a method to allow
the system to learn how to devise these groups (categories).
Then to learn how to find groups of groups that seem to be able
to form a new group. Then to find models that account for the
behavior of several groups at once. Then to copy (analogy) models
found in one area to other areas and use this as a way to make
models for other situations, easing the learning process.
And so on.
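A toy sketch of just that first step (invented stimuli, and a crude
rounding rule standing in for a *learned* grouping criterion):

    # Collapse many stimulus entries into groups sharing one reaction.
    raw_entries = {
        (0.10, 0.91): "duck",
        (0.12, 0.89): "duck",
        (0.11, 0.90): "duck",
        (0.80, 0.15): "jump",
        (0.82, 0.14): "jump",
    }

    def category(stimulus):
        # Stand-in for a learned category: coarse rounding.
        return tuple(round(x, 1) for x in stimulus)

    groups = {}
    for stimulus, reaction in raw_entries.items():
        groups.setdefault(category(stimulus), set()).add(reaction)

    # Five entries collapse into two categories, one reaction each:
    print(groups)

The hard problem, of course, is *learning* the category function
rather than hard-coding it, which is where the interesting work
begins.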
All this, in my vision, is a consequence of my attempt to fix the
t-HLUT that I was defending during this thread. If I had to think
about your HLUT, then maybe I would have stopped, because with
your HLUT there's nothing else to do. It works by definition!
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c3h6r$h8t@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>
>To Pierre-Normand and Daryl:
>
>Ok, guys, I finally managed to understand what you're proposing,
>and it is obvious. In fact, it is so obvious that I thought it
>would be a waste of Internet time to discuss that sort of thing.
Right. It *is* a waste of time. As I said to Oliver Sparrow,
the HLUT is not in any way intended to be a serious model for
how AI could work. It's just useful in various philosophical
thought-experiments.
>This HLUT tells nothing about intelligence, because it knows
>everything beforehand.
Right. I never claimed otherwise.
>HLUTs such as the one you're proposing are useless for anybody
>trying to find out what intelligence is.
Right.
>So when I was insisting on my vision of the HLUT (let me call it
>t-HLUT, or tiny-HLUT), I was thinking of a much more "down to earth"
>thing. And my line of thought was: starting from the conclusion that
>t-HLUTs are not able to present intelligent behavior, what would we
>have to devise in order to *make it work*?
>
>The first thing would be to think about grouping entries in such
>a way that certain groups of stimuli retrieve the same kind of
>reaction. Then, to think about what could be a method to allow
>the system to learn how to devise these groups (categories).
>Then to learn how to find groups of groups that seem to be able
>to form a new group. Then to find models that account for the
>behavior of several groups at once. Then to copy (analogy) models
>found in one area to other areas and use this as a way to make
>models for other situations, easing the learning process.
>And so on.
>
>All this, in my vision, is a consequence of my attempt to fix the
>t-HLUT that I was defending during this thread. If I had to think
>about your HLUT, then maybe I would have stopped, because with
>your HLUT there's nothing else to do. It works by definition!
Right. The only practical point of the HLUT is this: Imagine
that we have a computer program that works the way we would like it
to work, but it requires 10^10^10 bytes of memory to store the
program. How could we imagine going about compressing that enormous
mountain of data so that it fits in, say, a few gigabytes?
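(A trivial sketch of what such compression amounts to: an explicit
table and a few lines of code can have identical input/output
behavior, with squares standing in for the real behavior.)

    # An explicit table...
    table = {n: n * n for n in range(100000)}   # imagine 10^10^10 entries

    # ...and a "compressed" replacement with the same behavior:
    def compressed(n):
        return n * n

    assert all(table[n] == compressed(n) for n in (0, 7, 99999))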
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36e5925e@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c3h6r$h8t@edrn.newsguy.com>...
>Sergio says...
>>
>>All this, in my vision, is a consequence of my attempt to fix the
>>t-HLUT that I was defending during this thread. If I had to think
>>about your HLUT, then maybe I would have stopped, because with
>>your HLUT there's nothing else to do. It works by definition!
>
>Right. The only practical point of the HLUT is this: Imagine
>that we have a computer program that works the way we would like it
>to work, but it requires 10^10^10 bytes of memory to store the
>program. How could we imagine going about compressing that enormous
>mountain of data so that it fits in, say, a few gigabytes?
>
That's essentially the root of what I'm questioning, and a simple
explanation of why I'm reluctant to accept the HLUT: we should use the
best starting model. Your model requires that one devise a
"compression scheme" and *also* feed in the gigabytes manually,
a la CYC. This "machine" will only be as good as its creator.
I know that some of us here (and you seem to be one of them)
are after good principles to develop some practical ideas or
at least good theories about our mind and how it can be
implemented in computers.
But to develop that sort of thing, we've got to start somewhere.
Depending where we start, we may have a hard time finding some
good solution in this (unfortunately) small life span we have.
My previous experiences say that in order to design anything
complex, we've got to start from simple models that can be implemented
and put to work and that can be progressively fixed and improved.
But the system must work from the start and must continue to
work *each time* we alter something. Or else we end up with
Frankensteins on our hands.
I know that this is a common "engineering" practice and not so
common when one tries to do science. But AI is, according to some
thinkers, an engineering discipline done over an unfinished science.
My starting point is the baby. What happens to a baby in order
to turn it into an intelligent adult? I'm interested in the kind
of processes that allow a baby to perform so well in language,
object recognition, perception. If I can conceive a mechanism
with such a learning ability, I hope to be closer to the best
answer nature found so far: our adult, human brain. After all,
nature started from simple brains toward complex ones.
I'm not proposing a genetic algorithm or anything evolutionary
in nature, but just a doable path to develop that mechanism,
given the complexity of the task and our inability to devise
the definitive mechanism from the starting point.
Coincidentally, this path reinforces the importance of
something that I find to be the Rosetta stone of
intelligence: learning. Everything I read about neuroscience
points towards the importance of learning processes in the
development of the brain.
Learning is something that goes in the opposite direction of
what you propose. Learning increases the system's functionality
and capacity as time passes and experiences flow in. Learning
allows the system to surpass its creators, as it can have
experiences that its creators didn't have.
Your proposal (from what I've understood so far; I may be wrong)
seems to stick with a stimulus/response paradigm that must
use a lot of innate knowledge. For all I'm finding in
neuroscience and cognitive psychology, our innate baggage
is very, very small. Besides, all attempts at AI so far
that used innate knowledge ran, sooner or later, into serious
problems.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <7c64pb$7dq@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
[good stuff deleted]
>Your proposal (from what I've understood so far; I may be wrong)
>seems to stick with a stimulus/response paradigm that must
>use a lot of innate knowledge.
No, I haven't made any kind of proposal. I *don't* think
that the HLUT is a good starting point for any kind of
discussion about how best to implement AI. The only reason
that I argue about the HLUT is because I disagree with
what other people are saying about it. I had just as soon
drop the subject.
>For all I'm finding in neuroscience and cognitive psychology,
>our innate baggage is very, very small.
However, I don't agree with this. I think that we are
predisposed by biology to be good at learning some sorts
of things, and not others. We can learn to recognize
faces, but we can't learn to recognize prime numbers,
for instance. (We can figure out that a number is prime,
but that is different from *recognizing* it to be
prime.)
>Besides, all attempts at AI so far that used innate
>knowledge ran, sooner or later, into serious problems.
Well, I agree that explicitly representing knowledge
(in the way that CYC does it) is obviously far
different from the way that humans do things. What
I think is innate is *strategies* for learning, for
forming hypotheses, etc., certainly not basic facts.
(Although it is possible that babies are born with
some innate knowledge about objects, time, continuity
of existence.)
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <36e6cd05@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
<7c64pb$7dq@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c64pb$7dq@edrn.newsguy.com>...
>Sergio says...
>
>[good stuff deleted]
>
>>Your proposal (from what I've understood so far; I may be wrong)
>>seems to stick with a stimulus/response paradigm that must
>>use a lot of innate knowledge.
>
>No, I haven't made any kind of proposal. I *don't* think
>that the HLUT is a good starting point for any kind of
>discussion about how best to implement AI.
In that regard, we're pretty much on the same side.
>The only reason
>that I argue about the HLUT is because I disagree with
>what other people are saying about it. I had just as soon
>drop the subject.
>
And I commend you for having the patience to stick with me until
I realized what that concept meant. I guess it is obvious that
all the arguments I've made in this thread referred to the version
I was imagining (the tiny-HLUT, obtained by the storage of all
I/O of one's life, which in its own way is pretty large) and
obviously I still support *everything* I said in that context.
But it was nice to capture what was meant by HLUT in the first
place (I guess Jim Balter did that earlier, but I didn't read his
messages). I still find my t-HLUT a richer ground for discussion,
also because it is "less-impossible" than the full HLUT.
Anyway, now I guess I have a good view of both your side and
Neil's. I keep thinking that Neil's considerations are
very important, if not fundamental, to a new vision of the problem
of intelligence. The doubt that remains in my mind is whether you
have grasped the essentials of his vision (even while disagreeing
on the details).
>>For all I'm finding in neuroscience and cognitive psychology,
>>our innate bagage is very, very small.
>
>However, I don't agree with this. I think that we are
>predisposed by biology to be good at learning some sorts
>of things, and not others. We can learn to recognize
>faces, but we can't learn to recognize prime numbers,
>for instance. (We can figure out that a number is prime,
>but that is different from *recognizing* it to be
>prime.)
>
The question of what is innate and what's not is a matter
of heated discussion even among top-of-the-line cognitive
scientists. Undeniably we do have certain traits that
evolved in service of the survival of our forebears.
In this regard, arithmetic is not something important,
while recognition and perception are.
But the real battle happens on the front of the innateness
of language. And in this battle I'm on the
side of the minority: all arguments presented by those
who think language is innate in humans can be shown to
be faulty, weak or inconclusive. For some time now I've
been collecting results from neuroscience and cognitive
science, and I'm increasingly convinced that innate
language is not tenable.
Why do I consider this important? Because it is one of the
first steps toward thinking of strongly general and
domain unspecialized mechanisms for learning and
perception, and it is these mechanisms that I think are
missing in contemporary AI architectures.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <7c6lub$aaa@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
<7c64pb$7dq@edrn.newsguy.com> <36e6cd05@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>The question of what is innate and what's not is a matter
>of heated discussion even among top-of-the-line cognitive
>scientists. Undeniably we do have certain traits that
>evolved in service of the survival of our forebears.
>In this regard, arithmetic is not something important,
>while recognition and perception are.
>
>But the real battle happens on the front of the innateness
>of language. And in this battle I'm on the
>side of the minority: all arguments presented by those
>who think language is innate in humans can be shown to
>be faulty, weak or inconclusive. For some time now I've
>been collecting results from neuroscience and cognitive
>science, and I'm increasingly convinced that innate
>language is not tenable.
What results or arguments are there for language *not*
being innate? (The weakness of the argument for innateness
does not by itself constitute an argument against innateness.)
I definitely think that language and general reasoning
ability go hand-in-hand, but I'm not sure about the causal
direction. Are we good at language because we have advanced
reasoning and pattern-recognition skills, or are we good
at general reasoning because we have advanced language skills?
I lean towards Chomsky and Pinker in this debate. I think
that humans are evolutionarily predisposed to learning languages.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36e7d627@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
<7c64pb$7dq@edrn.newsguy.com> <36e6cd05@news3.us.ibm.net>
<7c6lub$aaa@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c6lub$aaa@edrn.newsguy.com>...
>Sergio says...
>>But the real battle happens on the front of the innateness
>>of language. And in this battle I'm on the
>>side of the minority: all arguments presented by those
>>who think language is innate in humans can be shown to
>>be faulty, weak or inconclusive. For some time now I've
>>been collecting results from neuroscience and cognitive
>>science, and I'm increasingly convinced that innate
>>language is not tenable.
>
>What results or arguments are there for language *not*
>being innate? (The weakness of the argument for innateness
>does not by itself constitute an argument against innateness.)
>
There are weak arguments and there are faulty ones. The claim
that language has a fixed site in our cortex and that
because of that there is a specialized mechanism, for example,
is faulty (Broca's and Wernicke's areas are frequently cited).
It assumes that if we analyze the neurons in Broca's area
we will find physiological differences from the neurons of,
say, the frontal lobe.
There appears to be some correlation between handedness and
the hemisphere where language is processed. It was found that
some people developed language in the right hemisphere of
the brain instead of the left.
Children who underwent left hemispherectomy (removal of the
left hemisphere) were able to develop language almost to the
point of normality, using only the right hemisphere.
Studies involving congenitally deaf subjects showed that
they use the same areas of the brain for language production
and understanding (using signs in ASL, American
Sign Language). Although these results could support the
idea of specific mechanisms for language, recent studies
using fMRI by Neville et al. found significant activity
in the right hemisphere during language-related activities
in congenitally deaf subjects, indicating that additional
processing areas were put to use. Other studies revealed the
use of the auditory cortex by those deaf subjects,
something like "hearing the sign language", in a
demonstration of the power of neural plasticity. Plasticity
is, in reality, one of the indications that our brain is
highly self-organizing, and that language should be
compared to object perception, something learned through
exposure to experiences.
Those who purport to explain the evolutionary origins of language
should try to explain better how evolution, a slow and
diversifying mechanism by its own nature, was able to
follow a "moving target", such as language generation and
understanding, a social phenomenon that can be altered
significantly within a single generation.
Other interesting ideas are those that see language
as an essentially cultural construct, one that cannot appear
without strong interaction between the entity and a community.
Besides, language can be seen as a dynamical, self-evolving
system (following ideas from Jeff Elman and others).
But my primary reason to be suspicious of innateness (and also
of Chomsky's Universal Grammar proposition) is that it seems
more plausible to explain the ease with which children
acquire language not by the presence of a UG grammar, common
to all mankind, in the brain, but by the perception that
the child receives, from the world, a series of clues
that indicate the best way to construct meaningful phrases.
It is as if the proposed Universal Grammars were not in
the brain, but in the physics of our universe (action
verbs such as grab, move, turn, etc., reflect things that
have a very different nature than nouns; Antonio Damasio
found activity in motor control areas of the brain when
subjects thought of verbs, but not when they thought
of nouns). This is enough to suggest that a lot of the
regularity and universality of grammars can be the effect
of the universality of the *environment* we're in: an
Indonesian lives in a world which can be described by
the same laws of physics as the world in which an
Englishman lives.
So my vision has a lot to do with Piaget: we owe much (if not
all) of our cognition to invariant and interdependent
relationships that our brain *perceives* in the sensorimotor
patterns we acquire from early childhood. It does not matter
whether these patterns come from vision, audition, touch or
any combination of these. I believe that cognition (and
obviously also language) is the result of "intelligent"
correlations and associations using sensorimotor patterns
as ground.
>I definitely think that language and general reasoning
>ability go hand-in-hand, but I'm not sure about the causal
>direction. Are we good at language because we have advanced
>reasoning and pattern-recognition skills, or are we good
>at general reasoning because we have advanced language skills?
>
I guess they are interdependent, and that only adds to the
complexity of understanding their constituent parts.
>I lean towards Chomsky and Pinker in this debate. I think
>that humans are evolutionarily predisposed to learning languages.
>
Pinker is more reasonable in this regard. Chomsky's and Fodor's
positions are more dogmatic. But all of them disregard the
strength of the arguments presented by the "other side" (Elman,
Bates, Deacon, Karmiloff-Smith and many others).
Regards,
Sergio Navega.
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <7ce3lt$5mi@ux.cs.niu.edu>
References: <7c6lub$aaa@edrn.newsguy.com>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
daryl@cogentex.com (Daryl McCullough) writes:
>Sergio says...
>>But the real battle happens in the front of innateness
>>of language. And it is in this battle that I'm in the
>>side of the minority: all arguments presented by those
>>who think language is innate in humans can be shown to
>>be faulty, weak or inconclusive. For some time now I've
>>been collecting results from neuroscience and cognitive
>>science and I'm increasingly convinced that innate
>>language is not tenable.
>...
>I lean towards Chomsky and Pinker in this debate. I think
>that humans are evolutionarily predisposed to learning languages.
It is entirely possible that humans could be evolutionarily disposed
toward learning languages, with the language (that which is to be
learned) being innate.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <36ead3ca@news3.us.ibm.net>
References: <7c6lub$aaa@edrn.newsguy.com> <7ce3lt$5mi@ux.cs.niu.edu>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Neil Rickert wrote in message <7ce3lt$5mi@ux.cs.niu.edu>...
>daryl@cogentex.com (Daryl McCullough) writes:
>
>>I lean towards Chomsky and Pinker in this debate. I think
>>that humans are evolutionarily predisposed to learning languages.
>
>It is entirely possible that humans could be evolutionarily disposed
>toward learning languages, with the language (that which is to be
>learned) being innate.
>
I wouldn't complain about accepting that evolution predisposed
something, but I'd say it is not language alone. I find it more
reasonable to think evolution influenced generic learning
abilities and that these abilities are used to acquire difficult
things (like language, like detecting 3D spatial organization
from 2D drawings on paper, etc).
Regards,
Sergio Navega.
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 18 Mar 1999 00:00:00 GMT
Message-ID: <7cr79g$ibs@ux.cs.niu.edu>
References: <7ce3lt$5mi@ux.cs.niu.edu> <36ead3ca@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> writes:
>Neil Rickert wrote in message <7ce3lt$5mi@ux.cs.niu.edu>...
>>daryl@cogentex.com (Daryl McCullough) writes:
>>>I lean towards Chomsky and Pinker in this debate. I think
>>>that humans are evolutionarily predisposed to learning languages.
>>It is entirely possible that humans could be evolutionarily disposed
>>toward learning languages, with the language (that which is to be
^^^^
>>learned) being innate.
Oops. That "with" was supposed to be "without".
>I wouldn't complain about accepting that evolution predisposed
>something, but I'd say it is not language alone.
I agree. In any case, thanks for commenting, so that I could see my
serious typo in what I had written.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36E77BE4.38E5E52C@sandpiper.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
<7c64pb$7dq@edrn.newsguy.com> <36e6cd05@news3.us.ibm.net>
Organization: Sandpiper Networks, Inc.
Newsgroups: comp.ai.philosophy
Sergio Navega wrote:
> But it was nice to capture what was meant for HLUT in the first
> place (I guess Jim Balter did that earlier, but I didn't read his
> messages). I still find my t-HLUT a more rich ground for discussion,
Nonetheless, you saw fit to jump in much later and ask questions
I had already answered and make claims I had already refuted. Feh.
--
<J Q B>
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36e7d62a@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com> <36e5925e@news3.us.ibm.net>
<7c64pb$7dq@edrn.newsguy.com> <36e6cd05@news3.us.ibm.net>
<36E77BE4.38E5E52C@sandpiper.net>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Jim Balter wrote in message <36E77BE4.38E5E52C@sandpiper.net>...
>Sergio Navega wrote:
>
>> But it was nice to capture what was meant for HLUT in the first
>> place (I guess Jim Balter did that earlier, but I didn't read his
>> messages). I still find my t-HLUT a more rich ground for discussion,
>
>Nonetheless, you saw fit to jump in much later and ask questions
>I had already answered and make claims I had already refuted. Feh.
>
Feh, Jim, I'm not as omniscient as your HLUT... :-)
From: "Gary Forbis" <forbis@accessone.com>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c4aj8$nme$1@remarQ.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c3h6r$h8t@edrn.newsguy.com>
Organization: Posted via RemarQ, http://www.remarQ.com - Discussions start here!
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c3h6r$h8t@edrn.newsguy.com>...
...
>Right. The only practical point of the HLUT is this: Imagine
>that we have a computer program that works the way we would like it
>to work, but it requires 10^10^10 bytes of memory to store the
>program. How could we imagine going about compressing that enormous
>mountain of data so that it fits in, say, a few gigabytes?
My question is, can the imagined HLUT be compressed into a few gigabytes?
I am more than ready to grant that all of life's contingencies can be
quantized at a resolution below which there is no difference. I'm not
yet willing to concede that viewing a human's possible histories in such
discrete units results in such a highly compressible HLUT. I doubt it
can be compressed into something smaller than the human. I don't think
this presents much of a problem for AI, only for a particular view of
AI: in particular, that of the human brain as an FSA.
I think that if one is willing to give up duplicating the functionality
of humans and can accept something less, but still quite complex, an
FSA will do quite nicely.
The thing is, I doubt the histories of real robots can be expressed and
compressed into the FSA that generated them, since the actual behaviors
are generated by more than just the FSA: they are also generated by the
world in which it exists.
Now that I've said this, I'm not so sure. Maybe it's that the actual
output isn't identical to the theoretical output. Still, the new output
shouldn't be based upon theoretical output histories.
From: houlepn@my-dejanews.com
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <7c6fu9$6k2$1@nnrp1.dejanews.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy
Sergio, I'm pretty sure we have actually been agreeing, from
the very beginning of this thread, more than you now think.
"Sergio Navega" <snavega@ibm.net> wrote:
> To Pierre-Normand and Daryl:
>
> Ok, guys, I finally managed to understand what you're proposing,
> and it is obvious. In fact, it is so obvious that I thought it
> would be a waste of Internet time to discuss that sort of thing.
> That explains, partially, why I took so much time to find out
> what you all were proposing. I forgot I was on comp.ai."philosophy".
Ok. I'm glad you now see the obvious. However, there are a few points
some of us have been trying to make that you have not yet appreciated.
These were about our disagreement with Neil and have much more to do
with AI than with philosophical hair-splitting.
> I *agree* that if you have a HLUT containing all possible entries
> that can be sensed by a human during a finite amount of time
> and couple that with all possible reactions (which means,
> all kinds of outputs) that this HLUT is able to present the
> behavior of that modeled human in all situations of his life
> (whether today, past or future, it's all there!). And it is,
> obviously, finite in size, although really big (considering
> just vision, *one second* of the life of that human would
> require on the order of (10 ^ 10)!, ten to the tenth
> factorial entries in the table).
Two points here. The fact that there are more entries in the table
than there are electrons in the universe is not relevant for two
reasons: 1) Nobody wants to implement the HLUT and 2) This is also
true for the HLUT of most non-trivial finite state machines. It
is true of DeepBlue for instance (Or a simulation of DeepBlue on a
true FSA).
The second point is that these discussions about HLUTs came from
somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
human learning behavior. This is a false implication not only for
the extravagant 'Sergio HLUT' but also for any hypothetical AI
entity emulating 'Sergio level' abilities on a FSA.
In short. We were not trying to argue that HLUTs are useful
representations of Sergios but rather that it might not be impossible
to view (or approximate) Sergio as a reasonable FSA despite the fact
that it is counterintuitive to view the corresponding HLUTs as a learning
entity.
> Hence, what you all are proposing is something as mathematical
> (and in this instance, unuseful) as a definition without
> consequences. The HLUT you're referring contains all possible
We have been proposing to model the mind as a FSA.
> answers to all behaviors. Yes, you're right that it does not
> have to care about the atoms, velocities, etc. of the universe,
> such a knowledge would be worthless for the HLUT. It contains
Such knowledge is worthless for most FSAs and humans I know.
> only the I/O pairs and history. It assumes that whoever did it,
> was able to put in its entries every possible interaction and
> outcome of that interaction.
Nobody needs to do that. The goal is to build a FSA with learning
abilities (or understand the mind/brain as one). The course of the
FSA's history is influenced by all kinds of subtle effects from and
to the environment. The fact that the FSA digitizes its *raw*
inputs does not necessarily render it impotent. Viewing this FSA as a
HLUT is a red herring. The HLUT is just an alternate representation
of the FSA. This is the point some of us have been trying to make.
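To make the "alternate representation" point concrete, here is a
rough sketch in Python (the two-state machine and its tables are
made up for illustration; nobody is proposing this as a design):

    from itertools import product

    # A toy FSA: two states, binary inputs, binary outputs.
    # NEXT[state][input] and OUT[state][input] are invented numbers.
    NEXT = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
    OUT = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 1}}

    def fsa_output(history):
        """Run the FSA over an input history; return its last output."""
        state, out = 0, None
        for i in history:
            out = OUT[state][i]
            state = NEXT[state][i]
        return out

    def build_hlut(max_len):
        """The HLUT: one entry for every possible input history."""
        return {h: fsa_output(h)
                for n in range(1, max_len + 1)
                for h in product((0, 1), repeat=n)}

    hlut = build_hlut(20)
    print(len(hlut))  # 2**21 - 2 entries, all generated by a 2-state machine

The table and the machine agree on every history by construction;
the table is just astronomically bigger, which is one more reason
nobody wants to implement it.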
[SNIP]
Regards,
Pierre-Normand Houle
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <36e6e2c7@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c6fu9$6k2$1@nnrp1.dejanews.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
houlepn@my-dejanews.com wrote in message
<7c6fu9$6k2$1@nnrp1.dejanews.com>...
>Sergio Navega wrote:
>> I *agree* that if you have a HLUT containing all possible entries
>> that can be sensed by a human during a finite amount of time
>> and couple that with all possible reactions (which means,
>> all kinds of outputs) that this HLUT is able to present the
>> behavior of that modeled human in all situations of his life
>> (whether today, past or future, it's all there!). And it is,
>> obviously, finite in size, although really big (considering
>> just vision, *one second* of the life of that human would
>> require on the order of (10 ^ 10)!, ten to the tenth
>> factorial entries in the table).
>
>Two points here. The fact that there are more entries in the table
>than there are electrons in the universe is not relevant for two
>reasons: 1) Nobody wants to implement the HLUT and 2) This is also
>true for the HLUT of most non-trivial finite state machines. It
>is true of DeepBlue for instance (Or a simulation of DeepBlue on a
>true FSA).
>
I agree. It is easy to assemble a three-node FSA that cannot be
represented by any finite HLUT.
>The second point is that these discussions about HLUTs came from
>somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
>formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
>human learning behavior. This is a false implication not only for
>the extravagant 'Sergio HLUT' but also for any hypothetical AI
>entity emulating 'Sergio level' abilities on a FSA.
>
>In short. We were not trying to argue that HLUTs are useful
>representations of Sergios but rather that it might not be impossible
>to view (or approximate) Sergio as a reasonable FSA despite the fact
>that it is counterintuitive to view the corresponding HLUTs as a learning
>entity.
>
I'm kind of concerned about where this will lead us, and I would like,
if you agree, to ask you to refine your arguments for the FSA.
For instance, in which way do you think a *single* FSA, no matter
how complex, will be able to correspond to the intelligent behavior
of a human? I, for one, don't think it is possible.
Also, in what way do you think that such an architecture will always
present the same behavior? I have no reasons to believe that humans
will behave the same way, if it were possible to put them again in the
same circumstances. Quite the contrary, I have reasons to believe that
if this were true, much of our creative behavior would be impossible.
A human behavior will not repeat, but a FSA, under the same initial
conditions, will always repeat.
My current understanding of cognition and human behavior makes me
believe that we don't have a single mechanism like that, but maybe a
collection of them working in parallel, with an elusive
(and frequently random) criterion to decide which one will be
used to express the output (behavior).
That would be my strategy only if I were to use FSAs, but I haven't
many reasons to stick with this paradigm. I don't have many
(cognitive) indications that this is the way our brain operates.
So that's why I'm interested in your reasons to propose FSAs (other
than the equivalence with a HLUT :-).
>> Hence, what you all are proposing is something as mathematical
>> (and in this instance, unuseful) as a definition without
>> consequences. The HLUT you're referring contains all possible
>
>We have been proposing to model the mind as a FSA.
>
Is it a single FSA? I'm skeptical of this approach. As I said, a
collection of FSAs running in parallel may obtain more of my
acceptance, although even then it would not be my preferred
solution, because too much centered on "states" and "paths", and
what I think more plausible is centered around progressive
perceptual discrimination and parallel spreading of activations.
>> answers to all behaviors. Yes, you're right that it does not
>> have to care about the atoms, velocities, etc. of the universe,
>> such a knowledge would be worthless for the HLUT. It contains
>
>Such knowledge is worthless for most FSAs and humans I know.
>
>> only the I/O pairs and history. It assumes that whoever did it,
>> was able to put in its entries every possible interaction and
>> outcome of that interaction.
>
>Nobody needs to do that. The goal is to build a FSA with learning
>abilities (or understand the mind/brain as one). The course of the
>FSA's history is influenced by all kinds of subtle effects from and
>to the environment. The fact that the FSA digitizes its *raw*
>inputs does not necessarily render it impotent. Viewing this FSA as a
>HLUT is a red herring. The HLUT is just an alternate representation
>of the FSA. This is the point some of us have been trying to make.
>
About the HLUT, I agree. I would say that the HLUT is a fluorescent
and blinking red herring, if you know what I mean :-). But I think
the greatest problem of the HLUT is that it leads to the FSA as a
tentative solution just because the latter can be reduced to
the former.
It is important to note that *only* a fully loaded HLUT will be capable
of the intelligent behavior that we're talking about. A partial HLUT
cannot be guaranteed to work. In fact, depending on where the
"holes" are in the partial HLUT, it would fail miserably. That
brings me to the thought that only a fully developed FSA will be
intelligent. A partial FSA (and it should be *very* partial, if
we want to make it work using the matter in this universe) will
certainly also fail.
So in this regard you seem to be using the HLUT as an axiom to support
the development of the FSA strategy (I really may be misunderstanding
what you'd proposed, please correct me if I'm wrong). But this is like
trying to understand human cognition based on the definition of
natural numbers or set theory.
What worries me more about this approach is that you seem to be
using as a foundation an argument that is mathematical in nature.
That HLUT has nothing to do with the universe in which we live,
nor with the capacity that we have for constructing such mechanisms.
My approach, and as far as I know also Neil's, starts from
what seems to be our perception of the universe. In a way, my idea
is that we will get to the mathematics and formalities of reasoning
only after our intellect grows *above* the perceptual level.
What we seem to be observing today, in the failed attempts of AI so
far, is very poor performance in recognition and perception (vision
systems struggle to recognize a cup) alongside very good logical and
mathematical abilities.
I'm trying to start with perception and recognition, and then later
I hope to develop the more formal methods of thinking. On my side
I count the natural abilities of all the babies in the world: they
pretty much duplicate this same sequence every day.
Regards,
Sergio Navega.
From: houlepn@ibm.net
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <7c9k42$vm3$1@nnrp1.dejanews.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7bvj56$5vv$1@nnrp1.dejanews.com> <36e3ee95@news3.us.ibm.net>
<7c22no$au3$1@nnrp1.dejanews.com> <36e53362@news3.us.ibm.net>
<7c6fu9$6k2$1@nnrp1.dejanews.com> <36e6e2c7@news3.us.ibm.net>
Organization: Deja News - The Leader in Internet Discussion
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> wrote:
> houlepn@ibm.net (Pierre-Normand Houle) wrote:
> > Sergio Navega wrote:
> >> I *agree* that if you have a HLUT containing all possible entries
> >> that can be sensed by a human during a finite amount of time
> >> and couple that with all possible reactions (which means,
> >> all kinds of outputs) that this HLUT is able to present the
> >> behavior of that modeled human in all situations of his life
> >> (whether today, past or future, it's all there!). And it is,
> >> obviously, finite in size, although really big (considering
> >> just vision, *one second* of the life of that human would
> >> require on the order of (10 ^ 10)!, ten to the tenth
> >> factorial entries in the table).
> >
> > Two points here. The fact that there are more entries in the table
> > than there are electrons in the universe is not relevant for two
> > reasons: 1) Nobody wants to implement the HLUT and 2) This is also
> > true for the HLUT of most non-trivial finite state machines. It
> > is true of DeepBlue for instance (Or a simulation of DeepBlue on a
> > true FSA).
>
> I agree. It is easy to assemble a three-node FSA that cannot be
> represented by any finite HLUT.
I would claim some FSA_prime could easily be built so as to model
your three-node FSA quite satisfactorily. Granted, this FSA_prime
would have a much bigger HLUT, but this is a red herring. The
FSA_prime would be only slightly more complicated than the
three-node machine itself. The fact that this FSA_prime would be
unlikely to reproduce the same run as the three-node machine from
the same initial state is another red herring in my opinion, as this
is true also for an exact copy of the original three-node machine.
Another way to put it: a computer cannot solve the three-body
problem, but it can easily model it. The simulation soon diverges
from any real system, but any two real systems will just as surely
diverge from each other. The simulation will presumably exhibit
the same attractors if it is accurate enough (say, 6 decimal
positions). There is an absolute limit on the accuracy of weather
forecasts based on simulations (or anything else for that matter).
This does not make realistic modeling of the climate any more
difficult.
> > The second point is that these discussions about HLUTs came from
> > somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
> > formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
> > human learning behavior. This is a false implication not only for
> > the extravagant 'Sergio HLUT' but also for any hypothetical AI
> > entity emulating 'Sergio level' abilities on a FSA.
> >
> > In short. We were not trying to argue that HLUTs are useful
> > representations of Sergios but rather that it might not be impossible
> > to view (or approximate) Sergio as a reasonable FSA despite the fact
> > that it is counterintuitive to view the corresponding HLUTs as a learning
> > entity.
>
> I'm kind of concerned about where this will lead us, and I would like,
> if you agree, to ask you to refine your arguments for the FSA.
> For instance, in which way do you think a *single* FSA, no matter
> how complex, will be able to correspond to the intelligent behavior
> of a human? I, for one, don't think it is possible.
Why do you think it is not possible?
> Also, in what way do you think that such an architecture will always
> present the same behavior? I have no reasons to believe that humans
Daryl and I have explicitly presented non-deterministic HLUTs.
Remember my non-deterministic quantum mechanical HLUT?
http://www.dejanews.com/[ST_rn=qs]/getdoc.xp?AN=451772195
> will behave the same way, if it were possible to put them again in the
> same circumstances. Quite the contrary, I have reasons to believe that
> if this were true, much of our creative behavior would be impossible.
Since you never find yourself confronted twice with the same sensory
input *and the same past history*, how could you know? Your creativity
might reside in your ability to come up with new solutions in later
circumstances rather than in your sensitivity to chaotic events. The
latter just amounts to throwing dice.
> A human behavior will not repeat, but a FSA, under the same initial
> conditions, will always repeat.
The issue is whether this behavior is intelligent. A FSA emulating
DeepBlue will beat DeepBlue as often as DeepBlue beats it.
> My current understanding of cognition and human behavior makes me
> believe that we don't have a single mechanism like that, but maybe a
> collection of them working in parallel, with an elusive
> (and frequently random) criterion to decide which one will be
> used to express the output (behavior).
Now we are talking about implementation and I agree.
> That would be my strategy only if I were to use FSAs, but I haven't
> many reasons to stick with this paradigm. I don't have many
> (cognitive) indications that this is the way our brain operates.
> So that's why I'm interested in your reasons to propose FSAs (other
> than the equivalence with a HLUT :-).
I am saying FSAs are not ruled out despite their formal equivalence
with HLUTs. What you propose is both reasonable and seems close
enough to an FSA to be emulated by one. Let's work together: you
come up with the correct model and I'll put it into an FSA. We
share the Nobel, but you will have had the harder job, I would say :-)
> >> Hence, what you all are proposing is something as mathematical
> >> (and in this instance, unuseful) as a definition without
> >> consequences. The HLUT you're referring contains all possible
> >
> >We have been proposing to model the mind as a FSA.
>
> Is it a single FSA? I'm skeptical of this approach. As I said, a
> collection of FSAs running in parallel may obtain more of my
> acceptance, although even then it would not be my preferred
> solution, because too much centered on "states" and "paths", and
> what I think more plausible is centered around progressive
> perceptual discrimination and parallel spreading of activations.
Sorry, I should have said "I am proposing to view the mind as an
FSA at its low-level implementation". Of course, there is no need to
stick to this level.
[snip]
> About the HLUT, I agree. I would say that the HLUT is a fluorescent
> and blinking red herring, if you know what I mean :-). But I think
> the greatest problem of the HLUT is that it leads to the FSA as a
> tentative solution just because the latter can be reduced to
> the former.
You are right. There are independent arguments for the FSA view.
But our argument was the other way around: the FSA view is not
ruled out just because it can be 'augmented' to a HLUT.
> It is important to note that *only* a fully loaded HLUT will be capable
> of the intelligent behavior that we're talking about. A partial HLUT
> cannot be guaranteed to work. In fact, depending on where the
That is why we have never considered partial HLUTs. I don't
get the point of partial HLUTs. This is like objecting to a TM
computing Pi because you don't want any holes in the resulting
decimal expansion. Writing down the first 10^(10^10) digits
in the HLUT is quite a feat, but building an FSA for doing this
is trivial. You can think of a similar example for DeepBlue.
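For what it is worth, here is Gibbons' unbounded spigot for Pi in
Python (a compact generator, though strictly speaking not a
finite-state machine, since its integers grow without bound); a
dozen lines stand in for a table of as many digits as you care to
write down:

    def pi_digits():
        # Gibbons' unbounded spigot algorithm; yields 3, 1, 4, 1, 5, 9, ...
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < n*t:
                yield n
                # Emit digit n, then rescale the state by 10.
                q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
            else:
                # Consume one more term of the series.
                q, r, t, n, l, k = (q*k, (2*q + r)*l, t*l,
                                    (q*(7*k + 2) + r*l) // (t*l), l + 2, k + 1)

    gen = pi_digits()
    print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]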
> "holes" are in the partial HLUT, it would fail miserably. That
> brings me to the thought that only a fully developed FSA will be
> intelligent. A partial FSA (and it should be *very* partial, if
> we want to make it work using the matter in this universe) will
> certainly also fail.
I don't see why you think the FSA has to be so big.
> So in this regard you seem to be using the HLUT as an axiom to support
> the development of the FSA strategy (I really may be misunderstanding
> what you'd proposed, please correct me if I'm wrong). But this is like
Yes. I have been correcting you above. I agree with what follows. I hope
we are getting closer.
[snip]
Regards,
Pierre-Normand Houle
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 13 Mar 1999 00:00:00 GMT
Message-ID: <7ce4e7$5nv@ux.cs.niu.edu>
References: <36e6e2c7@news3.us.ibm.net>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
"Sergio Navega" <snavega@ibm.net> writes:
>houlepn@my-dejanews.com wrote in message
><7c6fu9$6k2$1@nnrp1.dejanews.com>...
>>The second point is that these discussions about HLUTs came from
>>somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
>>formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
>>human learning behavior.
I did not say it in that form. However, I happen to agree with the
conclusion, that an FSA cannot adequately model human learning
behavior.
>>This is a false implication not only for
>>the extravagant 'Sergio HLUT' but also for any hypothetical AI
>>entity emulating 'Sergio level' abilities on a FSA.
Perhaps you have an argument as to its falsity? Perhaps you have an
argument that you could successfully emulate 'Sergio level'
activities, particularly those activities we consider intelligent, on
an FSA. I haven't seen a persuasive argument to those conclusions.
Evidently we disagree on what constitutes learning and what
constitutes intelligence. I see them as both having to do with our
relationship with the world, rather than with formal operations that
can be emulated on an FSA.
>I'm kind of concerned about where this will lead us, and I would like,
>if you agree, to ask you to refine your arguments for the FSA.
>For instance, in which way do you think a *single* FSA, no matter
>how complex, will be able to correspond to the intelligent behavior
>of a human? I, for one, don't think it is possible.
I agree with Sergio, in that I would like to see the argument.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7c0rmj$rbb@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>>More than that; for each *possible* history of inputs
>>for a human being, the HLUT gives an action that is an
>>intelligent response. So it contains much more information
>>than the history of i/o pairs for any one human's lifetime.
>>
>
>This is equivalent to think that the universe is deterministic
Not at all! Suppose that I'm rolling a six-sided die. Beforehand,
I make a plan: if I roll a 1, I'll do A, if I roll a 2, I'll do B,
...
This lookup table describing what I'll do for each possible
outcome of a die roll does *not* assume that the die is
deterministic.
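In code, the plan is nothing but a tiny lookup table (the actions
here are made up for illustration):

    import random

    # One entry per possible outcome; the table predicts nothing
    # about *which* outcome the nondeterministic die will produce.
    plan = {1: "do A", 2: "do B", 3: "do C",
            4: "do D", 5: "do E", 6: "do F"}

    roll = random.randint(1, 6)  # genuinely unknown in advance
    print(plan[roll])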
[stuff deleted]
>Daryl, do you understand that what you're proposing would only
>be possible if the universe was deterministic?
That's completely false. Determinism has *nothing* to do with it.
>If this HLUT, by definition, will behave like Sergio and if
>this HLUT is able to behave like Sergio if it runs again, then
>that means that this HLUT have, as entries, all possible
>conditions of all possible atoms in the universe in all
>possible positions, velocities, interaction with each other,
>etc. Is that right?
No, that is not right. The entries don't correspond to states
of the world, they correspond to histories of inputs to Sergio.
Rather than saying "If the world is in state w, do x"
the HLUT says "If Sergio sees pattern p, and hears pattern q,
and smells pattern r, and tastes pattern s, then do x".
You talk about making a decision about whether to go outside
or stay in, based on the weather. Surely, the total pattern
of inputs to our senses is different in the two cases; we
heard a weather report predicting rain, or we looked outside
and saw that it was raining, or we stuck our hand out the door
and felt rain. The decision about whether to go outside or not
is a function of the sensory inputs we have received in the
past. We don't need the HLUT to record anything about the
states of the molecules in clouds, we only need it to record
the history of sensory inputs received so far.
Let me describe how a hypothetical robot might behave
intelligently.
At any given time, the robot's state is described
by 4 items:
1. At any time, the robot has a current model for
the world. Let's assume it is a probabilistic model:
it contains the possible states of the world,
probabilities for those states, rules for how
states and probabilities change with time, etc.
Naturally, this model will lack the details of
the real world, but hopefully it is good enough.
2. The robot has a model of how the state
of the world affects the robot's sensory inputs.
We can assume that this is a probabilistic model,
as well.
3. The robot has a model for itself, what actions
it can take, and how those actions might affect
the world. This model is probabilistic, too.
4. The robot has an evolving set of goals, things
that it wishes to accomplish.
We can break up the intelligent processing for this
hypothetical robot into several steps:
1. The world is in some state w.
2. The robot receives input i.
3. Based on that input, the robot adjusts
his various models of the world and his
relationship to it.
4. The robot decides on a course of action that is
appropriate for his current understanding of the
world, and his current goals. The course of
action it decides on is not only appropriate for
the most likely state of the world (in his model),
but is also for other world states that have
significant probability.
5. Based on the course of action decided upon, the robot
makes an output o.
6. The robot receives more inputs, adjusts his models,
adjusts his course of action, and makes more outputs.
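Here is a rough sketch of that loop in Python (the models are crude
stand-ins invented for illustration, not a worked-out design):

    import random

    class Robot:
        def __init__(self):
            self.world_model = {"rain": 0.5}  # toy state -> probability
            self.goals = ["stay dry"]

        def update_models(self, observation):
            # Step 3: nudge the model toward the evidence (invented rule).
            p = self.world_model["rain"]
            self.world_model["rain"] = 0.9 * p + 0.1 * observation["wet"]

        def choose_action(self):
            # Step 4: act well against all sufficiently probable states.
            return ("take umbrella" if self.world_model["rain"] > 0.3
                    else "go out")

    robot = Robot()
    for _ in range(3):                  # step 6: keep cycling
        obs = {"wet": random.random()}  # steps 1-2: world state, input
        robot.update_models(obs)        # step 3
        print(robot.choose_action())    # steps 4-5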
Now, there is absolutely nothing in this description of things
that depends on the world being deterministic. There is nothing
that assumes that the robot has a detailed model of the
universe, down to the level of the locations of individual
atoms.
But any such robot can be implemented by an HLUT. Regardless
of what complexity goes on inside the robot, adjusting models,
goals, courses of actions, etc., the external behavior of the
robot will have a simple form: for every sequence <i0,i1,i2,...iN>
of inputs, there is one or more possible outputs o. If the
set of all possible pairs (input sequence, output) of length
less than some maximum (corresponding to the expected lifetime
of the robot) is contained in an HLUT, then the above robot
will be equivalent to one that does the following:
1. Receive input i.
2. Concatenate it onto the list of inputs received so far.
3. Using that list as an index, lookup the associated output
(if there is more than one, just pick one).
4. Make that output.
5. Go to 1.
(Small point: if the original robot was nondeterministic, then
the HLUT needs to be indexed by both the inputs received so far
and also the outputs made so far.)
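As literal Python, the loop looks like this (the table is filled in
by hand for a toy alphabet, only to show the shape of the thing):

    # Toy HLUT: maps a whole input history, as a tuple, to an output.
    HLUT = {
        ("a",): "x",
        ("a", "b"): "y",
        ("a", "b", "a"): "z",
    }

    history = []
    for i in ["a", "b", "a"]:      # 1. receive input i
        history.append(i)          # 2. concatenate onto the history
        o = HLUT[tuple(history)]   # 3. look up, using the history as index
        print(o)                   # 4. make that output; 5. repeat

    # Nondeterministic variant: key on both histories and pick any entry,
    # e.g. HLUT2[(inputs_so_far, outputs_so_far)] -> set of possible outputs.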
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e437a1@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7c0rmj$rbb@edrn.newsguy.com>
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c0rmj$rbb@edrn.newsguy.com>...
>Sergio says...
>
>>>More than that; for each *possible* history of inputs
>>>for a human being, the HLUT gives an action that is an
>>>intelligent response. So it contains much more information
>>>than the history of i/o pairs for any one human's lifetime.
>>>
>>
>>This is equivalent to think that the universe is deterministic
>
>Not at all! Suppose that I'm rolling a six-sided die. Beforehand,
>I make a plan: if I roll a 1, I'll do A, if I roll a 2, I'll do B,
>...
>
>This lookup table describing what I'll do for each possible
>outcome of a die roll does *not* assume that the die is
>deterministic.
>
This will not work if the die you're rolling is one in which you
don't know *how many possibilities* there are (I'd say that most
things in our life work this way: we must see what happened to
establish what to do). You can't know beforehand what will happen
when you direct a specially modulated laser beam toward the DNA in
a cell. You may win the Nobel prize because of this, or you may end
up with burnt tissue. A HLUT will never win a Nobel prize.
>[stuff deleted]
>
>>Daryl, do you understand that what you're proposing would only
>>be possible if the universe was deterministic?
>
>That's completely false. Determinism has *nothing* to do with it.
>
It has to do with it if you claim that a HLUT may present intelligent
behavior. Don't think about past issues, think about the future.
How will a HLUT handle the future? Do you understand that to
be able to do well in the future, there are only *two* possibilities:
the HLUT knows beforehand all things that may come from it OR
the intelligent brain knows how to fit future experiences in
terms of past categorizations, and then it uses the methods
that *worked well in the past* and learns a new method when
things don't go right.
>>If this HLUT, by definition, will behave like Sergio and if
>>this HLUT is able to behave like Sergio if it runs again, then
>>that means that this HLUT have, as entries, all possible
>>conditions of all possible atoms in the universe in all
>>possible positions, velocities, interaction with each other,
>>etc. Is that right?
>
>No, that it not right. The entries don't correspond to states
>of the world, they correspond to histories of inputs to Sergio.
>
>Rather than saying "If the world is in state w, do x"
>the HLUT says "If Sergio sees pattern p, and hears pattern q,
>and smells pattern r, and tastes pattern s, then do x".
>
For such a thing to work, you'll have to agree with me that the
HLUT would need entries with all combinations of p, q, r, s and
x and a complete log of all previous runs. I agree that this is
large, but finite. What I'm saying is that this is equivalent to
having the entire universe (position of atoms, velocities, spins,
quarks, etc) in such a HLUT. And why is this? Because I may be
affected (meaning, the HLUT will have to take it into account) by
something that occurred billions of years ago. I wrote about
this in another post, tell me if you need another example.
>You talk about making a decision about whether to go outside
>or stay in, based on the weather. Surely, the total pattern
>of inputs to our senses is different in the two cases; we
>heard a weather report predicting rain, or we looked outside
>and saw that it was raining, or we stuck our hand out the door
>and felt rain. The decision about whether to go outside or not
>is a function of the sensory inputs we have received in the
>past. We don't need the HLUT to record anything about the
>states of the molecules in clouds, we only need it to record
>the history of sensory inputs received so far.
>
Your decision to go for a walk is not a function only of your
sensory inputs of the moment. It is also a reflection of your previous
experiences. Now you will hurry to tell me that the vector H
used to address the HLUT *also* takes into consideration all past
entries of one's life. Ok, fine, that's an instance of a HLUT made
of *recorded* previous experiences. I know that such a beast
exists.
What I'm saying (please, note the difference) is that *in another
run*, this HLUT will not be able to provide "intelligent" behavior
because the world it's in *now* is different from the world the
HLUT was built *before*. The old experiences do not mean much now.
Then you say to me that you'll take this new HLUT and meld it with
the previous one to build another HLUT. And I say that this new HLUT
will not be able to present intelligent behavior because the new
experiences it will be subject to are different. Where will it
end? Only when the HLUT contains all the knowledge of all the
positions, velocities, spins, geez, everything in the universe!
The HLUT must predict the whole thing, or it will not be
intelligent. Even then, according to my notion of the word
"intelligence", this HLUT will not be one of them.
>Let me describe how a hypothetical robot might behave
>intelligently.
>
>At any given time, the robot's state is described
>by 4 items:
>
> 1. At any time, the robot has a current model for
> the world. Let's assume it is a probabilistic model:
> it contains the possible states of the world,
> probabilities for those states, rules for how
> states and probabilities change with time, etc.
> Naturally, this model will lack the details of
> the real world, but hopefully it is good enough.
> 2. The robot has a model of how the state
> of the world affects the robot's sensory inputs.
> We can assume that this is a probabilistic model,
> as well.
> 3. The robot has a model for itself, what actions
> it can take, and how those actions might affect
> the world. This model is probabilistic, too.
> 4. The robot has an evolving set of goals, things
> that it wishes to accomplish.
>
I'm glad you wrote this, Daryl. Because this puts the important
points in front of us. When you mention the word "probabilistic",
that means we group some events under the same ceiling: we're
doing categorization, reducing the complexity of something and
putting a compacted model in place of it. We're thinking about
causal models and ways to allow the robot to predict its
world. This is the way to go. But this is not a HLUT.
>We can break up the intelligent processing for this
>hypothetical robot into several steps:
>
> 1. The world is in some state w.
> 2. The robot receives input i.
> 3. Based on that input, the robot adjusts
> his various models of the world and his
> relationship to it.
> 4. The robot decides on a course of action that is
> appropriate for his current understanding of the
> world, and his current goals. The course of
> action it decides on is not only appropriate for
> the most likely state of the world (in his model),
> but is also for other world states that have
> significant probability.
> 5. Based on the course of action decided upon, the robot
> makes an output o.
> 6. The robot receives more inputs, adjusts his models,
> adjusts his course of action, and makes more outputs.
>
>Now, there is absolutely nothing in this description of things
>that depends on the world being deterministic. There is nothing
>that assumes that the robot has a detailed model of the
>universe, down to the level of the locations of individual
>atoms.
Fine! Ok, I agree! This model does not demand that the universe
be deterministic. I would propose a different set of operations,
but I have no problem, in principle, with your approach. But
you have to agree with me that this is *not* reducible to a
HLUT.
I would like to say that we can easily find some "points" to
change in the basic HLUT idea in such a way that it can present
some kinds of intelligent behavior. What I've been saying for quite
some posts now is that without the use of special "techniques",
without the use of "tricks" (such as categorization), a HLUT
will not be intelligent.
>
>But any such robot can be implemented by an HLUT. Regardless
>of what complexity goes on inside the robot, adjusting models,
>goals, courses of actions, etc., the external behavior of the
>robot will have a simple form: for every sequence <i0,i1,i2,...iN>
>of inputs, there is one or more possible outputs o. If the
>set of all possible pairs (input sequence, output) of length
>less than some maximum (corresponding to the expected lifetime
>of the robot) is contained in an HLUT, then the above robot
>will be equivalent to one that does the following:
>
> 1. Received input i.
> 2. Concatenate it onto the list of inputs received so far.
> 3. Using that list as an index, lookup the associated output
> (if there is more than one, just pick one).
> 4. Make that output.
> 5. Go to 1.
>
>(Small point: if the original robot was nondeterministic, then
>the HLUT needs to be indexed by both the inputs received so far
>and also the outputs made so far.)
>
Oh, Daryl, back to the drawing board. This is not the same as the
probabilistic/causal model you presented earlier.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7c1hjn$3fu@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7c0rmj$rbb@edrn.newsguy.com> <36e437a1@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>
>Daryl McCullough wrote in message <7c0rmj$rbb@edrn.newsguy.com>...
>>Not at all! Suppose that I'm rolling a six-sided die. Beforehand,
>>I make a plan: if I roll a 1, I'll do A, if I roll a 2, I'll do B,
>>...
>>
>>This lookup table describing what I'll do for each possible
>>outcome of a die roll does *not* assume that the die is
>>deterministic.
>>
>
>
>This will not work if the die you're rolling is one in which you
>don't know *how many possibilities* there are
In the HLUT case, we *do* know how many possibilities there
are. We know the "alphabet" of sensory inputs (seeing, hearing,
taste, touch, smell). Given finite resolution, there are only
finitely many different patterns possible in a finite lifetime.
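(A back-of-the-envelope version of the count, with invented numbers:

    # Say each tick of experience is one of k distinguishable patterns
    # and a lifetime lasts N ticks; then there are at most k**N histories.
    import math
    k = 10**6   # patterns per tick (invented resolution)
    N = 10**9   # ticks per lifetime (invented)
    print(math.log10(k) * N)  # ~6e9 digits in k**N: huge, but finite

The point is only that the bound is finite, not that it is small.)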
>>[stuff deleted]
>>
>>>Daryl, do you understand that what you're proposing would only
>>>be possible if the universe was deterministic?
>>
>>That's completely false. Determinism has *nothing* to do with it.
>>
>
>It has to do with it if you claim that a HLUT may present intelligent
>behavior. Don't think about past issues, think about the future.
>How will a HLUT handle the future?
By assumption, it will handle the future exactly like you would.
>Do you understand that to be able to do well in the future,
>there are only *two* possibilities:
>the HLUT knows beforehand all things that may come from it
It *does*! There are only finitely many different possible
experiences a person can have in one lifetime. The HLUT includes
appropriate responses for all of them.
>>No, that is not right. The entries don't correspond to states
>>of the world, they correspond to histories of inputs to Sergio.
>>
>>Rather than saying "If the world is in state w, do x"
>>the HLUT says "If Sergio sees pattern p, and hears pattern q,
>>and smells pattern r, and tastes pattern s, then do x".
>>
>
>For such a thing to work, you'll have to agree with me that the
>HLUT would need entries with all combinations of p, q, r, s and
>x and a complete log of all previous runs. I agree that this is
>large, but finite. What I'm saying is that this is equivalent to
>having the entire universe (position of atoms, velocities, spins,
>quarks, etc) in such a HLUT.
No, it is not.
>And why is this? Because I may be
>affected (meaning, the HLUT will have to take it into account) by
>something that occurred billions of years ago. I wrote about
>this in another post, tell me if you need another example.
Your example is irrelevant. It doesn't matter what *caused*
the input, the only thing that matters is how Sergio responds
to it.
>Your decision to go for a walk is not a function only of your
>sensory inputs of the moment. It is also a reflection of your previous
>experiences.
Yes. All those previous experiences are part of the input
history fed to the HLUT!
>Now you will hurry to tell me that the vector H
>used to address the HLUT *also* takes into consideration all past
>entries of one's life.
Yes, exactly!
>What I'm saying (please, note the difference) is that *in another
>run*, this HLUT will not be able to provide "intelligent" behavior
>because the world it's in *now* is different from the world the
>HLUT was built *before*. The old experiences do not mean much now.
Your behavior is only affected by the world through your experiences.
>Then you say to me that you'll take this new HLUT and meld it with
>the previous one to build another HLUT.
No! The HLUT *already* contains all possible input histories!
I don't have to meld it with another HLUT. The one HLUT covers
all possibilities.
>Where will it
>end? Only when the HLUT contains all the knowledge of all the
>positions, velocities, spins, geez, everything in the universe!
No. It ends when the HLUT contains an entry for every possible
input history that a human could ever receive in one lifetime.
>The HLUT must predict the whole thing, or it will not be
>intelligent.
That's not true. All that it needs to do is to predict how
a human would respond to hypothetical situations (described
in terms of input histories). It doesn't need to predict
*which* input histories the human will receive (which would
indeed require knowledge of the whole universe), it just
needs to be prepared for any possibility.
>>Let me describe how a hypothetical robot might behave
>>intelligently.
>>
>>At any given time, the robot's state is described
>>by 4 items:
>>
>> 1. At any time, the robot has a current model for
>> the world. Let's assume it is a probabilistic model:
>> it contains the possible states of the world,
>> probabilities for those states, rules for how
>> states and probabilities change with time, etc.
>> Naturally, this model will lack the details of
>> the real world, but hopefully it is good enough.
>> 2. The robot has a model of how the state
>> of the world affects the robot's sensory inputs.
>> We can assume that this is a probabilistic model,
>> as well.
>> 3. The robot has a model for itself, what actions
>> it can take, and how those actions might affect
>> the world. This model is probabilistic, too.
>> 4. The robot has an evolving set of goals, things
>> that it wishes to accomplish.
>>
>
>I'm glad you wrote this, Daryl. Because this puts the important
>points in front of us. When you mention the word "probabilistic",
>that means we group some events under the same ceiling: we're
>doing categorization, reducing the complexity of something and
>putting a compacted model in place of it. We're thinking about
>causal models and ways to allow the robot to predict its
>world. This is the way to go. But this is not a HLUT.
It's provably equivalent to an HLUT. *Any* program can be
converted into an equivalent HLUT. It's an elementary fact
about computer programs.
>>We can break up the intelligent processing for this
>>hypothetical robot into several steps:
>>
>> 1. The world is in some state w.
>> 2. The robot receives input i.
>> 3. Based on that input, the robot adjusts
>> his various models of the world and his
>> relationship to it.
>> 4. The robot decides on a course of action that is
>> appropriate for his current understanding of the
>> world, and his current goals. The course of
>> action it decides on is not only appropriate for
>> the most likely state of the world (in his model),
>> but is also for other world states that have
>> significant probability.
>> 5. Based on the course of action decided upon, the robot
>> makes an output o.
>> 6. The robot receives more inputs, adjusts his models,
>> adjusts his course of action, and makes more outputs.
>>
>>Now, there is absolutely nothing in this description of things
>>that depends on the world being deterministic. There is nothing
>>that assumes that the robot has a detailed model of the
>>universe, down to the level of the locations of individual
>>atoms.
>
>Fine! Ok, I agree! This model does not demand that the universe
>be deterministic. I would propose a different set of operations,
>but I have no problem, in principle, with your approach. But
>you have to agree with me that this is *not* reducible to a
>HLUT.
No, I do not agree. Every computer program with a finite
lifetime is equivalent in behavior to an HLUT.
>>But any such robot can be implemented by an HLUT. Regardless
>>of what complexity goes on inside the robot, adjusting models,
>>goals, courses of actions, etc., the external behavior of the
>>robot will have a simple form: for every sequence <i0,i1,i2,...iN>
>>of inputs, there is one or more possible outputs o. If the
>>set of all possible pairs (input sequence, output) of length
>>less than some maximum (corresponding to the expected lifetime
>>of the robot) is contained in an HLUT, then the above robot
>>will be equivalent to one that does the following:
>>
>>     1. Receive input i.
>>     2. Concatenate it onto the list of inputs received so far.
>>     3. Using that list as an index, look up the associated output
>>        (if there is more than one, just pick one).
>>     4. Make that output.
>>     5. Go to 1.
>>
>>(Small point: if the original robot was nondeterministic, then
>>the HLUT needs to be indexed by both the inputs received so far
>>and also the outputs made so far.)
>>
>
>Oh, Daryl, back to the drawing board. This is not the same as the
>probabilistic/causal model you presented earlier.
Yes, it is. They are equivalent! This is a simple mathematical
fact---any program that runs for only a finite period of time is
equivalent (in behavior) to a finite state machine, and any
finite state machine that runs for only a finite period of
time is equivalent to an HLUT!
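(To make the quoted five-step lookup loop concrete, here is a minimal
Python sketch; the toy table and inputs are hypothetical, and a
nondeterministic variant would index on past outputs as well, as the
quoted "small point" notes.)

  def run_hlut(hlut, inputs):
      # The quoted loop: receive input, concatenate onto the history,
      # use the history as an index, emit the associated output.
      history = []
      for i in inputs:                 # 1. receive input i
          history.append(i)            # 2. concatenate onto inputs so far
          yield hlut[tuple(history)]   # 3-4. look up and make that output
                                       # 5. go to 1 (next iteration)

  # Toy table covering histories of length <= 2 over inputs {0, 1}:
  toy = {(0,): "a", (1,): "b",
         (0, 0): "c", (0, 1): "d", (1, 0): "e", (1, 1): "f"}
  print(list(run_hlut(toy, [0, 1])))   # -> ['a', 'd']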
Daryl
From: andersw+@pitt.edu (Anders N Weinstein)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>
References: <7bkot5$qk5@ux.cs.niu.edu> <7c0rmj$rbb@edrn.newsguy.com>
<36e437a1@news3.us.ibm.net> <7c1hjn$3fu@edrn.newsguy.com>
Organization: University of Pittsburgh
Newsgroups: comp.ai.philosophy
In article <7c1hjn$3fu@edrn.newsguy.com>,
Daryl McCullough <daryl@cogentex.com> wrote:
>Sergio says...
>>
>>Daryl McCullough wrote in message <7c0rmj$rbb@edrn.newsguy.com>...
>
>In the HLUT case, we *do* know how many possibilities there
>are. We know the "alphabet" of sensory inputs (seeing, hearing,
>taste, touch, smell). Given finite resolution, there are only
>finitely many different patterns possible in a finite lifetime.
I would recommend not calling this an input "alphabet". Functionally,
the human brain may be best understood in some of its functions as an
analog device transforming continuously varying input signals to
continuously varying outputs. Most physical devices are normally
treated as instances of this form. In the human case, the task of
keeping your balance as you walk, of dancing to a given rhythm or
simply repeating a sound in a higher key might be accomplished by
analog transformation. There need be no discrete "input alphabet"
naturally defined for any of these.
What there may be is only *your* idea *as a theorist* that you could,
if you wished, use an Analog/Digital converter to convert the analog
input to a digital stream, use a digital system on the input and use a
digital/analog converter on the output side to accomplish a similar
input output mapping with a much more complicated discrete state
machine working in the middle.
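(That theorist's move can be pictured in a few lines of Python. This
is a sketch under assumed parameters -- an 8-bit converter and a
unit-amplitude signal -- and of course no claim about how brains
actually work.)

  import math

  BITS = 8                  # assumed converter resolution
  LEVELS = 2 ** BITS

  def a_to_d(x):
      # Quantize an analog value in [-1.0, 1.0] to one of LEVELS codes.
      return min(LEVELS - 1, int((x + 1.0) / 2.0 * LEVELS))

  def d_to_a(code):
      # Reconstruct an approximate analog value from a code.
      return (code + 0.5) / LEVELS * 2.0 - 1.0

  signal = [math.sin(2 * math.pi * t / 100.0) for t in range(100)]
  codes = [a_to_d(x) for x in signal]     # the imposed "input alphabet"
  restored = [d_to_a(c) for c in codes]
  print(max(abs(x - y) for x, y in zip(signal, restored)))  # quant. error

The discrete alphabet exists only because the converter imposes it;
make BITS large enough and the discrete mapping approximates the analog
one, which is exactly the theoretical imposition described above.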
>It *does*! There are only finitely many different possible
>experiences a person can have in one lifetime. The HLUT includes
I don't see that this is true. You are using the non-physical
term "experience" here. You have not said how the space of possible
experiences varies with the space of possible "inputs". This is not
a simple matter.
Remember, there are some experiences that are not simply correlated
with low-level sensory features of the input signal. For example,
the experience of a sound as coming from a certain location is
not correlated with sensations of pitch or loudness. As JJ Gibson
said, it is a sensation-less experience. It co-varies with complex
relational features of total stimulation, i.e. some function defined
over the total physical stimulation.
More generally, we can have the experiences of finding a joke funny or
cliched, or offensive to our sensibilities, or even morally repugnant.
The individuation of "experiences" is not simple. The nature of the
*experiences* you have may be determined by your acculturation.
It is even possible that the individuation of "experiences" as we
ordinarily use the term is in part relationally determined, so that it
is not even determined by *total* brain configuration. At any rate
that seems to me to be an open question. I would not presuppose that
ordinary psychological descriptions supervene on brain configurations
described in non-psychological terms.
So I think you should be careful throwing around the ordinary term
"experiences"; you have not done any work to entitle yourself to use it.
>>run*, this HLUT will not be able to provide "intelligent" behavior
>>because the world it's in *now* is different from the world the
>>HLUT was built *before*. The old experiences do not mean much now.
>
>Your behavior is only affected by the world through your experiences.
Innumerable things affect your behavior besides your experiences: the
presence of LSD in the bloodstream, for example. More basically, almost
all of the neural development that affects your behavior lies outside
of your experiences.
You really should be careful using a mentalistic word for non-mental
events. It gives the illusion that you are saying something about
psychology. But you are not. In fact, you are hardly saying
anything about anything.
>>The HLUT must predict the whole thing, or it will not be
>>intelligent.
>
>That's not true. All that it needs to do is to predict how
>a human would respond to hypothetical situations (described
>in terms of input histories). It doesn't need to predict
Again, "responding to situations" is ordinary language, language
you are not entitled to. A total stimulation is not a "situation".
In ordinary language it is possible that two brains on whose surfaces
are sprayed congruent patterns of physical energy are nevertheless
responding to different (real-world) situations.
>>Fine! Ok, I agree! This model does not demand that the universe
>>be deterministic. I would propose a different set of operations,
>>but I have no problem, in principle, with your approach. But
>>you have to agree with me that this is *not* reducible to an
>>HLUT.
>
>No, I do not agree. Every computer program with a finite
>lifetime is equivalent in behavior to an HLUT.
Sure, but I would suggest not every system is [usefully explained as]
a computer program. In particular, a human being is not obviously
a computer program.
There may be no useful definition of a discrete input and output
alphabet for a human being, for example.
In the ordinary sense human behavior is described in what are not
obviously digital terms. Again, the digitization is only your
theoretical imposition, which is no more suitable for human behavior
than for the behavior of a hurricane.
And there are even descriptions like "he mailed his rent check", which
characterize behavior in socially constituted terms. (You cannot mail your
rent check if you are not in a suitable social world, with ownership,
rent, money, and check-writing.)
Just as you are not entitled to use the ordinary term "experience",
I think you are not entitled to use the ordinary term "behavior".
I suspect you are trying to connect the mathematical existence of something
fitting a possibly screwy and useless conceptualization of human behavior
as digital to something *real* -- psychology, or at least some science, or
even worse, everyday characterizations in terms of "experience"
and "behavior". But to make it stick you should really eschew these
terms to which you are not entitled.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36e53c85@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <7c0rmj$rbb@edrn.newsguy.com>
<36e437a1@news3.us.ibm.net> <7c1hjn$3fu@edrn.newsguy.com>
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 9 Mar 1999 15:21:41 GMT, 166.72.21.89
Organization: SilWis
Newsgroups: comp.ai.philosophy
Anders N Weinstein wrote in message
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>...
>In article <7c1hjn$3fu@edrn.newsguy.com>,
>Daryl McCullough <daryl@cogentex.com> wrote:
>
>>It *does*! There are only finitely many different possible
>>experiences a person can have in one lifetime. The HLUT includes
>
>I don't see that this is true. You are using the non-physical
>term "experience" here. You have not said how the space of possible
>experiences varies with the space of possible "inputs". This is not
>a simple matter.
>
>Remember, there are some experiences that are not simply correlated
>with low-level sensory features of the input signal. For example,
>the experience of a sound as coming from a certain location is
>not correlated with sensations of pitch or loudness. As JJ Gibson
>said, it is a sensation-less experience. It co-varies with complex
>relational features of total stimulation, i.e. some function defined
>over the total physical stimulation.
>
>More generally, we can have the experiences of finding a joke funny or
>cliched, or offensive to our sensibilities, or even morally repugnant.
>The individuation of "experiences" is not simple. The nature of the
>*experiences* you have may be determined by your acculturation.
>
Anders, as much as I agree with your comments, they will not be
enough to counterbalance Daryl's proposition. He is claiming that
the HLUT has all kinds of possible experiences, with all possible
thoughts one might have due to all sorts of inputs in all sorts
of sequences, exhibiting all sorts of outputs. That, obviously,
would make a googolplex look like an atom compared to
the Andromeda galaxy. But it is "finite" in a purely mathematical
and philosophical sense. This is a coin with only one face, and
Daryl is betting that when flipped it will land face up.
What should be discussed is the merit of thinking that way
instead of reading Gibson and all the other cognitive scientists.
Regards,
Sergio Navega.
From: andersw+@pitt.edu (Anders N Weinstein)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu>
References: <7bkot5$qk5@ux.cs.niu.edu> <7c1hjn$3fu@edrn.newsguy.com>
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu> <36e53c85@news3.us.ibm.net>
Organization: University of Pittsburgh
Newsgroups: comp.ai.philosophy
In article <36e53c85@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>Anders N Weinstein wrote in message
><7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>...
>>In article <7c1hjn$3fu@edrn.newsguy.com>,
>>Daryl McCullough <daryl@cogentex.com> wrote:
>
>Anders, as much as I agree with your comments, they will not be
>enough to counterbalance Daryl's proposition. He is claiming that
>the HLUT has all kinds of possible experiences, with all possible
>thoughts one might have due to all sorts of inputs in all sorts
>of sequences, exhibiting all sorts of outputs. That, obviously,
>would make a googolplex look like an atom compared to
>the Andromeda galaxy. But it is "finite" in a purely mathematical
>and philosophical sense.
That is what he thinks. I believe one can question this claim. At any
rate it should not be treated as obvious. It is something one does not
say of any arbitrary physical system, for example, since it is not
obvious even for a purely physical system involving mutually interacting
feedback loops by which it simultaneously modifies and adapts to the
physical environment in which it operates. So it should not be taken
to follow simply from the fact that a human being is a physical system
subject to physical law that there is an adequate finite delimitation
of the space of possibilities.
And it is *definitely* non-obvious when psychological descriptions are
used.
The notion of "all possible experiences" is not well defined and is not
a purely mathematical notion -- it uses the everyday psychological
concept of "experiences". The notion of "all possible behavior" is not
well defined and is not a purely mathematical notion -- it uses the
everyday concept of "behavior".
Moreover, the idea that there are "inputs" and "outputs" and that the
whole function of mind can be reduced to that of transforming "inputs"
to outputs in a state-determined way is also tendentious, since not
every physical system is ordinarily described in these terms. As
Daryl defined it, e.g. the effects of chemicals, nutrients, magnetic
fields must be taken to be "inputs", even though these are not
considered "experiences" and indeed are not usually considered part of
psychology.
So even if your own misunderstanding has been cleared up, I still think
you are conceding too much to Daryl.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c3o0c$1at@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <7c1hjn$3fu@edrn.newsguy.com>
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu> <36e53c85@news3.us.ibm.net>
<7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
andersw+@pitt.edu (Anders Weinstein) says...
>That is what he thinks. I believe one can question this claim. At any
>rate it should not be treated as obvious.
It's not obvious, just true. 8^)
>It is something one does not say of any arbitrary physical system,
It could just as well be said of any arbitrary physical system.
It's not a useful thing to say, because if the number
of states is too big, it becomes impractical to enumerate them
all.
>So it should not be taken to follow simply from the fact that
>a human being is a physical system subject to physical law that
>there is an adequate finite delimitation of the space of
>possibilities.
I believe the corresponding claim for any physical system.
The space of distinguishable states for a system of finite
energy, volume, number of particles, etc. is finite. This
fact is used in statistical mechanics, where entropy is
defined as
S = log W
where W = number of possible microstates corresponding to
the same macrostate (as defined by bulk parameters such as
temperature, volume, pressure, etc.).
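(As a toy illustration of that counting -- my example, not Daryl's --
take N two-state spins; the macrostate "k spins up" is compatible with
W = C(N,k) microstates, and S = log W in units where Boltzmann's
constant is 1.)

  import math

  N = 100                       # two-state "spins" (assumed toy system)
  for k in (0, 25, 50):         # macrostate: number of spins pointing up
      W = math.comb(N, k)       # microstates compatible with macrostate
      S = math.log(W) if W > 1 else 0.0   # entropy, with k_B = 1
      print(k, W, round(S, 2))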
>The notion of "all possible experiences" is not well defined and is not
>a purely mathematical notion -- it uses the everyday psychological
>concept of "experiences".
Speaking of quantifying over experiences is, I agree, loose talk.
What I really mean is that we quantify over all parameters that
contribute to subjective experience, namely, states of the brain,
and interactions with the world.
>The notion of "all possible behavior" is not well defined and
>is not a purely mathematical notion -- it uses the
>everyday concept of "behavior".
Once again, it is not necessary to have a precise notion of
behavior, if you are quantifying over all possible parameters
affecting behavior. Regardless of what you say a behavior is,
it has to be expressed through the medium of muscle movement,
since muscles are our only mechanism for interacting with the
world.
>Moreover, the idea that there are "inputs" and "outputs" and that the
>whole function of mind can be reduced to that of transforming "inputs"
>to outputs in a state-determined way is also tendentious, since not
>every physical system is ordinarily described in these terms.
That's the computationalist premise, that behavior is usefully
described in those terms. You certainly need not accept
that premise.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: andersw+@pitt.edu (Anders N Weinstein)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c44d5$5pt$1@usenet01.srv.cis.pitt.edu>
References: <7bkot5$qk5@ux.cs.niu.edu> <36e53c85@news3.us.ibm.net>
<7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu> <7c3o0c$1at@edrn.newsguy.com>
Organization: University of Pittsburgh
Newsgroups: comp.ai.philosophy
In article <7c3o0c$1at@edrn.newsguy.com>,
Daryl McCullough <daryl@cogentex.com> wrote:
>andersw+@pitt.edu (Anders Weinstein) says...
>
>>It is something one does not say of any arbitrary physical system,
>
>It could just as well be said of any arbitrary physical system.
>It's not a useful thing to say, because if the number
>of states is too big, it becomes impractical to enumerate them
>all.
>>So it should not be taken to follow simply from the fact that
>>a human being is a physical system subject to physical law that
>>there is an adequate finite delimitation of the space of
>>possibilities.
>
>I believe the corresponding claim for any physical system.
>The space of distinguishable states for a system of finite
>energy, volume, number of particles, etc. is finite. This
>fact is used in statistical mechanics, where entropy is
>defined as
>
> S = log W
>
>where W = number of possible microstates corresponding to
>the same macrostate (as defined by bulk parameters such as
>temperature, volume, pressure, etc.).
It's been a while since I studied physics, although that was my original
undergraduate major. I do however recall calculus being used. For example,
Maxwell's equations that some people wear on a T-shirt are differential
equations governing fields. But even so simple a law as that angle of
incidence equals angle of reflection involves attributing an infinite
disposition to a physical object. And even quantum mechanics does
not quantize everything, for example it does not quantize spatial
position and time. (I have heard of some speculative theories that take
that step, but have not heard they were widely adopted.)
If you want to say that science has shown that the universe in itself is
at bottom one big discrete-state system, I had not read this in the
Science Times, but I would defer to your greater expertise on these
results if you could in fact demonstrate entitlement to that authority.
But then you could have just cited this evidently well-known result and
I would have to stop arguing.
Although it is still true that you are moving from a micro-physical
level to a higher-level description when you talk about "experiences"
or psychology or learning. My impression is that there is some thriving
research using analog models of neuronal function, and that the possible
existence of a discrete-state approximation at some atomic level is as
beside the point as it is to explaining the function of the lens in the
eye.
There is also the question of: at what point does the stimulus start
to become "input" to the mind?
>>The notion of "all possible experiences" is not well defined and is not
>>a purely mathematical notion -- it uses the everyday psychological
>>concept of "experiences".
>
>Speaking of quantifying over experiences is, I agree, loose talk.
>What I really mean is that we quantify over all parameters that
>contribute to subjective experience, namely, states of the brain,
>and interactions with the world.
I am not sure these are the only things that could contribute to
subjective experience in the ordinary sense. I am willing to reject the
idea of the "local supervenience" of experiences on brain
configurations, since I think the individuation of experiences can
involve semantics and semantics can be relationally determined. I.e. I
think that psychological predicates could supervene on brain states in
the same way that economic value supervenes on the structure of pieces
of paper that constitute currency. That is, that two brains could be in
the same state, but not have the same experiences. That is one way to
draw the moral from Putnam's Twin Earth. Certainly I see no reason this
could not be.
BTW, I do not think that computational psychology as practiced
needs to deny this. For example, Tyler Burge has argued that Marr's
theory of vision has the same result as Twin Earth -- that identical
brains working in different normal environments might be doing
different things according to such a theory -- one could be detecting
edges, another one could be detecting shadows, say, even if the two
computations are syntactically identical. Surely this
is a paradigm of computationalist theory. Yet it also includes
"what is computed and why", a teleological element that might depend
on a relation to a normal environment.
I guess I do believe that subjective experiences in
the ordinary sense supervene on the total physical state of the world
and all its history. But I don't know if that is finite.
But even on the assumption of local supervenience, if you say the
space is finite you have left behind any hope of a computational description
in psychological terms. For example, you might have to reproduce
protein folding in digital terms if that makes a difference to function.
>>The notion of "all possible behavior" is not well defined and
>>is not a purely mathematical notion -- it uses the
>>everyday concept of "behavior".
>
>Once again, it is not necessary to have a precise notion of
>behavior, if you are quantifying over all possible parameters
>affecting behavior. Regardless of what you say a behavior is,
>it has to be expressed through the medium of muscle movement,
>since muscles are our only mechanism for interacting with the
>world.
But first, causal processes involved in interactive behavior involve
feedback loops in which causal chains loop inside and outside the organism,
such that the interface between them does not count as an interesting
"interface" at which something becomes "input".
More generally, note that ordinary descriptions of behavior can
include such things as "throwing him a come-hither look" or "snubbing
him dead as she entered the room" or "rooting around in the drawer
looking for his pen". These do not supervene on brain state alone.
They supervene on the complex of a moving body in a physical
environment and may depend on details of the social world as well. So
cotypical muscular contractions would constitute quite different
behavior in different environments.
Science often involves a search for the right descriptive vocabulary, for the
description where the patterns are. I am not sure that muscular contractions
and the like form a natural kind for the purposes of psychology. It
seems to me that ordinary descriptions like those above could be more
appropriate.
>>Moreover, the idea that there are "inputs" and "outputs" and that the
>>whole function of mind can be reduced to that of transforming "inputs"
>>to outputs in a state-determined way is also tendentious, since not
>>every physical system is ordinarily described in these terms.
>
>That's the computationalist premise, that behavior is usefully
>described in those terms. You certainly need not accept
>that premise.
Sure, but are you trying to defend this confusion or just taking it as given?
And remember, my objection is that whenever you are pressed for
support, the *defense* of your thesis *shifts* from a strong and
controversial claim that requires a *useful* level of description (the
point at issue) to a weaker defense in terms of a different and
conceded *non-useful* level of description. E.g. perhaps you have to
start introducing quantum-mechanical levels of description and
hypotheses about the discreteness of basic physics solely to preserve
the vision of a finite space of discrete states, even though this level
of description is useless as *psychology*.
Aren't you shifting back and forth between an implausible
psychological-level claim and an atomic-physics level defense?
From: rickert@cs.niu.edu (Neil Rickert)
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <7c978t$i2@ux.cs.niu.edu>
References: <7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu>
<7c3o0c$1at@edrn.newsguy.com>
Organization: Northern Illinois University
Newsgroups: comp.ai.philosophy
daryl@cogentex.com (Daryl McCullough) writes:
>andersw+@pitt.edu (Anders Weinstein) says...
>>The notion of "all possible behavior" is not well defined and
>>is not a purely mathematical notion -- it uses the
>>everyday concept of "behavior".
>Once again, it is not necessary to have a precise notion of
>behavior, if you are quantifying over all possible parameters
>affecting behavior. Regardless of what you say a behavior is,
>it has to be expressed through the medium of muscle movement,
>since muscles are our only mechanism for interacting with the
>world.
At the embryonic stage there are no muscles. If your discrete state
space model of behavior is based on muscle movement, then clearly
there are changes that can occur which are outside the limit of your
model. How can you be sure that the changes involved with learning
can fit within your model?
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36e5817c@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <7c1hjn$3fu@edrn.newsguy.com>
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu> <36e53c85@news3.us.ibm.net>
<7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 9 Mar 1999 20:15:56 GMT, 200.229.240.65
Organization: SilWis
Newsgroups: comp.ai.philosophy
Anders N Weinstein wrote in message
<7c3m7f$4bl$1@usenet01.srv.cis.pitt.edu>...
>In article <36e53c85@news3.us.ibm.net>, Sergio Navega <snavega@ibm.net>
wrote:
>>Anders N Weinstein wrote in message
>><7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>...
>>>In article <7c1hjn$3fu@edrn.newsguy.com>,
>>>Daryl McCullough <daryl@cogentex.com> wrote:
>>
>>Anders, as much as I agree with your comments, they will not be
>>enough to counterbalance Daryl's proposition. He is claiming that
>>the HLUT has all kinds of possible experiences, with all possible
>>thoughts one might have due to all sorts of inputs in all sorts
>>of sequences, exhibiting all sorts of outputs. That, obviously,
>>would make a googolplex look like an atom compared to
>>the Andromeda galaxy. But it is "finite" in a purely mathematical
>>and philosophical sense.
>
>That is what he thinks. I believe one can question this claim.
I think it is possible to confront his claim when we question the
realizability of what he proposes with what we know from physics and
the amount of matter that can be found in this universe.
To build the HLUT he is offering, we would need far more atoms than
the universe is able to provide. In fact, I suspect that to represent
a few minutes of anyone's possible lives (vision, audition, touch,
going in; motor commands to limbs, vocal cords, etc., going out) with
such a breadth, we would need resources beyond what the Hubble
telescope can see.
>At any
>rate it should not be treated as obvious. It is something one does not
>say of any arbitrary physical system, for example, since it is not
>obvious even for a purely physical system involving mutually interacting
>feedback loops by which it simultaneously modifies and adapts to the
>physical environment in which it operates. So it should not be taken
>to follow simply from the fact that a human being is a physical system
>subject to physical law that there is an adequate finite delimitation
>of the space of possibilities.
>
>And it is *definitely* non-obvious when psychological descriptions are
>used.
>
>The notion of "all possible experiences" is not well defined and is not
>a purely mathematical notion -- it uses the everyday psychological
>concept of "experiences". The notion of "all possible behavior" is not
>well defined and is not a purely mathematical notion -- it uses the
>everyday concept of "behavior".
>
Yes, I agree, and this says a lot about the problems we have in
"quantizing" subjective experiences. This is certainly related to
the problems faced by behaviorism: the intention was good, but the
program proved impractical and useless. The explanatory theories made
by behaviorists were rapidly superseded by more convincing cognitive
explanations. And today, cognitive science is the rule and behaviorism
the exception.
But the problems for psychology alone haven't gone away.
Some time ago I watched a video lecture where the guy (a
philosopher of science) said that psychology was not really a science
(he was joking, but you know what those jokes mean).
>Moreover, the idea that there are "inputs" and "outputs" and that the
>whole function of mind can be reduced to that of transforming "inputs"
>to outputs in a state-determined way is also tendentious, since not
>every physical system is ordinarily described in these terms. As
>Daryl defined it, e.g. the effects of chemicals, nutrients, magnetic
>fields must be taken to be "inputs", even though these are not
>considered "experiences" and indeed are not usually considered part of
>psychology.
>
>So even if your own misunderstanding has been cleared up, I still think
>you are conceding too much to Daryl.
The one who conceded, in fact, was the small "homunculus" mathematician
that I have inside my skull (a remnant of my college days). As I beat
this homunculus all too often, I thought I could grant "him" a victory,
so that he will let the rest of my mind work peacefully toward finding
a practical way to implement AI :-)
Regards,
Sergio Navega.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: An HLUT challenge.
Date: 11 Mar 1999 00:00:00 GMT
Message-ID: <36E77F28.6B85EED7@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <7c0rmj$rbb@edrn.newsguy.com>
<36e437a1@news3.us.ibm.net> <7c1hjn$3fu@edrn.newsguy.com>
<7c1p7r$i56$1@usenet01.srv.cis.pitt.edu> <36e53c85@news3.us.ibm.net>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Sergio Navega wrote:
>
> Anders N Weinstein wrote in message
> <7c1p7r$i56$1@usenet01.srv.cis.pitt.edu>...
> >In article <7c1hjn$3fu@edrn.newsguy.com>,
> >Daryl McCullough <daryl@cogentex.com> wrote:
>
> >
> >>It *does*! There are only finitely many different possible
> >>experiences a person can have in one lifetime. The HLUT includes
> >
> >I don't see that this is true. You are using the non-physical
> >term "experience" here. You have not said how the space of possible
> >experiences varies with the space of possible "inputs". This is not
> >a simple matter.
> >
> >Remember, there are some experiences that are not simply correlated
> >with low-level sensory features of the input signal. For example,
> >the experience of a sound as coming from a certain location is
> >not correlated with sensations of pitch or loudness. As JJ Gibson
> >said, it is a sensation-less experience. It co-varies with complex
> >relational features of total stimulation, i.e. some function defined
> >over the total physical stimulation.
> >
> >More generally, we can have the experiences of finding a joke funny or
> >cliched, or offensive to our sensibilities, or even morally repugnant.
> >The individuation of "experiences" is not simple. The nature of the
> >*experiences* you have may be determined by your acculturation.
> >
>
> Anders, as much as I agree with your comments, they will not be
> enough to counterbalance Daryl's proposition. He is claiming that
> the HLUT has all kinds of possible experiences, with all possible
> thoughts one might have due to all sorts of inputs in all sorts
> of sequences, exhibiting all sorts of outputs. That, obviously,
> would make a googolplex look like an atom compared to
> the Andromeda galaxy. But it is "finite" in a purely mathematical
> and philosophical sense.
Yes, that's what the word "finite" *means*; any other "sense"
is a misuse of the word.
> This is a coin with only one face, and Daryl
> is betting that when flipped it will land face up.
>
> What should be discussed is the merit of thinking that way
> instead of reading Gibson and all the other cognitive scientists.
What should be discussed is the idiocy of posing these as
*alternatives*. For the umpteenth time, HLUTs have no bearing on
issues concerning the development of AI or NI. One can "think that
way", i.e., conduct a valid analysis of HLUTs, without in any way
limiting one's reading or appreciation of cognitive scientists.
And one can understand cognitive science and its implications without
necessarily being imbecilic about the meaning of the word "finite".
Sheesh.
--
<J Q B>
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36e53365@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36e139fd@news3.us.ibm.net>
<7c0rmj$rbb@edrn.newsguy.com> <36e437a1@news3.us.ibm.net>
<7c1hjn$3fu@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MIMEOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 9 Mar 1999 14:42:45 GMT, 166.72.21.141
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c1hjn$3fu@edrn.newsguy.com>...
Daryl, may I ask you to read the answer I gave to Pierre?
I sort of concentrated my comments there. Know beforehand
that I grasped the gist of what you said, and that I'm now in a
position where I can agree with most of your text. See you there.
Regards,
Sergio Navega.
From: Michael Edelman <mje@mich.com>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36E3F12A.C01220@mich.com>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com>
Content-Type: text/plain; charset=us-ascii
Organization: Neutrinos 'R' Us
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote:
> ....What you seem to be saying is that the HLUT will fail
> to behave intelligently, and more specifically will fail
> to behave like Sergio after some period of time. What does
> that mean? To me, to fail to behave like Sergio is to perform
> some action A which Sergio would never have performed in
> similar circumstances. But by assumption, the output A
> *is* an action that Sergio would plausibly have performed
> in such circumstances! So by definition, the HLUT cannot
> fail to behave like Sergio (at least to the extent that
> the HLUT's "circumstances" are determined by the history
> of inputs it has received).
Unless our all-powerful HLUT creator has the ability to look into the
future, the HLUT can only map past I/O relationships for Sergio- so
in that sense, even a finite HLUT is impossible for Sergio unless we
also allow for hard determinism.
--
Michael Edelman http://www.mich.com/~mje
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7c0vs9$82j@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Michael says...
>Unless our all-powerful HLUT creator has the ability to look into the
>future, the HLUT can only map past I/O relationships for Sergio- so
>in that sense, even a finite HLUT is impossible for Sergio unless we
>also allow for hard determinism.
A device being impossible can mean two different things:
(1) it is impossible to construct such a device, and (2)
it is impossible for such a device to exist. Nobody is
arguing that an HLUT is feasible (or even possible) to
construct. Instead, the argument is over whether it is
possible for it to exist.
The difference is this: (1) It is impossible to construct a
table listing the names of the next 20 American Presidents,
starting with year 2000. It is not impossible for such a
thing to exist, however. (2) It is impossible for a computer
program that solves the halting problem to exist, no matter
*how* it is constructed.
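(The second kind of impossibility can be sketched with the standard
diagonal argument; the stand-in oracle below is deliberately wrong,
since the whole point is that no correct one can exist.)

  def halts(func):
      # Hypothetical oracle: True iff func() would eventually halt.
      # Any actual implementation must be wrong somewhere; this one
      # simply answers True for everything.
      return True

  def contrary():
      # Does the opposite of whatever halts() predicts about it.
      if halts(contrary):
          while True:    # predicted to halt, so loop forever
              pass
      # predicted to loop, so halt immediately

  # Whatever halts(contrary) returns, contrary() falsifies it, so no
  # implementation of halts() can be correct on all programs.
  print(halts(contrary))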
If you need to have some story for how an HLUT could exist
with the assumed properties, here's one:
Some aliens from a parallel universe travel to Earth and
kidnap Sergio. Using incredibly advanced technology (and
unimaginable resources) available only in their own universe,
they make 10^(10^10) exact copies of Sergio and put them
into suspended animation for use later. They plug Sergio
into a perfect virtual reality environment, complete
with sound, picture, "feelies", smells, and tastes that
are indistinguishable from the real world. They present
Sergio with a completely fictitious life, and make careful
notes of his reactions to the situations in which he finds
himself. After 150 years of this, they restore Sergio to his
original state of youth, reset his memories, and return him
to Earth.
Now, they take one of Sergio's clones out of cold storage
and submit him to a slightly different artificial history.
They take notes on what this copy does in his new situations,
for 150 years. Then, one by one, they take each of the copies
of Sergio, and subject it to a slightly different history.
Eventually, they have statistics on how Sergio would likely
react to every possible situation in a 150 year lifespan.
(Although the virtual reality simulation is very high resolution,
it is still discrete, and there are only finitely many different
possible histories.)
After going through all the copies of Sergio, they create a
Sergio table. For each possible situation (in
terms of input histories) that Sergio could experience,
this table describes Sergio's responses to that situation.
That is Sergio's HLUT.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e4379e@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 8 Mar 1999 20:48:30 GMT, 200.229.242.158
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c0vs9$82j@edrn.newsguy.com>...
>
>If you need to have some story for how an HLUT could exist
>with the assumed properties, here's one:
>
> Some aliens from a parallel universe travel to Earth and
> kidnap Sergio. Using incredibly advanced technology (and
> unimaginable resources) available only in their own universe,
> they make 10^(10^10) exact copies of Sergio and put them
> into suspended animation for use later. They plug Sergio
> into a perfect virtual reality environment, complete
> with sound, picture, "feelies", smells, and tastes that
> are indistinguishable from the real world. They present
> Sergio with a completely fictitious life, and make careful
> notes of his reactions to the situations in which he finds
> himself. After 150 years of this, they restore Sergio to his
> original state of youth, reset his memories, and return him
> to Earth.
>
> Now, they take one of Sergio's clones out of cold storage
> and submit him to a slightly different artificial history.
> They take notes on what this copy does in his new situations,
> for 150 years. Then, one by one, they take each of the copies
> of Sergio, and subject it to a slightly different history.
> Eventually, they have statistics on how Sergio would likely
> react to every possible situation in a 150 year lifespan.
> (Although the virtual reality simulation is very high resolution,
> it is still discrete, and there are only finitely many different
> possible histories.)
>
> After going through all the copies of Sergio, they create a
> Sergio table. For each possible situation (in
> terms of input histories) that Sergio could experience,
> this table describes Sergio's responses to that situation.
> That is Sergio's HLUT.
>
Nice story, Daryl, but I don't think it is enough.
First point, do you agree that the number of copies of Sergio
that you propose (10^(10^10)) may be insufficient to account for
the diversity of situations one can be subject to? Remember,
we're not talking here about the dimension or number of entries
that an input vector H possesses. We're talking here about the
*combinatorial* possibility of *every* atom in the universe
(a supernova explosion reported in a newspaper when Sergio
was 18 years old may have changed his interests from physics to
astronomy; this supernova is the result of something that
happened *millions* or *billions* of years ago and yet is
influencing Sergio's behavior today). If you want to say that
this is a very high number, but still finite, then, ok, you're
saying that our universe is also finite. Can a finite HLUT live
in an infinite universe? Obviously not. So one of your
preconditions for this HLUT is the finiteness of the universe.
Second point, all these Sergios will be useless to provide
a behavior comparable to the one in which the real Sergio
discovers a *new* law of physics, that could only be discovered
when the real Sergio *perceives* something in a specific experiment
that had never been thought or done before. Most of our scientific
discoveries so far were the result of a prepared brain and a
lucky opportunity.
Even if the HLUT had the whole universe in it (positions of atoms,
velocities, spins, etc.), it does not have the ability to perceive
similarity among new things and old things. Unless you admit that
the universe is finite and the HLUT, being finite, is able to
contain it entirely.
That gigantic HLUT that you propose will not behave like the
real Sergio (and that's a matter I can guarantee ;-)
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <7c1gpb$1j8@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com> <36e4379e@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>First point, do you agree that the number of copies of Sergio
>that you propose (10^(10^10)) may be insufficient to account for
>the diversity of situations one can be subject to? Remember,
>we're not talking here about the dimension or number of entries
>that an input vector H possesses.
Yes, we are.
>We're talking here about the
>*combinatorial* possibility of *every* atom in the universe
No, we are not talking about that.
>(a supernova explosion reported in a newspaper when Sergio
>was 18 years old may have changed his interests from physics to
>astronomy; this supernova is the result of something that
>happened *millions* or *billions* of years ago and yet is
>influencing Sergio's behavior today).
So what? In the simulation, the aliens go through every
possible input that Sergio could receive, including a
fake report of a supernova explosion.
The HLUT only needs to go through every possible *input*
history that Sergio could possibly receive. It *doesn't*
have to consider all possible *causes* for that history.
>Second point, all these Sergios will be useless to provide
>a behavior comparable to the one in which the real Sergio
>discovers a *new* law of physics, that could only be discovered
>when the real Sergio *perceives* something in a specific experiment
>that had never been thought or done before. Most of our scientific
>discoveries so far were the result of a prepared brain and a
>lucky opportunity.
If the real Sergio, upon seeing a certain pattern, guesses
a new law of physics, then the copy, seeing the same pattern,
will guess the same law of physics. Of course, in the case
of the copy, the law of physics may be bogus (it may be just
an artifact of the simulation).
>Even if the HLUT had the whole universe in it (positions of atoms,
>velocities, spins, etc) it does not have the ability to perceive
>similarity among new things and old things. Unless you admit that
>the universe is finite and the HLUT, being finite, is able to
>contain it entirely.
The size of the universe has *nothing* to do with it. What matters is
only the size of the set of possible input histories that Sergio could
receive. That size is determined solely by (1) the resolution of
Sergio's sense organs, and (2) the length of Sergio's life. If
Sergio's sense organs can distinguish at most 10^6 different input
patterns per second, and he lives 10^10 seconds, then the number of
possible input histories is at most (10^6)^(10^10) = 10^(6*10^10) <
10^(10^11), regardless of how many atoms there are in the universe.
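(A few lines of arithmetic make the scale of that bound vivid; the
per-second figure is just the assumption stated above, nothing more.)

  import math

  patterns_per_second = 10**6   # assumed distinguishable inputs/second
  seconds_of_life = 10**10      # roughly 300 years, an over-estimate

  # The count itself is too large to materialize, so take its log10:
  log10_histories = seconds_of_life * math.log10(patterns_per_second)
  print("about 10^%d possible input histories" % int(log10_histories))

The count is finite, but it is a 1 followed by sixty billion zeros --
which is the sense in which everyone agrees the HLUT is physically
unbuildable.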
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: Michael Edelman <mje@mich.com>
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <36E53E3C.841DB94B@mich.com>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com>
Content-Type: text/plain; charset=us-ascii
Organization: Neutrinos 'R' Us
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote:
> Michael says...
>
> >Unless our all-powerful HLUT creator has the ability to look into the
> >future, the HLUT can only map past I/O relationships for Sergio- so
> >in that sense, even a finite HLUT is impossible for Sergio unless we
> >also allow for hard determinism.
>
> A device being impossible can mean two different things:
> (1) it is impossible to construct such a device, and (2)
> it is impossible for such a device to exist....
(2) is what I'm arguing. Like the HLUT in Searle's Chinese Room, it
assumes away a big chunk of the problem.
> If you need to have some story for how an HLUT could exist
> with the assumed properties, here's one:
>
> Some aliens from a parallel universe travel to Earth and
> kidnap Sergio. Using incredibly advanced technology (and
> unimaginable resources) available only in their own universe,
> they make 10^(10^10) exact copies of Sergio and put them
> into suspended animation for use later. They plug Sergio
> into a perfect virtual reality environment, complete
> with sound, picture, "feelies", smells, and tastes that
> are indistinguishable from the real world. They present
> Sergio with a completely fictitious life, and make careful
> notes of his reactions to the situations in which he finds
> himself. After 150 years of this, they restore Sergio to his
> original state of youth, reset his memories, and return him
> to Earth...
Hmm. I think even that makes certain assumptions about the world that
can't necessarily be supported. It's not enough to duplicate Sergio. You
have to duplicate his environment -- which is to say the entire universe --
including the experiment. As we move away from Sergio the probability
that something in the universe will affect him decreases, but it's still
real. Suppose Sergio looks up in the sky and sees a supernova in another
galaxy -- and decides to study astronomy?
No matter how long you observe Sergio, or however many of his clones you
choose, you still have only a finite number of observations, and you are
trying to predict from an infinite number of alternative futures.
-- mike
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: An HLUT challenge.
Date: 09 Mar 1999 00:00:00 GMT
Message-ID: <7c3k1d$mpp@edrn.newsguy.com>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com> <36E53E3C.841DB94B@mich.com>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Michael says...
>Daryl McCullough wrote:
>> Some aliens from a parallel universe travel to Earth and
>> kidnap Sergio. Using incredibly advanced technology (and
>> unimaginable resources) available only in their own universe,
>> they make 10^(10^10) exact copies of Sergio and put them
>> into suspended animation for use later. They plug Sergio
>> into a perfect virtual reality environment, complete
>> with sound, picture, "feelies", smells, and tastes that
>> are indistinguishable from the real world. They present
>> Sergio with a completely fictitious life, and make careful
>> notes of his reactions to the situations in which he finds
>> himself. After 150 years of this, they restore Sergio to his
>> original state of youth, reset his memories, and return him
>> to Earth...
>
>Hmm. I think even that makes certain assumptions about the world that
>can't necessarily be supported. It's not enough to duplicate Sergio. You
>have to duplicate his environment -- which is to say the entire universe --
>including the experiment.
No, that's not true. You only need to duplicate the possible
sensory patterns that Sergio could possibly receive. That is,
as I said, patterns of light received by his eyes, patterns of
sound received by his ears, patterns of touch felt on his skin,
etc.
>Suppose Sergio looks up in the sky and sees a supernova in another
>galaxy- and decides to study astronomy?
Sergio is inside a virtual reality environment, so he won't
ever see an *actual* supernova. But he will see a faked supernova
(or one of his copies will).
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: Michael Edelman <mje@mich.com>
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <36E68AC8.987261D@mich.com>
Content-Transfer-Encoding: 7bit
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com> <36E53E3C.841DB94B@mich.com>
<7c3k1d$mpp@edrn.newsguy.com>
Content-Type: text/plain; charset=us-ascii
Organization: Neutrinos 'R' Us
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote:
> Michael says...
>
>
> >Hmm. I think even that makes certain assumptions about the world that
> >can't necessarily be supported. It's not enough to duplicate Sergio. You
> >have to duplicate his environment -- which is to say the entire universe --
> >including the experiment.
>
> No, that's not true. You only need to duplicate the possible
> sensory patterns that Sergio could possibly receive. That is,
> as I said, patterns of light received by his eyes, patterns of
> sound received by his ears, patterns of touch felt on his skin,
> etc.
The "possible" is not finite- and the HLUT is.
-- mike
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 10 Mar 1999 00:00:00 GMT
Message-ID: <36e6e2ca@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<7bpkbo$5r2@edrn.newsguy.com> <36E3F12A.C01220@mich.com>
<7c0vs9$82j@edrn.newsguy.com> <36E53E3C.841DB94B@mich.com>
<7c3k1d$mpp@edrn.newsguy.com> <36E68AC8.987261D@mich.com>
<7c65d3$1ns$1@remarQ.com> <36E6C1FD.FDEBC22A@mich.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 10 Mar 1999 21:23:22 GMT, 129.37.182.35
Organization: SilWis
Newsgroups: comp.ai.philosophy
Michael Edelman wrote in message <36E6C1FD.FDEBC22A@mich.com>...
>
>Gary Forbis wrote:
>>
>> My feeling is the signal is time encoded, that is, information is carried
>> by how fast a nerve fires as well as by its firing at all. I think the size
>> of the vocabulary will depend upon the brain's ability to establish
>> discriminatory circuits.
>
> Human language is productive, which is to say there are an infinite number of
>possible statements that can be expressed. That alone rules out a finite number
>of possible inputs to the HLUT.
>
Michael, as much as I would like to say you're right, the HLUT they are
proposing is capable of such a feat. During a finite human life,
there's a finite number of muscle and vocal cord controls to act and
lung muscles to activate to produce a finite amount of chitchat
(well, that's sort of what women do, pushing our example to the limits :-)
Obviously, your remark only adds to the unimaginable size of that HLUT,
which I propose to call from now on the h-HLUT (humongously-Huge LookUp
Table). But, as a mathematical and unreal construct, HLUTs are just that:
unreal.
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: An HLUT challenge.
Date: 08 Mar 1999 00:00:00 GMT
Message-ID: <36e4379c@news3.us.ibm.net>
References: <7bkot5$qk5@ux.cs.niu.edu> <36DE2082.C79E9952@sandpiper.net>
<7bmnkn$2kl@ux.cs.niu.edu> <36def9a9@news3.us.ibm.net>
<36DFA339.734E866F@sandpiper.net> <36dfeab7@news3.us.ibm.net>
<7bp2lm$492@edrn.newsguy.com> <36e02628@news3.us.ibm.net>
<7bp9qr$hif@edrn.newsguy.com> <36e0464c@news3.us.ibm.net>
<36E3F008.55986C62@mich.com> <7c0ubb$4fu@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 8 Mar 1999 20:48:28 GMT, 200.229.242.158
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7c0ubb$4fu@edrn.newsguy.com>...
>Michael says...
>
>>...Since the HLUT is finite, there's no reason
>>it wouldn't immediately encounter an input string (which is the
>>history of everything up to that decision point) not in memory, and
>>halt.
>
>By definition, the HLUT has a response for every possible
>(discretized) input history, up to a maximum history length
>of 150 years. At the end of 150 years, the HLUT dies.
>
Then, by definition, this HLUT is just a recording of one's intelligent
acts, and (because of what I said elsewhere) it will not
behave intelligently if given another 150 years of life.
Regards,
Sergio Navega.