Selected Newsgroup Message
From: "Sergio Navega" <snavega@ibm.net>
Subject: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <36ed2b15@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Mar 1999 15:45:25 GMT, 200.229.243.133
Organization: SilWis
Newsgroups: comp.ai.philosophy
This was supposed to be an answer to Pierre's message:
houlepn@ibm.net wrote in message <7ch9rg$a2g$1@nnrp1.dejanews.com>...
But I apologize again for not commenting on the answers given by Pierre,
mostly because I have nothing to counterargue. So what follows
is a generic text addressed mainly to Pierre, Daryl, Bill Modlin,
Jim Balter, Gary Forbis and a bunch of other "silent" readers who
have been following the HLUT threads.
It's been a constant surprise for me that no matter which way I turn,
there's a HLUT, with its big open mouth, eagerly waiting to eat all
my "creative" excuses for not accepting it. So I decided to take a
look at the whole big picture again. I had accepted that some of my
intuitive misunderstandings needed revision. I propose an Audio-CD
thought experiment later that shows the kind of conceptual revision
I went through.
In the process of reviewing the whole story again, I found what I
believe to be a relatively stable position regarding HLUTs. What
follows is my attempt to put that into words.
I have missed the point of HLUTs repeatedly. That's something new for
me: I usually miss a counter-intuitive point once or (at most) twice
before converging to something satisfying. But HLUTs have far outgrown
my average "stubbornness factor". Why are HLUTs so hard to swallow?
(I wouldn't be so concerned about this, but I recognize some important
arguments coming from intelligent people like Daryl, Jim and Pierre,
who argue about it so emphatically.)
What I'll argue, then, is that I have repeatedly missed the point about
HLUTs because *there's no point in HLUTs* (oohh, how original :-)
That may sound awfully obvious to most of us (most concede that the
HLUT is a bad thing to think about), but for me the meaning of
"pointless HLUTs" is more profound.
The real winner in this debate, I hope to convince all of you, was
Neil Rickert. Please, calm down, don't be mad at me, read on! ;-)
Neil has, since the beginning of my participation, declared how pointless
HLUTs are. I am surely not agreeing with Neil's theory; I have my own,
which differs substantially from his. But his point about invalidating
HLUTs is welcome, and I'm glad to have found by myself some reasons
to deny HLUTs even as thought experiments (granted, I had some very
good material to influence me, in particular one brilliant post by Neil
to sci.psychology.consciousness named "Our plight in the world", which
I strongly invite Neil to repost here in c.a.p).
My main reason for rejecting the HLUT (or for withdrawing my acceptance
of its claims) is its unassailable nature, which comes from its mixing
of mathematical models with non-mathematical assumptions. One can't do
that. One can't reason mathematically if one of the conditions is not
also "formalizable". But it's not just about behaviors. It's about
the nature of our world, the way we know it in 1999.
Let's take Isaac Newton, for example. When he proposed his law:

    f = m . a

he was reasoning within a formal model. Anything he concluded from it
was something that could be verified experimentally. Any discrepancy
would force him to re-evaluate his theory. At no moment did Newton mix
things up in his mathematical reasoning. It was always clear that his
law was a mathematical model which *represented reality*, but was
very different from it. The important fact is that Newton could
predict the world to any level of accuracy he wanted. All that was
necessary was the use of a greater abacus (or a greater number of
scribbling assistants).
Newtonian models had to be revised because of relativistic effects.
Einstein's new models were again a mathematical formalization that,
to any arbitrary accuracy, could predict what happened in the
universe. So f = m.a was no longer a good model and was replaced by
a different one. But the substitute could give us *any accuracy*
we wanted. HLUTs depend crucially on "accuracies". And that's
the first and most important reason to refuse HLUTs.
The real point came with quantum physics. There, we finally found
that our mathematical models lost that "arbitrary accuracy" possibility.
We lost, to a great extent, our power to predict the world accurately.
It is no longer reasonable to make extensive mathematical derivations
to support theory development, if accuracy is a primary condition.
The deterministic universe we had until Einstein is *gone*. Einstein
said that "God does not play dice". "He" doesn't have to. He has
one hell of a good random number generator.
I may be seen as missing the point again, because this discussion may
again seem to miss the point of HLUTs. So let me propose an experiment
as counter-intuitive as HLUTs but, in a subtle way, without the
problem I'm claiming we have with HLUTs. Stick with me.
The Story of The Audio CD
-------------------------
I've read elsewhere (probably from Neil, again) the idea of using an
audio CD for thought experiments. I'll start from that idea. What is
the size of a CD? Let's say that it is 600 MB. That's 4,800 megabits.
Things are recorded on a CD using 16-bit words, so on a CD we have
300 million words. At a rate of 44 kHz, we have about 6,818 seconds
of sound, which represents roughly 113 minutes (it is in fact much less,
because of control information, etc.).
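For anyone who wants to check the arithmetic, here is a minimal Python
sketch using the post's own simplified figures (600 MB, a single 16-bit
stream at 44 kHz), not exact Red Book numbers:

    # Back-of-the-envelope check of the CD figures above.
    capacity_bytes = 600 * 1000000          # ~600 MB
    capacity_bits = capacity_bytes * 8      # 4,800,000,000 bits
    words_16bit = capacity_bits // 16       # 300,000,000 samples
    seconds = words_16bit / 44000.0         # ~6,818 seconds at 44 kHz
    print(capacity_bits, words_16bit, round(seconds), round(seconds / 60))
    # 4800000000 300000000 6818 114  (roughly 113-114 minutes)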
Now take a CD with Shostakovich's 10th symphony. Take another
one with Steve Reich's "The Desert Music". Then another one with
Olivier Messiaen's "Quartet for the End of Time". Now take all other
music recorded on CDs so far. Put all the bits into a *gigantic*
CD. What could I say about this H-CD? It is *finite*. What else
could I say?
I can say that all music ever played and all music yet to be played
in the full existence of mankind can be recorded
on my H-CD. Not only that, but all possible interpretations
of Verdi's Requiem, with all possible solo singers, with all
possible violins, all cellos and all trumpets and all infinite
variations of instruments (even the fart of the conductor, due
to something *very* bad he ate at today's lunch at "George's
Fried Fly", because the restaurant he wanted to go to was closed,
because lightning struck it yesterday).
And I claim (a very straightforward claim, in fact) that this
H-CD is FINITE. How can it possibly be finite if I have infinite
sets of instruments to play with? It is finite indeed, as the
root of the question is, as is obvious to all of us, the finite
accuracy.
So what is the size of this hyper-BIG CD? Easy: take the
whole number of bits of the CD and treat it as a gigantic
binary number. Here's the size of the H-CD:

    2^4,800,000,000

or 2 to the 4,800,000,000th power. That should be enough to convince
any non-believer. And what about the son of the last human who filled
that H-CD? What if he decides to play the cello, as his father did,
and then record another performance of Richard Strauss's "Death and
Transfiguration"? Wouldn't that be another entry on that CD? No, that
would certainly be *equal* to a previous entry. With that H-CD filled,
it is guaranteed that, no matter how the boy plays the cello, no
matter what his timing or his instrument, some entries in that H-CD
will account not only for his entire performances but also for any
sequence of practice sessions he could imagine. That's a straight
consequence of the *finite* resolution of the CD recording (one can
question whether 44 kHz is enough accuracy to capture everything, but
not the fact that, given a sampling rate and a fixed accuracy, the set
of possible results is finite).
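To make "finite but enormous" concrete, a small sketch can report how
many decimal digits the number of possible H-CD contents, 2^4,800,000,000,
would have (the exact value is far too large to print):

    import math

    # Number of distinct bit patterns on the hypothetical 600 MB CD.
    n_bits = 4800000000
    decimal_digits = int(n_bits * math.log10(2)) + 1
    print(decimal_digits)   # about 1,444,943,980 digits: finite, but absurd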
So if I'm proposing that H-CD, why can't I accept a HLUT for human
behavior (or for intelligence)? Why do I see no point in thinking
about HLUTs?
Because when things get to humans, we're not dealing with
mathematics alone. To say that our eyes or our ears have limited
accuracy is to impose our mathematical way of looking at things
on something that transcends mathematics and goes into the quantum
realm. Don't misunderstand me here: I'm not like those nuts who
propose quantum theories of consciousness. I'm talking about
the quantum nature of our world (brains included). On the CD,
accuracies are part of its definition: *I am sampling at 44 kHz
and storing at 16 bits!* How can we say that about human organs
and still keep the *rigidity* that a mathematical concept demands?
To argue about HLUTs one must admit that, as with the H-CD, there
will be a situation in which, given the finite accuracy of the HLUTs
of all possible humans (also finite, given fixed limits of height
and weight), we would have repetitions. To say HLUTs work is the
same as saying that the universe, for us, is finite (it will repeat
experiences at some point). Isn't this a fine topic for a night of
chitchat with Galileo?
Have you heard the blast of a large cannon while standing close to it?
Have you felt the sound wave that strikes your body? Can you say that
this shock wave can be captured *entirely* by a digitization of all
inputs into our brain (including the vibration effects in the
tissue of our brain)? This may seem a very small and negligible
effect, but what concerns me is the *cumulative* effect. It does
not matter if you *define* mathematically that a HLUT presents all
behaviors of a human being, *including* those derived from the effects
of vibration in cerebral tissue. Doing that is thinking that we
can *reason* about this world as deterministic and Newtonian.
Doing that is the same as denying quantum physics.
Is it interesting to think about this world in Newtonian ways?
Is it reasonable to think that anything we derive from this
axiom (HLUTs) will *converge* toward our world, instead of *diverging*
toward a platonic, non-existent world? Becoming more and more
mathematical and less and less real?
When analyzing a human being in touch with a nondeterministic world,
those who argue for HLUTs are drawing an artificial line and saying
that it doesn't matter if that brain is nondeterministic, it doesn't
matter if Beethoven's ears lost accuracy at the end of his life,
it doesn't matter if chemicals enter the brain from the world and
alter it profoundly. All that matters is that the behavior of that
brain can be accounted for in terms of the outputs it gives. Is there
any way to prove that the outputs of a HLUT can be considered
intelligent, when acting in a nondeterministic world? I'm sure
I'll find someone who claims that this is reasonable.
Nothing derived from HLUTs, in my opinion, seems to be meaningful
(other than our learning of some facts of life). If one wants to
ground one's beliefs in behaviorism, then it is better to choose
another set of axioms, because HLUTs will not be a good way to
start. Unless you're arguing with Isaac Newton.
It all boils down, then, to a "religious" belief. You believe in HLUTs
or you don't. The believers cannot prove it works and the nonbelievers
cannot prove it doesn't. You believe in God or you don't. My current
position (subject to revision, if good evidence comes in) is that
of a soft atheist. On both matters.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <7cjdga$esn@edrn.newsguy.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio,
In my opinion, the usefulness of the HLUT is to force people to
look at their own prejudices. What it really boils down to is
the extent to which you think that the analog component of
intelligence is crucial. Many people assume that analog isn't
important, that a digital computer ought to be sufficient for
AI. But that assumption leads to the conclusion that an HLUT
is possible, and many of the same people find the HLUT
unpalatable.
So the implication that I think is important from the HLUT
thought experiment is this:
HLUT is impossible <--> intelligence is essentially analog
I think everyone agrees that there is an analog component to
an intelligent system. At the least, sensors and actuators
have to have an analog component. The question is whether
these analog components can be factored off, to leave the
bulk of reasoning purely digital.
Some other parts of your post bring up once again the
idea that quantum mechanics or nondeterminism dooms
the possibility of HLUTs. That is just not so. As a
matter of fact, it is quantum mechanics that provides
the ultimate limits on accuracy of any sense organ
which makes an HLUT possible. If the accuracy of
senses was infinite (which would be possible in a
Newtonian universe) then it would always be possible
to detect the difference between an analog signal
and a discrete approximation. But with quantum mechanics,
it is the notion of "continuum" that is the approximation.
Nothing measurable is actually continuous.
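A minimal quantization sketch (Python) may help picture the "discrete
approximation" in question; the 16-bit/44 kHz figures simply echo the
CD example earlier in the thread, not any claim about human senses:

    import math

    def quantize(x, bits=16):
        # Snap a value in [-1, 1] to the nearest of 2**bits evenly spaced levels.
        step = 2.0 / (2 ** bits - 1)
        return round((x + 1.0) / step) * step - 1.0

    # Sample a 440 Hz tone at 44 kHz; the worst-case quantization error is
    # bounded by half a step (about 1.5e-5 for 16 bits), well below any
    # realistic sensor noise floor.
    rate = 44000
    tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(1000)]
    print(max(abs(s - quantize(s)) for s in tone))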
I don't understand why you think nondeterminism has
anything whatsoever to do with whether HLUTs are
possible or impossible. The HLUTs presuppose
finite state machines, but not deterministic
finite state machines.
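That distinction can be pictured with a toy lookup table in which each
(discretized) input history maps to a *set* of admissible outputs, any
of which may be returned; the entries below are made up purely for
illustration:

    import random

    # Toy nondeterministic lookup table: history -> set of admissible outputs.
    table = {
        ("hello",): {"hi", "hello there"},
        ("hello", "how are you?"): {"fine, thanks", "not bad"},
    }

    def respond(history):
        options = table.get(tuple(history), {"..."})  # default if untabulated
        return random.choice(sorted(options))

    print(respond(["hello"]))
    print(respond(["hello", "how are you?"]))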
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <36ed7e08@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7cjdga$esn@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 15 Mar 1999 21:39:20 GMT, 129.37.182.6
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7cjdga$esn@edrn.newsguy.com>...
>Sergio,
>
>In my opinion, the usefulness of the HLUT is to force people to
>look at their own prejudices.
I am living proof of the value of this. I have revised a lot
of concepts I held (wrongly) because of the time I spent thinking
about HLUTs. In that respect, HLUTs have demonstrated at least
one useful aspect. But I suspect that we can't do anything more
than that.
For example, I hadn't fully worked out the reasoning behind that
Audio-CD example. The problem starts to appear when we try to
transpose it to intelligence, a concept that goes in the
opposite direction from what a HLUT appears to indicate. All my
intuitive definitions of intelligence presuppose, among
other things, discovering solutions to problems
to which *we don't have a ready answer*. That was one of the
concepts I reviewed and, wisely, kept intact.
>What it really boils down to is
>the extent to which you think that the analog component of
>intelligence is crucial. Many people assume that analog isn't
>important, that a digital computer ought to be sufficient for
>AI. But that assumption leads to the conclusion that an HLUT
>is possible, and many of the same people find the HLUT
>unpalatable.
>
>So the implication that I think is important from the HLUT
>thought experiment is this:
>
> HLUT is impossible <--> intelligence is essentially analog
>
>I think everyone agrees that there is an analog component to
>an intelligent system. At the least, sensors and actuators
>have to have an analog component. The question is whether
>these analog components can be factored off, to leave the
>bulk of reasoning purely digital.
>
I can tell you with confidence that I'm not concerned with
the "analog component" of intelligence. I am in fact one of
the defenders of the "digital" view of the brain. In that
regard, I know that I may have some questions to settle
with Neil's vision, which seems to rest on these aspects.
My stubbornness about HLUTs, however, stems from different grounds.
>Some other parts of your post bring up once again the
>idea that quantum mechanics or nondeterminism dooms
>the possibility of HLUTs. That is just not so. As a
>matter of fact, it is quantum mechanics that provides
>the ultimate limits on accuracy of any sense organ
>which makes an HLUT possible. If the accuracy of
>senses was infinite (which would be possible in a
>Newtonian universe) then it would always be possible
>to detect the difference between an analog signal
>and a discrete approximation. But with quantum mechanics,
>it is the notion of "continuum" that is the approximation.
>Nothing measurable is actually continuous.
>
I am sympathetic to your comments, up to the point of
a theoretical, platonic universe. My strongest criticism
is of taking a mathematical, physically unrealizable
idea, recasting it as a down-to-earth, FSA-like idea, and
hoping that this FSA will be "buildable" in the universe
we're living in. It may even be buildable; it may even be
well within the reach of a conventional computer, if that
FSA is concocted in the right manner. What I'm afraid of
is that, given its physically unreasonable origin and given
that no general principle was deployed (a theory or a law),
we would very probably obtain unreasonable
physical results. Know GIGO?
It all seems a diversion, which focuses attention on one axiom
(the existence of that HLUT) to justify things that go
outside the realm of mathematics. We can show in
a convincing manner that there is no finite number large
enough to be greater than all prime numbers. We can prove
that to be true according to the axioms we have in
arithmetic. We can't take intelligence, which is a
*real*-world aspect, and expect to ground it in a
mathematical "axiom-like" invention such as HLUTs.
In the same way, it is not reasonable to mathematically
transform a HLUT into a FSA (a mathematically correct
operation) and still hope that it will work in the
real world. We can't be sure of that. In fact, we
can expect, in my view, a very high probability
of failure, given the unreal nature of the starting point.
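For reference, the equivalence under discussion is easy to exhibit in
the tractable direction: any finite-state machine can be unrolled into
a lookup table keyed by input histories up to a fixed length. A toy
Python sketch (a running-parity machine, purely illustrative):

    from itertools import product

    def fsa_step(state, symbol):
        return state ^ symbol        # parity of the 0/1 inputs seen so far

    T = 3                            # tabulate all histories up to length T
    hlut = {}
    for length in range(1, T + 1):
        for history in product((0, 1), repeat=length):
            state = 0
            for symbol in history:
                state = fsa_step(state, symbol)
            hlut[history] = state    # the output the FSA would emit
    print(len(hlut))                 # 2 + 4 + 8 = 14 entries, growing as 2**T
    print(hlut[(1, 0, 1)])           # 0, exactly what the FSA itself computes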
>I don't understand why you think nondeterminism has
>anything whatsoever to do with whether HLUTs are
>possible or impossible.
If that means we're talking about the possibility of
construction, then in my opinion HLUTs are, *by definition*,
impossible :-)
But, seriously, my concern is that anything we derive
from HLUTs may have mathematical validity, but will not
have "real world" validity. I don't want to go back to
the drawing board any more than necessary.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: What I Think HLUTs Mean
Date: 15 Mar 1999 00:00:00 GMT
Message-ID: <7ck5gk$a6d@edrn.newsguy.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7cjdga$esn@edrn.newsguy.com>
<36ed7e08@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>>So the implication that I think is important from the HLUT
>>thought experiment is this:
>>
>> HLUT is impossible <--> intelligence is essentially analog
>>
>>I think everyone agrees that there is an analog component to
>>an intelligent system. At the least, sensors and actuators
>>have to have an analog component. The question is whether
>>these analog components can be factored off, to leave the
>>bulk of reasoning purely digital.
>>
>
>I can tell you with confidence that I'm not concerned with
>the "analog component" of intelligence.
In that case, I think that your objections to the HLUT
are unfounded. Neil Rickert has a real objection, in that
the HLUT puts ultimate limits on what can be perceived
that Neil thinks don't apply to humans.
>I am in fact one of
>the defensors of the "digital" view of the brain. In that
>regard, I know that I may have some questions to solve
>with Neil's vision, that seems to stand over these aspects.
>My stubbornness with HLUTs, however, stems on different grounds.
Sorry, Sergio, but I just can't understand your different
grounds. To me, the HLUT is simply an exploration of the
consequences of the digital view of the brain. Nothing
more.
>...We can't take intelligence, which is a
>*real* world aspect, and expect to ground it in a
>mathematical "axiom-like" invention such as HLUTs.
I think you are still misunderstanding the point
of HLUTs. Nobody proposes "grounding" intelligence
in an HLUT. Nobody has even suggested that the HLUT
gives us any insight whatsoever into the nature
of intelligence. You are once again looking for
a point that isn't there.
To repeat: Nobody, absolutely nobody, suggests that
the HLUT is a good model for intelligent behavior,
or that it is a good way to think about intelligence,
or that it is a good way to approach building an AI
system.
>In the same way, it is not reasonable to mathematically
>transform a HLUT into a FSA (a mathematically correct
>operation) and still hope that it will work in the
>real world.
We agree that the HLUT could not be built in the real
world.
>We can't be sure of that. In fact, we
>can expect, in my vision, a very high probability
>of failure, given the unreal nature of the starting point.
That doesn't make any sense to me. The definition
of the HLUT is that it behaves (within a discrete
approximation) the same as a human would in the
same (again, to within a discrete approximation)
situation. I can understand Neil Rickert's objection,
that a discrete approximation may not be good enough,
but I can't understand your objection, that it
might fail to even give a good approximation to
human behavior. By definition, that won't happen.
>>I don't understand why you think nondeterminism has
>>anything whatsoever to do with whether HLUTs are
>>possible or impossible.
>
>If that means we're talking about the possibility of
>construction, then in my opinion HLUTs are, *by definition*,
>impossible :-)
>
>But, seriously, my concern is that anything we derive
>from HLUTs may have mathematical validity, but will not
>have "real world" validity.
What has anyone derived from the possibility of an HLUT?
(Other than its possibility?)
>I don't want to go back to the drawing board any more than
>necessary.
Sergio, my recommendation to you is to stop starting threads
about HLUTs. They really are not particularly relevant to
anything.
Note that all the threads about HLUTs were started by HLUT
opponents: you and Neil Rickert, mostly. The proponents
(me and Jim Balter, and others) are willing to discuss them,
but we don't think that they are particularly relevant to
AI.
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36ee6c7f@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7cjdga$esn@edrn.newsguy.com>
<36ed7e08@news3.us.ibm.net> <7ck5gk$a6d@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 16 Mar 1999 14:36:47 GMT, 166.72.21.154
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7ck5gk$a6d@edrn.newsguy.com>...
>Sergio says...
>>
>>I can tell you with confidence that I'm not concerned with
>>the "analog component" of intelligence.
>
>In that case, I think that your objections to the HLUT
>are unfounded. Neil Rickert has a real objection, in that
>the HLUT puts ultimate limits on what can be perceived
>that Neil thinks don't apply to humans.
>
I'm much closer to your viewpoint than Neil seems to be.
Interestingly, I maintain a vision just as critical as his.
>>I am in fact one of
>>the defensors of the "digital" view of the brain. In that
>>regard, I know that I may have some questions to solve
>>with Neil's vision, that seems to stand over these aspects.
>>My stubbornness with HLUTs, however, stems on different grounds.
>
>Sorry, Sergio, but I just can't understand your different
>grounds. To me, the HLUT is simply an exploration of the
>consequences of the the digital view of the brain. Nothing
>more.
>
As I said earlier, I am more comfortable with such things
as the proof that there is no largest prime number. This
is mathematical and fully grounded in the axioms of
arithmetic. I can use this fact to help
prove a zillion other things in mathematics and still
keep confidence in my results. This is not only a real
thing (in the mathematical world), but something with
the potential to build other results.
My problem is that I cannot use HLUTs to support anything
else, because I run the risk of drifting further and further
from reality: to think about HLUTs we need to
think about tables greater than the real universe. HLUTs
are not even good when approximated; they only work if
they are fully loaded. I can devise *dozens* of small
algorithms that give good *interactive* performance and
that, to be transformed into HLUTs, would need the
Andromeda galaxy as memory. But that's obvious.
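To spell out that blow-up with numbers, here is a rough sketch; the
one-16-bit-sample-per-step interface is an assumption made only for
illustration:

    import math

    # A tiny interactive program that reads one 16-bit sample per step and
    # replies after each step needs, as a lookup table, one entry for every
    # possible input history: 2**(16*t) entries after t steps.
    for t in (1, 10, 100, 1000):
        digits = int(16 * t * math.log10(2)) + 1
        print(t, "steps ->", digits, "decimal digits of entries")
    # At t = 1000 the entry count already has about 4,817 digits; the
    # observable universe offers only ~10**80 atoms to store them.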
Is it a lack of imagination on my part? No, it is just
my conviction that we can't allow the mathematical world
and the real world to be mixed up with each other.
>>...We can't take intelligence, which is a
>>*real* world aspect, and expect to ground it in a
>>mathematical "axiom-like" invention such as HLUTs.
>
>I think you are still misunderstanding the point
>of HLUTs. Nobody proposes "grounding" intelligence
>in an HLUT. Nobody has even suggested that the HLUT
>gives us any insight whatsoever into the nature
>of intelligence. You are once again looking for
>a point that isn't there.
>
Sorry, but what I'm seeing are defenses of HLUTs used to
derive FSAs as possible implementation candidates and to present
behaviorist stimulus/response systems as a good model of
the brain. HLUTs were frequently used to support the
viewpoint that what's important in intelligence is only
the behavior it can produce. I question that assertion.
Up to now, what I've heard from the defenders of HLUTs is
FSAs and behaviorist S/R. There are several passages in which
you and Pierre make that clear.
>To repeat: Nobody, absolutely nobody, suggests that
>the HLUT is a good model for intelligent behavior,
>or that it is a good way to think about intelligence,
>or that it is a good way to approach building an AI
>system.
>
In that sense, I agree fully.
>>In the same way, it is not reasonable to mathematically
>>transform a HLUT into a FSA (a mathematically correct
>>operation) and still hope that it will work in the
>>real world.
>
>We agree that the HLUT could not be built in the real
>world.
>
For me, that's not the greatest problem. The greatest problem
is that HLUTs mix, in their definition, mathematical assumptions
about accuracy with real-world concepts such as intelligence.
Modeling intelligence in mathematical terms is not what I'm
calling problematic. Modeling intelligence in mathematical
terms that cannot be brought back to reality is what I see
as a problem.
>>We can't be sure of that. In fact, we
>>can expect, in my vision, a very high probability
>>of failure, given the unreal nature of the starting point.
>
>That doesn't make any sense to me. The definition
>of the HLUT is that it behaves (within a discrete
>approximation) the same as a human would in the
>same (again, to within a discrete approximation)
>situation. I can understand Neil Rickert's objection,
>that a discrete approximation may not be good enough,
>but I can't understand your objection, that it
>might fail to even give a good approximation to
>human behavior. By definition, that won't happen.
>
The HLUT, by definition, would be intelligent; I don't
question that anymore. What I question is every single
conclusion that we can derive from that. Please
read again the phrase I put under discussion:
>>In the same way, it is not reasonable to mathematically
>>transform a HLUT into a FSA (a mathematically correct
>>operation) and still hope that it will work in the
>>real world.
>>We can't be sure of that. In fact, we
>>can expect, in my vision, a very high probability
>>of failure, given the unreal nature of the starting point.
Now you're telling me that a FSA, correctly derived from
that HLUT which was assembled by omnisciently *predicting*
*all* *possible* interactions in the world, would have a
chance of working if this FSA got implemented in a computer?
I don't see any way in which *any* kind of practical
derivation from HLUTs can lead us to anything successful.
Keeping HLUTs in one's mind as something that works leaves
the door open to thinking of unimplementable methods. I
refuse to accept that.
>
>>I don't want to go back to the drawing board any more than
>>necessary.
>
>Sergio, my recommendation to you is to stop starting threads
>about HLUTs. They really are not particularly relevant to
>anything.
>
>Note that all the threads about HLUTs were started by HLUT
>opponents: you and Neil Rickert, mostly. The proponents
>(me and Jim Balter, and others) are willing to discuss them,
>but we don't think that they are particularly relevant to
>AI.
>
Agreed 100%. I promise that, regarding HLUTs, I'll be just
reactive ;-)
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <36ee6c82@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 16 Mar 1999 14:36:50 GMT, 166.72.21.154
Organization: SilWis
Newsgroups: comp.ai.philosophy
houlepn@my-dejanews.com wrote in message <7ckdhh$kp$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>[snip]
>
>Sergio, I'll skip to your Audio CD example because I don't quite get
>your point about chaos or quantum uncertainty making humans
>superiors to FSAs. Maybe you could provide a more explicit example
>of what you mean.
>
Sometimes I'm more verbose than necessary. I'll try to be as direct
as I can:
a) HLUTs are intelligent and able to represent intelligent behavior of
any human being, by definition.
b) HLUTs cannot be said to be intelligent unless they are completely
filled (partial HLUTs are not guaranteed to work, they don't degrade
gracefully).
c) HLUTs demand omniscience to be built. They demand that we break
every concept we have of reality, because we have to assume there's a way
to predict all future actions (all future responses to inputs). HLUTs
are mathematical and imaginary constructions, for that reason and also
because they would need more matter than exists in the known universe
(a rough estimate is sketched after this list).
d) HLUTs can be mathematically transformed into FSAs
e) Any FSA originating from a HLUT will have to be built with full
omniscience of the universe, incurring the same problems as the latter.
f) Omniscience (and future predictions of all actions) breaks our
concept of a quantum and indeterminate world.
g) FSAs (built from HLUTs) break our concept of a quantum and
indeterminate world.
h) FSAs (built from HLUTs) are as unreal as HLUTs.
i) FSAs (in general) cannot be proven to be a good solution only
because they can be the "counterpart" of a hypothetical HLUT.
Choose another way to say FSAs are good, without using HLUTs as axioms.
j) If FSAs are to be a model of intelligence, then one would be better
off starting from a different principle than HLUTs to justify it.
k) Maybe Bloxy is right after all.... just kidding ;-)
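To give point (c) a rough order of magnitude, here is a sketch with
purely illustrative assumptions (sensory input digitized at about 10^6
bits per second over a 70-year lifetime, one table entry per possible
input history):

    import math

    bits_per_second = 10 ** 6
    seconds_of_life = 70 * 365 * 24 * 3600            # ~2.2e9 seconds
    history_bits = bits_per_second * seconds_of_life  # ~2.2e15 bits
    log10_entries = history_bits * math.log10(2)      # log10 of 2**history_bits
    print("log10(number of entries) =", log10_entries)   # ~6.6e14
    print("log10(atoms in the observable universe) = ~80")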
>> The Story of The Audio CD
>> -------------------------
>>
>> I've read elsewhere (probably it was Neil, again) the idea of using an
>> audio CD for thought-experiments. I'll start from that idea. What is
>> the size of a CD? Lets say that it is 600 Mb. That's 4,800 megabits.
>> Things are recorded in a CD using 16 bit words. So in a CD we have
>> 300 million words. At a rate of 44 Khz, we have about 6,818 seconds
>> of sound which represents roughly 113 minutes (it is in fact much less
>> because of control information, etc).
>>
>> Now take a CD with the 10th symphony of Shostakovich. Take another
>> one with "The Desert Music" of Steve Reich. Then another one with
>> Olivier Messian's "Quartet for the end of time". Take now all other
>> musics recorded on CDs so far. Put all the bits into a *gigantic*
>
>Ok. don't forget Richard Strauss' "Don Quixote" and Bach's "B Minor
>Mass" though.
>
Well, in that case I also cannot forget Gustav Holst's "The Planets",
some pieces by Philip Glass, and Prokofiev's pieces for piano.
>> CD. What could I say about this H-CD? It is *finite*. What else
>> could I say?
>>
>> I can say that all music ever played and all music to be played
>> in the full existence of the mankind will be able to be recorded
>> on my H-CD. Not only that, but all the possible interpretations
>> of Verdi's Requiem, with all possible solo singers, with all
>> possible violins, all cellos and all trumpets and all infinite
>> variations of instruments (even the fart of the conductor, due
>> to something *very* bad he ate in today's lunch at "George's
>> fried fly", because the restaurant he wanted to go was closed,
>> because a lightning stroke on it yesterday).
>>
>> And I claim (a very straightforward claim, in fact) that this
>> H-CD is FINITE. How it can possibly be finite if I have infinite
>> sets of instruments to play with? It is finite, indeed, as the
>> root of the question is, as is obvious to all of us, the finite
>> accuracy.
>>
>> So what is the size of this hyper-BIG CD? Easy, take the
>> whole number of bits of the CD and treat it as a gigantic
>> binary number. Here's the size of the H-CD:
>>
>>     2^4,800,000,000
>>
>
>I do not agree the H-CD is a good analogy for any meaningful
>FSA/HLUT.
That's correct; the H-CD was just a first step toward the HLUT.
> What would you think of a chess computer that outputs
>any legal move at random? The HLUT of such a computer would
>be the listing of all possible chess games. Such a HLUT would indeed
>exhibit no intelligence about chess beyond mere knowledge of the game
>rules. This is not the case with the HLUT of DeepBlue. Not only is
>DeepBlue's HLUT much smaller (although less compressible) than the
>"legal chess HLUT" but it might also manage to beat Kasparov more
>often than not. Likewise, the typical output of the J. S. Bach HLUT
>(defined similarly as our earlier Einstein HLUT) would sound just
>as pleasant to your ears than any J.S. Bach modern recording on
>ancient instruments. The H-CD you propose here however will seldom
>perform anything else than white noise.
>
The only component missing from the H-CD to transform it into a full-fledged
HLUT is the association with a behavior. In fact, I had in mind
to write about that in that post, but I was afraid of making it too
long. I was going to propose a robot provided with artificial, CD-like
ears, such that, given a sound input and a history of sound inputs,
the robot would behave in a certain way. It is not difficult
to be convinced that this robot (it could be a blind human, if we managed
to get around the accuracy problem with a cochlear implant of CD
quality, for instance) would have its entire behavior predictable by
a HLUT. The problem is that even in such a contrived situation,
one would need a HLUT of unrealistic size, which adds to my
argument that HLUTs are not good starting points for any kind of
consideration involving intelligence.
>> or 2 to the four thousand, eight hundred millionth power. That should
>> be enough to convince any non-believer. And what about the son of the
>> last human who have filled that H-CD? What if he decides to play
>> cello, as his father did, and then record another Richard Strauss'
>> "Death and Transformation"? Wouldn't that be another entry in that
>> CD? No, that would certainly be *equal* to a previous entry. With
>> that H-CD filled, it is guaranteed that, no matter how the boy plays
>> the cello, no matter what's his timing, or his instrument, some entries
>> in that H-CD will account not only for his entire performances but also
>> to any sequence of practice sessions he could imagine. That's a
>> straight consequence of the *finite* resolution of the CD recording
>> (one can question if 44 kHz is enough accuracy to capture everything,
>> but not the fact that, given a sampling rate and a fixed accuracy,
>> its results are finite).
>
>The H-CD you propose is basically the HLUT of a FSA producing
>random numbers or counting from 1 to 2^4,800,000,000.
>
Yes, that's what I thought, and that is just one column of a potential
HLUT that responded with a possible behavior.
>> So if I'm proposing that H-CD, why can't I accept a HLUT for human
>> behavior (or for intelligence)? Why I see no point in thinking
>> about HLUTs?
>
>Because you seem to infer the uselessness of all FSAs from the uselessness
>of one trivial FSA? Correct me if i'm wrong.
>
No, what I am reluctant to accept is that FSAs are good answers *because* of
HLUTs. Correct me if I'm wrong, but that's pretty much what I understand
from your defense of HLUTs and FSAs.
>> Because when things get into humans, we're not dealing with
>> mathematics alone. To say that our eyes or our ears have a limited
>> accuracy is imposing our mathematical way of looking at things
>> at something that transcends mathematics and go into the quantum
>> realm. Don't misunderstand me here, I'm not like those nuts who
>> propose quantum theories of consciousness. I'm talking about
>> the quantum nature of our world (brains included). On the CD,
>> accuracies are part of its definition: *I am sampling at 44 Khz
>> and storing at 16 bits!*. How can we say that about human organs
>> and still keep the *rigidity* that a mathematical concept demands?
>> To argue about HLUTs one must admit that, as with H-CD, there
>> will be a situation that, given finite accuracy of the HLUTs
>> of all possible humans (also finite, given fixed limits of height
>> and weight), we would have repetitions.
>
>You seem to infer the uselessness of the HLUT of a human being from
>the uselessness of the combined HLUTs of all possible human beings.
>(assuming the possibility for them to be arbitrarily dumb and still
>worth of being called human beings) This is no more reasonable than
>inferring the uselessness of DeepBlue from the uselessness of the set
>of all input history/output pairs of all possible chess computers.
>
My greatest sin so far has been my excessively verbose answers.
I can try to summarize what I mean by this: in my view, HLUTs do not
allow us to conclude anything about anything, because they mix, in
their definitions, mathematical terms with real-world terms in an
unreal manner in the indeterminate world we live in.
>> When analyzing a human being in touch with a non deterministic world,
>> those who claim for HLUTs are drawing an artificial line and saying
>> that it doesn't care if that brain is non deterministic, it doesn't
>> matter if Beethoven's ears lost accuracy in the end if his life,
>> it doesn't care if chemicals enter the brain via the world and
>> alter it profoundly. It only cares that the behavior of that brain
>> can be accounted in terms of the outputs it gives.
>
>I don't quite get your point here. Traditionally, there have been no
>way to assess what was going on inside the mind of fellow human
>beings but through behavior. If we can understand the internal
>mechanics better, whether via a FSA model or any other neuro-
>phisiological, social and/or cognitive model I expect to see a shift
>away from the unique focus on mere external behavior. The HLUT is
>a red herring again.
>
This is an important point. That shift you mention is at least
a century old. The behaviorist vision of assessing intelligence
only from behavior, although apparently more scientific, started
to be debunked when Santiago Ramón y Cajal began his detailed
study of the nervous system early this century. For more than
20 years now, PET scans and fMRI have been giving
important indications of how the brain works. So the shift is
in fact old hat, and it is usual today to consider intelligence
in terms of internal processes (mental states) and their relation
to behavior, not only stimulus/response. HLUTs are not only
unreal, they are out of fashion.
>> Is there any
>> way to prove that the outputs of a HLUT can be considered
>> intelligent, when acting in a non deterministic world? I'm sure
>> I'll find someone who claims that this is reasonable.
>>
>> Nothing derived from HLUTs, in my opinion, seem to be meaningful
>> (other than our learning of some facts of life). If one wants to
>> ground one's beliefs in behaviorism, then it is better to choose
>> another set of axioms, because HLUTs will not be a good way to
>> start. Unless you're arguing with Isaac Newton.
>>
>> It all boils down, then, to a "religious" belief. You believe in HLUTs
>> or you don't. The believers cannot prove it works and the nonbelievers
>> cannot prove it don't. You believe in God or you don't. My current
>
>The believer can prove it works if digital computers achieve
>human level intelligence or if neuroscientists and cognitive
>psychologists successfully account for human intelligence with
>a model that can be confirmed through computer simulations.
>
I agree with that, except that what will be proven is that HLUTs
are useful visions of intelligence in the same way that a
cockroach on a Stockholm counter is important to the
weather in Florida.
Regards,
Sergio Navega.
From: houlepn@ibm.net
Subject: Re: What I Think HLUTs Mean
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <7cmnsk$2p7$1@nnrp1.dejanews.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net>
X-Http-Proxy: 1.0 x6.dejanews.com:80 (Squid/1.1.22) for client 207.96.163.34
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Tue Mar 16 23:01:47 1999 GMT
Newsgroups: comp.ai.philosophy
X-Http-User-Agent: Mozilla/4.51 [en] (Win98; I)
"Sergio Navega" <snavega@ibm.net> wrote:
> a) HLUTs are intelligent and able to represent intelligent behavior of
> any human being, by definition.
Agreed. A HLUT defined as mimicking the behavior (or range of possible
behaviors) of a specific human being is pointless.
> b) HLUTs cannot be said to be intelligent unless they are completely
> filled (partial HLUTs are not guaranteed to work, they don't degrade
> gracefully).
Agreed.
> c) HLUTs demand omniscience to be built. They demand that we break
> every concept we have of reality, because we have to assume there's a way
> to predict all future actions (all future responses to inputs), HLUTs
> are mathematical and imaginary constructions, by that reasons and
> also because they'll need more matter than exist in the known universe.
Agreed.
> d) HLUTs can be mathematically transformed into FSAs
Agreed.
> e) Any FSA originary from a HLUT will have to be built with full
> omniscience of the universe, incurring in the same problems of the latter.
... unless the HLUT was the representation of some reasonable FSA in
the first place.
> f) Omniscience (and future predictions of all actions) breaks our
> concept of a quantum and indeterminate world.
> g) FSAs (built from HLUTs) break our concepts of quantum and
> indeterminate world.
Only if it attempts to predict with absolute accuracy rather than provide
an approximate model.
> h) FSAs (built from HLUTs) are as unreal as HLUTs.
Maybe I gave you the impression that I wanted to build an FSA from
the extravagant Sergio's or Einstein's HLUT. Instead, I have been
saying that you can not argue for the inadequacy of computational
models of intelligence (or learning abilities) from imagined
limitations of HLUTs.
> i) FSAs (in general) cannot be proven to be a good solution only
> because they can be the "counterpart" of a hypothetical HLUT.
Agreed.
> Choose another way to say FSAs are good, without using HLUTs
> as axioms.
Sure!
> j) If FSAs are to be a model of intelligence, then one would be better
> off starting from a different principle than HLUTs to justify it.
Agreed. And if intelligence is to be shown to be forever out of the reach
of FSAs one would be better off not starting saying silly things about
HLUTs ;-)
[The Story of The Audio CD snipped]
> No, what I reluct to accept is that FSAs are good answers *because* of
> HLUTs. Correct me if I'm wrong, but that's pretty much what I understand
> from your defense of HLUTs and FSAs.
You are wrong. I have been defending FSAs against attacks misdirected
against misunderstood HLUTs. I remind you of what I said earlier:
# "The second point is that these discussions about HLUTs came from
# somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
# formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
# human learning behavior. This is a false implication not only for
# the extravagant 'Sergio HLUT' but also for any hypothetical AI
# entity emulating 'Sergio level' abilities on a FSA." 1999/03/10
# "The fact that the FSA digitalizes its *raw*
# inputs do not necessarily render it impotent. Viewing this FSA as a
# HLUT is a red herring. The HLUT is just an alternate representation
# of the FSA. This is the point some of us have been trying to make."
# 1999/03/10
# [Sergio]
# > But I think
# > the greatest problem of the HLUT is to conduct to the FSA as a
# > tentative solution just because the latter can be reduced to
# > the former.
#
# [PNH]
# You are right. There are independent arguments for the FSA view.
# But our argument was the other way around: The FSA view it not
# ruled out just because it can be 'augmented' to a HLUT." 1999/03/11
#
# > [Sergio]
# > So in this regard you seem to be using the HLUT as an axiom to support
# > the development of the FSA strategy (I really may be misunderstanding
# > what you'd proposed, please correct me if I'm wrong). But this is like
#
# [PNH]
# Yes. I have been correcting you above. I agree to what follow. I hope
# we are getting closer. 1999/03/11
Now I hope we *really* are getting closer ;-)
[snip]
> > The believer can prove it works if digital computers achieve
> > human level intelligence or if neuroscientists and cognitive
> > psychologists successfully account for human intelligence with
> > a model that can be confirmed through computer simulations.
>
> I agree with that, except that what will be proven is that HLUTs
> are useful visions to intelligence with the same importance as
> that cockroach from that Stockolm's counter is important to the
> weather of Florida.
Then we have nothing else to argue about this matter.
Regards,
Pierre-Normand Houle
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 17 Mar 1999 00:00:00 GMT
Message-ID: <36f01bec@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net> <7cmnsk$2p7$1@nnrp1.dejanews.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 Mar 1999 21:17:32 GMT, 200.229.240.138
Organization: SilWis
Newsgroups: comp.ai.philosophy
houlepn@ibm.net wrote in message <7cmnsk$2p7$1@nnrp1.dejanews.com>...
>"Sergio Navega" <snavega@ibm.net> wrote:
>
>[lots of agreed stuff sniped]
>
>> h) FSAs (built from HLUTs) are as unreal as HLUTs.
>
>Maybe I gave you the impression that I wanted to build an FSA from
>the extravagant Sergio's or Einstein's HLUT. Instead, I have been
>saying that you can not argue for the inadequacy of computational
>models of intelligence (or learning abilities) from imagined
>limitations of HLUTs.
>
Agreed. In fact, I'm one of those who are looking for computational
methods to achieve intelligence, and I think I fought so hard
against HLUTs exactly because I'm convinced we'll eventually be
able to "mechanize" intelligence.
>
>> j) If FSAs are to be a model of intelligence, then one would be better
>> off starting from a different principle than HLUTs to justify it.
>
>Agreed. And if intelligence is to be shown to be forever out of the reach
>of FSAs one would be better off not starting saying silly things about
>HLUTs ;-)
>
Indeed.
>[The Story of The Audio CD snipped]
>
>> No, what I reluct to accept is that FSAs are good answers *because* of
>> HLUTs. Correct me if I'm wrong, but that's pretty much what I understand
>> from your defense of HLUTs and FSAs.
>
>You are wrong. I have been defending FSAs against attacks misdirected
>against misunderstood HLUTs. I remind you of what I said earlier:
>
># "The second point is that these discussions about HLUTs came from
># somebody (Neil?) implying that 1: HLUTs can not learn 2: FSA are
># formally equivalent to HLUTs hence 1 & 2 => 3: FSA can not model
># human learning behavior. This is a false implication not only for
># the extravagant 'Sergio HLUT' but also for any hypothetical AI
># entity emulating 'Sergio level' abilities on a FSA." 1999/03/10
>
># "The fact that the FSA digitalizes its *raw*
># inputs do not necessarily render it impotent. Viewing this FSA as a
># HLUT is a red herring. The HLUT is just an alternate representation
># of the FSA. This is the point some of us have been trying to make."
># 1999/03/10
>
># [Sergio]
># > But I think
># > the greatest problem of the HLUT is to conduct to the FSA as a
># > tentative solution just because the latter can be reduced to
># > the former.
>#
># [PNH]
># You are right. There are independent arguments for the FSA view.
># But our argument was the other way around: The FSA view it not
># ruled out just because it can be 'augmented' to a HLUT." 1999/03/11
>#
># > [Sergio]
># > So in this regard you seem to be using the HLUT as an axiom to support
># > the development of the FSA strategy (I really may be misunderstanding
># > what you'd proposed, please correct me if I'm wrong). But this is like
>#
># [PNH]
># Yes. I have been correcting you above. I agree to what follow. I hope
># we are getting closer. 1999/03/11
>
>Now I hope we *really* are getting closer ;-)
>
Not only closer, but exactly on the mark!
>[snip]
>
>> > The believer can prove it works if digital computers achieve
>> > human level intelligence or if neuroscientists and cognitive
>> > psychologists successfully account for human intelligence with
>> > a model that can be confirmed through computer simulations.
>>
>> I agree with that, except that what will be proven is that HLUTs
>> are useful visions to intelligence with the same importance as
>> that cockroach from that Stockolm's counter is important to the
>> weather of Florida.
>
>Then we have nothing else to argue about this matter.
>
I agree. I want to thank you for your patience during our debate, and
I hope it was clear that my arguments were directed toward the
uselessness of the HLUT, not at the claimants. Even if
HLUTs are not able to yield any reasonable conclusion, I have been
able to learn some useful points from this discussion. At least, next
time I enter a similar discussion, my points will be reasonable
from the beginning. As the subjects in this newsgroup are very
redundant and cyclic, I'll probably have that chance in a few months.
Regards,
Sergio Navega.
From: Jim Balter <jqb@sandpiper.net>
Subject: Re: What I Think HLUTs Mean
Date: 17 Mar 1999 00:00:00 GMT
Message-ID: <36F01BF3.192AABC0@sandpiper.net>
Content-Transfer-Encoding: 7bit
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net>
X-Accept-Language: en-US
Content-Type: text/plain; charset=us-ascii
Organization: Sandpiper Networks, Inc.
Mime-Version: 1.0
Newsgroups: comp.ai.philosophy
Sergio Navega wrote:
> c) HLUTs demand omniscience to be built. They demand that we break
> every concept we have of reality, because we have to assume there's a way
> to predict all future actions (all future responses to inputs), HLUTs
> are mathematical and imaginary constructions, by that reasons and
> also because they'll need more matter than exist in the known universe.
In chaos theory there are state spaces where the probability of being
within an area of the space is proportional to the size of the space.
When we pour a layer of hot water over a layer of cold water, the water
becomes lukewarm and never returns to separate layers because the size
of the state space for lukewarmness is vast compared to the size of
the space for hot and cold layers. The water will never return to that
state; it's a guarantee you can count on. Nonetheless, it is *possible*.
The state space for hot and cold layers "exists"; it is not empty.
If we were to start creating a lookup table according to some random
process, it is *possible* that we would end up with the first entries
of a HLUT for Sergio Navega. You can be guaranteed that we won't,
but it is *possible*, in quite the same sense that it is possible
for lukewarm water to separate into hot and cold layers. No concept
of reality needs to be broken in order to entertain this possibility.
All that is required is an ability to think abstractly in a way that
is quite familiar to theoreticians in mathematics and physics.
It requires understanding the difference between "vanishingly likely"
and "impossible", the same difference between "very very big"
and "infinite". Failing to make this distinction leads to such
mistakes as thinking that the law of entropy is mistaken because
it can't be proved. In fact, the law of entropy isn't a law
in the logical sense; entropy can theoretically reverse, but it
generally doesn't, because reversal is vanishingly likely.
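That distinction between "vanishingly likely" and "impossible" has a
simple numerical face: the probability that uniformly random bits
reproduce even one specific kilobyte of a given table is nonzero but
unimaginably small (the 1 KB figure is arbitrary, chosen only for
illustration):

    import math

    n_bits = 8 * 1024                       # one specific kilobyte
    log10_p = -n_bits * math.log10(2)       # log10 of 2**-8192
    print("log10(probability) =", log10_p)  # about -2466: nonzero, never observed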
Pointing out that it would require omniscience in order to
provide a *formal* guarantee that one can build a HLUT is pointless,
since no such claim was made and nothing rests on such a claim.
The only claim is that a HLUT is *possible* in the same sense that
it is *possible* for lukewarm water to separate out into separate
hot and cold layers. (And those who claim that such a thing is not
possible because there is a physical law against it display yet
another, but related, set of conceptual confusions.)
--
<J Q B>
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 17 Mar 1999 00:00:00 GMT
Message-ID: <36f02df9@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net> <36F01BF3.192AABC0@sandpiper.net>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 17 Mar 1999 22:34:33 GMT, 129.37.183.192
Organization: SilWis
Newsgroups: comp.ai.philosophy
Jim Balter wrote in message <36F01BF3.192AABC0@sandpiper.net>...
>Sergio Navega wrote:
>
>> c) HLUTs demand omniscience to be built. They demand that we break
>> every concept we have of reality, because we have to assume there's a way
>> to predict all future actions (all future responses to inputs), HLUTs
>> are mathematical and imaginary constructions, by that reasons and
>> also because they'll need more matter than exist in the known universe.
>[snip]
>
>If we were to start creating a lookup table according to some random
>process, it is *possible* that we would end up with the first entries
>of a HLUT for Sergio Navega. You can be guaranteed that we won't,
>but it is *possible*, in quite the same sense that it is possible
>for lukewarm water to separate into hot and cold layers.
It is possible that the computer in front of me is reading from its
memory right now the exact string of bytes that, if fed to the neurons
that control my fingers, would produce the tapping on the keyboard
that types the text I'm writing. This is in fact similar to what you've said.
It is possible. I can find a neurophysiologist who can wire that
for me, and we only have to deal with the probability of that
happening while my Windows 95 is running. I have no problem with
that kind of possibility. If we don't have the technology to
do that today, we'll have it in 10, 20 or 500 years. There's
nothing in the laws of physics we know today that prevents me
from doing that.
What I said is that HLUTs are constructions that demand omniscience.
Mathematical omniscience? No, real world omniscience. To fit
inside our minds, HLUTs demand that we think of a method to
fill the table. Why should I worry about this? Isn't this just a
game of possibility? No, we're talking about the real world here,
because *behavior*, one of the things that is part of the HLUT
definition, can only be understood in the real world. I can think
of a dozen really silly things which are as reasonable as HLUTs,
if I step outside the real world. Angels dancing on pins would be
believable next to what I could conceive.
What's the difference between HLUTs and, for instance, the fact that
  lim   1/x = oo
  x->0+
The difference is that the statement above can be seen to be
true within a set of initial hypotheses of mathematics. I don't
expect to find infinity when I put a number as small as 1/10^(40!)
into my calculator and invert it. I don't need anything more to be
at peace with the above assertion.
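As a side illustration of the "very very big" versus "infinite"
point (added here, not part of the original exchange): within the
formal model the limit is infinity, but a calculator fed ever smaller
nonzero values of x only ever shows very large, finite values of 1/x.
A minimal Python sketch:

for exponent in (1, 10, 100, 300):
    x = 10.0 ** (-exponent)    # a small but nonzero x
    print(exponent, 1.0 / x)   # very large, and still finite

# 1e-400 is below what IEEE 754 doubles can represent, so it
# underflows to exactly 0.0; "infinity" is never displayed for
# any nonzero input the machine can actually hold.
print(1e-400 == 0.0)           # True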
To be at peace with the idea of HLUTs, one must think about how a
mathematical concept can be axiomatized in terms of reality.
But why must we axiomatize HLUTs in reality?
Because HLUTs talk about *behavior*, and that's *not* a mathematical
concept, it is a real-world concept! To behave like Albert Einstein
you have to stick your tongue out in front of a certain photographer.
How can you say something that mixes mathematics and the real world
without using some kind of real-world grounding? I'm not questioning
HLUTs, I accept their possibility. My argument (c) above, which you
took out of context, was just saying that HLUTs demand omniscience to
be reasonable. There's nothing in that phrase about accepting their
possibility or not. You either agree with it or you don't.
> No concept
>of reality needs to be broken in order to entertain this possibility.
This is the same thing as postulating that our souls go to heaven or
hell, depending on our *behavior* here on earth.
Regards,
Sergio Navega.
From: daryl@cogentex.com (Daryl McCullough)
Subject: Re: What I Think HLUTs Mean
Date: 16 Mar 1999 00:00:00 GMT
Message-ID: <7clu7r$9eq@edrn.newsguy.com>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net>
Organization: CoGenTex, Inc.
Newsgroups: comp.ai.philosophy
Sergio says...
>This is an important point. That shift you mention is at least
>one century old. The behaviorist vision of assessing intelligence
>only from behavior, although apparently more scientific, began
>to be debunked when Santiago Ramón y Cajal started his detailed
>study of the nervous system, early this century. For more than
>20 years now, PET scans and fMRI have been giving
>important indications of how the brain works.
I disagree. There hasn't been a shift away from using behavior
to *assess* intelligence. Nobody uses PET scans or MRIs to figure
out who is admitted into Harvard. They use behavioral tests--how
well does the applicant do on tests, how well do they answer
oral questions from an interviewer, etc.
What has been pretty much completely rejected is using
behaviorism to *understand* the brain. There is a big
difference between (A) using behavior as a criterion
for intelligence, and (B) using behavior as (the sole)
tool for exploring *how* the brain works. (B) has
been rejected (you *can't* infer mechanisms from
behavior alone) but not (A).
>So the shift is in fact old-hat, and it is usual today
>to consider intelligence in terms of internal processes
>(mental states) and its relation with behavior, not
>only stimulus/response.
The HLUT is not about stimulus/response, at least not
in the sense of Skinnerian behaviorism. In a stimulus/response
model, the response is a function of only the last stimulus,
while the HLUT allows it to be a function of the entire
history of inputs so far.
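A toy sketch (added here, with made-up stimuli and responses) of the
distinction being drawn: a Skinnerian stimulus/response table is
indexed by the last stimulus alone, while the HLUT discussed in these
threads is indexed by the whole input history, so two histories that
end in the same stimulus can receive different responses.

from typing import Dict, Tuple

# Memoryless map: the response depends only on the last stimulus.
sr_table: Dict[str, str] = {"bell": "salivate", "light": "ignore"}

# History-indexed table: the response depends on the whole sequence.
hlut: Dict[Tuple[str, ...], str] = {
    ("bell",): "ignore",            # bell with no prior food
    ("food", "bell"): "salivate",   # same last stimulus, other history
}

def respond_sr(history: Tuple[str, ...]) -> str:
    return sr_table[history[-1]]    # looks only at the most recent input

def respond_hlut(history: Tuple[str, ...]) -> str:
    return hlut[history]            # looks at everything received so far

for h in (("bell",), ("food", "bell")):
    print(h, respond_sr(h), respond_hlut(h))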
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: What I Think HLUTs Mean
Date: 19 Mar 1999 00:00:00 GMT
Message-ID: <36f25043@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net> <7ch9rg$a2g$1@nnrp1.dejanews.com>
<36ed2b15@news3.us.ibm.net> <7ckdhh$kp$1@nnrp1.dejanews.com>
<36ee6c82@news3.us.ibm.net> <7clu7r$9eq@edrn.newsguy.com>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 19 Mar 1999 13:25:23 GMT, 200.229.240.132
Organization: SilWis
Newsgroups: comp.ai.philosophy
Daryl McCullough wrote in message <7clu7r$9eq@edrn.newsguy.com>...
>Sergio says...
>
>>This is an important point. That shift you mention is at least
>>one century old. The behaviorist vision of assessing intelligence
>>only from behavior, although apparently more scientific, began
>>to be debunked when Santiago Ramón y Cajal started his detailed
>>study of the nervous system, early this century. For more than
>>20 years now, PET scans and fMRI have been giving
>>important indications of how the brain works.
>
>I disagree. There hasn't been a shift away from using behavior
>to *assess* intelligence.
Sorry, I misused that word; I wanted to say something along the
lines of "understanding", not assessing. The latter can only be
done through behavior (so far).
>Nobody uses PET scans or MRIs to figure
>out who is admitted into Harvard. They use behavioral tests--how
>well does the applicant do on tests, how well do they answer
>oral questions from an interviewer, etc.
>
I agree. In fact, given our current knowledge about the brain,
anything different from behavioral tests would be as dangerous
as, for instance, "graphology", the "assessment" of one's
personality from the way one's handwriting looks. I would rather
be a hard-core behaviorist than use such crap.
>What has been pretty much completely rejected is using
>behaviorism to *understand* the brain. There is a big
>difference between (A) using behavior as a criterion
>for intelligence, and (B) using behavior as (the sole)
>tool for exploring *how* the brain works. (B) has
>been rejected (you *can't* infer mechanisms from
>behavior alone) but not (A).
>
I agree entirely.
>>So the shift is in fact old-hat, and it is usual today
>>to consider intelligence in terms of internal processes
>>(mental states) and its relation with behavior, not
>>only stimulus/response.
>
>The HLUT is not about stimulus/response, at least not
>in the sense of Skinnerian behaviorism. In a stimulus/response
>model, the response is a function of only the last stimulus,
>while the HLUT allows it to be a function of the entire
>history of inputs so far.
>
I have some doubts about this. Any behaviorist analysis of
stimulus/response must somehow account for previous experiences
of the organism (in fact, conditioning, as used by behaviorists,
depends heavily on the incorporation of *previous* stimuli
in order to explain the conditioned response to a similar
stimulus; Pavlov's dogs are a simple example of this).
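One standard way to make this point concrete (added here; neither the
rule nor the numbers come from the thread) is a simplified
Rescorla-Wagner-style update, in which the conditioned response
depends on an association strength that is a running summary of
every previous trial, not just the last stimulus:

def association_strength(trials, alpha=0.3, lam=1.0):
    """Association strength V after a sequence of conditioning trials.
    trials: booleans, True when the bell was followed by food."""
    v = 0.0
    for reinforced in trials:
        target = lam if reinforced else 0.0
        v += alpha * (target - v)   # V moves a fraction alpha toward target
    return v

print(round(association_strength([True] * 10), 2))                 # 0.97
print(round(association_strength([True] * 10 + [False] * 10), 2))  # 0.03

A HLUT stores the history explicitly; a conditioning model compresses
it into state such as V; either way the response is not a function of
the last stimulus alone.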
Regards,
Sergio Navega.
From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: Why HLUTs are not intelligent
Date: 18 Mar 1999 00:00:00 GMT
Message-ID: <36f0fc01@news3.us.ibm.net>
References: <36e80e7c@news3.us.ibm.net> <7c9qnu$5g3$1@nnrp1.dejanews.com>
<36e92939@news3.us.ibm.net> <7ccpk5$o25$1@nnrp1.dejanews.com>
<36ea8c14@news3.us.ibm.net>
<Pine.BSF.4.05.9903180114010.25912-100000@systemet.cybercity.dk>
X-Notice: should be reported to postmaster@ibm.net
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3
X-Complaints-To: postmaster@ibm.net
X-Trace: 18 Mar 1999 13:13:37 GMT, 129.37.182.242
Organization: SilWis
Newsgroups: comp.ai.philosophy
James Avery wrote in message ...
>
>On Sat, 13 Mar 1999, Sergio Navega wrote:
>
>> Thought Experiment
>> ------------------
>>
>> Suppose we take the HLUT of Sergio to a different universe
>> (all you guys that are discussing HLUTs cannot say that
>> this is an "unreasonable" thought experiment!).
>>
>> Suppose, also, that this universe works with different
>> laws of physics than the one we're in. As an example,
>> in this contrived universe if you drop a rock it will
>> not fall on the ground, it will rise up to a height
>> of 4 meters and stabilize there (don't ask me why,
>> it's just the way that universe works!).
>>
>> It is obvious (I hope you all agree!) that the HLUT
>> of Sergio in the previous universe (ours) will be *useless*
>> in this universe. All laws of physics are different,
>> all "models" the HLUT has do not correspond to the
>> circumstances of this new universe. So, that HLUT
>> is useless in this universe.
>
>
>Ah! You seem blissfully to have forgotten the H in HLUT. :)
>
Yes, sadly I forgot the H! It was not the first time; it may have
been the second or third time in these threads. The fact is that,
during my "learning" phase about what HLUTs meant, I tried to
wriggle out of their grip like a fish out of water.
Now I understand what HLUTs mean: nothing.
You can't reason about HLUTs because they are neither mathematical
nor real-world; they are contrived to be successful,
just like creationist explanations of the world.
Do I agree that HLUTs represent all possible behaviors?
Of course they do. Can we extract anything useful from HLUTs?
Only that mathematicians are also human, and that means that
some of them eventually follow certain religions.
>[snip obvious part]
>Last time I read comp.ai.philosophy, btw, was in January or so. It feels
>like I never left! :)
>
I bet you that three months from now you will have that
sensation again. However, you'll find me arguing differently, and that
may make a difference to the fate of some silly threads (I, for
one, will not start threads about HLUTs anymore :-).
Regards,
Sergio Navega.