From: "Sergio Navega" <snavega@ibm.net>
Subject: Re: How to modify Copycat's slipnet.
Date: 23 Jan 1999 00:00:00 GMT
Message-ID: <36a9d3a1@news3.ibm.net>
References: <36A89772.6C11@NoSpam.casema.net>
<36a9392d.21745017@news.select.net>
Organization: SilWis
Newsgroups: alt.fan.hofstadter,comp.ai,comp.ai.alife
Joshua Scholar wrote in message <36a9392d.21745017@news.select.net>...
>
>Let this be a lesson to all: don't post text created in a text editor
>that puts carriage returns at the end of every line; the result is
>unreadable.
>
>In order to read Stephan's post I had to spend 10 minutes reformatting
>it. In order to save others the trouble I'm posting the reformatted
>text here:
>
Thank you, Joshua, this definitely helped me in taking a second
look at this interesting post.
steve wrote in message <36A89772.6C11@NoSpam.casema.net>...
>Hello people,
>
>This posting contains a preliminary notion concerning Hofstadter's and
>Mitchell's copycat project (Hofstadter and FARG, 1995) and what is
>known in the AI community as reinforcement learning. I would greatly
>appreciate any form of feedback that you may wish to give. From here
>on references to Hofstadter or copycat concern 'Fluid Concepts and
>Creative Analogies' (Hofstadter and FARG, 1995).
>
>My background:
>I am an Artificial Intelligence student at the university of Utrecht
>in Utrecht, the Netherlands, currently looking for a subject to write
>a graduate thesis on. I believe Hofstadter's work on creativity,
>analogy, perception and representation, and the success of his ideas
>in computer models such as copycat, to lie at the very heart of AI, or
>at least where its very heart should be.
>
I agree that Hofstadter's ideas are very close to the core of a
capable AI architecture. I only regret that the work somewhat froze
in time, with few researchers thinking about further developments.
Your effort, then, is commendable and I wish you luck.
>Notion:
>It is desirable to extend a copycat-like architecture, which consists of
>a slipnet and a workspace filled with codelets, with an element that
>is able to modify the slipnet in such a way that new concepts (nodes)
>can be incorporated into it. A new concept is a node whose effect
>cannot be brought about by any viewpoint and is therefore not some
>combination of already present concepts. Later I will present a short
>example.
>
I agree that this is really the first and most important point
in which Copycat should be altered.
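To fix terminology before going on, a slipnet node can be pictured
roughly like this (a simplified Python sketch; these are made-up names,
not the actual Copycat data structures):

```python
# Simplified sketch of a slipnet node and its links; not the
# actual Copycat implementation. All names are hypothetical.

class SlipnetNode:
    def __init__(self, name, conceptual_depth):
        self.name = name
        self.conceptual_depth = conceptual_depth  # abstractness measure
        self.activation = 0.0
        self.links = []  # (neighbor, length, label) triples

    def link_to(self, other, length, label=None):
        """Shorter links mean closer concepts, hence easier slippage."""
        self.links.append((other, length, label))

# e.g. 'successor' and 'predecessor' are opposites, a short hop apart
succ = SlipnetNode("successor", conceptual_depth=50)
pred = SlipnetNode("predecessor", conceptual_depth=50)
succ.link_to(pred, length=30, label="opposite")
```

Extending the slipnet then means creating nodes like these, and their
links, at run time rather than at design time.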
>Why:
>Why would such a concept generating module be needed? I can think of a
>few reasons:
>- In more complex domains, as opposed to the letter-like microdomain
>Hofstadter uses, it will be infeasible and undesirable to construct
>the slipnet in advance. It will not be clear what the set of concepts
>to use should be, what the depth of each concept should be, how the
>nodes should be linked, and how the links should be weighted and
>labelled. Of course these decisions can be made empirically, but it
>would be advantageous if the system could make the decisions itself.
>
I agree entirely. In my vision, this has ramifications for the
innateness-of-language debate. Several cognitive scientists assume
that we are born with significant knowledge of language in advance.
That's similar to a previously loaded slipnet. During my studies,
I found that the innateness hypothesis has several serious
problems. If that is correct, our brain would be pretty much
equivalent to an almost empty slipnet, and concept creation,
obviously, would be a very important part of the mechanism.
>- Complex domains change in such a way that sometimes new concepts are
>needed in order to have significant behaviour. Also the conceptual
>depth, i.e. how far a concept is from being directly perceivable in a
>situation, should be context dependent to meet changes or pitfalls in
>the environment. Consider, for example, repetitive behaviour, i.e.
>behaviour the system exhibits when stuck in a local optimum: In the
>chain that led to the repetition the conceptual depths of the concepts
>involved play a role. From an architectural element capable of extending
>the slipnet we may expect a way of determining conceptual depths and
>thus a means out of repetition. Hofstadter addresses the use of themes
>to avoid repetition (The Metacat Project:.., 1998).
>
>- There is evidence to suggest that some higher order categories
>(concepts) are grounded in what is called categorical perception, or
>CP (See, e.g., Tijsseling A.G. A hybrid framework for categorisation,
>1994. Or, for more on CP, see Harnad, S. Categorical Perception: The
>groundwork of Cognition, 1990). Some CP properties are the result of
>learning during the life of an individual, but not all. CP properties
>are also a result of evolution. This means that there is a level where
>conceptual decisions (decision about concepts) are being made
>non-conceptually, namely randomly (a mutation is random, of course
>natural selection decides which mutations are viable). In copycat all
>workspace objects, i.e. perceptual constructs, are instances of
>slipnet nodes. There are no random constructs that, conceptually, lie
>outside of the system's dormant repertoire.
>
That seems to be a good way to go ahead. Category perception and
creation are, in my opinion, among the most important aspects of any
intelligent entity, perhaps even more important than the analogical
mapping that Copycat proposes to solve. Copycat could hardly fail
to benefit from this kind of category creation.
Also, regarding the use of randomness, Copycat is important because
it was one of the first to seriously propose a place for it. The
parallel terraced scan is not only an algorithmic idea; it is, in my
opinion, something with psychological plausibility. And here randomness
plays an important part. I just find that there's more to explore
with small amounts of randomness: namely, in "tentative" category
creation. Over time, some of these "attempted categories" would
receive reinforcement from the entity's experiences with the
environment, and they would grow (or fade away) according to
their usefulness.
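This grow-or-fade idea can be sketched in a few lines of Python (all
names and threshold values here are hypothetical, invented only for
illustration): each candidate concept carries a strength that is
reinforced whenever the concept proves useful and decays otherwise;
past a threshold it is promoted into the slipnet, and below a floor
it is pruned.

```python
# Sketch of reinforcement-driven "tentative category" creation.
# All names and constants are hypothetical, not Copycat's actual code.

PROMOTE_AT = 5.0   # strength needed to join the slipnet "officially"
PRUNE_AT = 0.5     # strength below which a candidate fades away
DECAY = 0.9        # per-step decay applied to every candidate

class TentativeConcept:
    def __init__(self, name):
        self.name = name
        self.strength = 1.0  # starts weak

    def reinforce(self, reward=1.0):
        self.strength += reward

class ConceptPool:
    """Holds candidate concepts until they are promoted or pruned."""
    def __init__(self):
        self.candidates = {}
        self.slipnet = set()  # promoted (permanent) concepts

    def observe(self, name):
        """Called whenever a regularity is noticed in the workspace."""
        c = self.candidates.setdefault(name, TentativeConcept(name))
        c.reinforce()

    def step(self):
        """Decay all candidates, then promote or prune."""
        for name, c in list(self.candidates.items()):
            c.strength *= DECAY
            if c.strength >= PROMOTE_AT:
                self.slipnet.add(name)
                del self.candidates[name]
            elif c.strength < PRUNE_AT:
                del self.candidates[name]
```

A regularity observed repeatedly is promoted; one seen only once
decays away, which is the "fade away" case above.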
>- In short: I believe that AI systems, if they wish to become more
>successful, must make use of fluid concepts. To have fluid concepts
>requires one to have the architectural equivalence of a slipnet and
>workspace. Any real world domain in which the system performs will
>however be so complex that constructing the entire slipnet in advance
>will be undoable. A system capable of modifying or constructing its
>slipnet will be usable in larger domains, and it will also be able, to
>a certain degree, to cope with changes in its environment.
>
And that's exactly what needs to be done, and exactly what current
AI systems fail to deliver. The example of Expert Systems is especially
appropriate: loaded with lots of fixed knowledge and without adaptive
capacity, they start becoming useless the day after deployment.
>Example:
>In Hofstadter's letter domain we could imagine a change in the
>environment in the following way: Say, for some reason, we would find
>a regularity based on an alphabetical distance of two, aesthetically
>pleasing. We would therefore want copycat to answer the following
>question; drh -> drj, aff -> ?, with; afh, since j is the second
>letter after h (j is successor-successor of h). Since all workspace
>constructs are instances of nodes in the slipnet, and a concept for
>'double successor' is not present, solutions based on an alphabetical
>distance of two will not be found. A system capable of finding the
>'double successor' concept, or some other concept which happens to be
>important, will also be capable of achieving significant behaviour in
>other domains, starting with a rudimentary, or even empty, slipnet.
>That, at least, is the hope.
>
I think you're exactly on the mark. Copycat would improve significantly
if allowed to do that. The secret, IMHO, is allowing Copycat to perceive
regularities. That means adding an "interaction module" through which
Copycat could "see" its environment. Through interaction with the
environment and perception of things that repeat (for instance, the use
of this double-successor concept in more than one instance), Copycat
should perceive that it could be interesting to maintain a temporary
version of this concept. Then, after enough confirmations, the concept
could be "officially" incorporated into the slipnet.
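The double-successor case itself is small enough to sketch: from the
example drh -> drj, infer the constant alphabetic offset and apply it
to a new string (a toy Python sketch with hypothetical names; it only
handles single-letter changes and ignores wrap-around at 'z'):

```python
# Sketch: inducing a fixed alphabetic offset from example pairs,
# as in drh -> drj (distance 2). All names are hypothetical; this
# handles only single-letter changes, with no wrap-around at 'z'.

def letter_distance(a, b):
    return ord(b) - ord(a)

def induce_offset(pairs):
    """Return the constant offset of the changed letter, or None."""
    offsets = set()
    for src, dst in pairs:
        # find positions where the strings differ
        diffs = [(s, d) for s, d in zip(src, dst) if s != d]
        if len(diffs) != 1:
            return None  # only single-letter changes handled here
        s, d = diffs[0]
        offsets.add(letter_distance(s, d))
    return offsets.pop() if len(offsets) == 1 else None

def apply_offset(src, position, offset):
    letters = list(src)
    letters[position] = chr(ord(letters[position]) + offset)
    return "".join(letters)

# drh -> drj gives offset 2 on the last letter; applied to aff
offset = induce_offset([("drh", "drj")])
answer = apply_offset("aff", 2, offset)  # afh
```

Repeated confirmations of the same offset across examples are what
would justify promoting "double successor" into the slipnet.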
>How:
>What should a concept-generating module look like? And how can it be
>incorporated into a copycat-like architecture? In this paragraph I
>will give some functional musings. The module I have in mind is not a
>separate component which you could stick on a copycat-like
>architecture, only on a functional level can you speak of a module.
>Its workings must take place in the same structure which gives rise
>to the probabilistic, statistically significant, emergent behaviour of
>copycat.
>
>As mentioned above, for a concept to be new it must not be some
>combination of already existing concepts, they just don't provide the
>information. A new concept must therefore be born in the workspace.
>Only codelets modify the contents of the workspace. There are two
>types of codelets; bottom-up and top-down ones. Top-down codelets are
>spawned by nodes in the slipnet and act as proxies for the current
>viewpoint in the slipnet. Bottom-up codelets are seekers and look for
>interesting potential perceptual constructs.
>[snip]
I can agree with your overall strategy, but I find it lacking a
*very important* component, so important that it may be the difference
between success and failure. I'll try to explain in clear terms.
One of the fundamental missions of anything that proposes to be
intelligent is the understanding of its surrounding world. Without a
link to its world, the mechanism will not reflect intelligence as
we understand it. A link with the world will provide a form of
*interaction* with the world. That's the essential thing Copycat
is missing. It is too self-contained: it works only on the
perceptions already built in by its creators (the concepts).
Obviously, in the context we're discussing here, interaction
doesn't mean being able to answer our questions in English, nor
moving a robotic head to better "pick up" information from a scene.
But it is sort of similar.
Suppose Copycat started with just some concepts. How could it "learn"
the remaining, missing concepts? Through interaction with the "world",
and the world may be, here, an operator at a keyboard. In a sense,
Copycat should be able to capture enough information from this
world to allow it to "model" the world, and then create more.
Using some kind of primitive "language", our task would be to
supply examples, associations, and clues that lead Copycat to conclude
(induce would be a better term) a number of tentative concepts.
Which of those concepts would survive in the long term? I think
you've also sketched the solution:
>We are now in the domain of reinforcement learning, or tabula rasa
This is where I see reinforcement learning being applied. Not only
inside Copycat itself, but between it and the environment.
What could such an architecture do? I haven't worked on this, but
I believe it could do the following:
a) Learn morphological structure
impress, impression, impressibility, impressionable
b) Learn past tense of verbs
walk, walked   { regular, add 'ed' }
open, opened
beat, beat     { irregular, equal }
cut, cut
send, sent     { irregular, change last letter to 't' }
build, built
sing, sang     { irregular, change vowel }
sting, stung
c) Suggest new words, subject to some pre-requisites
Say, 'respect' and 'the act of being or having'
giving 'respectableness'
d) Learn addition, subtraction, multiplication, division, just like
a child does. The starting point must, obviously, be minimal (I guess
that knowing how to count up to 4 and having the concept of successor
would be enough).
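Case (b) can be sketched concretely: a classifier that sorts
present/past pairs into the transformation patterns listed above
(a toy Python sketch; the function name and the pattern tests are
my own invention, covering only the examples given):

```python
# Sketch: classifying past-tense formation patterns from example
# pairs, following the categories listed above. Hypothetical code
# covering only these simple patterns.

VOWELS = set("aeiou")

def classify(present, past):
    if past == present + "ed":
        return "regular: add 'ed'"
    if past == present:
        return "irregular: equal"
    if past == present[:-1] + "t":
        return "irregular: change last letter to 't'"
    if len(past) == len(present):
        # same length; if only vowels differ, call it a vowel change
        diffs = [(a, b) for a, b in zip(present, past) if a != b]
        if diffs and all(a in VOWELS and b in VOWELS for a, b in diffs):
            return "irregular: change vowel"
    return "unknown"
```

A system like the one discussed here would have to *induce* these
rules from the pairs, rather than having them hand-coded; the sketch
only shows the target categories.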
This suggests another point where Copycat should be altered: it must
control some sort of long-term memory, where it can store concepts and
partial discoveries.
Everything I have said so far is mere speculation, and some of these
ideas are pretty difficult to develop. But I guess it is a good way
to continue Copycat's tradition.
For quite some time I kept Copycat as the most interesting
architecture with clear concern for analogical reasoning and
a clever discovery method.
Unfortunately, after studying some aspects of psychology and
neuroscience, I came to see that these complements to Copycat are
very important, to the point where I started suspecting that instead
of "patching" Copycat one would do better to "disassemble" it and
use its parts (which means, use its fundamental ideas) in another
architecture.
Regards,
Sergio Navega.