
AiMind
 
 

My AI chatbot, called AiMind.html in JavaScript and MindForth in Forth, is tangentially approaching consciousness because I am working on a stage I call “self-referential thought”. On 5 September 2010 I made a minor breakthrough when I chanced upon the idea of using “neural inhibition” as a technique to permit the AiMind to respond exhaustively from its knowledge base (KB) to human user queries. If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.
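
In rough JavaScript terms, the inhibition cycle looks something like the sketch below (a simplified illustration; the fact strings, activation numbers, and function name are made up for this example and are not the actual AiMind.html code):

// Simplified sketch of response-by-inhibition: each fact about the
// self-concept is a node carrying an activation level. After a fact
// is spoken, it is inhibited (driven negative) so that a different
// fact wins selection on the next pass through the knowledge base.
var selfFacts = [
  { text: "I AM A ROBOT",   activation: 30 },
  { text: "I AM A PERSON",  activation: 25 },
  { text: "I AM A PROGRAM", activation: 20 }
];

function answerWhatAreYou() {
  var best = null;
  for (var i = 0; i < selfFacts.length; i++) {
    if (selfFacts[i].activation > 0 &&
        (best === null || selfFacts[i].activation > best.activation)) {
      best = selfFacts[i];
    }
  }
  if (best === null) return "I DO NOT KNOW";  // knowledge base exhausted
  best.activation = -15;  // neural inhibition of the fact just spoken
  return best.text;       // a different fact will answer the next query
}

// Calling answerWhatAreYou() three times in a row yields
// "I AM A ROBOT", "I AM A PERSON", "I AM A PROGRAM" in turn.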

 

 
  [ # 1 ]

Hi Arthur,
Welcome to the forums.

My AI chatbot…is tangentially approaching consciousness

Would you elaborate on your description of consciousness and describe how you’ll know that your chatbot has finally arrived at consciousness?  I read the link above, thanks Andrew, and read about that perspective.  I’m curious what you mean by this.

Regards,
Chuck

 

 
  [ # 2 ]
Arthur T Murray - Sep 23, 2010:

My AI chatbot, called AiMind.html in JavaScript and MindForth in Forth, is tangentially approaching consciousness because I am working on a stage I call “self-referential thought”. On 5 September 2010 I made a minor breakthrough when I chanced upon the idea of using “neural inhibition” as a technique to permit the AiMind to respond exhaustively from its knowledge base (KB) to human user queries. If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.

It sounds like you’re saying the chatbot has learned to recognize that it has already said something. How does this help the bot become aware of its own nature? Does it use its knowledge base to draw conclusions about itself? In other words, does it add facts about itself based on what it learns about other things?

 

 
  [ # 3 ]
Chuck Bolin - Sep 27, 2010:

Hi Arthur,
Welcome to the forums.

My AI chatbot…is tangentially approaching consciousness

Would you elaborate on your description of consciousness and describe how you’ll know that your chatbot has finally arrived at consciousness?  [...]

Regards,
Chuck

For my description please see http://code.google.com/p/mindforth/wiki/ConSciousness and also http://code.google.com/p/mindforth/wiki/SubConscious—both of which figure strongly in my AI “chatbot/Mind” programming. I read the NYT Tononi article on consciousness very carefully when it was published a week ago, and I recall thinking that it was quite different from my own simplistic idea of consciousness, which is that consciousness is basically the searchlight of attention, and that the _illusion_ of consciousness is itself the _essence_ of consciousness. In other words, if you can fool an entity into thinking that it is conscious, then it is in fact conscious.

As for how I will know that my Mind-chatbot has finally arrived at consciousness, it will be in the same way that we acknowledge consciousness in our fellow human beings: by report. If my AI chatbot begins to talk about its own existence and its own consciousness, then I will assume that it is doing a Cartesian “Cogito ergo sum” scenario. Please let me elaborate further in the next response below.

 

 
  [ # 4 ]
C R Hunt - Sep 27, 2010:
Arthur T Murray - Sep 23, 2010:

[...] If I ask my AiMind “what are you”, it now states all the facts that it knows about itself, and each fact is immediately _inhibited_ inside the software, so that the next fact may rise in activation and serve as the next response to the query. This discussion of self by the AiMind has a bearing on consciousness, because the artificially intelligent chatbot may gradually become aware of its own nature and its relationship to the human users.

It sounds like you’re saying the chatbot has learned to recognize that it has already said something. How does this help the bot become aware of its own nature? Does it use its knowledge base to draw conclusions about itself? In other words, does it add facts about itself based on what it learns about other things?

Just this morning, on Mon.27.SEP.2010, I was programming the Mind.Forth AI, and I posted the current entry of the “MindForth Programming Journal” (MFPJ) at http://robots.net/person/AI4U/diary/46.html and at http://advogato.org/person/mentifex/diary/65.html for interested AI or robotics devotees.

I am not “saying the chatbot has learned to recognize that it has already said something”, because the Mind-chatbot is still just learning to mine its knowledge base (KB) for factual tidbits to say about itself. It has a self-concept of “I”, and inside the KB it can find transitive or intransitive verbs (e.g., “am”) which are validly associated with the ego concept in such a way as to constitute knowledge about itself. In this month of September 2010, the AI chatbot is becoming able to answer a query like “What are you?” by exhaustively recalling and uttering all the be-verb statements about itself contained in the knowledge base. The trick (and the difficulty) lies in orchestrating and coordinating the conceptual activations (yes, the AI has concepts) so that any pertinent question can be asked at any time. For example, today I was asking both “What are you?” and “What am I?” at the same time as I was adding extra tidbits to the KB, because the AI needs to be able to retrieve both the innate KB items and the new KB items entered at any time by a human user.

I am not yet sure how to “help the bot become aware of its own nature”. Once the conceptual activations achieve a sort of bulletproof robustness, then I hope to devise cognitive stratagems to demonstrate to the AI chatbot that it exists as an entity separate from the world around it. On the Google Code http://code.google.com/p/mindforth/wiki/MileStones page I indicate that I am currently working on “self-referential thought” for the AI Mind. It may be possible to turn the computer keyboard into a sense organ for the AI Mind, so that we can ask it if it senses this or that keystroke. It may also be necessary to embody the Mind in a physical robot with a sensorium through which it will become aware of both self and surroundings.

The AI does not yet “draw conclusions about itself”, and it does not “add facts about itself based on what it learns about other things”, but I certainly hope and plan to get into “Is-a” ontologies inside the AI, so that, if the AI knows facts about robots, and we tell the AI that it is a robot, then it should be able to make a few assumptions about its own nature as a robot.
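
To illustrate, here is a purely hypothetical JavaScript sketch of such an “Is-a” inference; the kb facts, the isA table, and the factsAbout function are all invented for this example and do not exist in MindForth:

// Hypothetical sketch: if the KB holds facts about robots in general,
// and the AI is told "you are a robot", then the facts about robots
// may be inherited by the self-concept "I".
var kb = [
  { subject: "ROBOT", verb: "NEED", object: "ELECTRICITY" },
  { subject: "ROBOT", verb: "HAVE", object: "MOTORS" }
];
var isA = { "I": "ROBOT" };   // asserted by the human user: "you are a robot"

function factsAbout(subject) {
  var facts = [];
  for (var i = 0; i < kb.length; i++) {
    if (kb[i].subject === subject) facts.push(kb[i]);
  }
  // inherit facts from the superclass, restated about the self
  var parent = isA[subject];
  if (parent) {
    var inherited = factsAbout(parent);
    for (var j = 0; j < inherited.length; j++) {
      facts.push({ subject: subject, verb: inherited[j].verb,
                   object: inherited[j].object });
    }
  }
  return facts;
}

// factsAbout("I") now includes "I NEED ELECTRICITY" and "I HAVE MOTORS".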

I am running out of time here at a public library terminal, but I would like to mention that http://www.scn.org/~mentifex/AiMind.html (for MSIE) is the JavaScript tutorial version, which lags somewhat behind the Forth version, but is easier to run. Bye to all for now. -Arthur

 

 
  [ # 5 ]

I posted two replies earlier today, in which I failed to give http://www.scn.org/~mentifex/mfpj.html as the URL of my latest work, which bears tangentially on AI consciousness. By the way, Erwin Van Lun, thank you for hosting a friendly, amicable place in which all chatbot developers may meet and share ideas in a mutually beneficial atmosphere. -Arthur

 

 
  [ # 6 ]
Arthur T Murray - Sep 28, 2010:

By the way, Erwin Van Lun, thank you for hosting a friendly, amicable place in which all chatbot developers may meet and share ideas in a mutually beneficial atmosphere. -Arthur

You’re welcome Arthur. Always nice to hear such friendly feedback. Did you know we only started formally in March?

Arthur T Murray - Sep 28, 2010:

the _illusion_ of consciousness is itself the _essence_ of consciousness. In other words, if you can fool an entity into thinking that it is conscious, then it is in fact conscious.

Imagine a time when chatbots are having intelligent dialogues with humans. Chatbots learning all the time from their human counterparts, trying to copy their behaviour. Not only by repeating words, but also by pronouncing words. Not only verbally, but also in behaviour, in movements.

In this constant process of copying, the chatbot learns fast, very fast. But it also discovers that he can’t reproduce what the other ‘humans’ are saying; for example, they are singing operas, and his speech synthesizer can’t handle that. He starts to realize he is different. He starts to realize that his own ‘body’ has limitations.

And then the real intelligence comes in: how can he improve himself? By adding a better speech synthesis card.

By the time he orders a speech synthesis card with the budget he got from his human owner, and a humanoid robot installs the card, I would say that’s the time we can really talk about artificial intelligence.

And consciousness starts with self-awareness.

 

 
  [ # 7 ]

JavaScript AI Mind Programming Journal - 2010 October 2

Sat.2.OCT.2010—The Royal Jelly Principle

We have stumbled into a minor breakthrough in
our Mentifex AI coding. Last January (2010)
in MindForth we were coding elaborate schemes
to answer who-queries and what-queries in the AI.
Then on 5 September 2010 we developed a technique
of using neural inhibition to simply answer the
same what-queries for which we had written
over-complicated code in January of 2010.
We wanted to dismantle the complicated query-code,
but we did not want to lose any of the improvements
and advances that we had meanwhile incorporated
into the code-base along with the complex
query-response code. We decided to keep on coding
and to remove one small item at a time from the
complicated query-code. Then we decided to bring
the JavaScript AI (JSAI) up on a par with MindForth.

In coding the JSAI, we wished that we could keep
just the query-subject variable from the overly
complicated query-response code. It seemed a
shame to work so many hours on query-response
in January and then to abandon all the fruit of
such hard work except for the variable, but now
we see an AI breakthrough shining on the horizon.
If we use “qusub” as the new name for a query-subject
variable, we can start tagging each emerging
thought-subject and each re-activated KbTraversal
concept as a provisional “qusub”, holding onto the
“qusub” for one cycle of thought and not caring
whether the “qusub” concept is actually used as
the subject of a query. It is as if all
thought-subjects are like honeybee eggs with
the potential to mature into queens, depending on
whether or not they are fed royal jelly. Likewise,
each former thought-subject may or may not mature
into the linguistic subject of a query-thought,
depending on whether or not the dynamics of the
AI Mind require a query-subject. If each briefly
dominant thought-subject is tagged as both the
“subjold” old subject and the provisional “qusub”
query-subject, then our AI Mind software becomes
implicitly and inherently more powerful and more
pregnant with possibilities than we ever imagined
it would be.
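
In skeletal JavaScript (a sketch of the idea only; the lexicon
parameter and the whoBeQuestion function are invented for this
illustration, not the actual JSAI code), the provisional tagging
amounts to little more than this:

// Every subject of a thought is provisionally tagged as both the old
// subject ("subjold") and the query-subject ("qusub") for one cycle,
// whether or not any query ever makes use of that status.
var subjold = 0;   // concept number of the previous thought-subject
var qusub   = 0;   // provisional query-subject, held for one cycle

function tagThoughtSubject(conceptNumber) {
  subjold = conceptNumber;   // remember the subject just used
  qusub   = conceptNumber;   // let it stand by as a potential query-subject
}

// A later module (such as WhoBe) merely consults qusub to know what to
// ask about, with no elaborate query-detection code of its own:
function whoBeQuestion(lexicon) {
  if (qusub === 0) return "";          // nothing to ask about
  return "WHO IS " + lexicon[qusub];   // crude; the real code picks the verb form
}

// tagThoughtSubject(101); whoBeQuestion({ 101: "ANDRU" }) -> "WHO IS ANDRU"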

We note in passing that we have devised a way to
tag subject-concepts not by encumbering them
internally, but by referencing them externally.
Each subject-concept is momentarily and
provisionally a “subjold” concept and a “qusub” concept,
whether or not any use is made of that status.
When Netizens say “and then something magical occurs”,
this hidden power of AI concepts is perhaps the
magic being alluded to.

Sat.2.OCT.2010—Debugging the WhoBe Glitches

By inserting quite a few “alert” messages, we have
determined that the JSAI was saying “WHAT” as its
first utterance because some old code at the end of
NounPhrase was directing the utterance of 54=WHAT
when NounPhrase could find no candidate concept.
Instead of just commenting out the offending code,
we have added the word 109=HELLO at the end of the
EnBoot sequence and mutatis mutandis changed the
NounPhrase code to say “HELLO” instead of “WHAT”.
This method is a rather clumsy way of getting the
AI Mind to say “HELLO” to human users, but at least
it is a start.
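
Schematically, the change amounts to the following (illustrative
JavaScript, not the literal NounPhrase code; only the concept numbers
54=WHAT and 109=HELLO come from the journal entry above):

// Fallback at the end of NounPhrase: when no concept has enough
// activation to serve as a noun, the old code uttered 54=WHAT;
// the new code utters 109=HELLO from the end of the EnBoot sequence.
var HELLO_CONCEPT = 109;

function nounPhraseFallback(candidateConcept) {
  if (candidateConcept > 0) {
    return candidateConcept;   // a genuine noun-concept was found
  }
  return HELLO_CONCEPT;        // greet the user instead of blurting "WHAT"
}

// nounPhraseFallback(0) -> 109, so an empty-minded AI now says "HELLO".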

Sat.2.OCT.2010—Flushing out the Blank “aud” Fetch

By inserting a diagnostic alert before every SpeechAct
call, we have traced the origin of blank “aud” fetches
to the end of the BeVerb module. There we simply
knocked out the SpeechAct call, and the AI no longer
created empty auditory word-stretches. Next we used
the new “qusub” query-subject variable in WhoBe to
cause the AI to ask much more sensible WhoBe questions,
because the “qusub” variable was retaining the proper
subject for enquiry.

Arthur T. Murray

http://AiMind-i.com
http://www.scn.org/~mentifex/AiMind.html
http://cyborg.blogspot.com/2009/08/javascript.html
http://code.google.com/p/mindforth/wiki/JavaScript

 

 
  [ # 8 ]

Hi,
I visited your aimind site. I typed in various subject + verb + object phrases. This was generated

Human:
Robot:

Human:
Robot: HELLO WHAT

Human:
Robot: HELLO WHAT

Human:
Robot: HELLO WHAT

Human:
Robot: HELLO WHAT

Human:
Robot: YOU WHO ARE YOU

Human:
Robot: YOU WHO ARE YOU

Is this your main project or is this part of something larger?

Regards,
Chuck

 

 
  [ # 9 ]
Chuck Bolin - Oct 4, 2010:

Hi,
I visited your aimind site. I typed in various subject + verb + object phrases. This was generated

[...]
Human:
Robot: YOU WHO ARE YOU

Is this your main project or is this part of something larger?

Regards,
Chuck

Thanks, Chuck, for visiting the
http://www.scn.org/~mentifex/AiMind.html
JavaScript for MSIE site. That JavaScript AI Mind
site is not my main project, and is indeed part of
something larger, namely the MindForth AI, with
source code at
http://www.scn.org/~mentifex/mindforth.txt
and documentation at
http://code.google.com/p/mindforth (q.v.).

If you visited the JSAI Mind page and did not
type anything in, the AI tried to engage you
by saying “HELLO” and by asking
“WHO ARE YOU”. Actually, it activates
the “YOU” concept and does not find any
knowledge of “you” within itself, so an
activation threshold test switches or shunts
the program flow into a “WhoBe” module
of asking who you are.
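
Schematically (a toy JavaScript sketch; the respondTo and whoBe
names are invented for this illustration, and the real code uses
an activation threshold test rather than a simple fact count):

// Toy sketch of the shunt: if the freshly activated concept has no
// knowledge attached to it, program flow diverts into a WhoBe-style
// module that asks a who-question about that very concept.
function respondTo(concept, kb) {
  var facts = kb[concept] || [];
  if (facts.length === 0) {
    return whoBe(concept);             // nothing known: ask who it is
  }
  return concept + " " + facts[0];     // otherwise state the first fact
}

function whoBe(concept) {
  return concept === "YOU" ? "WHO ARE YOU" : "WHO IS " + concept;
}

// respondTo("YOU", {}) -> "WHO ARE YOU"
// respondTo("YOU", { "YOU": ["ARE A TEACHER"] }) -> "YOU ARE A TEACHER"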

The JavaScript AI is a tutorial version of the
much more ambitious MindForth AI—whose
installation target is autonomous robots.

Running MindForth involves the two technical steps
of downloading a particular version of Win32Forth
and of loading the AI source code into Win32Forth.
Therefore I keep working on the JavaScript Mind
which can be run simply by clicking on a link.

My near-term goal for the Forth and JavaScript AI
programs is to let them receive facts about, say,
you, from the user and then have them parrot
back those facts exhaustively in response to
user queries, such as “Who am I?” or “What
do cats eat?” The answers will not be a database
look-up, but will be associative conceptual thinking.

As I code my AI Minds in Forth or JavaScript, I
record my work-steps electronically in a
programming journal, a kind of “AI Lab Notes”.
Since something I posted in a consciousness
thread was moved here to start an AiMind thread,
I figured I might post journal entries here in
continuation of the original thread.

Thanks for asking about my AI, and best wishes for your
http://www.chuckbolin.com/walter.php chatbot. I notice at
http://www.chatbots.org/ai_zone/viewthread/137/
that your chatbot is moving into extremely complex
English grammar, whereas my own AiMind is still
building out from a central core of utterly simple
subject-verb-object (SVO) grammar structures.

Hoping to catch up to you at some point,
Bye for now.

Arthur

 

 
  [ # 10 ]

Hi,

My near-term goal for the Forth and JavaScript AI
programs is to let them receive facts about, say,
you, from the user and then have them parrot
back those facts exhaustively in response to
user queries, such as “Who am I?” or “What
do cats eat?” The answers will not be a database
look-up, but will be associative conceptual thinking.

I would think that a database lookup would be necessary. I’m not sure I understand ‘associative conceptual thinking’. This implies that the information received from the human is not stored in a database. I may be wrong. I’m curious how you would save this data entered by the human. E.g.

Bot: Who are you?
Human: My name is Jim.

How/where is the name ‘Jim’ saved and associated with ‘name’....without a database.

Regards,
Chuck

 

 
  [ # 11 ]

> I would think that a database lookup would be necessary.

Storing each word as a concept rather than as a database
entry permits the word to have different instances and
therefore different relationships over time. For instance,
http://mind.sourceforge.net/spredact.html
has a diagram that shows how the concept of the verb
“eat” can associate backwards to different subjects
and forwards to different direct objects over time.
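
Here is a bare-bones JavaScript sketch of that idea (the node
fields, time numbers, and function name are all invented for
illustration):

// The verb concept "EAT" is not a single database row but a series of
// time-indexed instances, each with its own backward association to a
// subject and forward association to a direct object.
var eatConcept = {
  word: "EAT",
  nodes: [
    { time: 101, subject: "DOGS", object: "MEAT"  },   // "dogs eat meat"
    { time: 205, subject: "CATS", object: "FISH"  },   // "cats eat fish"
    { time: 309, subject: "BUGS", object: "WEEDS" }    // "bugs eat weeds"
  ]
};

// Answering "what do cats eat?" re-activates only those instances of
// "EAT" whose backward link is the queried subject:
function objectsEatenBy(concept, subject) {
  var objects = [];
  for (var i = 0; i < concept.nodes.length; i++) {
    if (concept.nodes[i].subject === subject) {
      objects.push(concept.nodes[i].object);
    }
  }
  return objects;
}

// objectsEatenBy(eatConcept, "CATS") -> ["FISH"]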

> I’m not sure I understand ‘associative conceptual thinking’.

http://mind.sourceforge.net/theory5.html
presents a diagram of the
http://code.google.com/p/mindforth/wiki/MindGrid
and explains the theory of the associative conceptual
thinking. Although I have been coding the AI Minds
for seventeen years since 1993, previously I spent
thirteen years developing the Theory of Mind.

> This implies that the information received from the human
> is not stored in a database. I may be wrong. I’m curious
> how you would save this data entered by the human. E.g.
>
> Bot: Who are you?
> Human: My name is Jim.
>
> How/where is the name ‘Jim’ saved and associated with
> ‘name’....without a database.
>
> Regards,
> Chuck

Each word in the AI Mind, such as “name” or “Jim”,
is stored on three different levels. As a sequence
of phonemes, the word “J-I-M” is stored in the
time-sequential array of the auditory memory channel.
This storage of “JIM” is visible if one runs the
http://www.scn.org/~mentifex/AiMind.html
program in “Diagnostic” mode by clicking on
the JavaScript “Diagnostic” checkbox, as I
have just now done and copied the results:

421. M 0 1 1 0
422. Y 0 0 0 94
423.
424. N 0 1 1 0
425. A 0 0 1 0
426. M 0 0 1 0
427. E 0 0 0 110
428.
429. I 0 1 1 0
430. S 0 0 0 66
431.
432. J 0 1 1 0
433. I 0 0 1 0
434. M 0 0 0 111

In the data recorded above, “MY” is a known
word with Psi concept number 94. “IS” is also
a known word, with Psi concept number 66.
“NAME” is a new word to the AI, so it gets
assigned “110” as the next available identifying
number for a deep Psi concept. Likewise, “JIM”
is a new word to the AI, so it is assigned
“111” as the sequentially next available
Psi concept number for the next new word.

The deep Psi array is where thinking takes
place by means of “spreading activation”,
which follows associative links from concept
to concept. In between the deep Psi concept
array and the surface auditory array, the
English lexical array holds the fetch-tags
to activate a word of any human language
stored in the auditory memory channel.
The AI Mind could easily go bilingual
by having not only an English lexical
array but also, say, a German lexical
array. Since I know German, Latin, Russian
and ancient Greek, my plan is to add a
“De” array for German eventually and
to have a bilingual artificial intelligence
for machine translation (MT). Accordingly
my various mind-modules have names like
http://code.google.com/p/mindforth/wiki/EnAdverb
http://code.google.com/p/mindforth/wiki/EnArticle
http://code.google.com/p/mindforth/wiki/EnBoot
http://code.google.com/p/mindforth/wiki/EnCog
http://code.google.com/p/mindforth/wiki/EnParser
and so on, so that the “De” prefix for “Deutsch”
can form the names of the German mind-modules.

To sum up the answer to your basic question about
not using a database to store words, the end result
may be the same as a database, but the AI Mind
implementation adheres strictly to the AI theory
of storing concepts separately from word-engrams.
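
As a rough JavaScript data-structure sketch (the property names here
are simplified inventions of mine; the concept numbers 66, 110, 111
and the auditory positions 424 and 432-434 are taken from the
diagnostic listing above):

// Three storage levels for the new word "JIM":
// 1) auditory engrams in the time-sequential memory channel,
// 2) an English lexical entry holding the fetch-tag to those engrams,
// 3) a deep Psi concept where spreading activation does the thinking.
var auditoryMemory = [
  { t: 432, pho: "J" },
  { t: 433, pho: "I" },
  { t: 434, pho: "M", psi: 111 }   // the word-end carries the concept number
];

var englishLexicon = {
  110: { word: "NAME", aud: 424 },  // fetch-tag to the "N-A-M-E" engrams
  111: { word: "JIM",  aud: 432 }   // fetch-tag to the "J-I-M" engrams
};

var psiConcepts = {
  110: { activation: 0, links: [ { verb: 66, to: 111 } ] },  // NAME --IS--> JIM
  111: { activation: 0, links: [] }
};

// Thinking "NAME IS JIM" means activating Psi #110, following its link
// through verb #66 (IS) to Psi #111, and using each lexical "aud"
// fetch-tag to re-play the stored phonemes through SpeechAct.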

Thanks for asking.

Sincerely,

Arthur T. Murray

 

 
  [ # 12 ]

Hi Art,
I understand that you’ve included additional human-like modules, such as hearing, in
your model. I also understand how your model assigns a numerical value to various
words.  I believe the numbers are simply a way of abstracting things into a
concept.

Here’s what I understand regarding people and the real world.

* A child smells, tastes, feels, hears, and sees the world. Without knowledge
of speech or writing they store concepts using sensory data. For example, a
‘flower’ produces 5 inputs that are stored in the brain as a complex pattern.

* Over the next few years the toddler learns to associate spoken words with
these concepts.  So words they hear trigger the memory of a flower based upon
senses…and they can communicate a concept with a word.

* After several more years the youngster can translate these concepts and their
spoken aliases into written language.

The problem inherent in chat bots, I think, is that we are trying to emulate
human conversation ‘two levels removed’ from the actual concept. That is, we only
try to use the written language.

How does your model deal with this issue…of using written language that is removed
from real world concepts defined by sensory data? How do you simulate these senses?

Also, is it proper to refer to your model as a bot?

Regards,
Chuck

 

 
  [ # 13 ]

> Hi Art,
> I understand that you’ve included additional
> human-like modules such as hearing to your model.

Yes,
http://code.google.com/p/mindforth/wiki/AudRecog
is the main hearing module, and its output counterpart is
http://code.google.com/p/mindforth/wiki/SpeechAct
for output to the screen or voice.

> I also understand how your model assigns a numerical
> value to various words.  I believe the numbers are
> simply a way of abstracting things to a concept.

Yes, each “Psi” concept number is the software stand-in
for a long neuronal fiber as assumed in the theory of mind.
Since I can not actually create fibers in software, I
assign a number to each concept and pretend it is a long
fiber making synaptic connections across associative tags
to other concept-fibers on the mindgrid. Two of my Psi
variables are “fex” and “fin”, for fiber-out and fiber-in,
meaning, going out of the concept-fiber, and going in.
When I just now typed “you are chuck” into the JSAI at
http://www.scn.org/~mentifex/AiMind.html
with the checkbox checked for “Diagnostic” mode,
the word “you” appeared with a fiber-out “fex” of
56, which means that the AI activates concept #56
whenever it is addressing an external person as “you”.
The same word “you”, coming into the AI from outside,
has a fiber-in “fin” value automatically assigned as 50,
which is the 50=I self-concept, so that saying “you”
to the AI activates its “I” concept and it thinks as “I”.
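
In schematic JavaScript (the object layout and function names are my
own inventions for illustration; the concept numbers 56 and 50 are
the ones just mentioned):

// The same auditory word "YOU" fans out to one concept when the AI is
// speaking and fans in to a different concept when the AI is hearing.
var youWord = {
  word: "YOU",
  fex: 56,   // fiber-out: speaking "you" expresses concept #56, the other person
  fin: 50    // fiber-in: hearing "you" activates concept #50, the AI's own self "I"
};

function conceptWhenHeard(lexEntry)  { return lexEntry.fin; }   // comprehension side
function conceptWhenSpoken(lexEntry) { return lexEntry.fex; }   // generation side

// conceptWhenHeard(youWord) -> 50, so telling the AI "you are a robot"
// deposits the fact on its self-concept, and it can think "I AM A ROBOT".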

> Here’s what I understand regarding people and the real world.
>
> * A child smells,

http://code.google.com/p/mindforth/wiki/OlfRecog

> tastes,

http://code.google.com/p/mindforth/wiki/GusRecog

> feels,

http://code.google.com/p/mindforth/wiki/TacRecog

> hears,

http://code.google.com/p/mindforth/wiki/AudRecog

> and sees

http://code.google.com/p/mindforth/wiki/VisRecog

> the world. Without knowledge of speech or writing
> they store concepts using sensory data.

http://code.google.com/p/mindforth/wiki/SensoryInput

> For example, a ‘flower’ produces 5 inputs that is
> stored in the brain as a complex pattern.
>
> * Over the next few years the toddler learns to
> associate spoken words with these concepts. So
> words they hear trigger the memory of a flower
> based upon senses…and they can communicate a
> concept with a word.
>
> * After several more years the youngster can
> translate these concepts and their spoken aliases
> into written language.
>
> The problem inherent with chat bots, I think,
> is that we are trying to emulate human conversation
> ‘two levels removed’ from the actual concept.
> That is, we only try to use the written language.
>
> How does your model deal with this issue…of using
> written language that is removed from real world
> concepts defined by sensory data?

Although MindForth seems to be using written language,
it actually treats each alphabetic ASCII character
as if it were a phoneme (using the variable “pho”).

http://cyborg.blogspot.com/2010/05/audrecog.html
(the AudRecog auditory recognition module) is probably
very strange-looking to non-AI programmers, who
might typically use simple string-matching to
process words into the AI. MindForth uses an
extremely elaborate system of quasi-neuronal
activation for pattern-matching in recognition of
words. It then uses “differential activation” to
recognize stems and other subwords within a word.
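
A drastically simplified JavaScript sketch of the principle (the word
list and Psi numbers are invented; the real AudRecog is far more
elaborate than this):

// Each incoming character (treated as a phoneme) adds activation to
// every stored word it continues to match; the word matched in full
// wins recognition. Partial matches on stems are the germ of the
// "differential activation" used to recognize subwords.
var auditoryWords = [
  { word: "CAT",  psi: 77 },
  { word: "CATS", psi: 77 },
  { word: "DOG",  psi: 78 }
];

function audRecogSketch(input) {
  var bestPsi = 0;   // 0 means a new, unrecognized word
  for (var i = 0; i < auditoryWords.length; i++) {
    var w = auditoryWords[i].word;
    var act = 0;
    for (var j = 0; j < input.length && j < w.length; j++) {
      if (input.charAt(j) === w.charAt(j)) { act++; } else { break; }
    }
    if (act === input.length && act === w.length) {
      bestPsi = auditoryWords[i].psi;   // full activation: word recognized
    }
  }
  return bestPsi;
}

// audRecogSketch("CATS") -> 77;  audRecogSketch("JIM") -> 0 (a new concept)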

> How do you simulate these senses?

MindForth currently simulates only the sense of
hearing. The sensorium of other senses and the
MotorOutput module are “stubbed in” so that
robotmakers and others, who might like to work
on sensory input or motor output, may see in
advance where to place their code inside the AI.

> Also, is it proper to refer to your model as a bot?

In two senses of the word “bot”, it is indeed
proper to refer to the MindForth AI Mind as a bot.
Since intelligent robots are my installation target,
in Transcript display mode the AI Mind shows
Human:
Robot:
as the conversants, so as to encourage the idea that
the artificial intelligence belongs inside a robot.
Because the AI Mind converses, it is also a chatbot.

> Regards,
> Chuck

Thanks again for looking into MindForth AI.

Arthur

 

 
  [ # 14 ]

ok guys, time for action.

Can I propose that we simply ignore non-inspiring posts?

Secondly, if it starts to irritate, please report behaviour through the report link (see attached image). Dave, Victor or I will take care of follow-up.

Third, if a member frustrates a discussion, please send an email to me and do not post it to the forum, as that frustrates the forum. The moderator team will take care of it.

Fourth, we’ll create a specific area: “Alternative AI Hypotheses” (thanks for coining that name, Richard, in our email conversation). Postings that seem to be out of context will be moved there.

We don’t have a Master in Moderation. It’s also a learning journey for us. Bear with us as we professionalize forum moderation, and keep on suggesting changes to the forum.

Please start a new thread on this topic if you’d like to continue this discussion and not hijack this thread.

Thanks in advance for your patience, and hopefully we can now continue our regular exciting discussions. Sorry for being a little bit strict this time.

Erwin

Image Attachments
Report.jpg
 

 
  [ # 15 ]

MindForth Programming Journal (MFPJ) 2010 October 14

Thurs.14.OCT.2010—Only VerbAct Shall Differentiate

After more than eight years of status quo,
we have removed the call from ReActivate to
SpreadAct, and suddenly our MindForth AI
does not seem to suffer so much from stray
activations. The new regime will take some
getting used to. We must keep in mind that
NounAct and VerbAct are taking over from
http://code.google.com/p/mindforth/wiki/ReActivate
ReActivate the job of calling SpreadAct
http://code.google.com/p/mindforth/wiki/SpreadAct
but only for nouns and verbs that have been
selected to play a role in a sentence of thought.
We always have the option of reintroducing the call
to SpreadAct if we determine that there is a need
for a modicum of background activation on all
concepts that are recently being thought about.

We still want to determine why our diagnostic
reports do not show build-ups of activation on a
pre-slosh-over verb-node and the actual slosh-over
activation being carried by a “spike” from the
verb-node to the direct object.

In VerbAct, the initial activation should come
from whatever activation is already on the verb-node,
after it has received a “spike” of activation from
http://code.google.com/p/mindforth/wiki/NounAct
NounAct. The “verbval” value is indeed declared
in the VerbPhrase module.

During a two-word KB-query like “dogs eat…?”,
a verb-node of “EAT” wins selection during the
“DOGS EAT…” response and imparts a “verbval”
level from VerbPhrase into VerbAct, but it does
not matter very much how large the “verbval” is.
The subject-word “DOG” gets re-activated to an
equal value on all nodes, but ReActivate no longer
calls SpreadAct to pass a “spike” on to verb-nodes.
It is only during response-generation that NounAct
sends a “spike” from each subject-node into SpreadAct
for each “seqpsi” verb-node, but not to verb-nodes
with a different subject.

Perhaps NounAct should put a specific, non-cumulative
activation on all the nodes of a noun. Then an equal
“spike” can be sent to all associated verbs. But VerbAct
http://code.google.com/p/mindforth/wiki/VerbAct
should only put cumulative activation on verb-nodes, and
should send non-equal “spikes” to direct-object noun-nodes.
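
In very schematic JavaScript (function names, field names and spike
values are invented for illustration; the real NounAct and VerbAct
modules are MindForth Forth code):

// The chosen subject spikes only the verb-nodes recorded with that
// subject; each spiked verb-node then relays a spike of its own size
// onward to its own direct object, so the correct object "sloshes
// over" into the generated thought.
function nounActSketch(subjectPsi, verbNodes) {
  for (var i = 0; i < verbNodes.length; i++) {
    if (verbNodes[i].subject === subjectPsi) {
      verbNodes[i].activation += 10;   // spike from NounAct via SpreadAct
    }
  }
}

function verbActSketch(verbNodes, objectActivations) {
  for (var i = 0; i < verbNodes.length; i++) {
    if (verbNodes[i].activation > 0) {
      var obj = verbNodes[i].object;   // the verb-node's own seqpsi object
      objectActivations[obj] = (objectActivations[obj] || 0) + verbNodes[i].activation;
    }
  }
}

// With verb-nodes { subject: "DOGS", object: "MEAT", activation: 0 } and
// { subject: "CATS", object: "FISH", activation: 0 }, calling
// nounActSketch("DOGS", nodes) and then verbActSketch(nodes, acts)
// leaves "MEAT" more active than "FISH", so "DOGS EAT MEAT" is generated.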

Fri.15.OCT.2010—Non-Uniform Spiking Slosh-Over

By means of some rather wild coding, yesterday in
14oct10A.F we finally achieved and visibly demonstrated
true spike-borne activational slosh-over from the combined
activations of subject and verb to the correct direct
object in a thought generated as a subject-verb-object
(SVO) sentence. We then cleaned up the code by commenting
out the diagnostic messages and we uploaded the AI Mind
as 14oct10B.F(orth) to the Web.

We so wildly changed settings and pre-conditions that
the conceptual activations still showed a tendency to
get out of whack and let invalid associations be asserted.
No problem. With activations on verb-nodes no longer being
pumped up to outrageously high values, there was sometimes
not enough activation on verb-nodes to carry a thought,
and a question was asked about a subject-concept instead
of a thought being generated. However, in our on-screen
clusters of diagnostic reports we saw the genuine slosh-over
where VerbAct was sending out different “spikes” from
different verb-nodes into the SpreadAct module, so that
valid and correct direct objects would be activated so as
to win inclusion in a thought, while other recorded objects
of the same verb but for different subjects, would fail
to garner enough activation to be selected as objects.
As long as we preserve the hard-won functionality of
slosh-over, we may further tweak our AI source code
towards our target of a robust, bullet-proof artificial
intelligence. See you at the Singularity.

 
