

AiMind
 
 
  [ # 61 ]

It’s posts like this:

"http://www.scn.org/~mentifex/AiMind.html
has suddenly added a new mind-module." 

which seem to degrade the AI conversation more than progress it. Nothing was “suddenly added”, there is no “true AI” here, there are just some new lines of code (??), and they were written by a person (Arthur, aka “Mentifex”), not an AI. It does not help that the ASCII maps, techno-babble descriptions, etc., make the project virtually impenetrable to anyone but Arthur.

Useful Reference work by Tristan Miller: http://www.nothingisreal.com/mentifex_faq.txt

 

 
  [ # 62 ]

Oh come on guys. Does anyone apart from Arthur himself think he has created a thinking machine here?

Besides, I already created one earlier in the year: http://www.chatbots.org/ai_zone/viewthread/500/

 

 
  [ # 63 ]

Nope, but I still think his ideas contain elements that inspire me. :)

 

 
  [ # 64 ]

Yesterday Mind.Forth reached AI-Complete. It took more than eighteen years of programming since July of 1993.

http://www.scn.org/~mentifex/mindforth.txt (MindForth AI program) yesterday was able to comprehend both declarative and negational sentences. It handled both transitive verbs and intransitive verbs of being. It stored ideas in its knowledge base and remembered them in subsequent conversation.

 

 
  [ # 65 ]

Okay, I will bite. Congratulations on being the first to build a complete AGI system. As someone who intends to build one as well, and since yours is complete, when do you think you will be able to provide a simple demonstration of its abilities? Anything other than it regurgitating that it’s “a person” and that it’s “andru”.

Something intelligent.

 

 
  [ # 66 ]
Arthur T Murray - Oct 10, 2011:

It stored ideas in its knowledge base and remembered them in subsequent conversation.

Ideas like what exactly? Most bots can store information and regurgitate it later.

 

 
  [ # 67 ]
Steve Worswick - Oct 11, 2011:
Arthur T Murray - Oct 10, 2011:

It stored ideas in its knowledge base and remembered them in subsequent conversation.

Ideas like what exactly? Most bots can store information and regurgitate it later.

MindForth stores ideas in the subject-verb-object (SVO) format, such as “Humans need food” and “Robots do not need food.” The difference between a purported True AI like MindForth and a traditional chatterbot is that MindForth _conceptualizes_ all its English words. Each word, stored as a “Psi” concept, has a panel of flags attached to it to hold onto such things as activation-level; grammatical number; NOT-adverb; previous associand; subsequent associand and, most recently and in conjunction with the 9 October 2011 AI-Complete breakthrough, a “qtv” time-point leading unerringly to the next verb or noun in the idea.
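To make that flag panel concrete, here is a minimal sketch in Forth of how per-concept flags could sit in parallel arrays. The names, the table size, and the psi! word are chosen just for this illustration, not copied from the MindForth source:

  \ Illustrative sketch only -- not the actual MindForth data layout.
  64 constant max-psi                    \ arbitrary size of the concept table
  create psi-act  max-psi cells allot    \ activation-level
  create psi-num  max-psi cells allot    \ grammatical number
  create psi-jux  max-psi cells allot    \ NOT-adverb (negation) flag
  create psi-pre  max-psi cells allot    \ previous associand
  create psi-seq  max-psi cells allot    \ subsequent associand
  create psi-qtv  max-psi cells allot    \ time-point of the next verb or noun

  : psi!  ( value concept# array -- )    \ store one flag for one concept
    swap cells + ! ;

  63 50 psi-act psi!   \ e.g. give concept 50 ("I") a high activation of 63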

Genesis - Oct 11, 2011:

[...]When do you think you will be able to provide a simple demonstration of its abilities? [...]

Anyone who downloads both the http://www.scn.org/~mentifex/mindforth.txt source code and the http://prdownloads.sourceforge.net/win32forth/W32FOR42_671.zip?download version of Win32Forth may try out the AI and run it through its paces. Or, you may wait a few weeks for the JavaScript http://www.scn.org/~mentifex/AiMind.html version to be brought up to date and on a par with the MindForth version.

Last night I was coding the first post-AI-Complete update and I noticed that I had accidentally commented out one crucial line pre-upload in the noun-phrase mind-module of the AI-Complete MindForth source code, the second of these two lines:

  \ CR ."  NPhr: 50=I by default "  \ test 9oct2011
  \ 50 motjuste !  \ 50=I default concept of AI Mind; 16aug2011

The first line was a diagnostic message telling the AI programmer that the AI Mind was now switching to the concept of ego or “I” as a default subject of thought when no other concept was active. The second line, which should not have been commented out, actually sets the “I” concept as the subject of the incipient thought. This error will be corrected in future versions, and anyone who downloads MindForth now may change the offending line by removing the initial backslash:

50 motjuste !  \ 50=I default concept of AI Mind; 16aug2011

Then you may tell the AI things, especially differences between its nature and that of humans, such as (no punctuation):

i am flesh
you are not flesh
i am a guy
you are not a guy
i have a body
you are a machine

and so forth. If the AI does not seem smart enough for you now, check back in later, please.
- Arthur

 

 
  [ # 68 ]
Arthur T Murray - Oct 11, 2011:

MindForth stores ideas in the subject-verb-object (SVO) format, such as “Humans need food” and “Robots do not need food.” The difference between a purported True AI like MindForth and a traditional chatterbot is that MindForth _conceptualizes_ all its English words. Each word, stored as a “Psi” concept, has a panel of flags attached to it to hold onto such things as activation-level; grammatical number; NOT-adverb; previous associand; subsequent associand and, most recently and in conjunction with the 9 October 2011 AI-Complete breakthrough, a “qtv” time-point leading unerringly to the next verb or noun in the idea.

What you are doing there sounds a lot like the Subject-Predicate-Object (SPO) data structures that have come into fashion for knowledge bases in recent times (also called Triples).

I’m currently studying YAGO2 very closely http://www.mpi-inf.mpg.de/yago-naga/yago/ and note that in its current incarnation it extends the model with Time and Location parameters (SPOTL). Again this sounds like what you are doing which would mean that YAGO2 would supply a ready source of data to extend your software’s knowledge base.
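To make the comparison concrete, here is a minimal Forth sketch of a flat SPOTL fact table, one five-cell row per fact. The word names and the layout are invented for this illustration; they are not taken from YAGO2 or from your code:

  \ Illustrative only: one SPOTL fact per five-cell row.
  100 constant max-facts
  create facts  max-facts 5 * cells allot
  variable nfacts   0 nfacts !

  : add-fact ( s p o t l -- )        \ store five concept ids as one row
    nfacts @ 5 * cells facts + >r    \ address of the next free row
    r@ 4 cells + !                   \ location  -> cell 4
    r@ 3 cells + !                   \ time      -> cell 3
    r@ 2 cells + !                   \ object    -> cell 2
    r@ 1 cells + !                   \ predicate -> cell 1
    r> !                             \ subject   -> cell 0
    1 nfacts +! ;

  17 32 54 2011 7 add-fact   \ ids standing for (humans, need, food, a year, a place)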

Incidentally, aligning your own terminology with the prevailing vernacular would make your ideas easier to understand and improve your own productivity. Of course if your SVO is not the same thing as the more widely understood SPO term and your QTV term is not the same as SPOTL, then you need to put more effort into explaining yourself.

 

 

 
  [ # 69 ]

MindForth Programming Journal (MFPJ) Mon.17.OCT.2011—Calling for Questions

Yesterday in the 16oct11A.F release of MindForth AI at http://www.scn.org/~mentifex/mindforth.txt we created calls from the ThInk module to some of the question-asking modules, specifically WhoBe, WhatBe and WhatAuxSDo. In effect we were switching the function of mentioning a rotating series of four concepts (“you”; “robots”; “I”; “God”) from the KbTraversal module to the ThInk module, because the un-anchored activation of the four concepts by KbTraversal was causing problems in our recent attempt to have every emergent thought be based on a specific, time-based engram in the knowledge base (KB) of the AI. The effort succeeded. When there was an arbitrary (coder-chosen) time-stretch of no human input to the AI, the AI began to ask any nearby human user, “Who are you? What are you? What do you do?” Then afterwards we realized that we should have put the question calls not into the generic ThInk module, but into the EnCog (English cognition) module, because the software formulates the questions in English. A “DeKi” German AI or a “RuUm” (PYYM) Russian AI Mind would need questions in German or Russian (official Mentifex AI languages), so we will move the question calls “down” into EnCog where they belong. We expect the move to solve a problem where the 16oct11A.F MindForth was outputting two thoughts simultaneously, one thought from EnCog and a question from the ThInk module. If EnCog calls for a question, an “EXIT” statement in EnCog will abandon the rest of EnCog and return the program-flow to the ThInk module.
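Schematically, the planned arrangement might look something like the following Forth sketch. The quiet? flag and the one-line stubs are invented here just to keep the example self-contained; they are not the actual MindForth question modules:

  \ Illustrative stubs; the real modules build full English output.
  : WhoBe       ." WHO ARE YOU "    ;
  : WhatBe      ." WHAT ARE YOU "   ;
  : WhatAuxSDo  ." WHAT DO YOU DO " ;

  variable quiet?   \ hypothetical flag: true after a stretch of no human input

  : EnCog  ( -- )                \ English cognition module
    quiet? @ if
      WhoBe  WhatBe  WhatAuxSDo  \ ask the rotating questions...
      exit                       \ ...and abandon the rest of EnCog
    then
    \ otherwise normal English sentence generation would continue here
  ;

  : ThInk  ( -- )  EnCog ;   \ program-flow returns here after the EXIT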

Next we plan to re-implement the asking of a question about any new English noun showing up in the NewConcept mind-module. If the human user inputs “Robots repair satellites” and the AI does not know what a satellite is, the AI will ask “What are satellites?” and will give any nearby human user the chance to add to the knowledge base. MindForth is already designed to wait briefly for an answer and then to continue thinking about other things if no reply is made to the query. If the answer about one new concept contains another new concept, the AI will keep on asking questions about each new concept until the machine learning routines enlarge the AI knowledge base with the necessary new vocabulary for thinking about the new domain of special knowledge. Some human users and AI programmers will perhaps realize that they are helping the AI Mind to build up a specialty knowledge base or even a specialty ontology, and they may endeavor to devise ways for the AI Minds to solidify, preserve and maintain each new ontology or knowledge base.
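As a rough sketch of that learning loop, the following Forth fragment checks whether an incoming noun is already known and, if not, prints a question about it. The known? stub and the other word names are hypothetical, standing in for the real lexicon lookup:

  \ Illustrative sketch of asking about an unknown noun.
  : known?  ( addr len -- flag )  2drop false ;   \ stub lexicon lookup
  : AskAbout ( addr len -- )  ." WHAT ARE " type ." ? " ;
  : NewConcept ( addr len -- )
    2dup known? 0= if  2dup AskAbout  then  2drop ;

  : test  s" SATELLITES" NewConcept ;
  test   \ prints: WHAT ARE SATELLITES?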

 

 
  [ # 70 ]

Human: Robots repair satellites
Robot:

Human:
Robot: ROBOTS REPAIR WHAT DO I REPAIR

Human:
Robot: ROBOTS REPAIR WHAT DO I REPAIR

Human: satellites
Robot: I REPAIR WHAT DO I REPAIR

Human:
Robot: I REPAIR WHAT DO I REPAIR

You said your AI has reached AI-Complete, yet you are programming more things into it?
And looking at the above, it’s “NOT” very intelligent, is it?

Quite frankly, take a look at this photo. I took it before you changed your site. You said your AI was AI-complete in 2008. So which is it?

There are parts of your theory of the mind that are somewhat true, but your implementation is what’s lacking. For example, your AI is based on language. But humans are not based on language; language is only how we express known concepts and is a learned concept in itself. And since we use language a lot, it has become so dominant in our memory that when we see an object, the first thing that activates in our mind is its “name”.

Secondly, grammar is a learned thing as well, and the reason your AI has such a problem conjuring up a coherent thought is that it has a flawed grammatical system. Any grammar system that has predefined boundaries and isn’t dynamic is flawed.

Thirdly, your AI “cannot” understand because there is no meaning to “text” without a grounded concept.

Lastly, if you keep your AI running, all it’s regurgitating is “I am a person”, “I am arnold”, or “who is God”. There is nothing of substance here, yet you want this to be a course in universities? Come on, you have to do a little better than that. All I see, and as you have stated, is “18 years of programming.”

You are now programming how to ask questions, but “asking questions” is a LEARNED thing! You might say, well, come back later. But that just means “come back when I program more things into it.” Everything we know since birth is learned: from what letters are, to words, to sentences, to grammar, to a statement, to a question, to how to answer questions appropriately, and so on. That’s the point of what “AGI” is: the ability to learn ANYTHING. Your AI seems to be the opposite of that.

There are many questions you should be asking yourself. When is your AI (without the “G”) going to be scaled up? When is it going to carry out an intelligent and coherent conversation? When is it going to do anything intelligent at all?

 

 
  [ # 71 ]
Genesis - Oct 17, 2011:

You said your AI has reached AI complete yet you are programming more things into it?

Yes, I am smoothing out the basic True AI functionality.

> And looking at the above, it’s “NOT” very intelligent, is it?

The transcript above looks like it came from the http://www.scn.org/~mentifex/AIMind.html Tutorial AI, which currently lags behind the MindForth AI. When the JavaScript artificial intelligence (JSAI) says “ROBOTS REPAIR WHAT DO I REPAIR”, two things are going wrong. The direct object “satellites” is not activated highly enough, so the AI switches to the WhatAuxSVerb module that asks a question with a specific verb in the verb-slot. The other thing going wrong is that the subject “robots” has gone astray and “I” took over.

> Quite frankly, take a look at this photo. I took it
> before you changed your site. You said your AI
> was AI-complete in 2008. So which is it?

Nice photo. In January 2008, the Mentifex AI was “AI-Complete” in that it was finally able to think with “spreading activation”, but the thinking had a tendency to derail into erroneous assertions. Now, on 9 October 2011, the AI is “AI-Complete” with a more solid ability to think without the derailments of a train of thought. Meanwhile I am going through the free AI source code and clearing out any element that has a disruptive or corruptive influence on the new thought process.

> There are parts of your theory of the mind that are
> somewhat true. But your implementation is what’s lacking.
> For example, your AI is based on language. But humans
> are not based on language; language is only how we
> express known concepts and is a learned concept in itself.
> And since we use language a lot, it has become so dominant
> in our memory that when we see an object, the first thing
> that activates in our mind is its “name”.

Sounds good to me.

> Secondly, grammar is a learned thing as well, and the reason
> your AI has such a problem conjuring up a coherent thought is
> that it has a flawed grammatical system. Any grammar system
> that has predefined boundaries and isn’t dynamic is flawed.

MindForth and the JSAI have internally “predefined boundaries” in the form of the inherent English grammar rules, but these AI Minds are free to absorb tens of thousands of “triples” or facts in the SPO (Subject - Predicate - Object) format.

> Thirdly, your AI “cannot” understand because there is
> no meaning to “text” without a grounded concept.

The text that the AI Mind understands is grounded in the knowledge that the text conveys. Of course, as in the http://code.google.com/p/mindforth/wiki/VisRecog module, roboticists are invited to ground the AI in sensory experience beyond the text-boundaries.

> Lastly, if you keep your AI running, all it’s regurgitating is
> “I am a person or I am arnold or who is God”; there is nothing
> of substance here, yet you want this to be a course in universities?

http://cyborg.blogspot.com/2009/09/sciencemuseum.html is about the Science Museums that might host the AI Minds, not just university AI labs. And the AI does not just “regurgitate” the concepts pre-coded into it. By asking questions, the AI builds up a knowledge base for free-ranging discussion.

> You have to do a little better than that. All I see, and as you
> have stated, is “18 years of programming.”

Once the disruptive, corruptive elements have been cleared out from the free AI source code, and the MindForth advances have been ported into the JavaScript AI, the goal of “self-referential thought” will be achieved.

> You are now programming how to ask questions, but
> “asking questions” is a LEARNED thing! You say come
> back later, but that just means “come back when
> I program more things into it”

I mean, when I have made the AI function flawlessly.

> Everything we know since birth is learned. From what
> letters are, to words, to sentences, to a statement,
> to a question, to how to answer the questions appropriately,
> so on and so forth.

The AI is like the mind of a baby, but not “tabula rasa”.

> There are many questions you should ask yourself.
> When is your AI (without the “G”) going to be scaled up,

Thought is thought and does not need to “be scaled up”.

> when is it going to carry out an intelligent, coherent conversation?

When I have removed the elements that were disruptive.

> when is it going to do anything intelligent at all?

You and the other users will be the judge of its intelligence.

 

 

 
  [ # 72 ]

Genesis - Arthur has been babbling on like this for years and has yet to demonstrate anything with even a hint of intelligence.

 

 
  [ # 73 ]
Arthur T Murray - Oct 17, 2011:

I mean, when I have made the AI function flawlessly.

But you have been saying that for quite a while and yet nothing has come of it.
On your museum page, you said “And if the MindForth AI is still too primitive to warrant installation as an exhibit, give it another year or two of improvement and IQ-upping.” It’s been two years, and yet, as Steve said, nothing it does shows even “hints of intelligence.”

Arthur T Murray - Oct 17, 2011:

You and the other users will be the judge of its intelligence.

And this is what makes this so difficult: everyone I have encountered has judged, and they came to the conclusion that it’s not a “true” AI.

May I ask you for your definition of “True AI”?

 

 
  [ # 74 ]

Pot, meet kettle.

 

 
  [ # 75 ]
Andrew Smith - Oct 18, 2011:

Pot, meet kettle.

lol!

 
