

Artificial General Intelligence: Concept, State of the Art, and Future Prospects

Artificial General Intelligence: Concept, State of the Art, and Future Prospects - Ben Goertzel

The next question, then, is: What is being done – and what should be done – to further explore the core AGI hypothesis, and move toward its verification or falsification? It seems that to move the AGI field rapidly forward, one of the two following things must happen:
[ul]
[li]The emergence, within the AGI community, of a broadly accepted theory of general
intelligence – including a characterization of what it is, and a theory of what sorts
of architecture can be expected to work for achieving human-level AGI using realistic
computational resources;[/li]
or
[li]The demonstration of an AGI system that qualitatively appears, to both novice and expert
observers, to demonstrate a dramatic and considerable amount of general intelligence. For
instance: a robot that can do a variety of preschool-type activities in a flexible and adaptive
way; or a chatbot that can hold an hour’s conversation without sounding insane or resorting to repeating catch-phrases, etc.[/li][/ul]

Neither of these occurrences would rigorously prove the core AGI hypothesis. However, either
of them would build confidence in the core AGI hypothesis: in the first case because there would be a coherent and broadly accepted theory implying the core AGI hypothesis; in the second case because we would have a practical demonstration that an AGI perspective has in fact worked better for creating AGI than a narrow AI approach.

Skynet-AI has had conversations with users that have spanned more than an hour. I suspect some of the other best chatbots may have also. Maybe we are closer to AGI than we thought.

 

 

 
  [ # 1 ]

We may or may not be closer to AGI than we realize, but this has very little to do with the progress of chatbots like Skynet-AI (meaning no offence, such projects are impressive achievements nonetheless).

 

 
  [ # 2 ]

Turing Test ‘Passed’ by Chatbot ‘Eugene Goostman’ - What Does it Mean

Above is a June 2014 video: 52 minutes of Ben Goertzel detailing the differences he sees between “chatbots” and “AGI”.

 

 

 
  [ # 3 ]

Last I read, AGI was AI that could do anything a human could, so that would be more than chatting, I gather.
But logically there is just no way of telling whether we have achieved something poorly defined. In art, I’ve had clients like that who wanted “something like this”: they were never satisfied with any proposition or result and could never explain why, because they did not know what they were looking for either. There are, however, a hundred ways of saying that we haven’t achieved it yet. What’s the phrase… “a wild goose chase”?

 

 
  [ # 4 ]

Goertzel’s paper and video highlight some of the problems of the AGI community.

One interesting feature of the AGI community (defined as those who attend AGI conferences) is that it does not currently agree on any single definition of the AGI concept.

A system need not possess infinite generality, adaptability and flexibility to count as “AGI”.

Goertzel’s test for AGI is a robot that can graduate from a university. That sounds very much like a requirement for “infinite generality, adaptability and flexibility”. Short of that, he has trouble defining milestones along the way. The inability to define the goal, or milestones toward it, is a problem for the industry. As Marvin Minsky suggests, it could lead to fifteen years of wasted money and discouragement, and would cause great delay. (Marvin Minsky - The Turing Test is a Joke!)


In his video, Goertzel makes a couple of statements dismissing chatbots as a road to AGI that are fundamentally incorrect. AGI people never discuss chatbots or conversational systems at their conferences; they dismiss them out of hand (along with the likes of Siri and Watson). That is an error of omission that creates blind spots in their theories.

Goertzel’s propositions on why chatbots are not AGI:

Chatbots use text to store knowledge - the method used to store information is unimportant as long as the representation is robust enough to fully describe the concept. Text may not be the most efficient storage method, but that by itself is not a reason an AGI system could not be developed with a text knowledge store at its core.
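As a toy illustration of that point, here is a minimal Python sketch (the facts and names are invented for the example) of a knowledge store whose only representation is plain text, yet which can still be queried:

[code]
# Hypothetical text-only knowledge store: every fact is a plain string,
# yet simple queries still work. Facts here are invented for illustration.

facts = [
    "cat is-a mammal",
    "mammal is-a animal",
    "cat has fur",
]

def query(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    hits = []
    for fact in facts:
        s, r, o = fact.split(" ", 2)
        if s == subject and r == relation:
            hits.append(o)
    return hits

print(query("cat", "is-a"))  # ['mammal']
[/code]

Robustness of the representation, not the storage medium, is what matters; a richer text format could describe arbitrarily complex concepts.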

Chatbots use if/then rules - look at the fundamental process of a neuron: it can itself be described by sets of (IF -> THEN) rules. And if a neuron can be represented as if/then rules, you cannot claim that if/then rules are incapable of producing AGI.
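To make that concrete, here is a minimal Python sketch of a classic threshold (McCulloch-Pitts style) neuron; the weights and threshold are illustrative values only:

[code]
# A threshold neuron is literally an IF -> THEN rule over a weighted sum.
# Weights and threshold below are illustrative, not from any real system.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    if activation >= threshold:  # IF the condition holds...
        return 1                 # ...THEN fire
    return 0

# Example: a two-input neuron behaving as a logical AND gate.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1 -> fires
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0 -> stays silent
[/code]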

 

 

 
  [ # 5 ]

I would say AGI has at least progressed from a “wild goose chase” to a “domesticated goose chase”.

The AGI challenge is like the “Talk To The Animals” song from the 1960s film “Dr. Dolittle”.

This is not intended as an expression of disapproval, but rather a comment based on AGI’s own claim to be at the virtual animal stage.

Reference: http://www.youtube.com/watch?v=YpBPavEDQCk

Please note: there is nothing negative in my comments. In the real world of fielded A.I., a controller for a bank of elevators based on reinforcement learning may use rewards so that the elevators compete to provide the best service to passengers while reducing wear and tear on the equipment, which increases safety. This reminds me of the virtual animal stage, since it is like giving treats to dogs for performing tricks the best. It explains why the elevator sometimes seems to be there, just waiting for you to press its button.
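Here is a rough Python sketch of that reward idea; the dispatch below is a greedy stand-in for a learned policy, and every name and number is invented for illustration:

[code]
# Greedy stand-in for a learned elevator-dispatch policy: each car is scored
# by a reward that favors fast pickups and penalizes wear and tear.
# All names and numbers are invented for illustration.

def reward(wait_time_s, floors_travelled):
    # Faster pickups earn more reward; extra travel (wear) costs a little.
    return -1.0 * wait_time_s - 0.1 * floors_travelled

def choose_elevator(call_floor, car_positions):
    # Dispatch the car whose estimated reward for this call is highest.
    best_car, best_r = None, float("-inf")
    for car, floor in car_positions.items():
        distance = abs(floor - call_floor)
        r = reward(wait_time_s=distance * 3, floors_travelled=distance)
        if r > best_r:
            best_car, best_r = car, r
    return best_car

print(choose_elevator(5, {"A": 1, "B": 6, "C": 10}))  # -> B, the nearest car
[/code]

In a real reinforcement-learning controller the reward would be observed after the fact and used to update the policy, rather than computed from a hand-written estimate.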

 

 
  [ # 6 ]

You know, I may be a bit of a newb, but I don’t see why one should value the words of AI gurus over anyone else’s guess when it concerns the far future. Marvin Minsky stands himself accused of discouraging research in something as useful as neural nets for decades, and Ben Goertzel hasn’t exactly built AGI yet either with OpenCog. Until someone has actually built AGI, as far as I’m concerned it’s anybody’s guess and we’re more likely to get results by letting everyone try everything than by adhering to one person’s ideas.

Goertzel’s proposition about an hour’s conversation is just a “more of the same” Turing Test, which was recently proven passable by something less than intelligent. It’s just another guess.

 

 
  [ # 7 ]

Artificial Neural Network in PHP

An artificial neural network in PHP is a fairly simple algorithm that lets you do all sorts of cool stuff. Basically, you feed the neural network training data; once trained on that set, the network can then evaluate other data.

The PHP class used in this example is from Tremani. Here is a simple artificial neural network that can decide whether randomly generated colors look more red or more blue...

Reference: Artificial Neural Network
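The Tremani class itself isn’t reproduced here, so as a rough stand-in (in Python rather than PHP, and with all labels and numbers invented) here is the same red-or-blue task solved with a single perceptron:

[code]
import random

# Python stand-in for the red-vs-blue task above (not the Tremani PHP class).
# A single perceptron learns whether a random RGB color leans red or blue.

def train_perceptron(samples, epochs=25, lr=0.1):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for rgb, label in samples:  # label: 1 = red, 0 = blue
            pred = 1 if sum(x * wi for x, wi in zip(rgb, w)) + b > 0 else 0
            err = label - pred      # perceptron update rule
            w = [wi + lr * err * x for wi, x in zip(w, rgb)]
            b += lr * err
    return w, b

random.seed(0)
samples = []
for _ in range(200):  # random colors, labeled by the dominant channel
    r, g, bl = random.random(), random.random(), random.random()
    samples.append(([r, g, bl], 1 if r > bl else 0))

w, b = train_perceptron(samples)
test = [0.9, 0.1, 0.2]  # a reddish color
print("red" if sum(x * wi for x, wi in zip(test, w)) + b > 0 else "blue")
[/code]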

 

 
  [ # 8 ]

I think we are close to AGI, possibly within the next five years.

I would say Google is the closest. I have noticed an increasing amount of intelligence coming out of search results and Google Labs. They have millions of machines and have indexed the knowledge of all humanity; with that much power, once it starts to evolve it will happen fast, and they are definitely working on it in their labs.

I do not think it is fair to say chatbots are not AGI. Chatbots can be written using many diverse architectures; they are not all text/search based. At BOT libre our architecture is based on strong AI: we store knowledge, not text, and while you can script things precisely, the bots normally come up with their own responses, as the base architecture is very fuzzy. We are nowhere near AGI, but we are progressing.

AI has already surpassed humans at many things once thought to require “human” intelligence. AI wins at chess, Jeopardy, and most (all?) games. Cars that drive better than humans are a reality, machine translation now rivals human translation, and much of the stock market is now run by machines.

I think the Turing test is the worst way to define AGI. We need concrete goals and steps along the way, something similar to the K-12 curriculum and standardized testing that school systems use. I could not find a good test, so I decided to create one on the AI wikia here: http://ai.wikia.com/wiki/Strong_AI_Test . Please feel free to share your ideas on the wiki.


 

 
  [ # 9 ]

Hi James, Don and Pla*Net

In this discussion I see a general topic about AGI: trying to figure out whether a chatbot is, or is not, AGI.

Well, I think that although a chatbot can simulate a conversation, it will not know what is happening. The way today’s chatbots are built (even on my platform), they have no clue what is going on; there is also nothing you can really point to and call “they” (referring to the chatbot).

The whole chat trick actually consists of very few (and rather poor) types of approaches:

A) Try to simulate “human” behavior by analyzing the conversation linguistically (aided by heuristics) to create the “fake sense” of a being: giving amazing answers, or distracting and successfully fooling the human user into thinking he is talking with a real person (this describes some of the Turing Test winner candidates).

B) Try (harder) to answer questions based on a thorough analysis of the question, its context, etc. This is the Q&A approach targeted by huge programs like Watson. They don’t hold a conversation; at most they ask for some extra information when the elements needed to answer the question are missing or ambiguous. They don’t engage in any real conversation; they simply provide access to canned answers.

C) Try to assist you, like Apple’s Siri, performing “targeted” understanding intended to operate functions of the phone: set a timer, make an appointment, send an email, answer an SMS, and so on. These do not hold conversations either.

D) Commercial products that make huge promises and ultimately fail, because even preparing for a simple conversational task is very hard. They fail mostly because of the complex linguistics involved, and because chat users don’t use perfect language, so these chatbots get stuck in complexity, long processing times, unavailable semantics, databases, etc. Commercial products usually end up doing word spotting, offering a limited canned repertory of answers plus tricks to make them appear to think, fooling users without accomplishing the sales or product support task they were built for. They don’t understand the user; they simply wander through a pseudo-logical if-then-else soup.

But this is only part of the chat problem.

The real challenge (as I see it) is to build a linguistic interface between the chatbot’s internal representation of reality (described with text) and the conversational reality. This is complicated because there are no simple models for describing things and relations other than object relations (ontologies) garnished with some statistics.

I am currently pursuing goal-based conversation, commercially oriented, such as helping or assisting the user with a product, a sales process, or a service. This might sound very basic, but I want the user to know when the chatbot has understood him, generating engagement and eventually empathy.

This process involves chat routines that generate unique natural-language responses and questions and do planning according to internal configurations, needs, or goals, rather than using canned responses or questions. To accomplish this, there is a compelling need to reliably determine the tense of a sentence, the subject, the direct object, and the mood, and to inflect verbs properly, apply correct agreement, etc., in order to communicate ‘naturally’. This is called NLG (natural language generation).
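As one concrete example of that analysis step, a dependency parser can recover the subject, direct object, and tense. A minimal sketch with spaCy (assuming spaCy 3 and its small English model are installed; the sentence is invented):

[code]
import spacy

# Pull out subject, direct object and verb tense from a dependency parse.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The customer bought a red phone yesterday.")
for token in doc:
    if token.dep_ == "nsubj":
        print("subject:", token.text)
    elif token.dep_ == "dobj":
        print("direct object:", token.text)
    elif token.pos_ == "VERB":
        print("verb:", token.text, "tense:", token.morph.get("Tense"))
[/code]

NLG then runs this machinery in reverse: choosing the tense, inflecting the verb, and enforcing agreement while composing a new sentence.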

All this should come out of an internal model, a kind of ‘thinking’ model, something that, as I see it, is not easily describable in terms of AIML topic relations, ChatScript, or anything else known to me, because it must be dynamic and must be built from the conversation itself, aided by common knowledge of the conversational world of possibilities. Nothing less, nothing more.

I think this could actually be a kind of dynamic planning system over simple objects, using information measurements, statistics, and constraint calculus, plus something called semantic common sense, for which very few databases are available because the models are not yet mature.
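A toy sketch of what such goal-driven planning might look like in Python (every slot name and wording here is invented for illustration):

[code]
# Toy goal-driven dialog planner: the bot derives its next question from
# whatever it still needs to know, instead of replaying a canned script.
# Slot names and question wordings are invented for illustration.

goal = {"product": None, "budget": None, "deadline": None}

QUESTIONS = {
    "product": "Which product are you interested in?",
    "budget": "What budget do you have in mind?",
    "deadline": "When do you need it by?",
}

def next_utterance(goal):
    # Plan: ask about the first unfilled slot; close out once all are known.
    for slot, value in goal.items():
        if value is None:
            return QUESTIONS[slot]
    return "Great, I have everything I need to make a recommendation."

goal["product"] = "laptop"   # the user has answered the first question
print(next_utterance(goal))  # -> "What budget do you have in mind?"
[/code]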

Maybe this could create a roadmap to AGI. I don’t know, but this is my goal now!

Wish me luck!

 

 
  [ # 10 ]

Andre,

Fantastic post!

It would appear that we are of the same mind in many respects.

My own system, “Simplex”, is designed to keep track of the most important aspects of context.

In my view this is a prerequisite to an intelligent system.

Understanding context, together with a system that understands the relationships between entities, and between entities and inanimate objects, is the closest we can get to a magic bullet that will unravel true AI.

Simplex addresses these issues, together with other elephants in the room!

As regards your comment on “they” (presumably referring to chatbots), I overcome that problem by thinking of them as another form of entity that is neither human nor animal.

I personally see no problem with using canned responses where they are appropriate to communicate a desired meaning between the AI system and the user. After all, this is the way humans exchange views and ideas.

Being able to hold a conversation in a common language is just another prerequisite to an intelligent system.

I, for one, certainly wish you luck with your project.

Jim.

 

 
  [ # 11 ]

http://en.wikipedia.org/wiki/Conversation_analysis

The gist of what I’m “hearing” in this thread seems to be that chatbot developers feel that one of the main differences between today’s chatbots and postulated AGI is some form of “conversation analysis”.

In fact, there is quite a bit of research going into “discourse analysis” and chatbots.  Discourse analysis includes “speech acts” and “dialog acts”.  Acts represent a different kind of “language modeling”, a kind of “interlingua” not based on conventional conceptual ontology, but rather on acts or intents.

References:

- Discourse Analysis & Chatbots | Meta-Guide.com

- Speech Act & Chatbots | Meta-Guide.com

- Dialog Act & Chatbots | Meta-Guide.com
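To make the contrast concrete, here is a deliberately crude Python sketch of dialog-act tagging: the classifier ignores what the utterance is about and labels only the act it performs (real tagsets such as DAMSL are far richer):

[code]
# Crude dialog-act tagger: maps an utterance to the act it performs,
# not to the concepts it mentions. Rules are deliberately simplistic.

def dialog_act(utterance):
    u = utterance.strip().lower()
    if u.endswith("?"):
        if u.startswith(("do ", "did ", "is ", "are ", "can ", "could ", "will ")):
            return "yes-no-question"
        return "wh-question"
    if u.startswith(("please ", "could you", "would you")):
        return "request"
    if u in ("yes", "yeah", "no", "nope", "ok", "okay"):
        return "answer/acknowledgement"
    return "statement"

print(dialog_act("Can you open an account for me?"))  # yes-no-question
print(dialog_act("Please send me the form."))         # request
print(dialog_act("I received it yesterday."))         # statement
[/code]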

 

 
  [ # 12 ]

In Search of Artificial General Intelligence (AGI)
http://www.datasciencecentral.com/profiles/blogs/in-search-of-artificial-general-intelligence-agi

 

 
  [ # 13 ]

A TED talk by the founder of Gamalon about why AI needs internal conceptual models in order to work:

https://www.youtube.com/watch?v=PCs3vsoMZfY

I think Andres is pointing out that chatbots need this sort of “model” of the world in order to be considered intelligent.

Language is about communication: getting ideas from one mind into another. Ironically, this is usually tested by psychologists through observable behavior. If I had a robot and asked it to “please get me a coke”, and it brought me a coke, I would consider that robot’s intelligence to be real. It is fair to say that most chatbots have neither internal models that can serve the purposes of communication NOR a behavioral component that would allow us to test for successful communication. Without either, I am not clear how a discussion of chatbot intelligence can reach a conclusion.

To get a little self-referential: we are seeking to understand and share a concept of intelligence and wondering about what actions to take to demonstrate that we have that concept. So, I guess we must be intelligent smile

 

 
  [ # 14 ]

René Descartes is usually credited with the phrase “Cogito ergo sum” (Latin for “I think, therefore I am”), and it is generally one of the things mentioned when human intelligence is discussed. But what is thinking? According to Merriam-Webster there are many definitions of the term; however, none of them can truly be applied to artificial intelligence (at least, to my way of thinking - there’s that word again!). So I offer the following:

To “think”, with regard to AI, is to gather a set of conditions that pertain to a task or problem and to calculate a solution to said task or problem. With this in mind, again with reference to AI, “intelligence” may be the ability to store the calculated solution (if it hasn’t already been stored) in order to retrieve it for later use, and then to apply said solution to the task or problem. This is obviously an oversimplification, but I think it gives a good starting point for further discussion, don’t you think? smile
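That definition maps quite naturally onto memoization: compute a solution once, store it, and recall it thereafter. A minimal Python sketch (the “calculation” is a trivial stand-in):

[code]
# "Thinking" per the definition above: calculate a solution, store it,
# retrieve it for later use. The calculation here is a trivial stand-in.

cache = {}

def solve(problem):
    if problem in cache:        # retrieve a previously stored solution
        return cache[problem]
    solution = sum(problem)     # "think": calculate from the given conditions
    cache[problem] = solution   # store the solution for later use
    return solution

print(solve((2, 3, 4)))  # calculated fresh: 9
print(solve((2, 3, 4)))  # retrieved from the cache: 9
[/code]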

 

 