Is it time to create an AI standard language?
 
Poll
Is there a need for a new AI/Chatbot language?
Yes - Create a new independent language 9
Yes - But, I would prefer AIML was just enhanced 8
Yes - But, it should be based on a current language 3
No - each developer is best doing his own thing 5
No - I don’t think one language could do it all 3
No - It is a waste of time 2
Total Votes: 30
 
  [ # 16 ]
Dave Morton - Apr 23, 2011:

I’m not in agreement there. At least, not if my understanding of 8pla’s assertion is correct. I believe that 8pla is referring to human intelligence when he referred to Natural Intelligence. If this is not the case, perhaps some clarification of meaning is in order. However, if I’m correct, then Strong Artificial Intelligence only has the potential of becoming greater, and that potential is a long way from being realized.

I think what Erwin means is that the moment ‘strong AI’ comes into existence, it will almost immediately grow far beyond human thinking capabilities. Current super-computers are already far ahead of humans when it comes to pure processing power. So just imagine that we create a program that can think, reason and make decisions like humans, but with the added capacity of today’s (or, even better, tomorrow’s) super-computers.

However, what I meant to say with my own statement is this: I’m convinced that we don’t need a specific ‘AI language’ to create strong AI. We only need ‘code’ to build the AI operating system, which will do nothing more than handle IO and ‘run the database’. Everything else will emerge, IF we build the correct data model. My idea is that the neural network in our own brain is NOT a super-computer but instead a super-database (a very dynamic one as well).

 

 
  [ # 17 ]
Hans Peter Willems - Apr 24, 2011:
[...] My idea is that the neural network in our own brain is NOT a super-computer but instead a super-database (a very dynamic one as well).

I totally agree with the idea of a super-database, and this is also my general approach!
If we describe data in a machine-readable format so that machines can consume it efficiently, then with fairly simple tools we could create agents, either for a specific scope or for more general tasks.
Taking a quick look at linked data, we can see that there is already a lot of data out there waiting to be consumed.
There are also tons of information on the social media platforms. So, instead of trying to parse that information after the fact, we could describe it correctly in the first place and then let agents consume it.
There are plenty of tools out there for someone to get started with.
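
To make that concrete, here is a minimal sketch (purely illustrative; the JSON-LD-style format, the schema.org vocabulary and the toy agent below are assumptions of mine, not anything prescribed in this thread) of data described up front in a machine-readable way, and an agent that consumes it without any free-text parsing:

import json

# Hypothetical machine-readable description (JSON-LD-style; schema.org terms are
# used only to illustrate "describing the data correctly in the first place").
doc = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Teapot",
  "offers": { "@type": "Offer", "price": "12.50", "priceCurrency": "EUR" }
}
""")

def price_agent(question, data):
    # A toy agent: it never parses prose, it simply reads the structured fields.
    if "price" in question.lower() and data.get("@type") == "Product":
        offer = data.get("offers", {})
        return "{0} costs {1} {2}".format(data["name"], offer.get("price"), offer.get("priceCurrency"))
    return None

print(price_agent("What is the price?", doc))  # Acme Teapot costs 12.50 EUR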

 

 
  [ # 18 ]

I don’t think that “just one” AI language would cover the whole range of AIs out there.

Languages like AIML and RiveScript are mostly for giving pre-programmed responses to your bots… you anticipate all the quirky questions your users might ask and teach the bot how to respond cleverly to them. Done well, people will sometimes have a hard time believing your bot is just a program.

But then there are bots like Jabberwacky and Cleverbot, which learn by interacting with strangers. Or perhaps a better example would be ELIZA, which is sort of in between: it has pre-programmed responses, but also keeps track of context and short-term memory to steer the conversation back on track. Neither AIML nor RiveScript (AFAIK) would do a very good job of emulating that.
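
As a toy contrast (a minimal sketch only; this is not how AIML, RiveScript or ELIZA are actually implemented), the difference is roughly between a bare pattern-to-response table and one that also keeps a short-term memory to fall back on:

import re
from collections import deque

# Canned pattern -> response rules, in the spirit of a pattern-matching bot.
RULES = [
    (re.compile(r"\bhello\b", re.I), "Hi there! How are you feeling today?"),
    (re.compile(r"\bmy (\w+) is (\w+)\b", re.I), "Why do you say your {0} is {1}?"),
]

memory = deque(maxlen=5)  # short-term memory of recently mentioned topics

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            if match.groups():
                memory.append(match.group(1))
            return template.format(*match.groups())
    if memory:  # ELIZA-style: steer the conversation back to an earlier topic
        return "Earlier you mentioned your {0}. Tell me more about that.".format(memory[-1])
    return "Please, go on."

print(respond("my dog is sick"))         # Why do you say your dog is sick?
print(respond("I have nothing to say"))  # Earlier you mentioned your dog. Tell me more about that.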

 

 
  [ # 19 ]

I believe one language could do it all. In the past I was involved with PostScript, the de facto language for high-quality printing, and of course JavaScript is the standard browser language. The problem seems to be a lack of interest from the community. It doesn’t look like people feel a driving need.

 

 
  [ # 20 ]

But we can identify a readily achievable goal for each example you gave, Merlin: high-quality printing and a browser that fulfills X requirements. In fact, those languages weren’t the first to achieve those goals, standardizing was more of a way to concentrate people’s efforts so that improvements in those areas would not have to be developed in parallel.

If one could convince people of standard goals for AI, then one could convince them to adopt a language that optimizes that type of development. For chatbots, a lot of AIML’s success stems from simply and effectively achieving a goal of chatbot communication: deliver answer Y for input X in context Z. But if you’ve got some people interested in visual AI and others in databases and still others in various aspects of robotics, well, those are some slippery cats to herd. :)
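
A minimal sketch of that “answer Y for input X in context Z” goal (purely illustrative; this is just a bare lookup table, not AIML itself):

# Purely illustrative: a response table keyed by (context Z, input X) -> answer Y.
RESPONSES = {
    ("WEATHER", "WILL IT RAIN TODAY"): "Probably. Take an umbrella.",
    ("GREETING", "HELLO"):             "Hi! What shall we talk about?",
}

def reply(context, user_input):
    return RESPONSES.get((context.upper(), user_input.upper()),
                         "I do not have an answer for that yet.")

print(reply("weather", "Will it rain today"))  # Probably. Take an umbrella.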

 

 
  [ # 21 ]
C R Hunt - Sep 23, 2011:

But we can identify a readily achievable goal for each example you gave, Merlin: high-quality printing and a browser that fulfills X requirements. In fact, those languages weren’t the first to achieve those goals, standardizing was more of a way to concentrate people’s efforts so that improvements in those areas would not have to be developed in parallel.

That is my point. Right now, all of the efforts are developed in parallel. It is a huge task to start from scratch. Given a standard, effort would be put into creating the baseline, but after that, progress and innovation would be more rapid.

C R Hunt - Sep 23, 2011:

If one could convince people of standard goals for AI, then one could convince them to adopt a language that optimizes that type of development. For chatbots, a lot of AIML’s success stems from simply and effectively achieving a goal of chatbot communication: deliver answer Y for input X in context Z. But if you’ve got some people interested in visual AI and others in databases and still others in various aspects of robotics, well, those are some slippery cats to herd. :)

PostScript became a de facto standard, and I believe that is ultimately what will happen with an “AI language”. Page Description Languages, or PDLs, need to describe all of the elements of any page. Those pages are rendered accurately and consistently on high- and low-resolution devices, in color or black and white. Elements from databases can be integrated on the fly, and a PDF file allows an electronic copy to be transmitted and archived anywhere in the world. I see parallels in the AI world.

Someday a dominant Artificial Intelligence Language will emerge, and then I believe progress will accelerate. Maybe JAIL will become that language. ;)

Part of why I started this thread was to take the temperature of the community and see whether people felt a pressing need. My sense is that they don’t. I was trying to decide whether I should open up JAIL as a standard, and decided it was not worth the effort. For one language to win, it will have to prove its use in applications, and only after it gains massive market share will there be enough momentum to create a standard. Until that time, a lot of wheels will be reinvented.

 

 

 
  [ # 22 ]

There is a standard language for artificial intelligence. Nowadays it’s called Common Lisp.

Lisp has been under continuous development and refinement by the best minds in computer science for more than sixty years and it is still the best choice for any serious work in artificial intelligence.

 

 
  [ # 23 ]

Good point, Andrew. I believe the Pandorabots interpreter is a Lisp implementation. Do you have any perspective on the pros and cons of using Lisp? Do you use it in your own projects? If so, why (or why not)?

As a language that has been around for 60 years, and stable for around 20, why hasn’t it developed a much bigger community following? Or am I just not following the right community?

 

 

 
  [ # 24 ]

Hey guys!

I have also developed a language, with an approach different from AIML and the others!
I am constantly changing things and adding functionality.

The base of this language is a planned pattern matcher, somewhat like AIML but with several enhancements:

The matchers work individually and compete against each other, having different “scores” and special behavior flags.

The matchers range from simple text matching, through wildcards (to give AIML a chance to port some knowledge over, if you dare to grant AIML that much), up to deep parsing, even with semantic, out-of-order, inter-sentence understanding.

The answer engine is also different from standard AIML: you can add up several answers, composing them into the output, giving them timings, and even making them context dependent.

All of this resides inside an object-like environment, where you operate with “concepts” rather than words. You may add two words and, depending on their assigned sense, they may perform an internal operation ranging from unit conversion, quantity composition, simple to complex math, logic or set operations, to simply concatenating properly as a list!
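
(To picture the “concept” idea, here is a hypothetical Python analogue; it is only a sketch of the general mechanism, not AndyHo’s actual implementation: adding two objects behaves according to their assigned sense.)

class Quantity:
    # A concept with a numeric sense: addition performs unit conversion.
    TO_METERS = {"m": 1.0, "km": 1000.0, "ft": 0.3048}

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        total = self.value * self.TO_METERS[self.unit] + other.value * self.TO_METERS[other.unit]
        return Quantity(total / self.TO_METERS[self.unit], self.unit)

    def __repr__(self):
        return "{0:g} {1}".format(self.value, self.unit)

class Listing:
    # A concept with a list sense: addition concatenates properly.
    def __init__(self, *items):
        self.items = list(items)

    def __add__(self, other):
        return Listing(*(self.items + other.items))

    def __repr__(self):
        return ", ".join(self.items[:-1]) + " and " + self.items[-1]

print(Quantity(2, "km") + Quantity(500, "m"))        # 2.5 km
print(Listing("apples", "pears") + Listing("milk"))  # apples, pears and milk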

There are further pattern matchers where the pattern may be linguistic, specified with POS tags and even syntactically parsed segments (chunks), so you can specify something like “a syntactic constituent acting as the subject, whose nucleus is any animal”.

The language also has ‘persistence’ inside the user and bot objects, and a context object capable of firing external database operations, checking the web, searching mail, reading RSS feeds, or whatever else can be done externally.

The natural language generator is also somewhat complex but simple to use: it blends all the elements and tries to make a sensible sentence out of scraps, conjugating verbs, doing agreement matching between objects and their prepositions, or creating anaphoric pronouns to replace things, concatenating them with ‘just said’ things.

It also has an anaphoric selective memory, capable of finding the best match for a personal pronoun or a parabolic object-grounding sentence, to help with co-reference resolution. (This is still under development, just starting to say hello!)

I am also working on ontology, but the ontological links in WordNet are not necessarily useful for making choices or taking decisions within a context-rich conversation.

Another point is that I have decided to model the personality of the agent by means of a complete analysis of the full conversation, as a time-theme-turn sequence, gathering a sensation of “success” or failure from it, and even (next, in the future) guessing the user’s mood!

Then there is theme navigation. This is controlled by a complex algorithm I have not fully tested (it is under development), but as of now it is capable of nesting themes, asking questions, telling jokes over several turns, handling backchannel properly, and even answering out-of-order questions.

This is the first release of the language. The documentation is huge, and the manual is written in Spanish (sorry), but the main customers are in Latin America and Spain.

This was also created because of the failure of AIML (which I initially tested and worked with) in Spanish and other inflected languages: the pattern matching simply breaks down. Many people have added stemmers, but the misunderstanding is still huge!

 

 

 
  [ # 25 ]

No wonder we haven’t heard from you in a while, Andy! You seem to have been busy! :) Your description sounds intriguing. Please keep us informed on your progress, and thanks for telling us about it.

 

 
  [ # 26 ]

My project includes agents written in AIML, Java, C, Ruby, Python, Perl, PHP ... any language that can interact via stdin and stdout ...

The hypothesis is that intelligence emerges from interaction among the agents, and from competition between them, influenced by feedback.

I should add agents written in Lisp and JavaScript, as a proof of concept :)
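
A minimal sketch of that wiring (assuming a line-oriented protocol; the agent commands shown are hypothetical examples, not the project’s actual files):

import subprocess

# Any program that reads a line on stdin and writes a line on stdout can act as
# an agent, regardless of the language it is written in.
class StdioAgent:
    def __init__(self, command):
        self.proc = subprocess.Popen(command, stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE, text=True, bufsize=1)

    def ask(self, line):
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().strip()

# Hypothetical agents in different languages, all speaking the same protocol:
# agents = [StdioAgent(["python3", "calc_agent.py"]),
#           StdioAgent(["perl", "eliza_agent.pl"])]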

 

 
  [ # 27 ]

Robert,
How do they compete? How do you normalize the interaction across agents?

 

 
  [ # 28 ]
Merlin - Sep 24, 2011:

Good point, Andrew. I believe the Pandorabots interpreter is a Lisp implementation. Do you have any perspective on the pros and cons of using Lisp? Do you use it in your own projects? If so, why (or why not)? As a language that has been around for 60 years, and stable for around 20, why hasn’t it developed a much bigger community following? Or am I just not following the right community?

I’ve been using Common Lisp extensively for about 12 years. I usually use it when I need to get something working in a hurry, or if I’m trying to figure something out or prototype something. If I need to squeeze a bit more performance out of the software, or use it in a micro-controller, I may then rewrite it in C. I’ve found that it often takes weeks to write a C version of something that only took a day or two to develop using Common Lisp.

If you think there aren’t a lot of people using Common Lisp, you simply haven’t been looking in the right places. For example, the following indiegogo.com project was recently created to fund further development of SBCL, one of the freely available open-source Common Lisp compilers. The initial goal of $3,000 was raised in less than 24 hours, and the final amount raised was in excess of $16,000, from hundreds of contributors.

http://www.indiegogo.com/SBCL-Threading-Improvements-1

 

 

 
  [ # 29 ]

“If you think there aren’t a lot of people using Common Lisp, you simply haven’t been looking in the right places.”

My perception (feel free to correct me) is that Lisp has been a good prototyping language, but is not really used for production. The trade-off, then, is reuse of the code versus speed of developing the prototype.

There is one good story about Lisp being used in the production of what became Yahoo’s storefront, but I believe that has been converted to C also.

http://www.paulgraham.com/avg.html

Have you had any experience with Parenscript? Parenscript is a translator from an extended subset of Common Lisp to JavaScript.

http://common-lisp.net/project/parenscript/

 

 
  [ # 30 ]
Merlin - Sep 25, 2011:

Robert,
How do they compete?

In my system, each agent competes to provide the response returned to the user.

For example: if I enter “1 + 1”, the calcagent matches the pattern and gives its response a higher score (+4, say). The ALICE agent (and each other agent) also gives its response a score, but lower than the calcagent’s since they weren’t looking for the same pattern*.

A controller submits input to all the agents, then waits for their responses. After a (configurable) time, it selects the highest-scoring response to return to the user.

The user can modify the scoring at runtime by telling the bot to decrease (or increase) an agent’s score (“ALICE: stfu”, “logicagent: speak louder”) or a specific response’s score (“that response sucked”, “good answer”). The user can also submit if-then rules at runtime which allow more fine-grained manipulation of responses and their scores.

* The ALICE AIML set might have a pattern that matches “1 + 1” but that is separate from the ALICE agent’s patterns. AIML doesn’t assign scores to responses; the MyAgent wrapper for the ALICE program does. In the case of “1 + 1”, the ALICE agent doesn’t have a pattern to match that, so it assigns a default score. There is some randomization involved so that sometimes the ALICE agent (or the MegaHAL agent, or the In Soviet Russia agent…) might by chance get a higher score than the agent I expected to respond. If I don’t like the response, I can decrease its score so it shouldn’t happen again. Or I can modify the response to something I would prefer to have seen ...

Right now, the user alone modifies the scores. Eventually, I want agents to modify the scores of other agents. So if a better calcagent comes along the old one’s scores will be automatically decreased. Or if a better grammar agent appears, the In Soviet Russia agent will discover it and query it and if the jokes get better responses (“lol”) from the user, the agent will reinforce the scores of the new grammar agent.

How do you normalize the interaction across agents?

I’m not sure what you mean. Each input is sent to all agents. Each agent is responsible for discarding or disregarding input it doesn’t know how to handle (e.g., input that doesn’t match any of its patterns). The scoring is somewhat arbitrary and ad hoc, and might become subject to inflation, but I’ll address that problem if it develops :) My general guidelines are: 1 is the default score; add 3-5 points if the agent is addressed directly; add 2-4 if the input triggers one of the agent’s patterns…
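
To make the scoring concrete, here is a rough sketch of the select-the-highest-score loop (illustrative only: the agents, score values and timeout are made up, loosely following the guidelines above, and the real system runs its agents as separate processes rather than in-process functions):

import concurrent.futures
import re

DEFAULT_SCORE = 1

def calc_agent(user_input):
    # Scores its response with the default 1 plus a +4 pattern bonus when it
    # recognises simple arithmetic; stays silent otherwise.
    m = re.fullmatch(r"\s*(\d+)\s*([-+*])\s*(\d+)\s*", user_input)
    if m:
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        result = {"+": a + b, "-": a - b, "*": a * b}[op]
        return DEFAULT_SCORE + 4, str(result)
    return None

def chat_agent(user_input):
    # Always answers, but only ever claims the default score.
    return DEFAULT_SCORE, "Interesting. Tell me more about that."

AGENTS = [calc_agent, chat_agent]

def controller(user_input, timeout=2.0):
    # Submit the input to every agent, wait a bounded time, keep the best reply.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, user_input) for agent in AGENTS]
        done, _ = concurrent.futures.wait(futures, timeout=timeout)
        scored = [f.result() for f in done if f.result() is not None]
        return max(scored, key=lambda pair: pair[0])[1]

print(controller("1 + 1"))        # 2
print(controller("hello there"))  # Interesting. Tell me more about that.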

 
