
Is it time to create an AI standard language?
 
Poll
Is there a need for a new AI/Chatbot language
Yes - Create a new independent language 9
Yes - But, I would prefer AIML was just enhanced 8
Yes - But, it should be based on a current language 3
No - each developer is best doing his own thing 5
No - I don’t think one language could do it all 3
No - It is a waste of time 2
Total Votes: 30
 
  [ # 61 ]

That’s a good point, but there have been other questions asked of chatbots that have been equally (or even more) “silly”.

If I hit you with a towel, will it hurt?


 

 
  [ # 62 ]
Steve Worswick - May 9, 2013:

As Art says though, who is going to ask a chatbot this sort of puzzle?

And here we blur the line between a chatbot and AI. Artificial Intelligence Planning and problem solving goes beyond what today’s chatbots use in conversations.
https://www.coursera.org/course/aiplan

I do find these types of problems allow me to explore the assumptions I have about the AI.
Even if we wanted to, how would we form the input?
How should we represent the problem in memory?
How should the AI go about solving the problem? Brute Force? Intelligent Algorithm?

Although the math solvers I worked on and put into Skynet-AI get little use in typical conversations, they allowed me to explore how the bot interacts and led to some interesting enhancements that would not otherwise have seen the light of day.

 

 

 

 
  [ # 63 ]

I just thought I’d throw in my 2 cents’ worth, since just this morning I was working on a language for my attempted artificial general intelligence (AGI) architecture. My newly created language looked just like assembly language, with an implied accumulator, gotos, stores, outputs, compare flags, and so on! I had to take a step back and ask myself what the heck I thought I was doing, using a 60-year-old, ultra-primitive, ultra-low-level language for supposedly advanced AI.

Then the answer hit me, which was quite an insight: it’s not really the language that matters, but the underlying representation. I’ve mentioned the problem of knowledge representation (KR) in this forum before: I believe KR constitutes the essence of any computing architecture, even more so than the software paradigm used, and certainly more so than any specific language itself. I believe KR is *the* critical topic to study for producing future AGI. The boolean logic we use nowadays is many times more advanced than is needed for practical survival-related problems or decisions in the real world, as are all the looping constructs we commonly use in programs: with the appropriate KR method, most of that logic and most of those control structures could be eliminated entirely.

The result I saw this morning was that my program was essentially 100% assembly language except for a single special non-assembly statement, which contained the concentrated dose of intelligence on which the entire program depended: sort of an “oracle” statement, in a sense. But that one statement was decomposable into clear-cut operations, so it wasn’t some “magic” or theoretical subroutine, though it was not guaranteed to produce an answer, or even the correct answer. That led to more insights: that this was the only possible behavior I could have expected, and also the only possible behavior we’ll *ever* be able to produce when trying to produce intelligence. We’re just finite beings with finite physical resources and finite time, trying to tackle problems that ultimately have infinite complexity and infinite extent.

Therefore, if my architecture ever works right, the future language of AI might look just like 60-year-old assembly language but set atop one heck of an exotic underlying KR method!

(By the way, the answer I would need to answer this poll question wasn’t listed, so I couldn’t vote, and besides, I also agree that a good language for chatbots might not be a good language for general applied AI, at least for the current state of the art and practical scope, so the question might be making a false assumption or two.)

 

 
  [ # 64 ]
Mark Atkins - May 10, 2013:

Then the answer hit me, which was quite an insight: it’s not really the language that matters, but the underlying representation. I’ve mentioned that problem of knowledge representation (KR) in this forum before: I believe KR consitutes the essence of any computing architecture, even more so than the software paradigm used, and certainly more so than any specific language itself.

I agree with you. Finding efficient ways to represent the knowledge is key. Why did you decide to use an assembly like language rather than using a higher level language to prototype the entire thing?

 

 
  [ # 65 ]

An “AI language” may need a description of connections along with the actual data.

Sebastian Seung’s TED Talk I am my connectome shows just how difficult it will be to emulate a human brain and its neural connections.

A Neuron’s Obsession Hints at Biology of Thought shows how a single neuron may be responsible/fired for a complex concept like “Halle Berry”.

The Jennifer Aniston neuron

 

 
  [ # 66 ]

Again, are we referring to a Chatbot / Virtual Assistant, or to an AI, be it an entity, factory / warehouse / laboratory / business / home type of “Intelligence”?

For a Chatbot, I would think that it would be nice for it to emulate some of the functions and associated actions / emotions of a human counterpart. It should be able to guess in certain instances whether something is best suited for its human “partner” and act upon it when or if confirmation is required or directed.

For an AI based entity, I’m not sure whether it SHOULD emulate the human brain and all those neuron connections.

The factory based one should be able to answer rudimentary questions and carry out desired tasks with certainty and accuracy. It should likewise be equipped with a programmable “Decision Tree” that it can execute if and only if certain conditions are met.

Giving such an entity the power to wield as it deems necessary is kind of unsettling to a large degree. Sort of akin to the “infallible” HAL 9000 that ultimately killed the ship’s crew while they slept!

Just some more wood for the fires of thought….

Y’all have a great weekend!!;)

 

 
  [ # 67 ]
Merlin - May 10, 2013:

Why did you decide to use an assembly like language rather than using a higher level language to prototype the entire thing?

My simulation is written in Visual Python, which is a language independent of the architecture that is being simulated, somewhat like writing an artificial neural network simulation in C, or like writing a C compiler in LISP.

As for the language being simulated, I didn’t “decide” to use assembly language: I merely used the simplest commands that would accomplish the chores I needed, and after creating several such commands I noticed that the result was almost exactly equivalent to assembly language. For example, I really needed only one concept to be held in memory at one time—the most recent concept (analogous to a chatbot needing to remember only the most recent sentence)—which is what an accumulator does. No loops were needed, nor strings, nor arrays, nor pointers, nor variable assignments, and so far not even any if-then’s. Objects needed to be compared for equivalence but all I needed from the result of that operation was a simple boolean yes/no, which is exactly what some assembly languages do in a single command, the output of which is a special boolean flag that is set automatically and can be easily checked.
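For what it’s worth, the kind of accumulator machine described here can be sketched in a few lines of Python. This is purely my own illustration of the idea (the instruction names LOAD, STORE, CMP, OUT and the run loop are invented for the sketch, not Mark’s actual design):

```python
# Hypothetical sketch of an accumulator-style mini-language: one concept
# "in mind" at a time, a boolean compare flag, and no loops, arrays,
# pointers, or variable assignments in the language being interpreted.

def run(program, memory=None):
    """Execute a list of (op, arg) tuples against a single accumulator."""
    memory = dict(memory or {})
    acc = None          # the one concept held at a time
    flag = False        # set by CMP, like a CPU's zero flag
    output = []
    for op, arg in program:
        if op == "LOAD":     # bring a stored concept into the accumulator
            acc = memory[arg]
        elif op == "STORE":  # save the current concept under a name
            memory[arg] = acc
        elif op == "CMP":    # equivalence test; result is just a boolean flag
            flag = (acc == arg)
        elif op == "OUT":    # emit the current concept
            output.append(acc)
    return output, flag

out, flag = run([("LOAD", "greeting"), ("CMP", "hello"), ("OUT", None)],
                {"greeting": "hello"})
# out == ["hello"], flag == True
```

The point of the sketch is how little machinery is needed once the hard analytical work happens elsewhere: the “oracle” statement Mark describes would slot in as just one more opcode.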

The situation was analogous to one that arises in neural network design, where the bulk of the recognition effort is done in the feature detectors, and the rest of the network merely linearly weights the output of those low-level detectors, which is a much easier task than involving the whole network in learning a very complicated mapping. In this case I put the bulk of the work into a single statement that analyzed the raw input (analogous to parsing a sentence given to a chatbot), and the rest of the needed logic was so simple that even assembly language could handle it. All the design work I did previously was aimed at performing that analytical stage with extreme generality and efficiency.

If nothing else, it was an auspicious start. I had recently finalized some design changes and I was needing to consider how the system could store algorithms, and I thought it was interesting and promising that the needed logic and language were so simple.

 

 

 
  [ # 68 ]

Really like the fact that this thread has clearly shown the distinction between AI and most chatbots. That’s not to belittle current chatbots, as there’s some clever programming out there and the task is immense.

For some time, I’ve tried to distinguish pattern matching and fixed responses to identified triggers from AI, because I don’t think they are the same thing. Some bots have so many canned outputs that they are perceived as good AI from the perspective of some competitions.

I would love to see the various chatbot competitions not awarding higher scores for funnier or more interesting responses. Frankly, that is just testing the writing skills or wit of the botmaster.

Sometime ago I had a debate in the Verbot forum about the quality of various responses, and how we might award them. An example of that was:

User: How are you doing today

And then a variety of replies that might be given:

Bot: I am very good thanks
or
Bot: Me? I am Mr Splendid today and I am finer than a fine thing on a sunny day
or
Bot: Good. You?

Now, my guess is the second one would get the most points from a judge on, say, Chatterboxchallenge. However, for me the third one is the best (A)I response. It’s the most natural, and leads the conversation in the direction it would take with a lot of people (in the third, the bot is seemingly showing an (artificial) understanding of the need to be nice back).

I’m not saying any of these responses suggest great AI, but I’d like to see the development go more in the direction of the 3rd response, and competitions to start looking for more natural responses and further signs of AI, rather than clever ones.

 

 

 

 

 

 
  [ # 69 ]

The only “problem” there, Mike (if you can call it a “problem”), is that for some people (myself included and especially), the second response is actually more “natural” than the other ones (that is, that’s how I, personally, speak). The concept of “natural responses” is a highly subjective one, and it is exceedingly hard to inject even a modicum of objectivity into it.

Just sayin’. ;)

 

 
  [ # 70 ]

Yes, fair point. I didn’t want to suggest that any of those responses were wrong. I just think that the third response, in my opinion, is often considered less intelligent. That’s the bit I don’t agree with.

Obviously, if the purpose is an entertaining bot, that’s different. I’m only talking about what looks like better AI. For me, the third one is better than the second in that respect because the bot is showing an understanding (or is faking an understanding) of the purpose of the user’s input, which is to make nice and interact.

 

 
  [ # 71 ]
Hans Peter Willems - Apr 24, 2011:

I think what Erwin means is that the moment that ‘strong AI’ comes into existence, it will just about immediately grow far beyond human thinking capabilities. Current super-computers are already far ahead of humans when it comes to pure processing power.

I’m not sure about this, as far as comparing pure processing power.

For example, take the process of recognising the face of someone you once met and then deciding what to do about it. The processing power psychologists believe we use for that is immense, far beyond current technology.

I saw an interesting question once about whether computers would be as efficient at mathematical calculations if they had to deal with emotions and other issues waiting to be solved.

I think what you are classing as processing power is the power of a super-computer focused on one task. I believe even the most advanced super-computer or neural network can only provide very basic facial recognition. And for us, that recognition is only the start of a thought process.

 

 
  [ # 72 ]

I think Xaiml is the best way to go. The other chatbot languages are too rigid and do not provide enough commands/tags to perform many silicopsychological functions.

 

 
  [ # 73 ]
Devyn Collier Johnson - Oct 2, 2013:

...The other chatbot languages are too rigid and do not provide enough commands/tags to perform many silicopsychological functions.

Could you give us an example of such a function?

 

 
  [ # 74 ]
Devyn Collier Johnson - Oct 2, 2013:

I think Xaiml is the best way to go.

I think that is a bit naive. An AI standard language should be fully cross-platform, concise, and address the specific issues of the target domain that cannot be well represented in another language. It needs portability, flexibility, and potentially extensibility.

Xaiml currently intermixes basic AI functionality with tags related to platform specific functionality. A developer trying to support Xaiml would need to develop and debug tags that his users never could or should use. A couple quick examples:

Xaiml allows some common minor spelling errors.

Xaiml allows some tag mis-matching.
<  prn>RANDOM TEXT</pattern>

Although helpful for a user interface entering information, I can’t think of any interpreter that encourages mismatched open/close tags. This leads to additional overhead on the interpreter which is unnecessary, since a 1 time pass through the file could fix all the tags and allow the interpreter to only look for a subset. I also believe it breaks the XML standard which is one of the few advantages of using a “tag” based language.
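That one-time pass could be as simple as a stack-based scan that rewrites every closing tag to match the most recently opened tag, so the interpreter downstream only ever sees well-formed pairs. A rough Python sketch of the idea (the tag grammar here is deliberately simplified, and the function name is mine):

```python
import re

# One-pass repair of mismatched close tags: whenever a closing tag is
# seen, replace its name with that of the most recently opened tag.
# This is a sketch, not a full XML parser (no attributes, no self-closing tags).

def fix_closing_tags(text):
    stack, out, pos = [], [], 0
    for m in re.finditer(r"<\s*(/?)\s*([A-Za-z_]+)\s*>", text):
        out.append(text[pos:m.start()])
        closing, name = m.group(1), m.group(2)
        if closing:
            # whatever name was typed, close the tag that is actually open
            name = stack.pop() if stack else name
            out.append("</%s>" % name)
        else:
            stack.append(name)
            out.append("<%s>" % name)
        pos = m.end()
    out.append(text[pos:])
    return "".join(out)

print(fix_closing_tags("<  prn>RANDOM TEXT</pattern>"))  # <prn>RANDOM TEXT</prn>
```

After a pass like this, the interpreter only has to handle canonical, properly paired tags, and the file is back within reach of a standard XML parser.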

There are three styles that the tags may be typed. Most tags are XML-style, some are BASH-style, and very few are Python-style. An example of all three of these can be shown with the swapoff tag. The XML-style is <swapoff>, the BASH-style is <swapoff -a>, and the Python-style is <swapoff()>. Not all tags can be typed in BASH style. However, all tags that end in “>” can be typed in Python style.

Xaiml accepts <think>, <thought>, <thinking>, <thnk>, and <thk>.
<input/> - This tag prints the user’s input. Equivalent tags include <in>, <usersaid>, <usersaidwhat>, and <repeatme>.

The goal should be for a single style to ease portability, debugging and support. For developers who use English as a second language, explaining the rational for all the different flavors of “think” or “input” is just extra overhead. You quickly get into discussion of other spellings/synonyms and why they also are not supported. Every tag that is part of a language requires documentation, development, support and training. It often makes debugging more problematic.
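The same pre-pass idea applies to the aliases: collapse every spelling to one canonical tag before interpretation, so the interpreter, the documentation, and the debugger only ever deal with a single form. A rough Python sketch, using the alias lists quoted above (the function and table names are my own):

```python
import re

# Collapse tag aliases to one canonical spelling in a pre-pass.
# The alias lists for "think" and "input" are the ones quoted from
# the Xaiml docs in this thread; everything else is illustrative.

ALIASES = {
    "thought": "think", "thinking": "think", "thnk": "think", "thk": "think",
    "in": "input", "usersaid": "input", "usersaidwhat": "input",
    "repeatme": "input",
}

def canonicalize(text):
    def sub(m):
        slash, name, selfclose = m.group(1), m.group(2).lower(), m.group(3)
        return "<%s%s%s>" % (slash, ALIASES.get(name, name), selfclose)
    # simplified grammar: plain tags, optional leading / or trailing /
    return re.sub(r"<(/?)(\w+)(/?)>", sub, text)

print(canonicalize("<thought>hi</thnk><usersaid/>"))
# -> <think>hi</think><input/>
```

With a table like this, adding or dropping an alias is a one-line data change rather than a change to the interpreter itself.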

<perl> - This tag executes perl code that is enclosed in the tag.
<python>
<jython>
<cython>
<scilab>
<ruby>
<jruby>
<rubyjs>
<csh>
<tcsh>
<ksh>
<zsh>

Tags like these make operating system/program assumptions that you can’t assume in a cross-platform AI standard environment.

<french_ruler/> - This tag prints a random French ruler name from ./databases/french_rulers.db.
<georgian_alphabet/>

Most botmasters/AI builders would find tags like these useless overhead.

<opera/> - This tag opens the Opera web browser.
<firefox/> - This tag opens Firefox. Equivalent tags include <ff/>, <firef/>, <firex/>, <ffox/>, <mozilla/>, <mff/>, <mozilla firefox/>, and <mozillafirefox/>.
<xterm/> - This tag opens Xterm.
...
<adobe>
...

You have turned aliases to applications into tags. You can’t assume every platform will have every “APP tag” that is currently in Xaiml.
By the way, “Adobe” is a company, not an application. That tag is ambiguous: it could mean the Acrobat viewer, which displays PDF files, or any number of other Adobe apps like the Creative Suite or Photoshop.

You would be better off supporting a “LAUNCH” command that would then allow the user to open a resident app and fall back to an error mechanism on fail.
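Something along these lines would do it (a minimal Python sketch of the suggested LAUNCH semantics; this is my reading of the proposal, not an existing Xaiml feature):

```python
import shutil
import subprocess

# Hedged sketch of a single LAUNCH command: look the app up on this
# platform's PATH, start it if present, otherwise fall back to an
# error response instead of hard-coding one tag per application.

def launch(app_name):
    path = shutil.which(app_name)   # is the app installed here?
    if path is None:
        return "Sorry, I can't find '%s' on this system." % app_name
    subprocess.Popen([path])        # fire and forget
    return "Launching %s..." % app_name

print(launch("definitely-not-installed-app"))
```

One command plus a lookup replaces the whole family of <opera/>, <firefox/>, <xterm/> tags, and a bot can degrade gracefully on platforms where an app is missing.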

Devyn Collier Johnson - Oct 2, 2013:

The other chatbot languages are too rigid and do not provide enough commands/tags to perform many silicopsychological functions.

I would be interested in your analysis of ChatScript/Rivescript and how you found them too rigid. Both are open source.

You make the assumption in Xaiml that more commands/tags are better. I hold exactly the opposite view.
My own language JAIL (JavaScript Artificial Intelligence Language) is not public. But, JAIL is largely tag-less and currently has the distinction of having run on more platforms than any other chatbot/AI language/interpreter. If nothing else it proves that tags are not necessarily a requirement.

 

 

 
  [ # 75 ]
Steve Worswick - Oct 2, 2013:

Could you give us an example of such a function?

I made up the term “silicopsychological”. At this point in technological development, computers cannot have true psychological functions. I am mainly referring to emotional and human-like mental processes. Xaiml has many emotion tags. Xaiml also has an <interjection> tag. In the latest developmental version of Xaiml (version 1.9b), I am adding more tags for better emotions and psychological features. In the bot’s personality and settings file (called startup.xml in Betabots), I have various psychological qualities scored on a scale of 0 to 10. My newest tags will be more specific condition tags for better personality support.

 
