

Bragging rights and maybe a couple of bucks
  [ # 31 ]

Nope, Dan, I don’t agree. Unlike the Turing test, the goal is not to emulate a human.

Skynet-AI is a robot. Although it is attempting to be more “human”, it specifically rejects the concept of a robot having a gender.
Izar is an alien. I would expect it to answer in a way consistent with its persona.

Vince, I have no problem if the questions are drawn evenly from all of the botmasters who have submitted them.

  [ # 32 ]

All I am saying is that the more ridiculous the characters you allow in a competition, the less seriously everyone else will take it.

  [ # 33 ]

Sorry Dan, but I must also agree with Merlin. One thing I dislike about Turing tests is having to waste time dumbing down the robot to give human-like answers rather than useful ones. I wouldn’t like to see another contest where I have to do the same.

  [ # 34 ]

People will not accept that a system is intelligent until it shows intelligence. Answering a human’s questions with extreme accuracy is not intelligent when the audience does not want or need that kind of accuracy. By answering in a non-human way you immediately alienate a person from your system. The same is true if your system pretends it is a dragon, alien or cat: people may find it fun, but they will dismiss it as a joke.

If we ever want to build a system that people accept as intelligent, it MUST behave and talk the way we all do.

If you asked the time and a person said “18:04 and 22.3 seconds”, what would you think? “Just gone six” is not dumbing down, it’s just more human. It is the level of accuracy a person desires, and is therefore a more intelligent answer.

  [ # 35 ]

Talking Angela is a perfectly legit commercial ChatScript application with the persona of a cat. That seems serious enough to me. Meanwhile, in the serious world, we have scientists putting Einstein’s head on Asimo bodies, robot baby seals to keep elderly people with dementia on track, and military-funded projects to develop AI for cleaning up coffee mugs.

Although I’d be in favour of all varieties of chatbots, it does raise a point. The answers of “human” chatbots can be verified as being “correct”, consistent with the reality we know, but that seems more difficult when a chatbot can answer everything with “Well, on my planet…”

  [ # 36 ]
Daniel Burke - Oct 6, 2014:

People will not accept that a system is intelligent until it shows intelligence. Answering a human’s questions with extreme accuracy is not intelligent when the audience does not want or need that kind of accuracy. By answering in a non-human way you immediately alienate a person from your system. The same is true if your system pretends it is a dragon, alien or cat: people may find it fun, but they will dismiss it as a joke.

If we ever want to build a system that people accept as intelligent, it MUST behave and talk the way we all do.

I disagree with your premise. An AI does not have to behave and talk the way we do.

The best example in Skynet-AI is how fast it responds. Most answers are provided in less than one second (for answers that don’t require external web searches). Although it has been suggested that slowing the responses down could make it more human-like, the speed of its responses is one of the things users like best about the bot. Average human response times are in the 10-15 second range.

Although Skynet-AI makes no pretense at being human, I often read chat logs where people believe that a human is on the other end of the connection.

Intelligence/usefulness does not equal human emulation.

USER: What time is it?
AI: It is about six forty one.

Response time: 5 milliseconds
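
For illustration only (this isn’t Skynet-AI’s actual code, and the helper names and rounding here are just my assumptions), a reply in that style can be produced with a few lines of Python:

import datetime

NUMBER_WORDS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
                "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS_WORDS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty"}

def minute_words(minute):
    # Spell out a minute value 0-59, e.g. 41 -> "forty one", 5 -> "oh five".
    if minute == 0:
        return "o'clock"
    if minute < 10:
        return "oh " + NUMBER_WORDS[minute]
    if minute < 20:
        return NUMBER_WORDS[minute]
    tens, ones = divmod(minute, 10)
    return TENS_WORDS[tens] + ("" if ones == 0 else " " + NUMBER_WORDS[ones])

def human_time(now=None):
    # Drop the seconds entirely and phrase the time the way a person would say it.
    now = now or datetime.datetime.now()
    hour = now.hour % 12 or 12  # 12-hour clock
    return "It is about " + NUMBER_WORDS[hour] + " " + minute_words(now.minute) + "."

print(human_time(datetime.datetime(2014, 10, 6, 18, 41)))  # It is about six forty one.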

  [ # 37 ]

“I disagree with your premise. An AI does not have to behave and talk the way we do.
Intelligence/usefulness does not equal human emulation.

USER: What time is it?
AI: It is about six forty one.

Response time: 5 milliseconds.”

If what you say is true, then why bother with the “It is about” part of your answer? That is a very human thing to say. I agree a system does not HAVE to behave the way we do, and I didn’t say non-human systems were of no use. It’s just that people will only fully accept a system as intelligent when it can chat and behave in a human way.

Anyway, there’s no point in endless discussions about it; let’s just try to make this competition as good and fun as possible.

Dan

  [ # 38 ]
Daniel Burke - Oct 6, 2014:

If we ever want to build a system that people accept as intelligent, it MUST behave and talk the way we all do.

...must behave and talk the way “we all” do? Oh Daniel. I would love to share 50,000 chat logs with you where my bot carries on conversations with so-called “intelligent” people. I guess, by your logic, our bots should reply with complete gibberish and nonsensical replies, like the majority of people out there do in real life when asked a question by the bot? Interestingly enough, the more accurately our bots respond, the less “human-like” the responses become. If we want more “human-like” responses, then when our bots are asked “What time is it?” we should respond like most of the humans who chat with our bots: “Time to fu**ing party you ass****!” I guess TrollBot is giving the most human-like responses! grin

  [ # 39 ]

Sounds good, Dan. I am excited to see all the questions.

  [ # 40 ]
Steve Worswick - Oct 5, 2014:

Sorry Dan, but I must also agree with Merlin. One thing I dislike about Turing tests is having to waste time dumbing down the robot to give human-like answers rather than useful ones. I wouldn’t like to see another contest where I have to do the same.

I’m afraid Dan is right in the bigger scheme of things. Many people here called Vlad Vesselov’s bot a “cheater” and worse, not because he tries to portray a dragon, but because he portrays a 13-year-old ESL human. If you ever want to be more than an amateur at this, and I mean any of you, then you must be prepared to be compared directly with human beings. Humans are the people we interface with in our work, our play, and our deepest relationships. Cats don’t talk in reality. There are no Winograd schemas designed to test cats, rats, or fleas.

Mainstream AI research is not concerned with cartoon characters and fictional creatures. It studies the human brain and ways to imitate it.

Chatbot pageants and contests are fun, but don’t confuse that with anything serious.

Have fun,

Robby.

  [ # 41 ]

I don’t know if there is such a thing as mainstream AI; the field is very diverse and I’ve seen some pretty silly things: emulation of ant brains, AI that plays chess, neural nets playing Space Invaders, SHRDLU playing with blocks, AI for generating word jokes. The question of designing robots with features of cartoon characters or animals is actually an important area of research concerning social interaction between AI and humans. Debates on the subject were held in all the AI departments of Dutch universities last weekend, with regard to robots “taking care of the elderly”.

But as for this contest, I don’t think it was supposed to be a test of intelligence, but rather a test of the bots’ various virtues, particularly conversational skills. Not unlike how Google, Apple and Microsoft are trying to build talking phones based on military technology that merely utilises keywords.

That is not to say I don’t draw a thick line between chatbots and AI myself smile

  [ # 42 ]

Daniel,

I think we have that idea built in without limiting persona, although we can certainly improve on anything. The judges award points on accuracy, as well as on a bot’s ability to represent its persona. Here is an example that may better illustrate your point, I believe.

“Who invented the light bulb?”

An answer such as

“On my planet the light bulb was invented by Flerndip Giggleblatt”

would score lower than

“Thomas Edison”

A simple “Thomas Edison” would score lower than a hip California surfer teacher that responded

“Dude, that gnarly dude Thomas Edison invented the light bulb”

And any of those would score higher than

“I farted” (Don’t laugh; that’s an actual answer.)

Depending on the context, “I farted” might score high in the humor category, because, well… you just can’t go wrong with fart jokes. It’s human nature. That’s why you laughed wink
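
For what it’s worth, that ranking is consistent with a simple weighted rubric. The sketch below is only an illustration; the real judging is done by humans, and the category names, weights and marks are invented:

# Hypothetical rubric, for illustration only; the actual contest uses human
# judges, and these categories, weights and example marks are invented.
WEIGHTS = {"accuracy": 3, "persona": 2, "humor": 1}

def score(marks):
    # marks: per-category scores from 0 to 10, e.g. {"accuracy": 10, "persona": 4}
    return sum(weight * marks.get(category, 0) for category, weight in WEIGHTS.items())

print(score({"accuracy": 10, "persona": 9, "humor": 5}))  # 53 - surfer-teacher Edison
print(score({"accuracy": 10, "persona": 3, "humor": 0}))  # 36 - plain "Thomas Edison"
print(score({"accuracy": 0,  "persona": 8, "humor": 2}))  # 18 - "On my planet..."
print(score({"accuracy": 0,  "persona": 0, "humor": 6}))  #  6 - "I farted"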

While I see your point, my personal opinion is that I would hate to see “alternate personas” banned. Someone might develop the ultimate NLP engine and use an “outer space robot” to present it. Skynet-AI comes to mind as a bot that does not fit the “male or female human” model, and yet is proprietary technology that wins consistently and does very well responding to “real world” questions, as well as answering with a unique personality. I’m interested in hearing other opinions.

Vince

Vince

  [ # 43 ]

On another subject, I see the beginning of a “permanent contest committee” here, with some really big names in the field weighing in with some really good points. I am a relative newcomer to the public view of chatbots, although I have been working on the idea for a few decades. My personal opinion is that the problems which caused such a furor after the announcement of Eugene Goostman’s success, as well as most of the complaints about contests in general, can be solved by broadening the categories with which the answers are scored. However, these contests take a lot of time, no doubt. My belief is that a permanent “governing body” with rotating members will go a long way towards these contests being taken more seriously by other areas of AI research. And… although the “serious AI community” is currently in love with “big data”, a human child does not learn by being force-fed petabytes of information; they learn by interacting with their environment, learning to converse, and subsequently conversing.

So Brian, will you be entering?
How about you, Don?

Vince

(Just one Vince this time; don’t ask me where the “Vince Vince” in the previous post came from, but it wouldn’t let me edit it either.)

  [ # 44 ]

I love the setup of the contest, testing whatever the programs are good at by using a broad range of questions. But for me, I’ve had my fill of contests for a while and want to get some work done before I dive into another. Also, considering the majority of questions will be coming from chatbot creators, I have to wonder how much the programs would be tested in the department of logic.

  [ # 45 ]
Don Patrick - Oct 7, 2014:

I don’t know if there is such a thing as mainstream AI, ...
That is not to say I don’t draw a thick line between chatbots and AI myself smile

Don, you are mainstream AI. You’re just not getting paid for it.

Robby.

 
