

State of the Art
 
 
  [ # 16 ]

ok, we’ll start with 6 months; after 18 months we’ll reduce this to 3 months; after another 18 months to 6 weeks; followed by 3 weeks, 10 days, 5 days, 60 hours, 30 hours, 15 hours, 450 minutes, 225 minutes, 112 minutes, 56 minutes, 28 minutes, 14 minutes, 7 minutes, 210 seconds, 105, 52, 26, 13, 6, 3 seconds. And that’s where it stops, as people need 3 seconds to assign their points.

Thanks for your vote!
Please vote!

grin

 

 
  [ # 17 ]

lol, ok. Point taken. LOL I think I’ll go back to bed now. {chuckle}

 

 
  [ # 18 ]
Andrew Smith - Oct 5, 2010:

That is a very interesting article and the description of how NELL went wrong was particularly intriguing. Given that NELL is supposed to be able to correct itself as it learns more, I wonder if it would have been able to figure out its misunderstanding of baked goods by itself eventually. If not, then what strategies, other than the obvious one of intervention by a teacher, might reduce the likelihood of such mistakes in future?

I think the problem is that the more it veered down the wrong path, the more self-reinforcing its wrong conclusions became. That is, the learned incorrect facts worked to boost confidence in the new incorrect facts, snowballing the error. At some level, there is no cure for this. How many pernicious ideas become common knowledge and remain so for years and years until someone comes along and thinks outside the knowledge base? wink
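To make the snowballing concrete, here’s a toy Python sketch of what I mean (my own illustration, not NELL’s actual algorithm; the facts, threshold, and boost size are all invented):

```python
# Toy model of bootstrapped learning snowballing an early mistake:
# every fact promoted into the knowledge base boosts confidence in
# similar candidate facts, dragging them over the threshold too.

candidates = {
    ("cookie", "baked_good"): 0.60,           # correct seed fact
    ("internet cookie", "baked_good"): 0.55,  # the early mistake
    ("browser cookie", "baked_good"): 0.40,
    ("tracking cookie", "baked_good"): 0.35,
}

ACCEPT = 0.5            # promotion threshold (made up)
knowledge_base = set()

for _ in range(3):      # a few bootstrapping rounds
    for fact, conf in list(candidates.items()):
        if conf >= ACCEPT and fact not in knowledge_base:
            knowledge_base.add(fact)
            # Accepted facts raise confidence in lexically similar
            # candidates, so one wrong promotion reinforces the next.
            for other in candidates:
                if other != fact and "cookie" in other[0]:
                    candidates[other] = min(1.0, candidates[other] + 0.1)

print(knowledge_base)   # ends up calling every kind of cookie a baked good
```

Once the first wrong fact gets in, there is nothing inside the loop that can push the others back out.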

But I think for NELL the best hope is a large knowledge base that is well-trimmed by its creators. If NELL had known anything about internet cookies to start with, the mistake would not have happened. The more it knows, the more it will avoid the same problems.

I wonder if this has a limit though. Would certain topics become more vague and less helpful to NELL the more it knew about them? This is certainly true in science. The more you know about a particular sub-field, the more its established facts, which are useful to take for granted in other fields, become matters of opinion and shifting confidence within the sub-field itself.

 

 
  [ # 19 ]
C R Hunt - Oct 5, 2010:

I wonder if this has a limit though. Would certain topics become more vague and less helpful to NELL the more it knew about them? This is certainly true in science. The more you know about a particular sub-field, the more its established facts, which are useful to take for granted in other fields, become matters of opinion and shifting confidence within the sub-field itself.

This is true with specialization in ANY field, not just science. The more specialized someone/something becomes, the less “generally useful” they are. That’s not to say that the specialist in question is useless, but when was the last time you saw a neurosurgeon repair a leaky drain under a sink, or change a flat tire, or cook even a simple meal? Specialization has its place, but care should be used to make sure that “information overload” doesn’t occur. Perhaps I’m veering a bit from the idea at hand, but I feel that the premise still holds. smile

 

 
  [ # 20 ]
Dave Morton - Oct 5, 2010:

This is true with specialization in ANY field, not just science. The more specialized someone/something becomes the less “generally useful” they are. That’s not to say that the specialist in question is useless, but when was the last time you saw a neurosurgeon repair a leaky drain under a sink, or change a flat tire, or cook even a simple meal?

Well, the advantage of a bot is that it is less single-minded (in principle it can focus on multiple topics at a time), and at least for NELL, it has motivation to pick up new fields: it’s hard-coded to be a Renaissance Bot!

Dave Morton - Oct 5, 2010:

Specialization has its place, but care should be used to make sure that “information overload” doesn’t occur. Perhaps I’m veering a bit from the idea at hand, but I feel that the premise still holds. smile

No, I think we’ve got the same idea here. The danger is that learning about a new subject might require, at least at first, a naive interpretation of another subject. But if the bot has a very complex and nuanced database for the other subject, and could not differentiate what level of nuance a particular fact has, it might not be able to learn effectively about the new subject and might be more likely to go astray or assign confidences poorly. I’m imagining a bot trying to apply quantum mechanical ideas to how an airplane flies. smile

 

 
  [ # 21 ]

{offTopic}
Won’t it be nice when we finally figure out the “missing link” that will reconcile Newtonian Physics and Quantum Physics so that they can “play nicely” with each other?
{/offTopic}

I think that a lot of the identifying indicators that help to define the level of nuance or specialization can be “tagged” in the knowledge base to help keep the algorithms “centered” on the precise concept involved. In your example, “lift” and “drag” have no bearing on a Higgs Field, whereas “quarks” and “muons” have no bearing on atmospheric flight. Thus, these terms can be labeled within the KB as “Q_physics” or “N_physics”, accordingly. Obviously, I’m just using labels off the top of my head, but I’m hoping that my meaning is taken.
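Something like this could be sketched in a few lines of Python (a toy of my own using made-up labels, not a real KB schema):

```python
# Each term carries domain tags so a reasoner can stay "centered"
# on the sub-field that is actually relevant to the question.

knowledge_base = {
    "lift":   {"N_physics"},
    "drag":   {"N_physics"},
    "quark":  {"Q_physics"},
    "muon":   {"Q_physics"},
    "energy": {"N_physics", "Q_physics"},  # some terms span domains
}

def relevant_terms(domain):
    """Return the terms tagged as belonging to the given domain."""
    return {term for term, tags in knowledge_base.items() if domain in tags}

# Reasoning about atmospheric flight? Restrict attention to N_physics:
print(relevant_terms("N_physics"))  # {'lift', 'drag', 'energy'}, in some order
```

The hard part, of course, is deciding who assigns the tags: the bot or its teachers.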

 

 
  [ # 22 ]
Dave Morton - Oct 5, 2010:

{offTopic}
Won’t it be nice when we finally figure out the “missing link” that will reconcile Newtonian Physics and Quantum Physics so that they can “play nicely” with each other?
{/offTopic}

{yay physics}It’s all a question of scale. Quantum mechanics can be taken to the limit of airplanes. Of course, the result would be a classical Hamiltonian (completely equivalent to Newtonian mechanics) and we’d be back in the realm of the familiar. The murky crossover regime is only murky in practice, not in principle. Setting up Hamiltonians and defining wavefunctions for complex systems with important, emergent classical properties is a tricky science, and getting any useful solutions is a potentially computationally intensive process. That’s where the “art” of physics takes over and tricks are used to employ classical mechanics whenever possible.
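For anyone who wants the one-line version of “completely equivalent to Newtonian mechanics”, the standard textbook correspondence for a single particle is:

```latex
H(q,p) = \frac{p^2}{2m} + V(q), \qquad
\dot{q} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot{p} = -\frac{\partial H}{\partial q} = -V'(q)
\quad\Longrightarrow\quad m\ddot{q} = -V'(q) = F
```

Eliminating p recovers Newton’s second law, which is the sense in which the classical limit lands us back in the realm of the familiar.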

Now the t-shirt I’d like to buy is the one with the equations that get quantum mechanics and relativity to “play nicely.” smile {/yay physics}

Dave Morton - Oct 5, 2010:

I think that a lot of the identifying indicators that help to define the level of nuance or specialization can be “tagged” in the knowledge base to help keep the algorithms “centered” on the precise concept involved. In your example, “lift” and “drag” have no bearing on a Higgs Field, whereas “quarks” and “muons” have no bearing on atmospheric flight. Thus, these terms can be labeled within the KB as “Q_physics” or “N_physics”, accordingly. Obviously, I’m just using labels off the top of my head, but I’m hoping that my meaning is taken.

Would a bot like NELL be intelligent enough to define its own sub-categories like this? Even if the bot knew a million and two physics facts, would it be able to extrapolate that, say, muons are unrelated to atmospheric flight? As it happens, we are constantly bombarded by muons penetrating our atmosphere. (I had to measure them myself in an undergrad lab!) Now, this doesn’t affect an airplane. But the link between “atmosphere”, “muon”, and “airplane” is there. And so is the link between “physics” and “airplane”. How long before the bot decides that airplanes can teleport?

 

 
  [ # 23 ]

Erwin,
So what’s wrong with “Walleehhoo”?  I was born and raised there.

=) j/k

I’ll participate in polls if you post them.

Andrew,

Given that NELL is supposed to be able to correct itself as it learns more, I wonder if it would have been able to figure out its misunderstanding of baked goods by itself eventually. If not, then what strategies, other than the obvious one of intervention by a teacher, might reduce the likelihood of such mistakes in future?

It seems that with each piece of data harvested, the program records the source. An initial believability value of 0.5 might be assigned. Then, as the program finds different sources, it increases the value.

CR pointed out that a belief can become accepted by everyone as common knowledge until someone develops a new idea (greatly paraphrased). I’ve got a few possible thoughts on this.

1)  Don’t worry about your bot being a genius…it’s okay for a bot to be quite ordinary. It’s okay for the bot to follow along with the general population of people. Right or wrong, humans do pretty well operating this way.

2)  Set limits on the believability so the range is 0.05 to 0.95. This prevents the bot from being 100% convinced of a fact. Now, at a score of 0.95 it can still be quite stubborn. However, there is room for the bot to change its mind. 

3) Somehow track a ‘moving average’ of a fact. After several samples (collections of different data points), this MA increases the bot’s believability in, and acceptance of, a fact. Then, as opposing facts begin to emerge, they alter the MA.

4) Designate a couple of sources as being highly believable. The opinion of these sources could carry more weight in the bot’s assessment. (A rough sketch combining ideas 2 through 4 follows below.)
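Here’s that sketch in Python, a toy of my own rather than anything NELL actually does (the source names, weights, and step size are all invented):

```python
# Believability update combining ideas 2-4: clamp scores to [0.05, 0.95],
# move them as an exponential moving average, and give trusted sources
# extra weight.

TRUSTED_SOURCES = {"encyclopedia", "teacher"}  # hypothetical examples

def update_believability(current, supports_fact, source, alpha=0.2):
    """Nudge believability toward 1.0 (supporting evidence) or 0.0
    (opposing evidence), weighting trusted sources more heavily."""
    weight = alpha * (2.0 if source in TRUSTED_SOURCES else 1.0)
    target = 1.0 if supports_fact else 0.0
    new = current + weight * (target - current)  # moving-average step
    return max(0.05, min(0.95, new))             # never 100% convinced

# Start at the initial 0.5 and feed in some evidence:
b = 0.5
b = update_believability(b, True, "random_blog")   # ~0.60
b = update_believability(b, True, "encyclopedia")  # ~0.76
b = update_believability(b, False, "random_blog")  # ~0.61
print(round(b, 2))
```

The clamp at 0.95 is what leaves the bot stubborn but persuadable, and the clamp at 0.05 keeps a discredited fact from being buried forever.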

This is certainly a very interesting idea. I’ll give it some more thought.

Regards,
Chuck

 

 
  [ # 24 ]

Cool stuff, Andrew. We’ll create a separate article on this (and actually look for authors to review this whole concept…).

 

 
  [ # 25 ]
Andrew Smith - Sep 30, 2010:

So what is the current state of the art for artificial intelligence in general, and conversational software (chatbots) in particular? I’ll list a few of the projects that I know about here, from time to time, and I would like to hear about any others that you know about too. Then it would be inspirational and informative to discuss them further, possibly in topics of their own.

Number one on the list would have to be CYC (pronounced sike). This project has been in development since 1984 and was originally expected to take fifty years to complete! Unfortunately it has mostly been funded by the US military, so the general public will probably never know its true capabilities (or lack of them). However, the last I heard about it was that CYC’s initial knowledge base of common sense (its primer) was largely complete and that it was busy reading the internet to acquire new knowledge.

http://www.cyc.com/

There is also a partly open source version that the general public can download and use here:

http://www.opencyc.org/

I’ve checked CYC. It seems like a collection of millions of pieces of information organized into domains. I’m wondering if it can do family tree reasoning? A simple case might be an input of “Tom is Jerry’s father; Jerry is Dora’s brother”, followed by questions like “is Tom Dora’s father?” or “is Dora Tom’s sister?”
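For what it’s worth, the inference itself is simple enough to sketch in a few lines of Python (a toy rule engine of my own, not CYC’s actual machinery):

```python
# Facts as (subject, relation, object) triples, plus one hand-written rule:
# the father of a person is also the father of that person's siblings.

facts = {("Tom", "father_of", "Jerry"), ("Jerry", "brother_of", "Dora")}

def infer(facts):
    """Apply the sibling rule once and return the enlarged fact set."""
    derived = set(facts)
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == "father_of" and r2 == "brother_of" and b == c:
                derived.add((a, "father_of", d))
    return derived

print(("Tom", "father_of", "Dora") in infer(facts))  # True
```

The interesting question is whether CYC’s common-sense primer already contains rules like this, or whether every family relation has to be hand-coded.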

And does anyone know of any state-of-the-art AI software that can reason out the joke where a guy marries a widow while his father marries the widow’s daughter, so that when a new baby is born, the guy becomes his own grandpa?!

Or is a knowledge base not supposed to do this kind of stuff?

 

 
  [ # 26 ]

moderation: Moved all Bot Colony posts to
http://www.chatbots.org/ai_zone/viewthread/250/

 

 
  [ # 27 ]

State of the art, not so important? http://www.engadget.com/2010/10/16/study-shocker-babies-think-friendly-robots-are-sentient/

This is more related to robotics than AI, but a study with infants found that after watching a simple robot interact with humans, they treated the robot as sentient, whereas if they just saw the robot on its own, they would not. The researchers tested this by doing a skit between adults and the robot, where the infant watched the adults interact with it. Then, when the infants were left alone with the robot, the researchers checked whether they would follow the robot’s “gaze” as it looked around the room. Babies who had not seen the skit did not try to follow the robot’s gaze, but those who had seen it did.

Even something as simple as this robot can persuade a person (well, baby) it is intelligent if it appears to behave in an intelligent way. Gives hope that even if robotics and AI aren’t matched technologically, people will still be interested in meaningful interaction with the AI. (Although, I would say at this point it is robots that have the lead on AI in terms of development…)

Anyway, just thought it was a cute study.

 

 
  [ # 28 ]

That’s a very interesting experiment, and it would have been even more interesting if the robot had been as life-like as possible. No doubt you’ve heard of the uncanny valley effect which causes synthetic human beings to become less acceptable as they get more life-like.

http://en.wikipedia.org/wiki/Uncanny_valley

Watching a computer animation like “Shrek”, it is far easier to become immersed in the characters than in a movie like “Final Fantasy: The Spirits Within”. Shrek’s characters are not trying to emulate human beings exactly (even the “human” characters like Princess Fiona are actually caricatures), whereas the computer-generated characters in the latter are made as human-like as possible and end up being much more distracting because of that.

The following video was released a few days ago of HRP-4C, which is arguably the most human-like robot ever built. It learns to sing by watching and copying a human singer, and it even emulates their gestures and breathing patterns. The result is both beautiful and creepy. I suspect that it would scare a baby even more than it scares me.

http://www.youtube.com/watch?v=_migLQ802Go

 

 
  [ # 29 ]

Yes, I think the “uncanny valley” effect is definitely a roadblock to developing human-like robots. Where is the motivation if customers will just be unnerved by the product? Probably the first robots that enter common use in a “companion” or “entertainment” role (the type of robot that would require chatbot technology) will not try to mimic a human in appearance for this reason*. When it comes to robot aesthetics, we prefer a C-3PO to an HRP-4C!

*Well, I can think of one application that would be an exception, but that type of companionship is a whole different ball game…

 

 
  [ # 30 ]
C R Hunt - Oct 17, 2010:

Yes, I think the “uncanny valley” effect is definitely a roadblock to developing human-like robots. Where is the motivation if customers will just be unnerved by the product? Probably the first robots that enter common use in a “companion” or “entertainment” role (the type of robot that would require chatbot technology) will not try to mimic a human in appearance for this reason*. When it comes to robot aesthetics, we prefer a C-3PO to an HRP-4C!

*Well, I can think of one application that would be an exception, but that type of companionship is a whole different ball game…

You’re thinking of virtual receptionists, right? smile

http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2010/10/16/BUOI1FSOCC.DTL

“The Personal Assistant for Scheduling system, a disembodied head floating on a computer screen outside Horvitz’s office in Redmond, Wash., is one of the most advanced artificial intelligence programs in the world. It can understand speech, detect faces and interpret body motion. It analyzes years of Horvitz’s daily routines to find patterns that suggest the appropriate responses to most workplace scenarios.”

 

 
