

Synthetic Senses and Artificial Experiences
 
 

One of the problems we face when developing a chatbot, game, or robot is how to train and educate it. Until now, the most common approach has been for the programmers to hand-code all of the logic, which takes a great deal of developer effort to bring the AI up to a realistic level.
This is especially true because of the symbol grounding problem.
http://en.wikipedia.org/wiki/Symbol_grounding

To solve this problem, AIs could be trained through synthetic senses and artificial experiences. One method is to have one or more humans take the place of the AI in a simulation. After enough interaction, the AI develops a reasonable sense of the environment. Think of it as crowd-sourced training for the AI.
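To make the idea concrete, here is a minimal sketch of what crowd-sourced experience collection could look like: humans drive the agent in a simulation, each (state, action) pair is logged, and the AI later imitates the action recorded for the most similar state. All names here are illustrative, not from any real framework.

```python
# Hypothetical sketch: humans take the AI's place in a simulation and their
# (state, action) pairs are logged as "artificial experiences".

def log_demonstration(log, state, action):
    """Append one human-generated (state, action) pair to the experience log."""
    log.append((state, action))

def nearest_neighbor_policy(log, state):
    """Choose the action recorded for the most similar logged state."""
    def distance(s1, s2):
        return sum((a - b) ** 2 for a, b in zip(s1, s2))
    best_state, best_action = min(log, key=lambda pair: distance(pair[0], state))
    return best_action

# Crowd-sourced training: many different humans contribute to the same log.
experiences = []
log_demonstration(experiences, (0.0, 0.0), "move_forward")
log_demonstration(experiences, (1.0, 0.0), "turn_left")
log_demonstration(experiences, (0.0, 1.0), "turn_right")

# A novel state near (0, 0) retrieves the action logged closest to it.
print(nearest_neighbor_policy(experiences, (0.1, 0.05)))  # -> move_forward
```

Real systems would of course generalize with a learned model rather than a lookup, but the core loop (humans generate experiences, the AI reuses them) is the same.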

Dr. Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology, where she founded and directs the Personal Robots Group at the Media Lab. Some of the work in her lab shows that it is possible to take simulated interactions and then apply the learned actions in the real world. It also suggests that robots linked to the cloud could take advantage of “hive mind” learning behaviors. The following presentation shows some of the work her lab is doing; at about the 36-minute mark she covers a project that links simulated and real-world environments.

http://www.youtube.com/watch?feature=player_embedded&v=o67Q7sUx0VQ#at=2131

 

 
  [ # 1 ]

I looked at the video and it does raise some interesting points. Just a few points however, that came to mind while watching it:

1. I don’t see any reference to ‘symbol grounding’ as in ‘building a conceptual frame of reference’, and how that should/could be done.

2. Logging interactions, while certainly useful in one application or another, is more about ‘building experiences’. However, their research seems aimed at mapping possible actions/reactions, so it’s more akin to game design (a frequently used strategy in robotics) than to building consciousness.

3. In the example video there was no hint of the robot actually having any comprehension of the things it was doing. It was just choosing options from prerecorded behavioral paths.

Of course, combining such data with probabilistic reasoning IS part of the total solution (I think). And let’s not forget that most robotics research is towards autonomy as opposed to consciousness. Consciousness would help a lot towards autonomy, but it’s not a prerequisite for autonomy.

 

 
  [ # 2 ]

She talks about symbol grounding in the first half of her talk. The frame of reference is built first in the simulated environment and then tested for accuracy in the real world. Although it was not done in this case, she talks about round-tripping, or establishing a feedback loop between the real and simulated environments where each could refine the actions in the other. She also talks about each bot establishing a frame of reference not only for itself, but also for the human it is interacting with (this comes across in the Kismet demo).

The grounding in the Mars mission case was done in a couple of ways:
Programmatically - The simulated robot was limited in the sim to the specs of the real robot. This type of thing is aided by physics-engine libraries.

Via learned experiences - with humans taking control of the bot, “artificial experiences” were built for the bot much more rapidly than if the bot had to establish them in a real world environment.
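The programmatic side of that grounding can be sketched very simply: if every command issued in the simulator is clamped to the real robot's actuator limits, then any behavior learned in the sim is physically achievable on the hardware. The spec names and values below are invented for illustration, not taken from the MIT project.

```python
# Illustrative sketch: constrain the simulated robot to the real robot's specs
# so learned behaviors transfer. All limits here are assumed example values.

REAL_SPECS = {
    "max_speed_mps": 0.5,       # assumed top speed of the real rover, m/s
    "max_turn_rate_dps": 30.0,  # assumed max turn rate, degrees/second
}

def clamp(value, limit):
    """Limit a commanded value to +/- the hardware's capability."""
    return max(-limit, min(limit, value))

def constrain_command(speed, turn_rate):
    """Apply real-world actuator limits to a simulated command."""
    return (clamp(speed, REAL_SPECS["max_speed_mps"]),
            clamp(turn_rate, REAL_SPECS["max_turn_rate_dps"]))

# A human driving the sim too aggressively still produces a feasible command.
print(constrain_command(2.0, -90.0))  # -> (0.5, -30.0)
```

A full physics engine adds mass, friction, and collision on top of this, but the principle is the same: the sim never promises more than the hardware can deliver.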

In the video the bot does have comprehension of what it is doing and what its partner is doing. The case-based planning system helps it decide on its goals and how to accomplish them. Results show that the tasks done in the real world were accomplished in about the same time as in the virtual environment. The real robot’s actions were the result of the actions that the robot inherited from the virtual world. Virtual experiences enable faster learning, similar to how a human learns via a flight simulator.

Additionally, the text/verbal instructions that the robot gives its partner were from the data that was learned in the virtual world.
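The gist of case-based planning can be sketched in a few lines (the actual MIT system is far richer; the situations and plans below are invented for illustration): plans learned in the virtual world are stored as cases, and the real robot retrieves the plan whose situation best matches the one it currently faces.

```python
# Minimal case-based planning sketch. Cases pair situation features with a
# plan learned in the simulator; retrieval picks the best-matching case.

cases = [
    # (situation features learned in the virtual world, plan)
    ({"object": "rock", "partner_busy": False}, ["approach", "grasp", "store"]),
    ({"object": "rock", "partner_busy": True},  ["approach", "wait", "grasp"]),
    ({"object": "none", "partner_busy": False}, ["scan_area"]),
]

def retrieve_plan(situation):
    """Return the plan from the case whose features overlap the most."""
    def match_score(case_features):
        return sum(1 for k, v in case_features.items() if situation.get(k) == v)
    best_features, best_plan = max(cases, key=lambda c: match_score(c[0]))
    return best_plan

# The real robot faces a situation it saw (virtually) during training.
print(retrieve_plan({"object": "rock", "partner_busy": True}))
# -> ['approach', 'wait', 'grasp']
```

The same retrieval step could also drive the instructions the robot gives its human partner, since those were learned alongside the plans in the virtual world.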

Other parts of MIT are “grounding” robots through the interaction with the real world.
http://techtv.mit.edu/videos/531-ai-lab-cog-the-humanoid-robot

Look for the part on Ripley:
http://www.youtube.com/watch?v=0-vIGukPgkE

 

 
  [ # 3 ]
Merlin - Mar 20, 2011:

She talks about symbol grounding in the first half of her talk.

Ah, OK. Your link dumped me in the middle of that video so I assumed that the interesting part started there. I’ll look at the whole video again, and I’ll check your newly posted links. Thanks for posting them.

I’ll get back with more/other comments later (if I have them).

 

 