

Curiosity in machine learning
 
 
Don Patrick - Apr 24, 2014:

On further thought, I think the general procedure for curiosity is:
1. Notice something out of the ordinary (this may include location, colour, shape)
2. Investigate.
3. Stop investigating once sufficient information or procedure has been established to deal with #1 (curiosity threshold).
Link to source post

Don, perhaps curiosity can be modeled as a behavior of a [insert quantifier] self-aware entity; the behavior is triggered by a [new or remembered] awareness of some lack of knowledge, or an inconsistency of belief or model.

(The behavior may be constrained or prioritized by a new-knowledge reward/cost evaluation function in an entity with high situation awareness.)

The behavior is to establish a goal to learn the unknown or resolve the inconsistency.  (The entity is usually unaware of the total effort the goal will trigger.)

The goal instantiates one of the entity’s most appropriate learning behaviors (apropos to the entity, but sometimes the most inappropriate to the situation or to cooperating entities).

By the principle that the entity must almost know something in order to learn it, the curiosity behavior may be recursively triggered as the entity becomes ever more aware of the breadth or depth of its lack of knowledge in a subject (one it evaluated as highly rewarding to know). You are thus right to include a curiosity-thresholding function to limit the generation of learning goals.

(Additionally, the curiosity threshold is not apparent, and may be much higher than an initially cooperative knowledge source understands or agrees to serve.)

Given a curiosity mechanism, directed learning can be induced by asking questions that the entity becomes aware it cannot answer, due to a lack of knowledge or an inconsistency in its beliefs or models.
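A toy sketch of that loop, to make the moving parts concrete (the topic names, prerequisite table, and threshold value below are all placeholders of mine, not a worked-out design):

```python
# A toy sketch of the mechanism described above: an unanswerable question
# spawns a learning goal, pursuing a goal may recursively expose further
# gaps ("you must almost know something to learn it"), and a curiosity
# threshold limits the goal generation. All names and data are placeholders.

knowledge = {"cloud"}                      # topics the entity already knows
prerequisites = {"rain": ["cloud", "pressure"],
                 "pressure": ["air"],
                 "air": []}

CURIOSITY_THRESHOLD = 3                    # cap on open learning goals

def ask(goals, topic):
    """Directed learning: a question the entity cannot answer creates a goal."""
    if topic not in knowledge:
        add_goal(goals, topic)

def add_goal(goals, topic):
    """Recursive curiosity: each gap may reveal prerequisite gaps."""
    if len(goals) >= CURIOSITY_THRESHOLD or topic in goals:
        return                             # the threshold limits goal generation
    goals.append(topic)
    for prereq in prerequisites.get(topic, []):
        if prereq not in knowledge:
            add_goal(goals, prereq)

goals = []
ask(goals, "rain")
print(goals)  # ['rain', 'pressure', 'air'] -- "cloud" is already known
```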

Your model of curiosity sounds a lot easier to program.
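Something like this minimal sketch, perhaps; the expectation table and every name in it are my own illustration, not your design:

```python
# Illustrative sketch of the three-step procedure quoted above;
# the expectation table and all names are placeholders.

expected = {"apple": {"colour": "red", "shape": "round"}}  # learned norms

def notice(obj, attributes):
    """Step 1: flag any attribute that deviates from the expected norms."""
    norms = expected.get(obj, {})
    return {k: v for k, v in attributes.items() if norms.get(k) != v}

def investigate(obj, anomalies, threshold=3):
    """Steps 2 and 3: probe each anomaly, stopping at the curiosity threshold."""
    findings = []
    for attribute, value in anomalies.items():
        findings.append((attribute, value))  # stand-in for real probing
        if len(findings) >= threshold:
            break  # enough is known to deal with the anomaly
    return findings

# A blue, square "apple" is out of the ordinary on two counts.
print(investigate("apple", notice("apple", {"colour": "blue", "shape": "square"})))
```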

Alan

 

 
  [ # 1 ]

LOL!

 

 
  [ # 2 ]

Lambert: Farmers used to look up at the sky and laugh at meteorologists and their devices for trying to predict the rain :). Sometimes they still do, but they also listen to the weather forecast.

So, Alan, you have read the entire thread after all and found the one solid discussion: “I submit to you the question: Can machines be curious?”, or, as John actually put it:

John Lizotte - Apr 24, 2014:

So my question is… How the heck would you program a sense of curiosity into your AI, or in my case, into my robot?

To which I suggested one could make the robot check how many facts it had in its database about an object, and ask questions when that amount was below a certain number, until it either reached that number or exhausted its investigation. Whether that can be called self-aware to some extent, I don’t know; it’s probably too simple for that notion.
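Roughly this, in Python (the names and the limit are only illustrative):

```python
# Roughly the suggestion above; names and the limit are only illustrative.

FACTS_WANTED = 10   # the pre-set number of facts to aim for

def question_object(database, obj, answers):
    """Ask questions about obj while its fact count is below the limit,
    stopping when the limit is reached or the investigation is exhausted."""
    facts = database.setdefault(obj, [])
    while len(facts) < FACTS_WANTED:
        answer = next(answers, None)   # e.g. a human's reply to a question
        if answer is None:
            break                      # investigation exhausted
        facts.append(answer)

db = {}
question_object(db, "ball", iter(["round", "red", "it bounces"]))
print(db)   # {'ball': ['round', 'red', 'it bounces']}
```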

You may be giving my basic guidelines more credit than I deserve; I didn’t know they could be considered a model. It seems you have rephrased them more intellectually and added cost/reward. While I don’t consciously think in those terms when curious, they are a likely underlying mechanism: knowledge is power. Cost/reward feedback could be used to dynamically determine whether an investigation should continue or end, instead of a static pre-set limit. But all the same, I’d still set a hard limit, or the robot may go out and travel the world if the reward system is set even slightly too high.
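That variation might be sketched like so, with placeholder reward and cost estimates standing in for the real thing:

```python
# Sketch of cost/reward stopping with a hard cap; the reward and cost
# functions here are placeholders, not a worked-out design.

HARD_LIMIT = 20   # absolute ceiling, whatever the reward estimate says

def investigate(obj, reward, cost, ask):
    """Continue only while the estimated gain exceeds the estimated effort."""
    for step in range(HARD_LIMIT):
        if reward(step) <= cost(step):
            break                     # no longer worth the effort
        ask(obj)

# Toy usage: reward decays with each question, cost stays flat,
# so the robot stops after seven questions instead of touring the world.
investigate(
    "strange rock",
    reward=lambda step: 10 - step,    # diminishing returns
    cost=lambda step: 3,              # constant effort per question
    ask=lambda obj: print(f"What else about the {obj}?"),
)
```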

As for “a behaviour”, I’m not sure how to imagine that; it sounds like a step beyond my level of programming. It would help to know your goal in this: Are you a roboticist, theorist, psychologist, programmer, or are you in fact a human? Feel free to introduce yourself in the “New here?” subforum, and you as well, Lambert :)

 

 
  [ # 3 ]
Don Patrick - May 30, 2014:

It would help to know your goal in this: Are you a roboticist, theorist, psychologist, programmer, or are you in fact a human? Feel free to introduce yourself in the “New here?” subforum

My goals for the post were:
- move a deeply interesting topic out from hiding under a bait-and-switch title;
- document a thought experiment on the control of unsupervised learning;
- elicit thoughts from others on this anthropomorphic model.

Because it is easier to summarize my thoughts on curiosity than to summarize who I am, I have been working offline on an AI bio for the “New here?” thread; but, as is one of my burdens, the more I think on any subject, the more subtopics arise to distract my thinking (and doing). I’ll go create an intro right now.

Alan

 

 

 
  [ # 4 ]

There has been a long-running experiment (more than four years) in unsupervised learning conducted by Carnegie Mellon University, called “Read the Web” or “Never-Ending Language Learning” (NELL), that might be of interest to readers of this thread.

http://rtw.ml.cmu.edu/rtw/

NELL has been in continuous operation since January 2010. For the first six months it was allowed to run without human supervision, learning to extract instances of a few hundred categories and relations, resulting in a knowledge base containing approximately a third of a million extracted instances of those categories and relations. At that point it had substantially improved its ability to read three-quarters of these categories and relations (with precision in the range of 90% to 99%), but it had become inaccurate in extracting instances of the remaining fourth of the ontology (many had precisions in the range of 25% to 60%).

 

 