The Hat Riddle and Other AI Conundrums
 
 
  [ # 61 ]

That’s ok, Gary. Cheating is nothing more than making use of an unexpected resource. :)

 

 
  [ # 62 ]

From the very first entry in this thread: “I wonder how WATSON would cope with this one.”

I’ve tried to keep the riddling in that context. I’ve weighed the use of logical deduction and induction, and even some hidden context introduced through supposition, beliefs, and theories, all in terms of how Watson would cope with advanced logic problems.

Two things to point out about Steve’s rules: they appear simple to execute, but complicated to assemble into an execution plan. They also beg the question, because the task of waiting for all the others to finish processing is neither logical nor practical in a real sense unless all the actors agree ahead of time to indicate when they have reached a point where they can’t make a choice on their own. Waiting is relative, and a communication protocol alone doesn’t make it clear. Relying on a protocol to reveal that “enough time has passed for the others to process the first five rules” isn’t a logic step; it is a procedural step of understanding how the (computer) prisoners act.
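To make that concrete, here is a minimal sketch (Python, all names invented) of what such an agreed-upon “I can’t decide yet” signal might look like, so that waiting becomes an explicit message the others can reason from rather than a guess about elapsed time:

# Hypothetical sketch: an explicit pass signal instead of a timeout.
class Prisoner:
    def __init__(self, name):
        self.name = name

    def deduce(self, seen_hats, passes_heard):
        """Return a hat color if the rules force one, else None.
        The riddle's actual deduction rules would go here; a pass from
        another prisoner ("he could not decide") counts as a premise."""
        return None  # stub

def run_round(prisoners, views):
    passes = []
    for p in prisoners:
        color = p.deduce(views[p.name], passes)
        if color is not None:
            print(p.name + ": " + color + "!")
        else:
            print(p.name + ": *TAP*")  # the explicit "no deduction yet" signal
            passes.append(p.name)
    return passes

prisoners = [Prisoner("prisoner1"), Prisoner("prisoner2")]
run_round(prisoners, {"prisoner1": ["white", "red"], "prisoner2": ["white", "red"]})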

The second puzzle can be done completely in logic steps, but the parameters of success also have to be discovered. Context plays an important role in this example. It should be easier to outline how Watson would cope with this one, even without begging the question by knowing the desired result before the execution plan is adopted. That includes Watson knowing where to search to look up the answer.

I’m still investigating the topic: “I wonder how WATSON would cope with this one.” Can anyone simplify the process beyond having a huge set of puzzle-solving algorithms, much as chatter robots have a large set of patterns for picking the output templates with which to respond?

Case-based reasoning almost solved the hat riddle… The first part of the riddle’s analysis probably takes only the algorithm for doing story problems. The rest of the riddle, well, that takes common sense.
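For illustration only, the retrieval step of case-based reasoning can be as simple as matching feature sets; the cases below are invented, and real CBR also adapts and stores solutions:

# Toy case-based reasoning: retrieve the stored case most similar to the
# new puzzle. This only does retrieval, nothing more.
CASES = [
    ({"hats", "prisoners", "deduction"}, "reason about what the others can see"),
    ({"river", "crossing", "boat"}, "move items so no constraint is broken"),
]

def solve_by_cbr(puzzle_features):
    best_solution, best_overlap = None, 0
    for features, solution in CASES:
        overlap = len(features & puzzle_features)
        if overlap > best_overlap:
            best_solution, best_overlap = solution, overlap
    return best_solution  # None means no similar case: common sense needed

print(solve_by_cbr({"hats", "deduction", "colors"}))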

 

 
  [ # 63 ]
Robert Mitchell - Jul 12, 2011:

For more on reflection, see http://web.media.mit.edu/~push/ReflectiveCritics.html

A more complete picture is given in the book “The Emotion Machine”, which adds the rest of the layers, to the point that the machine becomes self-actualized; that is, it succeeds in the purpose of its life as it has chosen for itself.

 

 

 
  [ # 64 ]

Watson would likely solve the problem by looking up the solution.

Beyond Watson, we can use the description(s) of the solution to create the if-then rules that fully describe the solution. That’s how I came up with the initial set of five rules, which Steve then amended with a sixth. So the bot should be flexible enough to add new rules at runtime (and reorder those rules) ...
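A minimal sketch of that flexibility (this is not the proof-of-concept code itself; the names and the sample rule are invented) keeps the rules as data, so they can be appended or re-prioritized while the bot runs:

# Rules live in an ordered list, so a sixth rule can be added, or the
# priorities reshuffled, at runtime.
class RuleBot:
    def __init__(self):
        self.rules = []  # (name, condition, action) tuples, in priority order

    def add_rule(self, name, condition, action, position=None):
        rule = (name, condition, action)
        if position is None:
            self.rules.append(rule)            # amend at the end...
        else:
            self.rules.insert(position, rule)  # ...or at any priority

    def step(self, state):
        for name, condition, action in self.rules:
            if condition(state):
                return name, action(state)
        return None, None  # no rule fired

bot = RuleBot()
# Invented sample rule for the classic two-red/three-white hat variant:
bot.add_rule("two-reds-seen",
             lambda s: s["seen"].count("red") == 2,
             lambda s: "my hat is white")
print(bot.step({"seen": ["red", "red"]}))  # ('two-reds-seen', 'my hat is white')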

As noted before, the issue of how long to wait is a red herring that applies equally to human and computer players. In my proof-of-concept implementation, the time to wait is a variable. You could code strategies for determining the value of that variable, adapting it to observations of the other players, and so on. That would be fun! Anyone up for the challenge of testing out the hat riddle (including a control group of all human players), in an IRC chatroom for example? The proof is in the pudding; we can beat around the bush yanking each other’s chains on the internet till the cows come home, but the devil’s in the details. Let’s bring this discussion down to the brass tacks of science and test our assertions (“seeing is believing”)!
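For instance, one conceivable strategy (a sketch under invented names, not the proof-of-concept code itself) adapts the wait to the slowest response observed so far:

# Invented sketch: the "how long to wait" variable, adapted from observation.
class WaitTimer:
    def __init__(self, initial_wait=2.0, margin=1.5):
        self.wait = initial_wait  # seconds of silence before concluding
        self.margin = margin      # safety factor over observed latency

    def observe_response(self, latency):
        # Track the slowest player seen so far, padded by the margin.
        self.wait = max(self.wait, latency * self.margin)

    def others_are_stuck(self, elapsed):
        # "They have had enough time and said nothing" becomes a premise.
        return elapsed > self.wait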

Or will your reaction be “Even if you proved a computer could solve the hat riddle, it wouldn’t matter, because my beliefs determine how I interpret the evidence, and there’s just no way I will ever believe that a computer can do such ‘advanced logic’”?

Regarding “The Emotion Machine” (cited as the first reference in the paper I linked to): instead of imitating Minsky and just writing about it, why don’t we try to implement such a machine and see whether the theorists’ predictions hold up? Or maybe “there’s many a slip ’twixt cup and lip”?
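As one possible starting point, here is a toy skeleton of the layered-critics idea from the linked paper; Minsky specifies no such API, so every name and layer below is invented:

# Each layer watches the plan produced so far and may demand a revision;
# reactive runs first, reflective last. Purely illustrative.
def think(problem, layers):
    plan = "default plan for " + problem
    for name, critic in layers:
        complaint = critic(plan)
        if complaint:
            plan = "revise(" + plan + ") because " + name + " says: " + complaint
    return plan

layers = [
    ("reactive",     lambda plan: None),
    ("deliberative", lambda plan: "goal not yet satisfied"),
    ("reflective",   lambda plan: "this way of thinking keeps failing"),
]
print(think("the hat riddle", layers))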

 

 
  [ # 65 ]

Why would you say such a thing?

Robert Mitchell - Jul 13, 2011:

Or will your reaction be “Even if you proved a computer could solve the hat riddle, it wouldn’t matter, because my beliefs determine how I interpret the evidence, and there’s just no way I will ever believe that a computer can do such ‘advanced logic’”?

That is not even on topic.

As I stated in another blog, given a standard IQ test, including the questions and all the answers (for unit testing, of course), you could write a program which recognizes each question and outputs the corresponding answer. With a perfect score on the test, wouldn’t the program be really smart? (I vaguely recall this as the plot of an old “Patty Duke Show” episode.) At least it could solve the problems it was programmed to solve.

In the late ‘80s, such programs were popularly known as expert systems. They contained the rules for their given domain of expertise.
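To make the distinction concrete, the perfect-score test taker above is essentially a lookup table; the questions and answers below are invented:

# Scores 100% on the questions it memorized, and nothing else.
IQ_TEST = {
    "What number comes next: 2, 4, 8, 16?": "32",
    "Cat is to kitten as dog is to ...?": "puppy",
}

def answer(question):
    return IQ_TEST.get(question, "no rule for this question")

print(answer("What number comes next: 2, 4, 8, 16?"))  # 32
print(answer("Why is the hat white?"))                  # no rule for this question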

Watson is better than that (I hope), even if IBM is building a doctor’s assistant just as the first authors of expert systems did. This time it will be different; I’m just curious as to how.

How will Watson cope with advanced logic (or even common sense) problems?

 

 
  [ # 66 ]

it could solve the problems it was programmed to solve

I think this is the interesting thing: anything programmed to handle the game can handle it. But what if there is no programming to handle the specifics? Can it come up with its own program?

 

 
  [ # 67 ]

Here’s a relevant link from today’s slashdot:

http://games.slashdot.org/story/11/07/13/1451246/Computer-Learns-Language-By-Playing-Games

“By basing its strategies on the text of a manual, a computer infers the meanings of words without human supervision. The paper Learning to Win by Reading Manuals in a Monte-Carlo Framework (PDF) explains how a computer program succeeds in playing Civilization II using the official game manual as a strategy guide. This manual uses a large vocabulary of 3638 words, and is composed of 2083 sentences, each on average 16.9 words long. With this, the program improves its success rate in playing the game from 45% to 78%. No prior knowledge of the language is used.”
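As a rough, invented illustration of the general idea only (the paper’s actual model is far richer and is trained from game feedback), Monte-Carlo action selection can be biased by words pulled from a manual:

import random

# Toy manual; in the paper it is the real Civilization II manual.
MANUAL = ("build a city early. build roads between cities. "
          "irrigate land near the city. attack only when strong.")

def text_prior(action):
    # Crude prior: 1 plus the number of manual sentences mentioning the action.
    return 1 + sum(action in sentence for sentence in MANUAL.split("."))

def choose_action(actions, rollout_value, n_rollouts=100):
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    weights = [text_prior(a) for a in actions]
    for _ in range(n_rollouts):
        # Sample actions in proportion to their text-derived prior...
        a = random.choices(actions, weights=weights)[0]
        # ...and score each by its average simulated outcome.
        totals[a] += rollout_value(a)
        counts[a] += 1
    tried = [a for a in actions if counts[a]]
    return max(tried, key=lambda a: totals[a] / counts[a])

# rollout_value would simulate a game; here it is a random stand-in.
print(choose_action(["build", "irrigate", "attack"], lambda a: random.random()))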

While we’re arguing about topics, others are coding!

 

 
  [ # 68 ]
[19:00:59] [PRIVMSG >>> prisoner1] you see one white and one red hat
[19:01:01] <prisoner1> *TAP*
[19:01:02] [PRIVMSG >>> prisoner2] you see one white and one red hat
[19:01:04] <prisoner2> *TAP*
[19:01:05] [PRIVMSG >>> prisoner3] you see two white hats
[19:01:06] <prisoner1> White!
[19:01:09] <prisoner2> White!
[19:01:12] <prisoner3> Red

----

So there are some timing issues: the prisoners should all be notified of the colors they see at the same time, not one after another. But it’s a start :)
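One possible fix, sketched with stand-ins for the real IRC calls (send_privmsg and read_reply are invented names here), is to broadcast every view before reading any reply:

def notify_all(views, send_privmsg, read_reply):
    # Phase 1: tell every prisoner what they see, before listening to anyone.
    for nick, view in views.items():
        send_privmsg(nick, "you see " + view)
    # Phase 2: only now collect the answers / *TAP*s.
    return [read_reply() for _ in views]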

 

 
  [ # 69 ]

Robert, I’d love to get a look at that bot’s code. A great example of guided learning—where the bot’s “world” is the guide! Much easier than a human teacher taking the time to correct bad assumptions.

I wonder, though, how easy it would be to design a game and manual that would throw the bot off. That is, lead it into some local minimum where its play strategy improved slightly, but positively reinforced misinterpretation prevented it from improving further. Hmm… Maybe this belongs in the “state of the art” thread?

EDIT: Ha! The code is available: http://groups.csail.mit.edu/rbg/code/civ/

 
