
The Hat Riddle and Other AI Conundrums
 
 
  [ # 46 ]

1. Banana
2. Dresser
3. Grammar
4. Potato
5. Revive
6. Uneven
7. Assess

“arroens” ... Vertically ... 2nd and last letter.

Remove the capital letter and you have a palindrome

 

 
  [ # 47 ]

I never thought that the order of the words was significant. Well played, 8pla! smile

And Gary, you’re right. No “simple” algorithm would be able to pick that up (assuming, of course, that Tom’s answer is correct, which I’m betting it is). But that combination of letters scores nothing in Scrabble, since it’s not in the dictionary. raspberry At least, not in any dictionary I’m familiar with.

 

 
  [ # 48 ]
Gary Dubuque - Jul 11, 2011:

@Robert,

Why would an algorithm worry about being in prison for the rest of its existence.  It seems your view of them already puts them in a prison.

Why would Watson worry about winning at Jeopardy. It seems the programmers’ view of it already puts it in a game.

“How long would a computer wait on the other computers before it could conclude that there was more to the problem than just logic or at least that it now had the additional information to determine they all were marked red?  A couple of microseconds?  Computers can figure out all the possibilities much faster than you can.  I maintain that the most direct calculations can determine is that there are at least two of the three marked red. The taps don’t tell enough to declare all three are marked red.”

How long would a human wait on the other humans before it could conclude that there was more to the problem than just logic or at least that it now had the additional information to determine they were all marked red? A couple of minutes? ...

In other words, the time factor is irrelevant to whether the entities are computers or humans.

“I’ve never heard of an algorithm that understands what other machines are doing without sensors or instrumentation as agents inside the other machines.  Are you suggesting all computers have some kind of built-in protocol so they can handshake these things without arranging any direct communication paths?”

IRC bots are algorithms, and I can communicate with them using natural language…

“If so, why wouldn’t they all three quickly know the answer and be set free?”

All three humans could also simultaneously respond with the correct answer, so again this objection is the same whether the entities taking part in the test are humans or computers.

We could do a test. Put three programs in an irc channel, and have a “warden” assign colors to them. The programs would have to blindfold themselves so that they wouldn’t record the colors internally and use that to generate their answer; perhaps you could use irc Actions to assign the colors and make sure the programs did not read the actions. Or some such contrivance.
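The setup described above could be roughed out in a single process before wiring it to irc (all class and method names below are invented for illustration; a real version would sit behind an irc wrapper):

```ruby
# Rough single-process sketch of the proposed test: a Warden assigns
# colors and relays taps; each agent is only ever told the *other*
# agents' hat colors (the "blindfold").
class Agent
  attr_reader :name
  attr_accessor :seen, :taps_heard

  def initialize(name)
    @name = name
    @seen = []       # the other agents' hat colors
    @taps_heard = 0
    @tapped = false
  end

  # Tap once if I can see at least one red hat.
  def maybe_tap
    @tapped = @seen.include?(:red)
  end

  def tapped?
    @tapped
  end
end

class Warden
  def initialize(colors)
    @agents = colors.keys.map { |n| Agent.new(n) }
    # Tell each agent every hat except its own.
    @agents.each do |a|
      a.seen = colors.reject { |n, _| n == a.name }.values
    end
  end

  # Round one: collect taps and broadcast the total to everyone.
  def run
    @agents.each(&:maybe_tap)
    total = @agents.count(&:tapped?)
    @agents.each { |a| a.taps_heard = total }
    total
  end
end

Warden.new(alice: :red, bob: :red, carol: :red).run  # => 3 taps
```

Replacing the Warden with an irc channel (and the method calls with messages) would give the asynchronous version.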

I’ve started on such a program: subbot.org/hatriddle.

A sample dialog, also available at subbot.org/hatriddle/dialog/three_red_hats.txt (note that the dialog is frightfully synchronous, but an irc wrapper would take care of that, since irc is inherently asynchronous):

> you are wearing a red hat
Okay, I am wearing a red hat (but I can’t see it!).

> you see a red hat
Okay, I see a red hat. *TAP*!

> you see another red hat
Okay, I see another red hat.

> you hear a tap
Okay, I hear a tap.

> you hear another tap
Okay, I hear a tap.

> what color is your hat?
...

> what color is your hat?
Red!

—-

So, the bot doesn’t answer the first time, because it doesn’t know; it does answer the second time, because it now knows that no one else has answered.

It would be fun to put this program to the test against others in an irc chatroom. Or in another setup. Anyone else up for the challenge?!?

—-

The logic boils down to the following if-then statements (expressed in Ruby in hatriddleagent.rb):

    if I heard 0 taps then I am wearing a White hat.
    if I heard 2 taps and I tapped then I am wearing a White hat.
    if I heard 2 taps and didn’t tap then I am wearing a Red hat.
    if I heard 3 taps then wait.
    if I heard 3 taps and enough time has passed for the others to process the first four rules, then my hat is Red.

 

 
  [ # 49 ]

The order of the words is not significant. 8pla is nearly there. It's to do with moving the initial letter right, Gary?

 

 
  [ # 50 ]
Andrew Smith - Jul 12, 2011:

... they all have one unique property in common. (Unless you include proper names like Ghazzah and Brenner, or non-english words like caressera, gareggera, matarrata and paressera for example.)

These words also fit into the solution.

 

 
  [ # 51 ]
Robert Mitchell - Jul 12, 2011:

    if I heard 0 taps then I am wearing a White hat.
    if I heard 2 taps and I tapped then I am wearing a White hat.
    if I heard 2 taps and didn’t tap then I am wearing a Red hat.
    if I heard 3 taps then wait.
    if I heard 3 taps and enough time has passed for the others to process the first four rules, then my hat is Red.

You need an extra rule, Robert:

“if I heard 3 taps and I can see a red and white hat then I am wearing a red hat” otherwise the bot will be waiting forever.

 

 
  [ # 52 ]

For what it’s worth, I sent the following message to Gary at the same time as I posted “got it” just after his original message.

“Drop the first letter and the remainder of each word is a palindrome.”

I really didn’t want to spoil it for anyone else who was still trying to figure it out. The list of additional words that I posted was by way of a hint for Dave who seemed to be requesting one.

 

 
  [ # 53 ]

Don’t mind Dave. He’s being inordinately lazy and impatient, and needs to be taught a lesson in the evils of each. smile

Don’t you just hate it when some pompous *** refers to himself in the third person? raspberry

 

 
  [ # 54 ]

off-topic: LOL @Dave I don’t know how closely you read my Google+ profile, but I added the following to the “bragging rights” field today. “Andrew wrote his own profile without making it sound like a fake press release.”

I guess I must have seen one too many pompous profile descriptions written in the third person. Still waiting to see if it causes anyone to unfollow me. wink

 

 
  [ # 55 ]

Actually, I hadn’t noticed that when I skimmed your profile. Oops? That “pompous ***” comment was solely directed at me, and nobody else (unless, of course, someone else deserves the moniker!).

I hope you all forgive the comments. I’ve been working on debugging the new version of Program O for the past several hours, and I’m getting a bit punchy. I know I need to just give it up for the night, and go at it again tomorrow, but it’s always “just one more thing to try”; repeated over and over again, like some demented mantra.

 

 
  [ # 56 ]
Steve Worswick - Jul 12, 2011:

You need an extra rule Robert:

“if I heard 3 taps and I can see a red and white hat then I am wearing a red hat” otherwise the bot will be waiting forever.

Okay, thanks. With the addition (also at subbot.org/hatriddle/dialog/red-white-red.txt):

> you are wearing a red hat
Okay, I am wearing a red hat (but I can’t see it!).

> you see a red hat
Okay, I see a red hat. *TAP*!

> you see a white hat
Okay, I see a white hat.

> you hear a tap
Okay, I hear a tap.

> you hear another tap
Okay, I hear a tap.

> what color is your hat?
Red!

—-

The rules now are:

    if I heard 0 taps then I am wearing a White hat.
    if I heard 2 taps and I tapped then I am wearing a White hat.
    if I heard 2 taps and didn’t tap then I am wearing a Red hat.
    if I heard 3 taps and saw a red hat and a white hat, then I am wearing a Red hat.
    if I heard 3 taps and saw two red hats, then wait.
    if I heard 3 taps and saw two red hats and enough time has passed for the others to process the first five rules, then my hat is Red.
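The same six rules compress into a single Ruby method (a sketch only; the actual representation in hatriddleagent.rb may differ, and `hat_color` is a made-up name):

```ruby
# Sketch of the six rules as one pure function. Returns the deduced
# color, or :wait when the agent should hold off and keep listening.
def hat_color(taps_heard:, i_tapped:, saw:, waited: false)
  case taps_heard
  when 0 then :white                    # rule 1: nobody saw a red hat
  when 2 then i_tapped ? :white : :red  # rules 2 and 3
  when 3
    if saw.sort == [:red, :white]       # rule 4 (Steve's fix)
      :red
    elsif waited                        # rule 6: nobody answered in time
      :red
    else                                # rule 5: two reds seen, keep waiting
      :wait
    end
  end
end

hat_color(taps_heard: 3, i_tapped: true, saw: [:red, :red])  # => :wait
```

The `waited` flag stands in for "enough time has passed"; in an irc setting it would be set by a timer rather than passed in.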

 

 

 
  [ # 57 ]
Andrew Smith - Jul 12, 2011:

“Drop the first letter and the remainder of each word is a palindrome.”

That's as may be, but words like otato are not valid words. It is not to do with palindromes.

 

 
  [ # 58 ]
Robert Mitchell - Jul 12, 2011:

The rules now are:

    if I heard 0 taps then I am wearing a White hat.
    if I heard 2 taps and I tapped then I am wearing a White hat.
    if I heard 2 taps and didn’t tap then I am wearing a Red hat.
    if I heard 3 taps and saw a red hat and a white hat, then I am wearing a Red hat.
    if I heard 3 taps and saw two red hats, then wait.
    if I heard 3 taps and saw two red hats and enough time has passed for the others to process the first five rules, then my hat is Red.

and if you heard 1 tap then someone is colour-blind smile

 

 
  [ # 59 ]

@Robert,

Your rules are becoming pretty complicated and very specific to this particular hat riddle problem.  Is Watson going to have these kinds of rules for each puzzle it is presented?  If not, where do these rules come from?  Remember, Watson is an evidence-based search results selection device.  It does have various algorithms for generating evidence and candidate results. How would it use your rules against the many inputs it can accept?

The second puzzle is an attempt to show how this approach of rules designed for each puzzle quickly falls apart.  What are the rules for the second puzzle?  They are not like the hat riddle unless you consider that there is more to the puzzles than first appears.

BTW, Dave, I never got the answer for the second puzzle myself. I cheated and looked it up.

 

 
  [ # 60 ]

It’s interesting to see how the goalposts have moved in this thread, perhaps reflective of the history of AI itself: “AI is discovering new proofs, oh Logic Theorist can do that? No, then AI is natural language, oh Eliza can do that? Then NO!, AI is chess, oh Deep Blue can do that? NO THAT’S NOT AI, AI is playing Jeopardy, oh Watson can do that? NO AI IS NONE OF THOSE THINGS YOU DOLT!! IT’S RECOGNIZING FACES Oh, law enforcement is using that technology today in airports? THEN THAT’S STILL NOT AI, AI IS WHAT COMPUTERS CAN’T DO!1!”. Is there any trick low enough for “social intelligence” to spurn, in its desperate attempt to validate its fragile existence? :)

We started with ‘an advanced logic problem which requires the use of “Don’t Know” and “Don’t Care” to solve’. Then natural language solutions were presented. Then we moved to how First Order Logic was inadequate to solve the problem, and we needed “plot points”, because logic can’t overcome “the impasse” of “understanding how people act”: “this problem requires a much deeper understanding of actors and plot to formulate that model than the usual sorts of brain teasers”. Then back to speculation whether FOL can solve the problem with deixis resolution. Then a proposal (mine) that Watson could answer the problem by looking up the solution on wikipedia. Then there was some discussion of other software, but nothing seemed capable of dealing with the Hat Riddle problem. Then the suggestion (again, mine) that there was no need for the “deeper understanding of actors and plots” to solve this problem; Gary challenged me,  asserting “the most direct calculations can determine is that there are at least two of the three marked red. The taps don’t tell enough to declare all three are marked red.” Then I provided a proof-of-concept implementation (which still needs to be tested in an asynchronous, multi-player environment, such as irc for example), which Steve corrected. Then Gary replied that the rules are “pretty complicated” and anyway how is Watson going to come up with them? (Note that the rules are simple if-then statements, not needing “advanced logic” or “deixis resolution”, merely expressing in a simplified form the natural language solutions that were proposed at the beginning of the thread.)

So, my answer is that 1) Watson could use the same method Gary himself says he used to solve the second puzzle, looking it up; and 2) getting away from Watson, we can design other agents that we can teach using natural language to solve problems. Then we create other, reflective, agents, that “reflect” on the solutions learned, and try to generalize from them. If some generalizations don’t work, reflection must abandon them, and the system tries another agent’s approach…

For more on reflection, see http://web.media.mit.edu/~push/ReflectiveCritics.html

 
