

The Hat Riddle and Other AI Conundrums
 
 
  [ # 16 ]

Gary, you raise some interesting points, but you should not confuse constructing a logical model with solving it.

In this case the model can be expressed as a simple table with as few as three columns: one for the possible combinations of colours, one for the number of taps the prisoners heard, and a third for whether or not anyone could have known the answer immediately. This results in a table with four rows, and with all the values filled in correctly, the answer becomes obvious.
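To make that concrete, here is a minimal sketch of the table idea in Python. The rules coded here (two unknown hats, each black or white, everyone told that at least one hat is white, and a prisoner taps as soon as she can deduce her own colour) are stand-in assumptions for illustration, not necessarily the exact riddle from page one of this thread:

```python
from itertools import product

# Stand-in rules for illustration (not necessarily the exact riddle):
# two unknown hats, each black or white (four combinations = four rows);
# everyone is told at least one hat is white; a prisoner who sees only
# black hats can deduce her own hat is white, and taps at once.

COLOURS = ("black", "white")

def immediate_taps(world):
    """Count prisoners who can deduce their own colour immediately."""
    return sum(
        1 for me in range(len(world))
        if all(c == "black" for j, c in enumerate(world) if j != me)
    )

print(f"{'combination':<15}{'taps heard':<12}known at once?")
for world in product(COLOURS, repeat=2):
    taps = immediate_taps(world)
    note = " (ruled out by the premise)" if "white" not in world else ""
    print(f"{'/'.join(world):<15}{taps:<12}{'yes' if taps else 'no'}{note}")
```

Once the table is filled in, solving the riddle is just a matter of crossing off the rows that contradict what the prisoners heard.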

However, as Andy pointed out, the model could also be expressed in first-order logic and solved using logical resolution. This is the algorithm used for automated theorem proving, and while it is probably overkill for this particular problem, I recommend that anyone who is interested take a look at http://www.cs.miami.edu/~tptp/ where there are literally thousands of example problems expressed in first-order logic. The most interesting application of first-order logic is in knowledge representation such as KIF (Knowledge Interchange Format)... but I digress.
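For anyone who wants to see the mechanics, here is a toy propositional version of the resolution rule in Python. Real provers, like those the TPTP problems are aimed at, work on full first-order clauses with unification; this only shows the core cancel-and-union step:

```python
# Clauses are frozensets of literals; "~p" is the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses: drop a complementary
    literal pair and union what is left."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset(c1 - {lit}) | frozenset(c2 - {negate(lit)})

# (p or q), (~p or q), (~q): resolution derives the empty clause,
# so this set of clauses is unsatisfiable.
clauses = {frozenset({"p", "q"}), frozenset({"~p", "q"}), frozenset({"~q"})}
derived = set(clauses)
while True:
    new = {r for a in derived for b in derived for r in resolve(a, b)}
    if frozenset() in new:
        print("empty clause derived: unsatisfiable")
        break
    if new <= derived:
        print("saturated: satisfiable")
        break
    derived |= new
```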

Ultimately the real problem that concerns us here is the construction of the logical model to be solved, and Gary is quite correct that this problem requires a much deeper understanding of actors and plot to formulate that model than the usual sorts of brain teasers do.

Hence my wondering out loud, in the first post of the thread: how would WATSON cope with this?

 

 
  [ # 17 ]

Gary, Andrew:
Thanks for the enlightenment, but let me tell you I did not know this theory by name (and obviously too many others). I just googled it and found the basics: it's nothing more than a novel and 'light' way to see deixis resolution, similar to anaphora resolution but time dependent, doing co-reference solving inside a time-dependent model. This is just what I am modeling in my current bot engine: grounding and solving co-references dynamically (in context) and using some common-sense metrics to measure their fitness (like CYC).

But.. in my opinion FOL already solves this kind of problem. Obviously you must apply the correct transformation and relations among the variables; continuous time cannot be solved directly, but the time-overlapping theory is nothing but boolean logic. As soon as you get that logic solved you can go into deeper mathematical reasoning formulae, with the proper relations solved, and get the exact time. For example: you just solve the deixis, state each of the variables and the dependencies, and then… it gets solved by logical proving!
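To illustrate the boolean part of that claim, here is a tiny sketch in the spirit of Allen's interval relations. The function names and intervals are mine, invented for illustration:

```python
# Intervals are (start, end) pairs on a shared timeline; each temporal
# relation reduces to a boolean test over the endpoints.

def before(a, b):   return a[1] <  b[0]
def meets(a, b):    return a[1] == b[0]
def overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
def during(a, b):   return b[0] < a[0] and a[1] < b[1]
def equal(a, b):    return a == b

rain = (2, 6)   # "it was raining" holds over [2, 6)
walk = (4, 8)   # "she walked home" holds over [4, 8)
print(overlaps(rain, walk))   # True: the walk started while it rained
print(before(rain, walk))     # False
```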

There is also something like fuzzy-logic reasoning, but that is much less explored, I guess.

I am currently doing it this way, and it seems to work! (at least for me) wink

 

 
  [ # 18 ]

That’s really interesting, Andy; I hadn’t come across the term “deixis resolution” before. I'm not sure if I understand it correctly, but was it an early method for handling time-based concepts?

Anyway, your comments prompted me to follow up on one researcher I had read about, Erik T. Mueller, who has done a lot of work on “event calculus”, a first-order-logic formalism for handling time and related concepts. He published an extremely sophisticated piece of software called “ThoughtTreasure” which is now available as open source, and (I just discovered) he is now one of the brilliant minds behind IBM’s WATSON project!

http://xenia.media.mit.edu/~mueller/papers/tt.html
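For anyone curious, here is a toy sketch of the event-calculus style: events initiate and terminate fluents, and a fluent holds at a time if it was initiated earlier and not terminated since (the commonsense law of inertia). This is only an illustration of the idea, not Mueller's actual formalism or code:

```python
# Events as (time, action) pairs; effect axioms map actions to the
# fluents they initiate or terminate.
happens    = [(1, "turn_on"), (5, "turn_off"), (7, "turn_on")]
initiates  = {"turn_on":  {"light_on"}}
terminates = {"turn_off": {"light_on"}}

def holds_at(fluent, t):
    """A fluent holds at t if some earlier event initiated it and no
    later event before t terminated it."""
    state = False
    for when, action in sorted(happens):
        if when >= t:
            break
        if fluent in initiates.get(action, ()):
            state = True
        if fluent in terminates.get(action, ()):
            state = False
    return state

print(holds_at("light_on", 3))   # True:  turned on at time 1
print(holds_at("light_on", 6))   # False: turned off at time 5
```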

 

 
  [ # 19 ]

Thanks, but deixis is a way of pointing at something when you speak; sometimes anaphora resolution is a kind of deixis resolution. Think of it as virtual, imaginary pointers toward each piece of information and/or actor in the described scene. Hope this helps. Time resolution is also a kind of deixis operation.

 

 
  [ # 20 ]

It’s been a long time since I’ve seen the ThoughtTreasure software. Around the turn of the century it was available, but then it just seemed to disappear. Did you find it again?

 

 
  [ # 21 ]

According to the Wikipedia article the original software no longer seems to be available; however, the knowledge base is still available in a number of different formats, together with libraries for accessing it in various languages. There are links near the bottom of the article.

http://en.wikipedia.org/wiki/ThoughtTreasure

 

 
  [ # 22 ]

The problem with ThoughtTreasure and Cyc and UIMA and other knowledge representation formats is that they’re arbitrary programmer-defined structures that are inevitably inflexible, because the programmers who created them didn’t think of all the possibilities that can exist in natural language. Translating between the subject-predicate syntax of natural language and the (invariably, thanks to Frege) function-argument syntax of formal knowledge representations involves an impedance mismatch that is avoidable if you start and end with natural language!

As for the hat riddle, one way I might approach it is to first have the bot ask itself, “what is the hat riddle?” When I asked my bot this question just now, the Wikipedia agent replied: “The prisoners and hats puzzle is an induction puzzle (a kind of logic puzzle) that involves reasoning about the actions of other people, drawing in aspects of Game theory.” - http://en.wikipedia.org/wiki/Prisoners_and_hats_puzzle

Then I would have another agent read (scrape) that article, perhaps looking for “solution” and grabbing the text after it :)
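For what it’s worth, here is a rough sketch of that second agent. The MediaWiki extract endpoint is real, but the “grab the text after ‘solution’” heuristic is as naive as it sounds, and error handling and a proper User-Agent are left out:

```python
import json
import urllib.parse
import urllib.request

def wikipedia_extract(title):
    """Fetch the plain-text extract of a Wikipedia article."""
    url = ("https://en.wikipedia.org/w/api.php?action=query&prop=extracts"
           "&explaintext=1&format=json&titles="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url) as resp:
        pages = json.load(resp)["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

text = wikipedia_extract("Prisoners and hats puzzle")
# Naive heuristic: grab some text after the first occurrence of "solution".
idx = text.lower().find("solution")
print(text[idx:idx + 300] if idx != -1 else "no 'solution' found")
```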

 

 
  [ # 23 ]

You could build your own Watson and experiment with how it would cope with this problem…

“How to build your own Watson Jr. in your basement” (IBM)

But remember Watson is a question-answer system, not a reasoning system.

 

 
  [ # 24 ]
Robert Mitchell - Jul 10, 2011:

As for the hat riddle, one way I might approach it is to first have the bot ask itself, “what is the hat riddle?” When I asked my bot this question just now, the Wikipedia agent replied: “The prisoners and hats puzzle is an induction puzzle (a kind of logic puzzle) that involves reasoning about the actions of other people, drawing in aspects of Game theory.” - http://en.wikipedia.org/wiki/Prisoners_and_hats_puzzle

Then I would have another agent read (scrape) that article, perhaps looking for “solution” and grabbing the text after it smile

Your bot sounds like many a desperate college student. LOL

 

 
  [ # 25 ]

Imagine the quiet desperation of a teacher who can only get the attention they so crave by holding back the solution to the questions they ask with a closed fist :)

My approach, like Minsky’s, is to use a variety of different methods. While a “reasoning agent” is working on the problem, another “Q-A” agent might already have found the solution posted somewhere… Then the bot could internally check its “reasoned” solution against the posted one; if the answers don’t match, it can question the posted answer’s correctness, or try to incorporate the answer into future reasoning tasks… or both.
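A minimal sketch of that control flow (both agents here are stubs I made up; the point is the cross-check, not the agents themselves):

```python
def reasoning_agent(question):
    return "B"   # stub: pretend we derived answer B from a logical model

def qa_agent(question):
    return "B"   # stub: pretend we scraped answer B from the web

def answer(question):
    reasoned, found = reasoning_agent(question), qa_agent(question)
    if reasoned == found:
        return reasoned                      # agreement: high confidence
    # Disagreement: flag the posted answer for scrutiny, and remember it
    # as a candidate premise for future reasoning tasks.
    return f"uncertain: reasoned {reasoned!r} vs posted {found!r}"

print(answer("What colour is the third prisoner's hat?"))
```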

This thread provides an example of this process :)

 

 
  [ # 26 ]

Everyone:
I think we are blurring the concepts of AI and reasoning inside a bot-mind (wow… a mind?), and all these AI things are breeders of many contradicting thoughts and too many threads to follow without guidance.
Let me put some things out clearly and try to shed some light on this, or please correct me on it!

AIML is not AI, nor a ‘thinker’ engine; it’s just a combinational (recursive) pattern matcher with some sort of ‘output’.
Precisely: if we restrict randomness in the output (once a pattern has matched), it can be seen as a simple transformation automaton which, with some fancy ‘output rules’ added (like conjugating or mocking the input pattern in the output), may give the interlocutor an impression of some ‘cleverness’, but nothing more than that.
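A bare-bones illustration of that view (not real AIML, just the pattern-automaton idea, with patterns and templates invented for the example):

```python
import re

# A wildcard pattern plus a template that mocks the input back:
# the whole "cleverness" is string substitution.
RULES = [
    (r"I AM (.*)", "Why do you say you are {0}?"),
    (r"DO YOU (.*)", "I cannot say whether I {0}."),
]

def respond(line):
    text = line.upper().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(m.group(1).lower())
    return "Tell me more."

print(respond("I am confused"))   # Why do you say you are confused?
print(respond("Do you think?"))   # I cannot say whether I think.
```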

AI reasoning is based on three stages: (1) understanding the facts, then (2) elaboration, and then (3) response generation, which depends on the results of the former two. (A skeleton of this pipeline is sketched after the list below.)

1 - Understanding needs pragmatics extraction, which needs semantic extraction, which needs grammatical interpretation, which needs deep morphological analysis, which may need a huge lexicon or a rule system to extract semantics, plus some spelling restoration/correction if needed. Every step in this chain may have multiple outputs which must be disambiguated; this is the hard part, or we end up with an NP-hard search. This is where the state of the art is stuck! Cyc and other ‘common sense’ frameworks tried (in vain, in my view) to solve this step by brute-force attack!

2 - Elaboration needs planning and some solver agents to be available, so the result of the pragmatics extraction must drive a planner that finds a proper set of ‘agents’ (like Minsky’s agents) to accomplish the job; a given bot may thus be limited by having only a small number of thematic ‘agents’ able to solve certain kinds of puzzles or questions.
This is where the ‘problem solving’ happens, using simple reasoners, or theorem proving in FOL, time-wrapping, and the needed deixis resolution, to trace a feasible plan that accomplishes the query or intellectual job using the available agents (seen as resources).

3 - After all of this is done with some sort of success, you might end up with several responses, many without ‘common sense’ (depending on how good the ‘agents’ are at bringing back responses). Then you need to choose the best or most plausible one (with some coherence metric) and plan how to say the output in the same language, by doing natural language generation based on the retrieved data (this is far from trivial).
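Here is the skeleton of that pipeline, with every stage stubbed out; the structure follows the three stages above, and all names and bodies are placeholders of my own:

```python
def understand(utterance):
    """Stage 1: morphology -> grammar -> semantics -> pragmatics.
    Returns a (possibly ambiguous) list of candidate readings."""
    return [{"intent": "question", "topic": utterance}]   # stub

def elaborate(reading, agents):
    """Stage 2: plan over the available solver agents and collect
    candidate answers for one reading."""
    return [agent(reading) for agent in agents]

def generate(candidates, plausibility):
    """Stage 3: rank candidates by a coherence metric and render the
    best one back into natural language."""
    best = max(candidates, key=plausibility)
    return f"My best answer: {best}"

agents = [lambda r: f"echo of {r['topic']}"]              # stub agent set
readings = understand("what is the hat riddle?")
candidates = [c for r in readings for c in elaborate(r, agents)]
print(generate(candidates, plausibility=len))
```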

So here is my insight into a ‘smart-ass’ bot.

Is there a serious approximation of this anywhere?
- I guess not at all!

There are too many unsolved problems. The worst is ambiguity, multiplied by the lack of mind-modeling, especially of intellectual planning and understanding. There is also no unique or easy knowledge representation: all the OWL ontologies, thesauri, and hierarchical databases are ‘hardwired’ thought, based on someone’s model or theory, which is not necessarily a good one for calculation or reasoning, nor does it represent real knowledge. So here we are now: stuck until some mind freak really sheds some light on this!

In my humble opinion, I may only have identified the problem, and therefore I am personally involved in tracing a route to some tiny success, crystallized in a reasoning and inferencing engine: not a math-rule system but only a small learning engine, based on biological modeling of pragmatics.

Also, for fun (and for some commercial reasons), I am writing a small bot engine. This bot will not be a real smart-ass; he will know nothing at all! Instead, he will have a robust understanding engine to extract pragmatics, and then a tiny rule-inferencing engine to deal with evidence, extracting new evidence and rules from a conversation or from already-written text. That’s all. He will know a lot of ‘common sense’ stuff that is ‘idiot-safe’, like math, physical unit conversions and relations, money and stock valuation, and much other common-world knowledge. This is not AI, it’s only a good database and modeling engine, but it required a lot of (my) work to accomplish. It works fine! (although errors are constantly being debugged)

I think 4.5 billion years of evolution cannot be bad! Nor can we make a better model in a thousand lifetimes; at best we may, somehow (if we understand it), get it done a bit faster with some electronic clone (silicon and electricity instead of chemical ion movements and transformations). That’s all for us, I guess!

 

 
  [ # 27 ]

@Andy,

What you describe as input, process, output is not a mind; it is a machine. While it may have recorded how people act and can output such data to you in clever banter, it probably will never act that way itself, that is, converse with you like a real person. I think we are quickly approaching that threshold with research like what this thread first presented, that is, how would Watson do… We may soon find we can effectively “fake” a discussion with a machine that answers complicated questions, yet still be yearning for a real AI that has a mind.

In a few years we may have to ask a super Watson what AI really is!

 

 
  [ # 28 ]

Gary, of course! I didn’t intend to describe a mind… that’s why I was joking with “(wow… a mind?)”.

But to be honest, I tried to describe an AI process over multiple possible processes; real human or animal intelligence might never be fully described or measured, but it’s a start!

The other thing I omitted (trying to be concise) is that a bot need not be a ‘senior question-answerer’; rather, it should be a functional, inter-operational character, capable of accomplishing some (or no) mission. For example, an ATM could be run by a bot, a really silly one; it might not answer Watson’s kind of questions, but it will answer about your bank account (you grant it access). And the turn-taking and interpersonal stuff (bot-human-al..) is critical; no bot architecture yet manages this kind of thing well (like dialog act management), I guess?

This is where I am actually heading: not to be smart, but to be polite in interactions.

The people at the ATM should say: “Yeah, the bot is stupid, but he behaves well and is polite. You may have to explain things to him twice, in clear language, but he makes himself understood and knows his own limitations, asking about whatever he doesn’t understand!”
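A toy sketch of that ‘polite, knows-its-limits’ dialog loop; the intents and replies are invented for illustration, and a real system would need proper dialog-act handling:

```python
KNOWN_INTENTS = {
    "balance":  "Your balance is $42.00.",          # stub bank lookup
    "withdraw": "How much would you like to withdraw?",
}

def atm_turn(utterance):
    """Answer only what is understood; otherwise ask for clarification."""
    matched = [k for k in KNOWN_INTENTS if k in utterance.lower()]
    if len(matched) == 1:
        return KNOWN_INTENTS[matched[0]]
    if not matched:
        return "Sorry, I did not understand. Could you rephrase that?"
    return f"Did you mean {' or '.join(matched)}?"   # ambiguous: ask back

print(atm_turn("what's my balance?"))
print(atm_turn("gimme money"))   # -> polite request to rephrase
```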

This might have been the case with the famous TRAINS dialog system, but I want this NOT to be task-oriented and hardwired like the former, but easily programmable!

 

 
  [ # 29 ]
AndyHo - Jul 10, 2011:

So here we are now: stuck until some mind freak really sheds some light on this!

Quoting Minsky (from the last page of Society of Mind):

“What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. Our species has evolved many effective although imperfect methods, and each of us individually develops more on our own. Eventually, very few of our actions and decisions come to depend on any single mechanism. Instead, they emerge from conflicts and negotiations among societies of processes that constantly challenge one another.”

So instead of thinking that we’re stuck waiting for some “mind freak”, why not code what you can and make it freely available, so that others can incorporate your work into a multi-agent architecture (see subbot.org for one example implementation) and put Minsky’s theories to the test :)

 

 
  [ # 30 ]

Andy, the people who developed the TRAINS dialog system have since developed PLOW, which is one more step toward the kind of easily programmable system you are talking about.

http://www.cs.rochester.edu/~james/

Make sure you watch the videos of the PLOW system in action; they are really impressive.

 

 