
AI Philosophy: When does an AI ‘Understand’?
 
 

In some of the forum discussions, we have been talking about the concept of ‘understanding’ in an AI.

Skynet-AI has some of the best math-handling capabilities of any online bot. Currently, this might be considered to be at the elementary-school level. The question is: does it ‘understand’ this basic math?

The AI understands math in the same way that a calculator or spreadsheet understands it. But, is this the best we can hope for in the AI world? If not, then how would we determine if an AI “understands”?

 

 
  [ # 1 ]

Excellent topic! I knew this was coming!!!

I think it understands WHEN:

User inputs a sentence.  Any sentence, in any form, using any words, any synonyms AND:

* knows the semantic “connections” between any words or phrases of that sentence
* can answer as many questions as can possibly be asked about that statement.
* every word’s role must be known, (how it is connected to the rest of the sentence).
* can take a follow up question, take the question apart and figure out how to chain together its own logic in order to plan a way to answer it.

* can be given another statement later on, and know how that new statement changes its understanding of the previous one (a minimal sketch of this appears at the end of this post).
Example:

a man has 2 hands
each hand has 5 fingers
bob is a man
Q: how many fingers does bob have?

bob has 10

new input: bob lost one arm in the war

AI: I see, so he only has 5 fingers.

* can resolve ambiguity in language, such as

“While I was in Africa, I shot an elephant in my pajamas”

Q: What was I wearing?
A: your pajamas

So, knowing what the antecedent of any prepositional phrase is (above, “in my pajamas” modifies “I”, not the elephant).

* does not ignore, or “wild card” ANYTHING:

ai- what is your name?
user- my name is bob, why do you ask ???
ai- Well, Hello bob, why do you ask ?

Here we see it clearly did not understand (it “wild carded” everything after “my name is (*)”).

If it had complete understanding (and accounted for every word and PHRASE), then it would know that “why do you ask” is a question following the name “bob”.

Of course, there are many more, such as knowing what “that”, “it”, etc. in the current sentence refer to.
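
To make the belief-revision requirement concrete (the sketch promised above): a minimal Python sketch, assuming a toy fact store that recomputes its conclusion whenever a fact changes. This is an illustration only, not any real bot's implementation; the facts and numbers come straight from the bob example.

~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch: a fact store that re-derives an answer when new
# information arrives. The structure is an illustrative assumption.

facts = {
    "hands_per_man": 2,      # "a man has 2 hands"
    "fingers_per_hand": 5,   # "each hand has 5 fingers"
    "bob_is_a_man": True,    # "bob is a man"
    "bob_arms_lost": 0,
}

def fingers_for_bob():
    # chain the general rules down to the specific individual
    if not facts["bob_is_a_man"]:
        return None  # the man -> hands -> fingers chain does not apply
    hands = facts["hands_per_man"] - facts["bob_arms_lost"]
    return hands * facts["fingers_per_hand"]

print(fingers_for_bob())    # 10

# new input: "bob lost one arm in the war"
facts["bob_arms_lost"] = 1  # update the knowledge base ...
print(fingers_for_bob())    # ... and the old conclusion is revised: 5
~~~~~~~~~~~~~~~~~~~~~~~~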

 

 
  [ # 2 ]

every word’s role must be known, (how it is connected to the rest of the sentence).

I’d leave that out. Perhaps if you peek under the hood, but a bot shouldn’t need to be able to label each word correctly, just as most humans can’t.

 

 
  [ # 3 ]

“I think it understands WHEN:
User inputs a sentence.  Any sentence, in any form, using any words, any synonyms “

Victor,
If this is your definition, then I, my kids, and most people learning English do not ‘understand’ much of anything.
Even the well educated seldom get perfect scores on subject tests, yet they may be said to ‘understand’ the concepts. I would say many people get by with a limited vocabulary and “partial understanding”. Are you being too strict when it comes to AI?

Maybe we need a grading system like school or Karate.
Something like:
Grade 2
or
“My bot is a yellow belt in math.” :)

 

 
  [ # 4 ]
Merlin - Feb 7, 2011:

Are you being too strict when it comes to AI?

Maybe we need a grading system like school or Karate.
Something like:
Grade 2
or
“My bot is a yellow belt in math.”:)

Perhaps I am being a bit too strict. But I know this is possible; so far, my own tests are showing it to be.

Yes, I really like the grading example. In fact, that is what I believe about AI in general: it is not that a program will “have AI” or not have it, it is not a yes/no binary thing; a program could have many levels of AI.

So maybe all of my strict requirements don’t have to be met.  But certainly, if they ARE met, I think the bot DOES understand !!

 

 
  [ # 5 ]
Jan Bogaerts - Feb 7, 2011:

every word’s role must be known, (how it is connected to the rest of the sentence).

I’d leave that out. Perhaps if you peek under the hood, but a bot shouldn’t need to be able to label each word correctly, just as most humans can’t.

You’re on to something there Jan.

I think that if the bot didn’t fully understand, but at least partially understood, and could then ask a clarifying question and revise and update its understanding, that would be incredible.

In my bot, if it doesn’t understand the full sentence, it will actually tell you what portions of the input it DID understand. Then, you will know what kind of information to tell it: information that it needs to make the remaining semantic connections.

A “semantic connection”, for example, can be as simple as knowing that the adverb “yesterday” modifies “went” in the sentence “I went to your house yesterday”.

If you type that in

“I went to your house yesterday”.

Grace will say (suppose she has never seen the word ‘yesterday’ in her life, but knows all the other words, and has figured out how they relate to each other, i.e. she has generated the parse tree):

I didn’t understand the entirety of your input.  I did however, understand

“I went to your house”

You could then ask her if she knows what yesterday means, and if she replies with nothing, then you’d know you have to tell her:

“yesterday is an adverb”

This will update her database.

THEN

“I went to your house yesterday”.
She will know that the adverb ‘yesterday’ modifies ‘went’ (after applying her semantic rules).
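
To make that flow concrete, here is a toy Python sketch of the loop: report the understood portion, accept “yesterday is an adverb”, and re-parse. The five-word lexicon and the way unknown words are skipped are assumptions for illustration; Grace actually builds full parse trees.

~~~~~~~~~~~~~~~~~~~~~~~~
# Toy sketch of "report the understood part, then learn the word".
# The lexicon format and skip-unknown behaviour are assumptions.

lexicon = {
    "i": "pronoun", "went": "verb", "to": "preposition",
    "your": "possessive", "house": "noun",
}

def split_known(sentence):
    # separate words the bot knows from words it does not
    words = sentence.lower().rstrip(".").split()
    known = [w for w in words if w in lexicon]
    unknown = [w for w in words if w not in lexicon]
    return known, unknown

def respond(sentence):
    known, unknown = split_known(sentence)
    if not unknown:
        return "Understood."
    return ("I didn't understand the entirety of your input. "
            "I did however, understand \"%s\"" % " ".join(known))

print(respond("I went to your house yesterday"))
# -> ... I did however, understand "i went to your house"

lexicon["yesterday"] = "adverb"   # "yesterday is an adverb"
print(respond("I went to your house yesterday"))
# -> Understood.
~~~~~~~~~~~~~~~~~~~~~~~~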

 

 

 
  [ # 6 ]

Just to add to the above, when Grace would say:

I didn’t understand the entirety of your input.  I did however, understand

“I went to your house”

You could ask questions like “Where did I go?” and get back “to my house”.

But when asked “when did I go to your house?”, it would reply with “I don’t know” (since it didn’t know the word ‘yesterday’ or how it was semantically connected). 

After entering “yesterday is an adverb” it would be able to answer it.

So, yes, Jan, partial understanding is fine.

But being able to add statements afterward, and have the system integrate them into its knowledge, results in more complete understanding.

I would add that as a requirement.


Another requirement : STATE or CONTEXT

user: It’s colder outside than in here
ai: make sure to dress warm

ok, that’s fine.  BUT what if this came just before that….

user: the AC is broken, and it’s too hot in here
user: It’s colder outside than in here
ai: make sure to dress warm

no!!!  how about “go outside and cool off a bit”

Kind of a stupid example, but you get the idea: context, that is, the state of the conversation, matters to understanding.
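
Here is a minimal sketch of that, assuming a single state flag and two canned replies (all of it illustrative; a real bot would track much richer state):

~~~~~~~~~~~~~~~~~~~~~~~~
# Toy sketch: the same utterance gets a different reply depending
# on conversation state. The flag and replies are illustrative.

state = {"too_hot_inside": False}

def react(utterance):
    u = utterance.lower()
    if "too hot in here" in u:
        state["too_hot_inside"] = True
        return "Sorry to hear the AC is broken."
    if "colder outside" in u:
        if state["too_hot_inside"]:
            return "Go outside and cool off a bit."
        return "Make sure to dress warm."
    return "OK."

print(react("It's colder outside than in here"))            # dress warm
print(react("the AC is broken, and it's too hot in here"))
print(react("It's colder outside than in here"))            # cool off
~~~~~~~~~~~~~~~~~~~~~~~~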

 

 

 
  [ # 7 ]

My view on this (again, I know) is that when we talk about ‘understanding’, still too many ‘complete sentence conversations’ are being given to illustrate things. For me the real question is: when does a human child begin to ‘understand’ what we are teaching/telling it? The question about ‘understanding’ also goes to IQ, so maybe we need an AIQ.

One of my other side-projects is to develop a new IQ-scale and IQ-test to go with it, based on measuring the capability to handle ‘cause-effect chains’. I have a faint idea this could also be used to measure ‘IQ’ in artificial minds as well (AIQ).

 

 
  [ # 8 ]

Hans,

Now that would be very practical. In the future, bots could be evaluated on a common scale: an analog scale, not a binary yes/no ‘can this bot pass the Turing Test’ scale, because none of them can. One day one probably will, but until then, your AIQ would be of great practical value.

 

 
  [ # 9 ]

Hutter and Legg have already developed a generalized test for artificial intelligence.  http://www.vetta.org/publications/

 

 
  [ # 10 ]

Well, one thought I had about machine intelligence tests was this: they would make it possible for bots to evolve autonomously, because objective measures would be in place.

 

 
  [ # 11 ]
Victor Shulist - Feb 7, 2011:

just to add to the above, when Grace would say:

I didn’t understand the entirety of your input.  I did however, understand

“I went to your house”

You could ask questions like “Where did I go?” and get back “to my house”.

But when asked “when did I go to your house?”, it would reply with “I don’t know” (since it didn’t know the word ‘yesterday’ or how it was semantically connected). 

After entering “yesterday is an adverb” it would be able to answer it.

I don’t think so. “Yesterday is an adverb” tells you the part of speech (which you might also be able to automatically guess at if the word was not in the dictionary). In the input “I went to your house accidentally.”, accidentally is also an adverb, and would not fit the “when did I go to your house?” query.

you could then ask her if she knows what yesterday meanings,  and if she replies with nothing, then you’d know you have to tell her:

“yesterday is an adverb”

this will update her database.

THEN

“I went to your house yesterday”.
she will know that adverb ‘yesterday’ modifies ‘went’ (after applying her semantic rules).

Although the bot would know ‘what’ the word modifies, it would not know ‘how’ it is being modified.

To answer a “when” question the bot needs to have the concept of “Time” and you need to be able to also tell it something like:
Yesterday is a date calculated as now minus one day.
This would assume the bot understands the concepts of Time (date, now, day) and Math (calculated, minus, one).
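
In code, that definition is one line of date arithmetic. A minimal Python sketch (standard library only); the little adverb table is an assumption:

~~~~~~~~~~~~~~~~~~~~~~~~
# "Yesterday is a date calculated as now minus one day."
from datetime import date, timedelta

def resolve_time_adverb(word):
    # ground a handful of time adverbs to dates (assumed table)
    today = date.today()
    table = {
        "today": today,
        "yesterday": today - timedelta(days=1),
        "tomorrow": today + timedelta(days=1),
    }
    return table.get(word)  # None for adverbs we cannot ground

print(resolve_time_adverb("yesterday"))  # the date one day before today
~~~~~~~~~~~~~~~~~~~~~~~~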

 

 

 

 
  [ # 12 ]
Merlin - Feb 7, 2011:

Although the bot would know ‘what’ the word modifies, it would not know ‘how’ it is being modified.

To answer a “when” question the bot needs to have the concept of “Time” and you need to be able to also tell it something like:
Yesterday is a date calculated as now minus one day.
This would assume the bot understands the concepts of Time (date, now, day) and Math (calculated, minus, one).

Grace knows how it is being modified.

Because in addition to knowing that yesterday is an adverb, she also knows that there are different types of adverbs :

~~~~~~~~~~~~~~~~~~~~~~~~

Adverbs of Time: Adverbs of time provide information on when something happens.

For example: We’ll let you know our decision next week.

Adverbs of Frequency: Adverbs of frequency provide information on how often something happens.

For example: They usually get to work at eight o’clock.

Adverbs of Degree: Adverbs of degree provide information concerning how much of something is done.

For example: They like playing golf a lot.

Adverbs of Comment: Adverbs of comment provide a comment, or opinion about a situation.

For example: Fortunately, there were enough seats left for the concert.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Grace considers generating a parse tree where there is an adverb X modifying the predicate verb, she then checks if X is an adverb of time (or any of the above types).

So she knows

“I went to your house last Friday night”  (a group of words like “last Friday night” functioning as an adverb)
“I went to your house this morning”
“I went to your house yesterday”

EVEN…

“I went to your house when your uncle tom was visiting”
where the entire subordinate clause “your uncle tom was visiting” functions as an adverb OF TIME.

Now I would have to tell her:

yesterday is an adverb of time

for that to work.

So the conversation really would be (again, assuming she knows all words except ‘yesterday’):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

me: I went to your house yesterday

grace: I did not understand the entirety of your input.  The parts I did understand are: “I went to your house”

me: yesterday is an adverb

grace: thank you, what type of adverb?

me: adverb of time

grace: OK, updated WORDDATA Directory

me: when did I go to your house?

grace: yesterday

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


OR . ..

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

me: I went to your house yesterday

grace: I did not understand the entirety of your input.  The parts I did understand are: “I went to your house”

me: yesterday is an adverb of time

grace: OK, updated WORDDATA Directory

me: when did I go to your house?

grace: yesterday

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
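
The check behind that dialogue can be sketched in a few lines: answer a “when” question only if the verb’s modifier is an adverb whose sub-type is time. The two lexicon entries below are assumptions; note this also rules out Merlin’s “accidentally” case:

~~~~~~~~~~~~~~~~~~~~~~~~
# Toy sketch: answer a "when" question only if the verb's modifier
# is an adverb of sub-type time. Lexicon entries are assumptions.

lexicon = {
    "yesterday":    {"pos": "adverb", "subtype": "time"},
    "accidentally": {"pos": "adverb", "subtype": "manner"},
}

def answer_when(verb_modifier):
    entry = lexicon.get(verb_modifier)
    if entry and entry["pos"] == "adverb" and entry["subtype"] == "time":
        return verb_modifier        # "when did I go ...?" -> "yesterday"
    return "I don't know"

print(answer_when("yesterday"))     # yesterday
print(answer_when("accidentally"))  # I don't know
~~~~~~~~~~~~~~~~~~~~~~~~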


Right now, I have a Perl script (‘new.pl’) which I run to enter new words. Any time I enter an adverb, the script asks me to specify the type of adverb (any of the ones listed above).

Thus, when I add the functionality to input part-of-speech learning via NLP, the IFLO (formerly called a reactor) will generate a response asking for the type of adverb, *OR*, if the parse tree indicates the user HAS specified the adverb type, it of course won’t ask, but will go right ahead and update the WORDDATA directory (where Grace keeps track of the parts of speech of words).

I can’t wait to complete my grammar rules, so I can start making the IFLOs that accept learning of new parts of speech (and sub-types) via NLP.
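
For flavour, here is roughly what that entry loop looks like. new.pl itself is Perl and its code isn’t shown here, so this Python version, the WORDDATA.txt file name, and the prompt wording are all assumptions:

~~~~~~~~~~~~~~~~~~~~~~~~
# Sketch of an interactive word-entry loop in the spirit of new.pl.
# The WORDDATA file name and storage format are assumptions.

ADVERB_TYPES = ("time", "frequency", "degree", "comment")

def enter_word():
    word = input("word: ").strip().lower()
    pos = input("part of speech: ").strip().lower()
    entry = {"word": word, "pos": pos}
    if pos == "adverb":
        # adverbs need a sub-type before they can drive semantics
        subtype = ""
        while subtype not in ADVERB_TYPES:
            prompt = "adverb type (%s): " % ", ".join(ADVERB_TYPES)
            subtype = input(prompt).strip().lower()
        entry["subtype"] = subtype
    with open("WORDDATA.txt", "a") as f:  # assumed flat-file store
        f.write("%r\n" % (entry,))
    print("OK, updated WORDDATA")

if __name__ == "__main__":
    enter_word()
~~~~~~~~~~~~~~~~~~~~~~~~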

I don’t want to hijack this thread regarding my own project, but it IS basically on topic of bot understanding :)

 

 
  [ # 13 ]

By the way, she also knows past, present and future forms:

“I went to your house tomorrow”

Here she knows that ‘went’ is the past tense of ‘go’, and ‘tomorrow’ is future, BUT ‘tomorrow’ is an adverb of time modifying ‘went’.

When I get to writing the IFLO for this, she will respond with “What? That makes no sense” :)
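
That “makes no sense” check amounts to comparing the verb’s tense with the temporal direction of the time adverb. A toy sketch, with both mini-lexicons assumed:

~~~~~~~~~~~~~~~~~~~~~~~~
# Toy tense-consistency check: flag a past-tense verb modified by a
# future time adverb. Both mini-lexicons are assumptions.

VERB_TENSE = {"went": "past", "go": "present"}
ADVERB_TIME = {"yesterday": "past", "today": "present", "tomorrow": "future"}

def tense_conflict(verb, time_adverb):
    v = VERB_TENSE.get(verb)
    a = ADVERB_TIME.get(time_adverb)
    # "present" is treated as compatible with either direction
    return (v is not None and a is not None
            and v != a and "present" not in (v, a))

if tense_conflict("went", "tomorrow"):  # "I went to your house tomorrow"
    print("What? That makes no sense :)")
~~~~~~~~~~~~~~~~~~~~~~~~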

@Toby: awesome site. Going to read a couple of papers this evening… a definition of intelligence… hmm… should be most interesting.

 

 
  [ # 14 ]

I want to keep this thread on AI understanding, so if I have a Grace-specific comment I’ll post on your Grace thread.

I did want to highlight the difference in understanding between:
yesterday is an adverb
yesterday is an adverb of time (general time)
yesterday is an adverb and a date calculated as now minus one day (specific time)
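
Those three statements correspond to progressively richer lexicon entries. A sketch of what each level might store (the field names are assumptions):

~~~~~~~~~~~~~~~~~~~~~~~~
# Three levels of "understanding" of the word "yesterday".
# Field names are illustrative assumptions.
from datetime import date, timedelta

level_1 = {"pos": "adverb"}                     # syntax only
level_2 = {"pos": "adverb", "subtype": "time"}  # can route "when?" questions
level_3 = {"pos": "adverb", "subtype": "time",  # can compute an actual date
           "denotation": lambda today: today - timedelta(days=1)}

print(level_3["denotation"](date(2011, 2, 8)))  # 2011-02-07
~~~~~~~~~~~~~~~~~~~~~~~~

Only the third level lets a bot ground a “when” answer in real date arithmetic.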


I would say that AI understanding is a continuum that has not been well defined. The concept of understanding in humans is fuzzy, with many having partial, but good enough, understanding. We use the education metric, and our experience in everyday conversations, to gauge relative understanding.

 

 
  [ # 15 ]
Merlin - Feb 8, 2011:

I want to keep this thread on AI understanding, so if I have grace specific comment I’ll post on your grace thread.

No problem, I can get carried away :)

Merlin - Feb 8, 2011:

The concept of understanding in humans is fuzzy with many having partial, but good enough understanding. We use the education metric, and our experience in everyday conversations, to gauge relative understanding.

Especially our understanding of understanding.

Seriously, fuzzy? Yes, but not in all cases. Certainly if I tell you

“My birthday is in September”

that is not fuzzy. OK, OK, it is fuzzy WHEN my birthday is, since I didn’t specify the year, but certainly you understand that sentence 100%, don’t you?

Now in the cases of ‘fuzzy’ understanding (instead I should actually say unclear; fuzzy has some definite meanings, especially in fuzzy set theory, which people may confuse), I think the key will be the bot’s ability to figure out what question to ask to make it clear.

So even in the cases where we are initially unclear, I do think that humans have clear understanding after some interactive dialog, don’t you? Where one is not in an interactive situation, like reading a book, you can be unclear about what is written; in that case you remain that way, or read further to resolve the ambiguity.

 

 

 

 
