
The World of AI is Not Enough
 
 
  [ # 31 ]

I’m wondering if there is a 3rd piece of the puzzle.

Knowledge (similar to data)

Intelligence (similar to program logic—program code, or a running process)

-and-

Understanding (or perception, the act of making sense of your input and relating it to previously acquired knowledge)

To understand, we are given a problem, either visually or auditorily, or, for most chatbots, textually.  We can’t apply our previous knowledge to a given situation if we don’t -understand- that situation, if we can’t make sense of what we are seeing, or can’t make sense of a piece of natural language input that we are given.

Understanding allows us to correlate what pieces of knowledge we acquired previously with the information given in the problem statement we are facing.


Understanding is the ability to integrate many pieces of information and realize how they fit together, and what those connections mean in terms of how to apply existing knowledge to solve problems.  Without understanding, knowledge is simply data: bit patterns of ones and zeros (or magnetic charges) on a disk drive.  Without understanding, we will only ever have ‘mechanical’ intelligence (chess, checkers, Go, whatever), but not real-world problem-solving intelligence.

I think systems already have knowledge and intelligence, but not understanding, and that is why no one really wants to call any process going on inside a computer ‘thinking’, but rather simply ‘processing’.  Without understanding, we will continue to use only mechanical algorithms and often end up with unsatisfactory results (the famous ‘a ham sandwich is better than nothing, nothing is better than a million dollars, thus a ham sandwich is better than a million dollars’ kind of thing).
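To make that concrete, here is a minimal Python sketch (the facts and the rule below are invented for illustration): a purely mechanical transitivity rule happily chains the two premises through the symbol “nothing”, because it manipulates symbols without understanding that the word means something different in each premise.

```python
# Toy illustration: naive transitive inference over "is better than" facts.
# The program has knowledge (the facts) and intelligence (the rule), but no
# understanding -- to it, "nothing" is just another symbol.

facts = {
    ("a ham sandwich", "nothing"),     # "a ham sandwich is better than nothing"
    ("nothing", "a million dollars"),  # "nothing is better than a million dollars"
}

def better_than(a, b):
    """Mechanically chain 'is better than' facts, with no grasp of meaning."""
    if (a, b) in facts:
        return True
    return any((a, mid) in facts and better_than(mid, b)
               for (_, mid) in facts)

print(better_than("a ham sandwich", "a million dollars"))  # True (!)
```

The faulty conclusion follows mechanically; only some grasp of what “nothing” denotes in each premise could block it.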

Thus, I believe that ‘real world’ computer intelligence won’t happen until we have all 3 pieces.  I think we have knowledge and intelligence for the most part, but we are very weak on understanding.  Now, there are some who subscribe to the belief that a machine must also be conscious, and that the symbol grounding problem (http://en.wikipedia.org/wiki/Symbol_grounding) must be conquered too, in order to have true artificial intelligence, but I am not one of them.  I think that ‘simulated understanding’ would suffice as long as the same results can be achieved; that is, I think intelligence should be judged functionally (yes, the same motivation as the Turing Test), by what it can do, not what it is.  I don’t care if, say, a mathematical operation is done quite differently in a silicon machine than in a biological one, as long as the results agree.

Perhaps understanding is part of knowledge, or perhaps it is a result of intelligence acting on knowledge, or perhaps it is the 3rd ‘primary colour’.

Dave, your point is well taken regarding Ohm’s Law.  Knowing it is one thing, but understanding allows you to know when and how (and perhaps why) to use it in a specific situation, by fully understanding that situation, and the formula, and how the formula applies to that situation.

 

 
  [ # 32 ]

Understanding as the primary goal leads to analysis paralysis, which is not very intelligent (except by Victor’s definition, since he emphasizes processing as being intelligence: more processing (code, logic, runs) is more intelligent).

 

 
  [ # 33 ]

Gary, that’s not what Victor meant at all. I think what you define as “intelligence” he has dubbed “understanding”. Let’s not play semantic games here, people.

In fact “more processing = more intelligent” runs completely counter to his statement that “Knowing [a fact] is one thing, but understanding allows you to know when and how (and perhaps why) to use it in a specific situation”. Sounds more like he equates “understanding” to determining whether or not to engage in this or that data manipulation process. And that ability to choose a processing path (or no path) based on previous knowledge separates “thinking” from merely “processing”. Whether or not you want to call this “understanding” or “intelligence” is irrelevant I think, so long as we’re clear on what functionality we’re interested in.

I am interested to hear you elaborate on “Understanding as the primary goal leads to analysis paralysis”.

 

 
  [ # 34 ]
C R Hunt - Aug 10, 2011:

I am interested to hear you elaborate on “Understanding as the primary goal leads to analysis paralysis”.

It is an activity with no end!  Make sense?  And not “by understanding”, because that would be begging the question.

Victor Shulist - Aug 9, 2011:

I’m wondering if there is a 3rd piece of the puzzle. 

Knowledge (similar to data)

Intelligence (similar to program logic—program code, or a running process)

-and-

Understanding (or perception, the act of making sense of your input and relating it to previously acquired knowledge)

C R Hunt - Aug 10, 2011:

Gary, that’s not what Victor meant at all. ...

In fact “more processing = more intelligent” runs completely counter ...

Those are his very words, extended with “If something is such, then more of something is more of such.”  No meaning change at all, no semantics involved.

Of course, you are free to put words in his mouth…  That’s the magic of communication…  I can’t help it if he says one thing and then seems to contradict himself when you try to understand and “fill in the blanks”.

C R Hunt - Aug 10, 2011:

... Whether or not you want to call this “understanding” or “intelligence” is irrelevant I think, so long as we’re clear on what functionality we’re interested in.

BTW, intelligence isn’t a function, it is the creation of function.  If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.  Every time you do (record the rules of the game), the game changes.

 

 
  [ # 35 ]

functionality:
- concerned with actual use rather than theoretical possibilities
- the range of operations that can be run on a computer or other electronic system
- The capabilities or behaviours of a program, part of a program, or system, seen as the sum of its features.

Take any definition you want, but when you’re interested in how a program creates a function, you’re interested in the program’s functionality. Now as I already said,

Let’s not play semantic games here, people.

 

 
  [ # 36 ]

Heard all this before, like a broken record.  Since this thread has been hijacked from clarifying the definition of AI, especially in relation to knowledge and the different kinds of what that may be, into old topics of understanding and the scientific, empirical classification of observed behavior (it’s the observable results, not what’s going on inside, as though that were theory), I’m gone.

 

 
  [ # 37 ]

Well, the beauty of AI is that we can analyze the methods by which we achieve an end result as well as the i/o itself. So Gary, what types of algorithms constitute intelligent processing vs. clever (but unintelligent) output generation?*

*Just so we’re clear, the input does not have to be externally generated. I’m talking about any potential process by which a program updates or maintains its current state, even if the “input” is no more than the passage of time.

 

 
  [ # 38 ]
Victor Shulist - Aug 9, 2011:

I’m wondering if there is a 3rd piece of the puzzle. 

Knowledge (similar to data)

Intelligence (similar to program logic—program code, or a running process)

-and-

Understanding (or perception, the act of making sense of your input and relating it to previously acquired knowledge)

To understand, we are given a problem, either visually or auditorily, or, for most chatbots, textually.  We can’t apply our previous knowledge to a given situation if we don’t -understand- that situation, if we can’t make sense of what we are seeing, or can’t make sense of a piece of natural language input that we are given.

Understanding allows us to correlate what pieces of knowledge we acquired previously with the information given in the problem statement we are facing.

Gary Dubuque - Aug 11, 2011:

BTW, intelligence isn’t a function, it is the creation of function.  If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.  Every time you do (record the rules of the game), the game changes.

Victor and Gary,
Based on your definitions, can I now call Skynet-AI “intelligent”? gulp
At least when it comes to math functions?

 

 

 
  [ # 39 ]
C R Hunt - Aug 10, 2011:

Sounds more like he equates “understanding” to determining whether or not to engage in this or that data manipulation process. And that ability to choose a processing path (or no path) based on previous knowledge ......

Exactly.  I think that was reasonably clear smile  If we understand declarative knowledge (in a rich sense, in natural language) and we understand procedural knowledge, we know which procedures to use with which information.  Right now computers don’t really understand either: a database server doesn’t understand or relate the bit patterns in its tables, and a CPU doesn’t understand the meaning of executing a procedure: it executes only a single opcode at a time (or more, of course, depending on how many cores are in use).

Merlin, I believe so; it’s been a while since I used it (what’s the URL again?), so yes, I think one could argue it has a certain ‘mathematical intelligence’.

 

 
  [ # 40 ]
Merlin - Aug 11, 2011:
Gary Dubuque - Aug 11, 2011:

BTW, intelligence isn’t a function, it is the creation of function.  If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.  Every time you do (record the rules of the game), the game changes.

Victor and Gary,
Based on your definitions, can I now call Skynet-AI “intelligent”? gulp
At least when it comes to math functions?

That’s true: Skynet does create functions based on text input in order to solve problems. Congrats on your intelligent bot, Merlin. wink
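For readers following along, here is a hypothetical sketch of the general idea (Python purely for illustration; Skynet-AI itself runs in JavaScript, and its actual internals are not shown here): turn a math question in text into a freshly generated function, then execute it.

```python
# Hypothetical sketch only -- NOT Skynet-AI's real implementation.
# Translate a textual math question into an executable function.
import re

OPS = {"plus": "+", "minus": "-", "times": "*", "divided by": "/"}

def make_math_function(question):
    """Turn e.g. 'what is 12 plus 30' into a zero-argument function."""
    text = question.lower()
    for word, symbol in OPS.items():
        text = text.replace(word, symbol)
    # grab the trailing arithmetic expression left after word substitution
    expression = re.search(r"[-\d+*/. ()]+$", text).group().strip()
    return lambda: eval(expression)  # fine for a toy; never eval untrusted input

answer = make_math_function("what is 12 plus 30")
print(answer())  # 42
```

Whether “generates and runs a new function per question” counts as creating knowledge is exactly what the posts below argue about.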

 

 
  [ # 41 ]

Beta:
http://home.comcast.net/~chatterbot/bots/AI/Skynet/home_v4.html

Discussion of the latest version:
http://www.chatbots.org/ai_zone/viewthread/316/P105/

I feel like a proud papa! smile

 

 
  [ # 42 ]

Ok, I’ll bite.

First, I find it very condescending to define “function” to me.  That is just plain hostile.

Second, Skynet is creating functions that are not changing its functionality.  They are functions that are strictly extensions of the definition of what it is. They are functions for something outside the core functionality C R is demanding we define in order to declare intelligence. Skynet is not intelligent in the sense that it adapts and grows in its ability to create knowledge, just as Wolfram Alpha is not AI.  Wolfram Alpha changes because intelligent agents, people, are modifying its functions.  Skynet is repeatable because it creates the same information again and again, which is to say, it is not creating knowledge, but instead is repeating the same knowledge of how to solve the math problem it is given (by generating a function).

Functions, as a general term, is what C R has steered this discussion towards.  Away from AI and its relation to knowledge.  Note, in the beginning the premise was that having a large database of facts is not what makes intelligence.  I have suggested that the massive size of such a store enables a program to present the artifacts of intelligence, thus appearing as though it has the intelligence itself.  This is the kind of (emergent) functionality that appears to satisfy C R.  Because we often use those results to judge intelligence, our instruments of measurement influence our “understanding” of the theory.  It is not the absence of functionality, as one might have known long ago when one lived alongside a species that had yet to evolve into a highly intelligent being.  You cannot describe that future functionality of the entity until it is “created”.  Nor can you adequately define the course, in the beginning, that the entity evolves through in the “growing” of its intelligence.  That is dramatically true if the entity evolves beyond your level of intelligence.  Yet we still try to nail this functionality down by using our own points of reference.

Understanding is what Victor has steered the thread towards. And he went so far as to separate understanding from intelligence in the same manner as he proposed knowledge as a separate piece.  Only he confused the whole thing by later suggesting understanding is intelligence (as C R interpreted).  I pointed out that understanding, being the driving force, leads to an algorithm that does not halt.  In Victor’s context, his program exhausts its search because it is incomplete and constrained to not really understanding.  It can only resolve the meaning of its input to a shallow depth.  It doesn’t infer beyond the limited set of “hard coded” rules it has, and does not extend nor rewrite those rules dynamically based on its vast experience as Cleverbot might. And especially it does not ever rewrite how those rules are processed; Victor does that reprogramming.  Therefore it may act smart, but it is not intelligent as Victor is.

Nor is intelligence understanding. If we say intelligence uses knowledge as this thread assumed when it sought to classify types of knowledge, then understanding could be a type of knowledge which intelligence requires. The knowledge being the act of making sense of inputs, that is, extracting useful information in a form that is compatible with its “thinking” and “remembering”.

Again, call it semantics if you want.  We have already seen the originator of this thread leave it because of that excuse (even though I’m pretty sure he really knows what intelligence and knowledge are).

I am offended by you playing the “semantics” trump card.

 

 
  [ # 43 ]
Gary Dubuque - Aug 11, 2011:

First, I find it very condescending to define “function” to me.  That is just plain hostile. [...] I am offended by you playing the “semantics” trump card.

Well I found it hostile that you chose to willfully misinterpret my words to argue against a point no one had made. The word “functionality” isn’t even synonymous with “a function”, and I think my meaning was quite clear. And all of this was especially galling considering I’d just commented on differentiating arguments based on principle from those based on semantics.

Gary Dubuque - Aug 11, 2011:

Skynet is repeatable because it creates the same information again and again, which is to say, it is not creating knowledge, but instead is repeating the same knowledge of how to solve the math problem it is given (by generating a function.)

This is a good point. At what point, though, does the methodology a bot uses to solve a problem (a form of creating knowledge, I would argue*) become an intelligent process? Does it have to generate the algorithm for problem solving itself? From what building blocks? What’s allowed to be innate (written by the bot’s creator) and what must the bot put together for itself?

*Because one is generating a factual statement that requires more context than just the input knowledge to ascertain.

For example, a human’s concept of “more” vs “less” I would consider innate. Even infants and primitive mammals, with no real concept of numbers, can recognize the difference between more objects and less. So would an addition module be allowed to be written by the botmaster? And then must the bot decide, based on positive and negative feedback, whether to use that module to solve a specific type of word problem? Or would such a hypothetical bot still be just using a piece of knowledge it was given to solve word problems (and therefore not intelligent)?

In a nutshell, my question is, where do you draw the line?
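To make the line-drawing question concrete, here is a toy sketch of that hypothetical bot (every name in it is invented): the addition module is innate, written by the botmaster, and the bot learns only *when* to invoke it, from positive and negative feedback.

```python
# Toy sketch: an innate (botmaster-written) skill plus learned skill selection.
import random

def addition_module(numbers):                # innate, like "more vs. less"
    return sum(numbers)

class WordProblemBot:
    def __init__(self):
        self.score = {"add": 0, "echo": 0}   # feedback tally per strategy

    def choose(self):
        # prefer the strategy with the best feedback; random() breaks ties
        return max(self.score, key=lambda s: self.score[s] + random.random())

    def answer(self, numbers):
        strategy = self.choose()
        result = addition_module(numbers) if strategy == "add" else numbers[-1]
        return strategy, result

    def feedback(self, strategy, good):
        self.score[strategy] += 1 if good else -1

bot = WordProblemBot()
for _ in range(20):                          # "2 apples and 3 apples: how many?"
    strategy, result = bot.answer([2, 3])
    bot.feedback(strategy, good=(result == 5))
print(bot.score)                             # "add" ends up dominating
```

Whether learning that selection policy counts as intelligence, or whether the innate module disqualifies it, is precisely where the line must be drawn.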

Gary Dubuque - Aug 11, 2011:

Functions, as a general term, is what C R has steered this discussion towards.  Away from AI and its relation to knowledge.

Absolutely not. The only one using the word “function” is you. But I did ask about algorithmic methods for knowledge generation/processing (ie, algorithmic methods that generate intelligence). Call this “functions” if you will, that’s fine. But my question was directly related to knowledge and how it’s used.

Gary Dubuque - Aug 11, 2011:

Note, in the beginning the premise was that having a large database of facts is not what makes intelligence.  I have suggested that the massive size of such a store enables a program to present the artifacts of intelligence, thus appearing as though it has the intelligence itself.

Absolutely agreed. In fact I said earlier,

C R Hunt - Aug 11, 2011:

Gary, I like the way you distinguish knowledge and intelligence. Intelligence is the ability to recognize, structure, and manipulate pieces of information. You need intelligence to store/generate/use knowledge, so testing knowledge is a simple way to tell if a human is smart. Unfortunately, computers are great at storing knowledge, so one needs to be more careful about how to test for intelligence. I think that’s why the first thing many people do when they talk to a chatbot is try word problems and the like to test the program’s ability to manipulate new information. (“Create knowledge” as Gary said.)

Gary Dubuque - Aug 11, 2011:

This is the kind of (emergent) functionality it appears to satisfy C R.  Because we often use those results to judge intelligence, that is, our instruments of measurements influence our “understanding” of the theory.

Satisfy? I believe (as stated in the quote above) I’ve already made it clear that the appearance of being knowledgeable is a misleading rubric by which to judge chatbot intelligence.

Gary Dubuque - Aug 11, 2011:

You can not describe that future functionality of the entity until it is “created”.  Nor can you adequately define the course, in the beginning, that the entity evolves through in the “growing” of its intelligence.

(emphasis mine) So we can’t determine the method by which one creates an intelligent agent until it’s already made? Well, we might as well throw in the towel now. Who will ever agree once it is created? We can’t even agree on the various mechanisms by which our own intelligence is generated.

Incidentally, the bold part of your statement makes a fantastic definition of an ‘emergent’ entity. The irony is so glaring I just have to point it out. LOL

Gary Dubuque - Aug 11, 2011:

Understanding is what Victor has steered the thread towards. And he went so far as to separate understanding from intelligence in the same manner as he proposed knowledge as a separate piece.

My original comments on Victor’s “understanding” vs. your “intelligence” were directed at him as much (or more) as to you. Hence the “people”.

Gary Dubuque - Aug 11, 2011:

I pointed out that understanding, being the driving force, leads to an algorithm that does not halt.

This isn’t obvious to me. I’d still like you to elaborate on this point.

Gary Dubuque - Aug 11, 2011:

In Victor’s context, his program exhausts its search because it is incomplete and constrained to not really understanding.  It can only resolve the meaning of its input to a shallow depth.  It doesn’t infer beyond the limited set of “hard coded” rules it has and does not extend nor rewrite those rules dynamically based on its vast experience as Cleverbot might. And especially it does not ever rewrite how those rules are processed, but Victor does that reprogramming.  Therefore it may act smart, but it is not intelligent as Victor is.

I didn’t know we were discussing any particular bot (Victor’s or otherwise) but rather how a bot that achieves intelligence/understanding/whatever must behave. When did Victor say that a bot’s methods for deriving understanding must be “hard coded”?

Gary Dubuque - Aug 11, 2011:

Nor is intelligence understanding. If we say intelligence uses knowledge as this thread assumed when it sought to classify types of knowledge, then understanding could be a type of knowledge which intelligence requires. The knowledge being the act of making sense of inputs, that is, extracting useful information in a form that is compatible with its “thinking” and “remembering”.

I think of “understanding” as the knowledge of how disparate pieces of information are correlated. So it still falls within the set of “knowledge” rather than the process of “intelligence”, but is specifically derived by using intelligence.

Then again, all of this is just semantics. wink

 

 
  [ # 44 ]
C R Hunt - Aug 11, 2011:
Gary Dubuque - Aug 11, 2011:

  I pointed out that understanding, being the driving force, leads to an algorithm that does not halt.

  This isn’t obvious to me. I’d still like you to elaborate on this point.

It isn’t to me either.  In my project I haven’t experienced this yet.  However, there are multiple levels of understanding in my bot that happen or will happen (still working out some design considerations) at different stages….  I won’t hijack here, but I’ll be updating my own thread soon.  (3 weeks of holidays in September and I plan on working on it throughout smile )  (I think I’ll throw my router in the river so I’m not distracted during that time!)

C R Hunt - Aug 11, 2011:
Gary Dubuque - Aug 11, 2011:

  In Victor’s context, his program…....

  I didn’t know we were discussing any particular bot…..

Yes, this is news to me also.    Same comment as above.    However, I do agree with the point of the bot generating its own arguments/rules ... *from* natural language, or ‘Natural Language Inference’.  NLI would be the ‘Holy Grail’ of achievements for any chat bot developer!  (I don’t know about others, but I won’t even -start- on this functionality until probably late *next* year… unless I were to win a lottery and had more time than just weekends to work on it smile ).
One step at a time, a bot must walk before it can run.  I firmly believe understanding, at least ‘shallow understanding’ as Gary says, is an absolutely necessary (and very logical) first step. 

Thus my first goal is to achieve understanding to the level of QA functionality - telling the bot complex free form NL statements and asking equally complex questions.  This would be ‘level-1 understanding’ functionality…  whereas ‘level-10’ or something, would be total NLI.
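For what it’s worth, a bare-bones caricature of that ‘level-1’ QA goal might look like the sketch below (the naive subject-verb-object split is my own illustration, not Victor’s design, and it sidesteps everything that is hard about free-form natural language):

```python
# Minimal "tell facts, ask questions" sketch; real NL parsing is far harder.
memory = []  # list of (subject, verb, object) triples

def tell(statement):
    """Store a simple 'Subject verb object.' statement as a triple."""
    subject, verb, obj = statement.rstrip(".").split(" ", 2)
    memory.append((subject, verb, obj))

def ask(question):
    """Answer only 'Who <verb> <object>?' questions, to keep the sketch tiny."""
    _, verb, obj = question.rstrip("?").split(" ", 2)
    answers = [s for (s, v, o) in memory if v == verb and o == obj]
    return answers or ["I don't know."]

tell("Victor builds chatbots.")
print(ask("Who builds chatbots?"))  # ['Victor']
```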

 

 

 
  [ # 45 ]
C R Hunt - Aug 11, 2011:
Gary Dubuque - Aug 11, 2011:

First, I find it very condescending to define “function” to me.  That is just plain hostile. [...] I am offended by you playing the “semantics” trump card.

Well I found it hostile that you chose to willfully misinterpret my words to argue against a point no one had made. The word “functionality” isn’t even synonymous with “a function”, and I think my meaning was quite clear. And all of this was especially galling considering I’d just commented on differentiating arguments based on principle from those based on semantics.

So I looked up the “real” definition instead of your misrepresentations.  BTW, your first bullet is a definition of practicality.  Who’s playing semantic games?

Functionality is the quality or state of being functional; especially : the set of functions or capabilities associated with computer software or hardware or an electronic device
or
1. The quality of being functional.
2. A useful function within a computer application or program.
3. The capacity of a computer program or application to provide a useful function.
or
You’ll find “functionality” in dictionaries, but it’s almost always used as a pretentious and inaccurate substitute for “function” or ”usefulness.”
or
The ability to perform a task or function; that set of functions that something is able or equipped to perform.
or
In each case, the site’s purpose describes the function it must perform. Its quality of functionality is determined by how well it performs that function.
or
1. the quality of being functional
2. (Electronics & Computer Science / Computer Science) Computing a function or range of functions in a computer, program, package, etc.

C R Hunt - Aug 11, 2011:

The only one using the word “function” is you. But I did ask about algorithmic methods for knowledge generation/processing (ie, algorithmic methods that generate intelligence). Call this “functions” if you will, that’s fine. But my question was directly related to knowledge and how it’s used.

And I never said functionality was a synonym of a function, but you changed the use of the word “function” into something else quite often.

C R Hunt - Aug 11, 2011:

functionality:
Take any definition you want, but when you’re interested in how a program creates a function, you’re interested in the program’s functionality. Now as I already said,

Let’s not play semantic games here, people.

I call foul!

C R Hunt - Aug 11, 2011:

Gary, I like the way you distinguish knowledge and intelligence. Intelligence is the ability to recognize, structure, and manipulate pieces of information. You need intelligence to store/generate/use knowledge, so testing knowledge is a simple way to tell if a human is smart. ... (“Create knowledge” as Gary said.)

I never said intelligence is that ability.  In my mind that is not even a principle (or part of a principle) of intelligence.  One knows how to recognize, structure, and manipulate pieces of information. An intelligent entity gains this knowledge.

In the last post, which seems to have disappeared, I explained methods versus intelligence, and emergence as the effect of the products (like how a program creates a function) of intelligence.  This is in line with the origin of this thread, which sought to define terms for other features of intelligence and the testing thereof.  Instead of repeating it even one more time, you can look it up on Wikipedia.  You don’t need me to explain what is commonly known for AI (like weak AI vs. strong AI).  I shouldn’t need to tell you why you can’t define intelligence by specific methods, practicality, or whatever you want to call “it” (unless you are stuck in weak AI, the faking of intelligence by fixed algorithms).

Toborman - Jul 20, 2011:

I believe our multiple uses of the term AI has led to its ambiguity. To help us clarify our meanings, I suggest we use the following additional terms: AHAI, AHAK, and AHAB, meaning Artificial Human Adult Intelligence, Artificial Human Adult Knowledge, and Artificial Human Adult Behavior.  This may offer a different perspective on testing, as well.

I didn’t talk about a bot, I talked about Victor’s context, where he has yet to address the halting issue and why.  But I’ll throw it back to you.  When is his (or anybody’s) invention going to understand?  If it seeks foremost to understand, what stops it from meditating on the nature of all things as triggered by the input data, given it has the vast resources of knowledge for really understanding?  If you pragmatically find that line you are so avid to document, the wise men of the world will need to know.  I guess you can say “when it makes sense”.  Then it makes sense when it understands?  It makes sense when it knows what you inputted?  Well, you propose understanding is more than the data inputted.  How much more?

Victor Shulist - Aug 9, 2011:

To understand, we are given a problem, either visually or auditorily, or, for most chatbots, textually.  We can’t apply our previous knowledge to a given situation if we don’t -understand- that situation, if we can’t make sense of what we are seeing, or can’t make sense of a piece of natural language input that we are given.

Understanding allows us to correlate what pieces of knowledge we acquired previously with the information given in the problem statement we are facing. 

Understanding is the ability to integrate many pieces of information and realize how they fit together, and what those connections mean in terms of how to apply existing knowledge to solve problems.  Without understanding, knowledge is simply data: bit patterns of ones and zeros (or magnetic charges) on a disk drive.  Without understanding, we will only ever have ‘mechanical’ intelligence (chess, checkers, Go, whatever), but not real-world problem-solving intelligence.

Oh, I get it now.  The understanding is in the knowledge base, not the input.  The input only associates (correlates) to the understanding.  Then again Victor says it integrates many pieces of information.  Information being an extension of data by processing (giving it meaning), there we go down the rabbit hole again.  Like this question from earlier in this thread:

Gary Dubuque - Jul 23, 2011:

I’ve been trying to figure out how I could create an example to use for testing.

Given: Tom is a human.
Given: Tom is a male.
Given: Tom is a sibling of Gary.
Given: Tom is happy.
Q: Describe Tom.
A: ???

What is the answer?  If there is a large body of knowledge, how much is used to describe Tom?  Would “Tom is alive” be an appropriate answer?
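A short sketch makes the dilemma concrete (the facts come from the example above; the inference rules are invented): give the program a rule like “human implies alive” and the answer to “Describe Tom” depends entirely on an arbitrary depth cut-off, which is the halting knob this discussion keeps circling.

```python
# Gary's test case as a tiny fact store with chained inference rules.
facts = {("Tom", "is a", "human"), ("Tom", "is a", "male"),
         ("Tom", "is a sibling of", "Gary"), ("Tom", "is", "happy")}

rules = [  # if (x, rel, obj) matches the left side, infer the right side
    (("is a", "human"), ("is", "alive")),
    (("is", "alive"),   ("is", "mortal")),   # ...and so on, without end
]

def describe(subject, depth=1):
    """Collect direct facts, then apply the rules 'depth' times."""
    known = {f for f in facts if f[0] == subject}
    for _ in range(depth):
        for (s, rel, obj) in list(known):
            for cond, (new_rel, new_obj) in rules:
                if (rel, obj) == cond:
                    known.add((s, new_rel, new_obj))
    return known

for fact in sorted(describe("Tom", depth=2)):
    print(" ".join(fact))  # "Tom is alive" appears only because depth >= 1
```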

I believe Data in Star Trek had this problem too, only it was in the outputting of his understanding.  He sought not to “go over the head” or “get lost in the details”.

I say, “You know what I mean.” You say, “Understanding you is intelligence.”  You say “Po-tay-to”, I say “Po-tah-to.”

I’m done.  These issues are “dead air”, the same things over and over again.  Bye.

 
