
The World of AI is Not Enough
 
 
  [ # 46 ]

Defending C R Hunt, in debate, invites the following questions:

If a function is not functional
but is then functionalized with functionality,
is that functionalism or a functionalist?

If a person is not personal
but is then personalized with personality,
is that personalism or a personalist?

Is it the function of a person,
or the person of a function?

 

 
  [ # 47 ]

Better yet, is it the function of an agent or the agent of a function?  We are talking AI here.  Or are you using “person” to refer to a chat bot?  Kewl!

Hey, look up Principles of Natural Intelligence on Google. You’d be amazed (maybe).

 

 
  [ # 48 ]
Gary Dubuque - Aug 11, 2011:

Note, in the beginning the premise was that having a large database of facts is not what makes intelligence.  I have suggested that the massive size of such a store enables a program to present the artifacts of intelligence, thus appearing as though it has the intelligence itself.

Gary, you have read my mind. When an agent draws on a large database, assessment of its mental ability may be obscured by the sheer extent of its knowledge.

My attempt to create a mental aptitude test suite is predicated on the belief that the results of a test become predictable when all the data required for the answer are available in the problem statement. To accomplish this, I treat data and process as separate entities. If the agent’s answer is as predicted, then I give the agent a point as an indication that the process being tested worked. As you pointed out, no single test can show the overall aptitude of an agent; however, a robust test suite should let me compare some capabilities of various agents.
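For concreteness, the scoring logic is roughly this (a simplified sketch; `Agent` and its `ask()` method are stand-ins, not Harry's actual interface):

```python
# Simplified sketch of the aptitude suite's scoring loop.
# Each problem statement contains all the data needed for the answer,
# so the expected response should not depend on prior knowledge.

TESTS = [
    {"problem": "Tom has 3 apples and eats 1. How many are left?",
     "expected": "2"},
    {"problem": "All blargs are fizzy. Bo is a blarg. Is Bo fizzy?",
     "expected": "yes"},
]

def run_suite(agent, tests=TESTS):
    """Award one point per test whose answer matches the prediction."""
    score = 0
    for test in tests:
        answer = agent.ask(test["problem"])  # ask() is a stand-in method
        if answer.strip().lower() == test["expected"]:
            score += 1
    return score, len(tests)
```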

My own agent, Harry Workman, has passed each of the currently documented tests. I consider him to have mental aptitude akin to an autistic child learning English as a second language.

Err… maybe not that smart. ;)

 

 
  [ # 49 ]
Gary Dubuque - Aug 13, 2011:

So I looked up the “real” definition instead of your misrepresentations.  BTW, your first bullet is a definition of practicality.  Who’s playing semantic games?

Every definition I listed appears in a dictionary under “functionality”. I intentionally skipped the definition “the quality of being functional” because it didn’t seem to add anything beyond saying that a noun can also have an adjective form. :) But let’s take your first definition,

Gary Dubuque - Aug 13, 2011:

the set of functions or capabilities associated with computer software or hardware or an electronic device

That’ll do. I meant the capabilities associated with computer software. If your program is generating functions, it has that capability. If it is creating knowledge (your definition of intelligence) then it has that capability.

I think this was clear from the original context, but now we’ve exhausted the subject. If it takes this much back-and-forth to define a clear term like “functionality”, why are we even bothering with “intelligence”??

Gary Dubuque - Aug 13, 2011:

And I never said functionality was a synonym of a function, but you changed the use of the word “function” into something else quite often.

Yes, you did. After the post in which I chose the word “functionality” (a woeful mistake, apparently), you jumped in with:

Gary Dubuque - Aug 13, 2011:

BTW, intelligence isn’t a function, it is the creation of function.  If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.

What?? What the hell does that have to do with the statement,

C R Hunt - Aug 13, 2011:

Whether or not you want to call this “understanding” or “intelligence” is irrelevant I think, so long as we’re clear on what functionality we’re interested in.

???  All I was trying to state was that “intelligence” and “understanding” were the capabilities we’re interested in.

Gary Dubuque - Aug 13, 2011:
C R Hunt - Aug 13, 2011:

Gary, I like the way you distinguish knowledge and intelligence. Intelligence is the ability to recognize, structure, and manipulate pieces of information. You need intelligence to store/generate/use knowledge, so testing knowledge is a simple way to tell if a human is smart. Unfortunately, computers are great at storing knowledge, so one needs to be more careful about how to test for intelligence. I think that’s why the first thing many people do when they talk to a chatbot is try word problems and the like to test the program’s ability to manipulate new information. (“Create knowledge” as Gary said.)

I never said intelligence is that ability.

I added back in the part of my quote you replaced with an ellipsis (in bold). Deleting that part makes it sound like I’m implying that knowledge testing is a sufficient test for AI, which is exactly the opposite of what I said. The sentence before that part is basically a restatement of what you said:

Gary Dubuque - Aug 13, 2011:

To simply answer your wish to distinguish knowledge from intelligence: Intelligence creates knowledge.  Knowledge does not make intelligence. Knowledge is an artifact of intelligence which is why we can use it to test for IQ.

So yes. Yes you did say intelligence is that ability.

Gary Dubuque - Aug 13, 2011:

In my mind that is not even a principle (or part of a principle) of intelligence.  One knows how to recognize, structure, and manipulate pieces of information. An intelligent entity gains this knowledge.

So you’re saying that recognizing, structuring, and manipulating knowledge…is knowledge? A learned thing that a bot must learn as well? Am I understanding you correctly?? If so, I have to disagree. From birth our brains are growing and pruning connections based on external input; that is, knowledge is causing us to physically change in response so that we can better structure that knowledge. And we never learned how to do it.

For example, an infant must learn how to interpret visual stimuli as a coherent picture of its surroundings. If you blind an infant, its brain will never develop the ability to interpret what the eye sees. Neurologists have done experiments in which animals were forced to see the world upside down (via a mirror contraption); the animals still became wired to see the world correctly. Later, once the mirror was removed, the animals stumbled around, seeing the world upside down.

My point is, we don’t have to learn to recognize, structure, or manipulate knowledge. Our brains are hard-wired for it, growing and trimming in response to new knowledge. Why should an AI have to learn to do that?

Gary Dubuque - Aug 13, 2011:

I didn’t talk about a bot, I talked about Victor’s context where he has yet to address the halting issue and why.

Victor has already said he doesn’t understand why there’s a halting issue. I’ve said the same. You have yet to clarify why a bot can not have some internal conditions which constitute that a piece of knowledge has been understood. Humans don’t have a halting issue, and we certainly engage in trying to understand new input.

Gary Dubuque - Aug 13, 2011:

If it seeks foremost to understand, what stops it from meditating on the nature of all things as triggered by the input data, given it has the vast resources of knowledge for really understanding?

Nobody said the bot would be plunged into deep searches of its knowledge base, trying to plumb the depths of what it can associate with new input, into infinity. Again, humans have no problem attempting to understand something to a reasonable depth without getting lost in “meditating on the nature of all things as triggered by the input data”.

Gary Dubuque - Aug 13, 2011:

I guess you can say “when it makes sense”.

I agree that we shouldn’t resolve the problem of one vague term (“understanding”) with another (“makes sense”). ;)

Gary Dubuque - Aug 13, 2011:

Oh, I get it now.  The understanding is in the knowledge base, not the input.

No, understanding is “in” both. That is, it requires both. What pieces of data support the claim of the new data? What pieces contradict it? What pieces of data do I trust more? Are there any gaps in a piece of knowledge that are filled in with this new knowledge? Can I perform any new logic operations with known knowledge plus this new knowledge? Does my new logical inference make sense given what I know?

Etc, etc.
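In pseudo-concrete terms, that bookkeeping might look something like this sketch (`supporting()`, `contradicting()`, `trust()`, `add()`, and `infer()` are hypothetical knowledge-base methods, invented just to pin the idea down):

```python
def integrate(knowledge_base, claim):
    """Sketch: weigh a new claim against what is already known."""
    support = knowledge_base.supporting(claim)       # facts backing the claim
    conflicts = knowledge_base.contradicting(claim)  # facts against it

    # Trust the new claim only if its support outweighs the conflicts.
    weight = (sum(knowledge_base.trust(f) for f in support)
              - sum(knowledge_base.trust(f) for f in conflicts))

    if weight >= 0:
        knowledge_base.add(claim)           # accept the new knowledge
        return knowledge_base.infer(claim)  # any new inferences it enables
    return []                               # reject it; keep the old picture
```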

 

 
  [ # 50 ]

Digging up a new can of worms, because why not? :)

Gary Dubuque - Aug 13, 2011:

BTW, intelligence isn’t a function, it is the creation of function.  If you want to say creation of function is a function, have fun trying to define what that is in concrete terms.

If you ever intend to have a computer exhibit intelligence, it’s going to require a set of algorithms. That’s, like, how computers work, right? :) Manipulation of information via algorithms (functions!). Those functions may generate more functions or directly produce output, depending on what you’re trying to accomplish. But one must begin with some set of algorithms from which the rest are produced, just as our brains wouldn’t work without the ability to build and trim neural connections. Some capabilities simply have to be hard-coded.
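To make that concrete, here is a toy sketch (invented for illustration, nothing to do with any real bot) of a hard-coded function whose only job is to create new functions:

```python
# Toy example: a fixed "seed" function that creates new functions.
# Nothing here is intelligent; the point is only that generating
# functions is itself just another function.

def make_classifier(threshold):
    """Return a brand-new function built around the given threshold."""
    def classify(value):
        return "high" if value > threshold else "low"
    return classify

# Generated functions behave like any hand-written ones:
hot = make_classifier(30)   # e.g. a temperature cutoff in Celsius
print(hot(35))              # -> "high"
print(hot(12))              # -> "low"
```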

I’m interested in others’ opinions on this. What are the necessary building blocks? How much is allowed to be innate while you’d still call the resulting behavior intelligent?

I’m especially interested in hearing Gary elaborate on this point: how does a program “create function” without some hard-coded functions in place to do it? What constitutes a reasonable starting point? Or is artificial intelligence simply impossible?

 

 
  [ # 51 ]

I don’t think we’re going to find an answer to that any time soon, but I don’t really think it matters, to be honest. This discussion (and everyone participating in it) seems to be trying to isolate and identify something that may not be so easily treated as such. The Human mind is a wonderfully complex entity (note that I said ‘mind’ - NOT ‘brain’), and as a micro-ecology of the larger entity that is a Human being, it is greater than the sum of its parts; as such, it cannot easily be ‘dissected’ and examined separately from that whole without losing some of the context that defines it. What we term ‘intelligence’ (which we can’t even seem to agree on a definition of) is no different. To my thinking, even so seemingly simple a task as describing and defining intelligence is like two deaf people trying to describe a symphony - or even a TV jingle, for that matter.

All that aside, however, while we’ll likely never agree on what ‘intelligence’ is, we can certainly entertain the notion of coming up with a set of protocols and tests that can indicate the approximation or illusion of intelligence. After all, who’s to say that what we ourselves exhibit isn’t ‘actual’ intelligence, but just a close approximation? :)

Something else I think we need to consider is whether we’ve got our sights set too high at this stage in the game. For example, we consider a five-year-old child intelligent if said child can read fluently from a second-grade reading primer (as much as three years early), yet for a full-grown adult this is NOT considered intelligent. In fact, if that adult isn’t capable of going beyond that ability, then said adult is clearly considered ‘developmentally delayed’, or even (gasp!) ‘retarded’. Perhaps we’re using the wrong standards by which to judge AI. Perhaps we’re treating an ‘infant’ as an ‘adult’, and judging too harshly.

And before someone pops off with “but we’ve been developing Artificial Intelligence for over 50 years, and have been thinking about it for far longer!”, let me remind you that not every “living” entity has the same life cycle, let alone developmental cycle. Computers are not people, nor are they mayflies, or ‘main sequence’ stars. And Artificial Intelligence (as a field of study, among other things) is not a computer. There is no logical, coherent way to describe the life cycle (or even the development cycle) of an entity whose beginning and end we have never both seen. This is simple logic. Thus, how can we say with any degree of certainty whether we’re judging correctly what level of intelligence we should be testing for?

My thoughts on this are simple: start small, with low expectations, and keep increasing the difficulty of the tests at a slow, gradual rate until we see a failure. Then keep pushing at that level while monitoring progress. Seems simple to me, and far more productive than bickering over the meanings of words. :P
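Mechanically, that protocol might look like the sketch below (`test_at_level()` is a stand-in for whatever battery of tests applies at a given difficulty):

```python
# Sketch of the "raise the bar until failure" protocol described above.

def find_ceiling(agent, test_at_level, max_level=100):
    """Increase difficulty one level at a time; report the first failure."""
    for level in range(1, max_level + 1):
        if not test_at_level(agent, level):
            return level        # keep working (and re-testing) at this level
    return None                 # no failure within the range tried
```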

 

 
  [ # 52 ]

@CR, it is obvious you don’t know what intelligence is and you keep denying, trying to change, what I say it is.  Nor do you care to find out from the rest of the AI community, because you keep hammering me to spoon-feed you text - “how does intelligence create knowledge?  It must be an algorithm…”  I keep saying an algorithm is not what intelligence is.  An algorithm is a product of intelligence; it is what intelligence creates.  Duh.

Like Dave said, a child doesn’t start out with knowledge of how to do much.  By your definition, a child is not intelligent.  It doesn’t have functionality!  And we have to discuss functionality here because you insist.  And it needs to include things like parsing and understanding because that’s the practicality of AI.

Humans do have the halting problem (much more often than you’ll admit), hence the fancy term “analysis paralysis”.  Do you think I just made that up?  Hey, do you and your friends always know what place to go to for that casual dinner out? (“Where do you want to go?” “I don’t know, where do you want to go?” Oh, the irony of it all: there are so many options to pick from, and we can’t settle on one.)

The mirror experiment is something that’s learned, right?  You say we recognize things before we ever learn?  You say we organize stuff before we have the stuff?  And structure it too?  As if the Hopi Indians knew time the way you do - not.  Organizing, recognizing, structuring - that is the stuff; there is no separation of “things” from the “organization of things”, etc.

Principles of Natural Intelligence (if you would have looked into it online - and these are probably obsolete):
(1) The modal principle subserves feeling [Vertebrate]
(2) The diagonalization principle subserves coherence [Reptile (Amphibian)]
(3) Action is subserved by the decision [Mammals]
(4) The problem of finitization resolves into a figural principle [Primates]
(5) Finally, the phenomena of analysis reflect the action of the indexing principle [Homo sapiens]

There, are you happy now?  Again, I am just repeating other sources, as I did with that list of definitions.  I don’t necessarily believe these are the “features” of strong AI.

The source of those principles says the last principle is like the computation in a computer.  It is the principle that supports language.  I would guess, then, that you’d simulate the other four principles in a machine before you do things like NLP.

Hey, where’s understanding?  Where’s logic?  Where’s the “Coverage: Remember, recall, classify, verify, interpret, translate”?  Oh well, the principles are too primitive I guess, unless this is a matter of semantics too.

C R Hunt - Aug 10, 2011:

Let’s not play semantic games here people.

 

 

 
  [ # 53 ]

You know, if it weren’t for the fact that both you, Gary, and you, CR, are providing such good information with your arguments, I would have jumped in here a few posts back; and I’m only saying something now because the generally defensive stances you are both taking are beginning to take on a slight tinge of hostility that I would like to reduce, if I can. I firmly believe that you both have some valid points, and from what I’m reading, your viewpoints aren’t as far away from each other as you would care to admit. Like Gary said earlier, it’s the whole “po-tay-to/po-tah-to” conundrum that seems to be generating so much friction, and I’m fairly certain that neither one of you is going to convince the other.

I’d like to see this discussion continue, but I’d like each of you to take a few moments to think about the pitfalls of getting too emotionally involved with the discussion at hand, to try to release some of the frustration and (I hate to use the term here, but if it fits…) anger, and to start again, perhaps with a calmer attitude. You are both intelligent, mature individuals, and I’m sure that we can all enjoy this discussion without resorting to “spitwads at 30 paces”. :)

 

 
  [ # 54 ]

Oh, I intended to stop several turns ago.

Two points before I do. Remember Zeno’s paradox of the runner who can never finish the race: the runner covers half the distance to the finish line, then half of the remaining distance, then half of what remains after that, etc.  Understanding, for a computer, is like that: getting more and more information but never “Gestalting” (finitization).
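(The halves do sum to the whole distance, but only in the limit of infinitely many steps, which is the point: the machine keeps accumulating information without ever arriving.)

\[
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; 1
\]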

Substitute “mathematics” for “intelligence” and “formula” for “function” and the word play becomes more obvious.  I have a calculator on my computer. [Gary]Mathematics creates formulas.[/Gary] My computer does formulas.  It can formulate all sorts of knowledge like yards into meters or how fast a stone will be going when it hits the ground if you drop it from 50 feet.  There must be a formula for creating formulas, a formula for (artificial) mathematics.  [Gary]Mathematics is not a formula.[/Gary] How about the rules (and principles) of mathematics.  Let’s see, a triangle’s angles must add up to 180 degrees - that’s got to be one of the rules.  Wait, you say, in spherical geometry the rule is different? Well then I’ll use examples in the physical world.  What do you mean there are measurements that can’t be a definite value? What is this, that the length of the hypotenuse of a right triangle with sides of one unit is “irrational”?  Imaginary numbers, oh that must be a theory and not practical. Come on now, we made mathematics from the physical world, so why doesn’t my calculator have the correct answer for my simple triangle’s length, or my circle’s circumference?  I guess we can never have “artificial” mathematics.

Ok, this time I’m really gone.

 

 
  [ # 55 ]

Gary, I love you dearly, and have a great deal of respect not only for your contributions to this community, but also for your obvious passion for the subject of AI; but that made absolutely no sense to me at all. I’d rather not see you go, and I’m sure I’m not alone here. Heck, I’d even be willing to bet that CR would rather you stuck around, because you do have some good ideas and insights. Would it not be better to just set aside our conflicting viewpoints here, even if it means abandoning this particular subject of discussion? Just a thought. :)

 

 
  [ # 56 ]

Dave - I’m thinking (and hoping) that Gary means he has gone from this thread rather than leaving the site.

 

 
  [ # 57 ]

I know. But the thought is still the same, one way or the other. :)

 

 
  [ # 58 ]

(EDIT: I split this post into a couple of posts, considering it turned into more of a monster than anticipated.)

Gary Dubuque - Aug 14, 2011:

@CR, it is obvious you don’t know what intelligence is and you keep denying, trying to change, what I say it is.

Oh, so obvious. What a tricky philistine I am.

I agreed with you when you said that “intelligence creates knowledge.” But once we get into the (important!) detail of how this “creation” process works, we run into trouble. And it is all too easy to claim, as you did in another post, that one cannot know what this process is until it’s implemented. Why bother with this AI business at all?

I said that “creating” knowledge (intelligence) requires “recognizing, structuring, and manipulating knowledge”. How is that achieved? Sure, the AI could develop its own functions, but it needs some framework by which to do this…

Gary Dubuque - Aug 14, 2011:

Nor do you care to find out from the rest of the AI community, because you keep hammering me to spoon-feed you text - “how does intelligence create knowledge?  It must be an algorithm…”

...Hence my discussion of there being some building blocks in place for the bot to develop the methods it uses to create knowledge. If one does not intend to give the bot some starting place (some “brain”, if you will, to build from), then one might as well compile a blank page and say “AI, you take it from here!”

I’m not looking for spoon-fed text. I’m looking for direct statements. How does one create artificial intelligence without implementing some sort of code, Gary?

 

 
  [ # 59 ]
Gary Dubuque - Aug 14, 2011:

Like Dave said, a child doesn’t start out with knowledge of how to do much.  By your definition, a child is not intelligent.  It doesn’t have functionality!  And we have to discuss functionality here because you insist.  And it needs to include things like parsing and understanding because that’s the practicality of AI.

In no way does anything I’ve said imply that not having a lot of knowledge means one isn’t intelligent. If you persist in inventing arguments on my behalf, then you can keep enjoying pointing out how ridiculous your own constructions are. But we won’t really progress that way, will we?

The functionality I am interested in is the ability to incorporate new stimuli (any external input) and organize them internally in a consistent way that improves future knowledge acquisition. Hence my discussion of vision earlier. A child starts out with an amazing ability to acquire and learn and grow; children are constantly taking in so much more of their environment than adults are, learning how to focus on important stimuli and determining just what the important bits are.

You are completely misinterpreting me. At first I believed it was accidental, but now I wonder whether you intentionally warp my words into something trite and ridiculous. As Dave said, I don’t think our views are so different, but you are so insistent on being offended, on driving everyone else’s opinion into parody, that we can never have a real discussion. It’s tiring. Knock it off.

Want a productive discussion? How about directly addressing any of the many questions I’ve posed? I address your words directly. You ramble on about things I’ve never said.

 

 
  [ # 60 ]
Gary Dubuque - Aug 14, 2011:

Humans do have the halting problem (much more often than you’ll admit), hence the fancy term “analysis paralysis”.  Do you think I just made that up?  Hey, do you and your friends always know what place to go to for that casual dinner out?

If this “halting problem” does not lead to any great trouble for people, I’m not terribly worried about it for bots. One can always put terminating conditions in place to prevent perpetual “meditating on the nature of all things as triggered by the input data”.
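A minimal sketch of what such terminating conditions could look like (the knowledge-base hook `associations()` is invented for illustration):

```python
# Sketch: hard budgets keep "understanding" from recursing forever.

def ponder(knowledge_base, item, max_depth=3, max_facts=200):
    """Explore associations of new input, but only to a bounded depth."""
    frontier, seen = [(item, 0)], set()
    while frontier and len(seen) < max_facts:   # budget: total facts visited
        fact, depth = frontier.pop()
        if fact in seen or depth > max_depth:   # budget: association depth
            continue
        seen.add(fact)
        frontier.extend((f, depth + 1)
                        for f in knowledge_base.associations(fact))
    return seen
```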

Gary Dubuque - Aug 14, 2011:

The mirror experiment is something that’s learned, right?  You say we recognize things before we ever learn?  You say we organize stuff before we have the stuff?  And structure it too?

This is exactly what I mean about intentionally misinterpreting me. Am I really supposed to take you seriously if you insist on assuming that everyone else is this illogical?

For the sake of edification, I’ll entertain your drivel with a reply. I’m saying that infants are constantly receiving stimuli from their eyes, but the neural connections that lead to a consistent picture of their surroundings are still forming after birth. A feedback loop between the developing wiring and continuous new stimulation causes the neurons to grow and trim until they correctly produce an image of our surroundings. The example with the mirrors was meant to illustrate this.

My point was that our neurons are “hard coded”, if you will, to grow and trim and respond to stimuli. They didn’t learn to do that. But they use this “hard coded” ability to learn to see. That’s the part where they demonstrate intelligent development.

What’s so interesting about an example like this is that we don’t generally regard sight, or any other neurological process happening outside the conscious self, as an “intelligent process”. And yet the same mechanisms of neural growth and trimming that cause us to learn to see are also responsible for those abilities we do consider intelligent.

That’s why when you said “One knows how to recognize, structure, and manipulate pieces of information. An intelligent entity gains this knowledge.” I disagreed. We have the ability to think about the fact that we do all these things. This is knowledge. But the actions themselves are due to abilities of the brain that are not learned.

The “wiring” of neurons, if you will, is the “function” that the brain wrote based on input as well as its current state. But the ability to “wire” itself in the first place is “hard coded”.
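If it helps, here is that distinction as a cartoon in code: the update rule below is fixed (“hard coded”), while the weights it produces are the learned “wiring”. It is Hebbian-flavored and purely illustrative, not anyone’s actual architecture:

```python
import random

def update(weights, a, b, co_active, rate=0.1, floor=0.05):
    """Fixed rule: strengthen a connection when a and b fire together,
    decay it otherwise. The rule never changes; the wiring does."""
    w = weights.get((a, b), random.uniform(0.0, 0.2))  # initial random wiring
    w = w + rate if co_active else w * (1 - rate)
    if w < floor:
        weights.pop((a, b), None)   # "trim" a connection that fell silent
    else:
        weights[(a, b)] = w         # "grow" or keep the connection
    return weights
```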

 
