
The Lovelace Test
 
 

http://motherboard.vice.com/read/forget-turing-the-lovelace-test-has-a-better-shot-at-spotting-ai

Abstract

The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such ‘deep’ phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A — a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the ‘Lovelace Test’ in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds.

http://kryten.mm.rpi.edu/lovelace.pdf
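
Condensed into logical shorthand, the epistemic relation the abstract describes runs roughly as follows (a paraphrased sketch, not the paper’s verbatim formulation; the predicate names here are invented for illustration):

    % A: artificial agent; H: A's human architect; o: an output of A.
    % (1) A produces o; (2) o is repeatable rather than a fluke hardware error;
    % (3) H cannot account for how A produced o, even with full knowledge of
    %     A's architecture, knowledge base, and core functions.
    \mathrm{LT}(A, H) \;\iff\; \exists o\, \big[\, \mathrm{Outputs}(A, o)
        \,\wedge\, \mathrm{Repeatable}(A, o)
        \,\wedge\, \neg\, \mathrm{Explains}(H, A, o) \,\big]

Condition (3) is the heart of the test: knowing how A was built must not suffice for H to explain where o came from.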

 

 
  [ # 1 ]

Whilst I agree with some of the qualitative ideas behind the test, the test as put forth in the paper cannot be passed by any physical device, since it takes as an implicit axiom that anything running on computational principles cannot be considered ‘intelligent’. Needless to say, I strongly disagree with this assertion.

 

 
  [ # 2 ]

As challenges go, I’m a little unimpressed by this one, especially considering that there are already poetry-writing and painting robots, and, as they say in my area of expertise, “even a monkey can draw”. So I have a hard time imagining a convincing solution to it.

On a note of personal amusement, I know of Lady Lovelace’s “objection” through Alan Turing’s paper, and, as with Turing himself, it seems to me that her words have been misinterpreted. What she said sounded like a corporate disclaimer intended to calm down an angry mob: to assure the public that the product they built was in no way a danger, because it only did what you told it to do and was therefore under complete control.

Our most detailed information of Babbage’s Analytical Engine comes from a memoir by Lady Lovelace (1842). In it she states, “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”. This statement is quoted by Hartree (1949) who adds: “This does not imply that it may not be possible to construct electronic equipment which will ‘think for itself’, or in which, in biological terms, one could set up a conditioned reflex, which would serve as a basis for ‘learning’.”

 

 
  [ # 3 ]

The agent’s designers must not be able to explain how their original code led to this new program.

This is also terribly irresponsible: it amounts to the creators of strong AI having basically no idea what they’re doing.
Plus, if the designers can’t explain what happened inside the program, it will be just as much a behaviouristic assessment as the Turing Test, and there would still be no end to the discussion. And couldn’t the designers just pretend not to know how it works?

But it does strike a chord with something Alan Turing and other AI grandmasters expressed: that intelligence often seems to be defined as that which we do not understand. This test, however, assumes that the converse must also hold: that if we do not understand it, it must be intelligent. A textbook logical fallacy (affirming the consequent).
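
In propositional shorthand (a sketch; read I(x) as ‘x is intelligent’ and U(x) as ‘we understand how x works’):

    % Turing's observation, stated as a conditional:
    I(x) \rightarrow \neg U(x)
    % What the test assumes, which is its converse:
    \neg U(x) \rightarrow I(x)
    % Inferring the converse from a conditional is the textbook fallacy
    % of affirming the consequent (illicit conversion).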

 

 
  [ # 4 ]
Don Patrick - Jul 10, 2014:

The agent’s designers must not be able to explain how their original code led to this new program.

This is also terribly irresponsible: it amounts to the creators of strong AI having basically no idea what they’re doing.
Plus, if the designers can’t explain what happened inside the program, it will be just as much a behaviouristic assessment as the Turing Test, and there would still be no end to the discussion. And couldn’t the designers just pretend not to know how it works?

But it does strike a chord with something Alan Turing and other AI grandmasters expressed: that intelligence often seems to be defined as that which we do not understand. This test, however, assumes that the converse must also hold: that if we do not understand it, it must be intelligent. A textbook logical fallacy (affirming the consequent).

That’s what I assumed when I first read that criterion, but after reading the paper I realized that it was actually worse than that. The criterion the paper advocates explicitly rejects any approach equivalent to computation, no matter how far removed from the ‘standard model’ of computation it might be in a practical sense, and regardless of whether the programmer actually understands it in any meaningful way.
The paper closes by denying that it is actually advocating that thought is necessarily an irreducible process, but that denial rings hollow in light of the rest of the paper.

EDIT: As a side-note, the ‘intelligence is that which we do not understand’ definition has always seemed of dubious usefulness to me. It works as a predictor of people’s reactions to new AI solutions to problems previously unsolvable by computers, but not in any other sense.

 

 