About a year ago in some AI forum (I don’t think it was this forum) I saw a link to a news item where some U.S. government agency (maybe DARPA?) was seeking “Maxwell’s equations of thought”, and they were going to spend a lot of money to achieve that. I’ve been unable to relocate that news item, unfortunately. I remember being struck by the wording of that solicitation (maybe for SBIRs?) in that their powerful lust for intelligence was showing through: it was almost as if they were saying, “We want AI *bad*, and we’ll pay whatever it takes to get it.”

Even disregarding the emotional overtones of their lust for power, I was struck by the foolishness of that direction of research, and the more I thought about it, the more foolish it seemed. Maybe other people see it differently, so let me post the reasons I think this is a foolish direction of research for AI, and people can post any contrary opinions…

(1) emphasis on “equations”

The passion for “equations” reminds me of those old B movies about Nazis chasing scientists to get “ze secret formula” for some new weapon or some new technology. What is the big deal with equations, anyway? Equations are of course very valuable. In conventional numerical math, equations show the exact numerical relationships between multiple variables, such as how much force of compression would be needed to produce nuclear fission, or the density of atoms needed for sustained stimulated emission in lasers, all things that Nazis would have loved to know about. Another big bonus with equations is that usually the variables can be easily moved around to conveniently get an expression for the single missing piece (i.e., variable) that you need to isolate.

The main problem I see with reliance on “equations” is that there is a largely unstated assumption in the very phrase “Maxwell’s equations of thought”: that numerical math must be involved, as it is in physics, and further that the branch of numerical math being used must use scalars. In fact, there exist other entire branches of math, especially group theory and topology, that for the most part do not use numbers. Even if a numerical branch of math *were* involved, the variables might represent entities such as matrices instead of scalars. With matrices, equations become much more difficult: operations such as square roots are harder to perform on matrices, non-invertible (“singular”) matrices are common, so variables might not be isolatable via division, a large number of solutions may exist, and so on.
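To make that contrast concrete, here is a minimal sketch in Python (the language and the 2x2 example are my own illustration, not anything from the solicitation): with scalars you can always isolate a variable by dividing, but the analogous step on a matrix fails whenever the determinant is zero.

```python
# Solving [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule.
# Returns None when the matrix is singular (det == 0), i.e. when
# we cannot "divide by the matrix" to isolate the unknowns.
def solve2x2(a, b, c, d, e, f):
    det = a * d - b * c
    if det == 0:
        return None  # no inverse exists; zero or infinitely many solutions
    return ((e * d - b * f) / det, (a * f - e * c) / det)

print(solve2x2(2, 1, 1, 3, 5, 10))   # invertible: unique solution (1.0, 3.0)
print(solve2x2(1, 2, 2, 4, 5, 10))   # singular (second row is 2x the first): None
```

With a scalar equation ax = e, the only failure mode is a = 0; with matrices, an entire family of nonzero but singular matrices blocks the isolation step, which is the point being made above.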

(2) Maxwell’s equations represent a unified top-down view

The apparently separate laws and phenomena of electricity and magnetism were unified by James Clerk Maxwell, who published an early form of the equations, completed Ampère’s circuital law by introducing the displacement current term, and showed that these equations predict light propagating as electromagnetic waves. They were later rewritten by Oliver Heaviside in the more modern and compact vector calculus formalism he independently developed. Increasingly powerful mathematical descriptions of the electromagnetic field were developed into the twentieth century, enabling the equations to take simpler forms using more advanced mathematics.

http://en.wikipedia.org/wiki/History_of_Maxwell’s_equations

As is often the case, the most general physical laws, especially unified theories, are the culmination of many years of earlier research that painstakingly created separate formulas, and only after enough such formulas existed did somebody notice that they could be combined into a single more general formula. Therefore to start with a single grandiose set of equations like Maxwell’s seems very naive: the only realistic approach would be to start creating the first few such equations, which probably don’t yet exist in any form, as a foundation for later research to unify. It’s said that theories arise only after practice has perfected their practical application.

(3) equations may not be the key to strong AI

I looked up what I believe to be the original published source of the expression “Maxwell’s equations of thought”, and was led to Daniel Crevier’s book on the history of AI. Sadly, the very mention of such equations occurred only in a negative context: Doug Lenat basically admitted defeat in producing such equations, thereby gave up on any such approach to strong AI, and turned instead to knowledge coding as the only practical solution he could envision. His approach became the well-known Cyc project…

And so it was that in September 1988, Lenat began an MCC research report as follows: “I would like to present a surprisingly compact, powerful, elegant set of reasoning methods that form a set of first principles which explain creativity, humor, and common sense reasoning—a sort of ‘Maxwell Equations’ of thought.” Although such a discovery would have been a logical outcome of Lenat’s previous work in programming computers to figuratively go out into the world and make sense of what they saw, he continued: “I’d like very much to present [those reasoning methods], but sadly, I don’t believe they exist. So, instead, this paper will tell you about Cyc, the massive knowledge base project that we’ve been working on at MCC for the last four years.”

Stemming from an admission of defeat, Cyc (short for encyclopedia) is a $25-million research project that will last for two person-centuries. Lenat had become convinced that no amount of finessing and fancy footwork would ever let a machine discover by itself such elementary facts as “Nothing can be in two places at once,” or “Animals don’t like pain,” and “People live for a single solid interval of time.” The most salient discovery in AI since the Dartmouth conference is that we need to know a colossal number of these common-sense assertions to get by in the world. Lenat convinced his MCC sponsors that the woes of the new discipline stemmed from repeatedly trying to wriggle out of the need to encode this knowledge manually, tedious fact after painful fact, in machine-usable form.

(“AI: The Tumultuous History of the Search for Artificial Intelligence”, Daniel Crevier, 1993, page 240)

Even more sadly, the Cyc project itself is questionable. A friend of mine told me about the Cyc project in 1990, and I recognized immediately the extreme misdirection of the effort, as others have formally noted…

The Cyc project has been described as “one of the most controversial endeavors of the artificial intelligence history”, so it has inevitably garnered criticism.

http://en.wikipedia.org/wiki/Cyc

Nowadays I believe the Cyc project will become a terrific data point in the history of AI, since its focus is so clear and it’s an all-or-nothing endeavor whose success or failure will clearly establish codified knowledge as either the essential component of strong AI or a disgracefully failed direction. A less focused ad hoc AI architecture couldn’t provide such a clear-cut data point. Unfortunately, I still believe Cyc is destined for failure, but we’ll have to wait to know for sure.

(4) suitable mathematics for AI may not even exist

I plan to post another thread on this topic alone.

As Poincare noted, the real world is its own fastest simulator (Soddy 1933). As a result, no science can precisely anticipate the future states of natural systems in the face of ongoing physical and social processes through which the creative processes of evolution and emergence even now play themselves out.

http://www.fs.fed.us/eco/jsusfor5.htm

I read somewhere, possibly in Gleick’s book “Chaos”, that some systems like weather are so complicated, so sensitive to initial conditions, and so large, that it is literally impossible to simulate them well. It would literally take longer to simulate weather accurately over a long period of time than it would to just let the weather do what it was going to do anyway. Other systems that are similarly difficult to simulate would be the stock market, ecosystems, quantum-level descriptions of many atoms, and societies. One might also add the human brain to this list.
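The sensitivity claim can be shown with a toy system. Below is a sketch using the logistic map at r = 4, a standard textbook example of chaos (my choice of illustration, not a model of weather): a perturbation of one part in ten billion is amplified step by step until the two trajectories bear no resemblance to each other.

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r * x * (1 - x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # nudge the starting point by one part in 10^10

# The tiny initial difference roughly doubles each step, so well before
# step 50 the two runs are completely decorrelated.
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))
```

The practical upshot, which is the point of the paragraph above, is that long-range prediction of such a system demands exponentially precise knowledge of the initial state, so simulating it faithfully can cost more than letting the real system run.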

Imagine if the underlying “computations” of the human brain were merely a process that attempts to quickly assemble a model of the (visual, auditory, etc.) surroundings of the organism’s immediate environment. This would be like a child quickly manipulating some Silly Putty to mimic the 3D shape of an object the child sees in the real world. Then imagine trying to describe that process with mathematics. All the major existing branches of mathematics would fail miserably to succinctly describe such a process, because such a process is not inherently numerical (=> numerical math fails), it is too irregular to describe with equations (=> geometry fails), it is not reliant on well-defined states (=> group theory fails), it is not based on an algorithm (=> automata theory fails), and it is not reliant on connectedness (=> topology fails). Basically the assumption was false: that an existing form of mathematics was the best approach to describing the phenomenon.

My opinion stems from this observation: I believe that the mathematics of AI either doesn’t exist at all, or is some branch of mathematics that is either very obscure or hasn’t been invented yet. Even if such a branch of mathematics existed and applied to intelligence or thought, I don’t believe it would be numerical, and therefore the term “equations” would be very misleading to describe such a mathematical system.

We’ve failed miserably for 60 years to produce strong AI, and I believe one of the most basic causes is that we haven’t yet seriously reconsidered our underlying assumptions, including the assumption of the existence of something like “Maxwell’s equations of thought.”

The other path is more difficult but in the long run more rewarding, and I hope it will be followed; I think it will be. That is to continue and extend the present vigorous research into the associative and creative processes of the human brain, and simultaneously to begin thinking about wholly new kinds of computers whose thinking will be holistic and associative rather than linear. We have to go a long way back, as far back as Babbage, and explore a branch that we passed by then.

(“2081: A Hopeful View of the Human Future”, Gerard K. O’Neill, 1981)