AI Zone Forum


Grammar, Language, Knowledge & Thought
 
 

I love these videos from the Singularity Institute.

Start off with Mr. Goertzel:

http://www.vimeo.com/7320152

Also, Stuart Hameroff’s work is fascinating ... good vid at http://www.vimeo.com/7320518

 

 
  [ # 1 ]

I think Mr Goertzel is doing something similar to what I am doing.

 

 
  [ # 2 ]

The guy *DOES* know what he’s talking about!

 

 
  [ # 3 ]

(RESPONSE, PART 1)

Whew, lots of heavy stuff in this video here to discuss, all of which took me a while to research, digest, write up, and think about! Here are my thoughts on that video after spending some time on its contents. That’s the first Singularitarian material I’ve seen in detail online, though I expect to get heavily involved with them in the future (2013). I just about have to: it’s just too exciting and my ideas fit so perfectly with what they’re doing.

“Ben Goertzel at Singularity Summit 2009—Pathways to Beneficial Artificial General Intelligence”
http://vimeo.com/7320152

Below are some of the references Goertzel mentioned, for convenience to readers here.

Web sites:

goertzel.org/Summit09 - Goertzel’s site, which contains a video of the slides from this presentation

http://www.kurzweilai.net/ - Ramona chatbot

http://opencog.org/ - details of Goertzel’s project, they need C++ programmers (for free?)

http://wiki.opencog.org/w/The_Open_Cognition_Project

“How Bill Gates Destroyed the Universe” - cartoon made by Goertzel’s 12-year-old daughter
http://lepuppy.org/gates/
http://www.youtube.com/watch?v=6fIBssx_qVY

Society for the Promotion of Universal Nonexistence Through Malicious AI
http://www.deadlyai.org - only a joke
http://www.acceleratingfuture.com/michael/blog/2009/10/society-for-the-promotion-of-universal-nonexistence-through-malicious-ai/

Google Videos
http://www.google.com/videohp
“Stages of Ethical Development in Artificial General Intelligence Systems”
http://www.youtube.com/watch?v=T3nSsQxMEj8
“Chairman Mao Zedong” - #2 recommendation by Google Videos for the above AGI video!
http://www.youtube.com/watch?v=4OPeCX7Zl78

agi-conf.org/2010 - AGI conference in Switzerland, 2010

Companies and universities:

Novamente LLC - Goertzel’s own company

Biomind LLC - spinoff of Novamente LLC
http://wp.novamente.net/?page_id=17
http://wp.biomind.com/

Xiamen University, South China, artificial brain lab

Genescient - Goertzel worked for them, “Methuselah flies” research

Books:

“The Hidden Pattern” (Ben Goertzel, 2006)
“Universal Artificial Intelligence” (Marcus Hutter, 2004)
“Building Better Minds” (Ben Goertzel, upcoming)

 

 
  [ # 4 ]

(RESPONSE PART 2)

For my taste, Goertzel’s general approach was too simplistic. It reminds me of all those people who fill up neural network conference proceedings with hybrid systems, believing they really have something good merely because they took two different technologies (usually neural networks + expert systems) and blindly hoped each would somehow overcome the other’s inherent weaknesses, without any deeper design effort. Goertzel uses the same approach: he lists the various promising technologies for AGI, then combines them, somewhat blindly hoping that something special will occur on its own as a result. The technologies he listed are:

1. bio
2. nano
3. neuro
4. robo
5. cogno - artificial cognitive science, which was his focus during this talk

His list of ways to try to produce AGI are:

1. extend existing narrow AI programs, like Google or autonomous cars or game-playing programs
2. add commonsense understanding to a Chatbot, like Ramona by Novamente on kurzweilai.net
3. study, then emulate the brain in computers
4. artificial life approach to evolution, “in silico evolution”
5. derive AGI designs via math theory, as in the book “Universal Artificial Intelligence” (Marcus Hutter)
6. integrative cognitive architecture - Goertzel’s approach in this talk
  a. LIDA architecture (by Stan Franklin)
  b. SOAR (by John Laird, Allen Newell, Paul Rosenbloom)
  c. OpenCog system (by Ben Goertzel)
(7. quantum computers, quantum gravity computers, the next talk by Stuart Hameroff)

My opinion of these:

1. extend existing narrow AI programs - ridiculous: we’ve tried this to no avail since the 1950s
2. add commonsense understanding to a Chatbot - chatbots are irrelevant, commonsense understanding is the key issue, but needs a serious foundation, not this kind of hacking
3. emulate the brain in computers - ridiculous: we’ve tried this to no avail with neural networks since the 1960s, known to be a flawed approach, I’ll post a thread about this
4. artificial life approach to evolution - ridiculous: a last resort that appeals to randomness for some kind of enlightenment
5. derive AGI designs via math theory - very promising, but a very novel new math is needed, I’ll post a thread on this topic one of these days
6. integrative cognitive architecture - the simplistic hybrid approach I mentioned above; SOAR is fairly disparaged nowadays as a cognitive model
7. QC: ridiculous

So in general I was quite disappointed in the naivety of these approaches. At least that assures me that the Singularitarians aren’t likely to beat me to AGI for a while.

I don’t like it when people mispronounce “paradigm” as “PAIR-a-dime”, as Goertzel does in this video. The preferred pronunciation is “PAIR-a-dim”, even though I know most people mispronounce it nowadays. To me such mispronunciations signal a lack of attention to detail and a reliance on other people’s ignorance, which always casts suspicion on the quality of the speaker’s work.

Note again the mention of how many people are expecting quantum computing (QC) to be the foundation of artificial general intelligence (AGI). I read comments to this effect all the time, and I can only conclude that the people who say that haven’t looked seriously into QC and its algorithms. It’s bad stuff: it is extremely difficult to program even a simple search or factoring algorithm without dipping heavily into arcane theorems from number theory with which few people are familiar, plus quantum Fourier transforms, Hadamard transformations, Toffoli gates, Dirac notation, and more. QC programming is *very* different from regular computer programming, and even classical/digital computer programming is a step removed from natural computing, so QC programming is way the heck out there in terms of extremely unnatural high complexity. Fortunately, Goertzel is of the same opinion as I am: QC is not the way to go for AGI.

However, disappointingly, he says that even if QC is the way to go, that means “we just have to rejigger our algorithms to run on quantum computers”, which is very sad for me to hear, since that seems to identify him as one of those people ignorant about the extreme complexity of even basic QC algorithms.

Unfortunately, Goertzel admits all of his approaches are based on digital computers, which I was shocked to hear. This is extremely naive in my view. We need (at least partly) analog computers for AGI! I don’t mind imparting this critical insight to the world. Nobody is likely to take me seriously, anyway, and even if they did, they likely wouldn’t know how to implement this general principle in a useful way for AGI. If I get the time, I’ll post my reasons for this conclusion, and I’ll give examples of famous people who made fools of themselves because they didn’t realize this.

 

 
  [ # 5 ]

(RESPONSE PART 3)

There are a few other statements he throws out that have heavy implications that I don’t have time to discuss here:

()
“I saw the man with the telescope.” - classic ambiguous sentence
To me, this has implications about the need to create a new language for the human-machine interface.
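A toy parser makes the ambiguity concrete. The little CNF grammar below is my own illustrative sketch (nothing from the talk); a CYK-style chart that counts derivations finds exactly two parses of the sentence, one attaching the prepositional phrase to the verb phrase (I used the telescope) and one to the noun phrase (the man had it):

```python
from collections import defaultdict

# Toy CNF grammar (illustrative only) for the classic ambiguous sentence.
RULES = [  # A -> B C
    ("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
    ("NP", "NP", "PP"), ("NP", "Det", "N"), ("PP", "P", "NP"),
]
LEXICON = {"i": "NP", "saw": "V", "the": "Det",
           "man": "N", "telescope": "N", "with": "P"}

def count_parses(words):
    n = len(words)
    # chart[(i, j)][A] = number of parse trees deriving words[i:j] from A
    chart = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(words):
        chart[(i, i + 1)][LEXICON[w]] = 1
    for span in range(2, n + 1):          # CYK: build up wider spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # every split point
                for a, b, c in RULES:
                    chart[(i, j)][a] += chart[(i, k)][b] * chart[(k, j)][c]
    return chart[(0, n)]["S"]

print(count_parses("i saw the man with the telescope".split()))  # 2
```

Two parses, so a pipeline that commits to a single parse tree has to guess, which is exactly why the sentence matters for human-machine interfaces.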

()
“So that language isn’t just an empty series of tokens.” - what linguists call “symbol grounding”
To me, this statement has important implications for chatbot programming and for some of the deepest issues in AI.

()
typical chatbots: text => text
Goertzel’s system: text => knowledge => text
Same comment as above—what chatbots need to do to achieve AGI / commonsense reasoning. Goertzel is at least doing this right, and this shows deep insight.
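A minimal sketch of the difference (all names and the three-word grammar are my own hypothetical toy, not Goertzel’s system): a text => knowledge => text bot converts what it reads into stored triples and regenerates answers from that store, instead of matching input text directly to output text:

```python
KB = set()  # the knowledge layer: (subject, verb, object) triples

def respond(text):
    """text => knowledge => text: statements update KB, queries read it back."""
    words = text.lower().strip(" .?!").split()
    if words[:2] == ["what", "do"] and len(words) == 4:   # query: "What do cats eat?"
        subj, verb = words[2], words[3]
        objs = sorted(o for s, v, o in KB if s == subj and v == verb)
        return f"{subj} {verb} {' and '.join(objs)}." if objs else "I don't know."
    subj, verb, obj = words                               # statement: "Cats eat fish."
    KB.add((subj, verb, obj))
    return f"OK: learned ({subj}, {verb}, {obj})."

print(respond("Cats eat fish."))
print(respond("What do cats eat?"))  # cats eat fish.
```

The point is that the answer is generated from the triple, not retrieved as canned text, which is the (tiny) seed of commonsense reasoning.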

()
richness from the real world not available in the virtual world
This has heavy implications, mirroring almost exactly what the famous AI critic Herbert Dreyfus mentioned:

3. Since computers are not in a situation, and since no one understands how to begin to program primitive robots, even those which move around, to have a world, computer workers are faced with a final problem: how to program a representation of the computer’s environment. We have seen that the present attempt to store all the facts about the environment in an internal model of the world runs up against the problem of how to store and access this very large, perhaps infinite amount of data. This is sometimes called the large data base problem. Minsky’s book, as we have seen, presents several ad hoc ways of trying to get around this problem, but so far none has proved to be generalizable.

In spite of Minsky’s claims to have made a first step in solving the problem, C. A. Rosen in discussing current robot projects after the work reported in Minsky’s book acknowledges new techniques are still required:

We can foresee capability of storing an encyclopedic quantity of facts about specific environments of interest, but new methods of organization are badly needed which permit both rapid search and logical deductions to be made efficiently.

In Feigenbaum’s report, there is at last a recognition of the seriousness of this problem and even a suggestion of a different way to proceed. In discussing the mobile robot project at the Stanford Research Institute, Feigenbaum notes:

It is felt by the SRI group that the most unsatisfactory part of their simulation effort was the simulation of the environment. Yet, they say that 90% of the effort of the simulation team went into this part of the simulation. It turned out to be very difficult to reproduce in an internal representation for a computer the necessary richness of environment that would give rise to interesting behavior by the highly adaptive robot.

We have seen that this problem is avoided by human beings because their model of the world is the world itself. It is interesting to find work at SRI moving in this direction.

It is easier and cheaper to build a hardware robot to extract what information it needs from the real world than to organize and store a useful model. Crudely put, the SRI group’s argument is that the most economic and efficient store of information about the real world is the real world itself.

This attempt to get around the large data base problem by recalculating much of the data when needed is an interesting idea, although how far it can go is not yet clear. It presupposes some solution to the wholistic [sic] problem discussed in 1 above, so that it can segment areas to be recognized. It also would require some way to distinguish essential from inessential facts. Most fundamentally, it is of course limited by having to treat the real world, whether stored in the robot memory or read off a TV screen, as a set of facts; whereas human beings organize the world in terms of their interests so that facts need to be made explicit only insofar as they are relevant.
(“What Computers Still Can’t Do: A Critique of Artificial Reason”, Herbert L. Dreyfus, 1992, pages 299-300)

Here’s a quote relating to my earlier response’s comment about the amount of abstruse number theory required to program even a basic (factoring) algorithm (Shor’s Algorithm) in QC:

So it all comes down to the likelihood of our being lucky. We show in Appendix M that the probability is at least 0.5 that a random number a in G_pq has an order r that is even with a^(r/2) ≢ −1 (mod pq). So we do not have to repeat the procedure an enormous number of times to achieve a very high probability of success. If you’re willing to accept the fact that you don’t have to try out very many random numbers a in order to succeed, then this elementary argument is all you need to know about why period finding enables you to factor N = pq. But if you’re curious about why the probability of good fortune is so high, then you must contend with Appendix M, where I have constructed an elementary but rather elaborate argument, by condensing a fairly large body of number-theoretic lore into the comparatively simple form it assumes when applied to the special case in which the number N is the product of two primes.
(“Quantum Computer Science: An Introduction”, N. David Mermin, 2007, page 87)
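To make Mermin’s point concrete, here is a purely classical simulation of that post-processing (my own sketch; on a real quantum computer, only the order-finding step would be replaced by the period-finding circuit and quantum Fourier transform, and the number theory around it stays):

```python
from math import gcd
from random import randrange

def order(a, N):
    """Multiplicative order r of a mod N: smallest r with a^r = 1 (mod N).
    This is the step a quantum computer would do via period finding + QFT."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N):
    """Classical post-processing of Shor's algorithm for N = p*q."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                  # stumbled on a factor outright
            return d, N // d
        r = order(a, N)
        # Retry on a "bad" a: odd order, or a^(r/2) = -1 (mod N).
        # Per Mermin, a random a is bad with probability at most 0.5.
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            continue
        p = gcd(pow(a, r // 2, N) - 1, N)   # nontrivial factor, guaranteed here
        return p, N // p

print(factor_via_period(15))   # (3, 5), in some order
```

Even this classical half of the algorithm leans on the order/gcd machinery from number theory, which is the "abstruse" part I was complaining about; the quantum circuit on top of it only makes things harder.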

Anyway, thanks Victor, for making me/us aware of all this heavy material, much of which is new to me.

 

 

 
  [ # 6 ]

No problem Mark

You may also be interested in the AGI conference videos at http://agi-conference.org/2012/.  The latest conference was just a week ago.

Your comment regarding Goertzel’s approach:

“For my taste, I thought Goertzel’s general approach was too simplistic”
—Very much agreed; most AI researchers are limited to this ‘predication is everything’ view of intelligence.

“I saw the man with the telescope.” - classic ambiguous sentence
To me, this has implications about the need to create a new language for the human-machine interface.

I believe an AI can deal with ambiguity of this nature.

 

 
  [ # 7 ]

Ray Kurzweil (father of the Singularity discussion) has just joined Google as Director of Engineering.
http://www.zdnet.com/google-hires-kurzweil-a-look-at-the-returns-7000008844/

Google is one of the few companies that has enough data (and the hardware to use it) to start to overcome the sparsity of examples problem in machine learning.

A fundamental problem that makes language modeling and other learning problems difficult is the curse of dimensionality. It is particularly obvious in the case when one wants to model the joint distribution between many discrete random variables (such as words in a sentence, or discrete attributes in a data-mining task). For example, if one wants to model the joint distribution of 10 consecutive words in a natural language with a vocabulary V of size 100,000, there are potentially 100000^10 −1 = 10^50 −1 free parameters.

A Neural Probabilistic Language Model
https://d19vezwu8eufl6.cloudfront.net/neuralnets/reading_list/Neural probabilisic language models.pdf
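The arithmetic in that quote checks out, and a back-of-the-envelope comparison (the layer sizes below are made-up illustrative numbers, not from the paper) shows why the neural parameterization is the whole point:

```python
V, n = 100_000, 10           # vocabulary size and context length, from the quote
table_params = V**n - 1      # full joint table over 10 words: 10^50 - 1 parameters
print(f"{table_params:.3e}")

# A (hypothetical) Bengio-style neural LM: shared word embeddings plus one
# hidden layer replace the table with a few tens of millions of parameters.
d, h = 100, 500                             # embedding size, hidden units (illustrative)
neural_params = V*d + (n - 1)*d*h + h*V     # embeddings + input->hidden + hidden->output
print(f"{neural_params:,}")                 # ~60 million, versus 10^50
```

Sharing the embedding matrix across all word positions is what collapses the exponential table into something trainable, and it is also what Google has the data to actually train.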

2. add commonsense understanding to a Chatbot, like Ramona by Novamente on kurzweilai.net

Commonsense learning for chatbots creates a number of interesting problems.
On the one hand, there are repositories that you could use to help bootstrap the learning process.
Concept Net (760+MB compressed):
http://conceptnet5.media.mit.edu/

But how should we filter, store and employ the data if we have it?
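One plausible answer, sketched below with hypothetical names (ConceptNet’s real edge format and API differ): store each edge as (concept, relation, concept, weight), index it from both endpoints for fast retrieval, and filter on the confidence weight at lookup time:

```python
from collections import defaultdict

index = defaultdict(list)   # concept -> [(relation, other_concept, weight), ...]

def store(start, rel, end, weight=1.0):
    """Index a ConceptNet-style edge from both of its endpoints."""
    index[start].append((rel, end, weight))
    index[end].append((rel, start, weight))

def lookup(concept, min_weight=0.5):
    """Employ step: retrieve edges, filtering out low-confidence noise."""
    return [(r, other) for r, other, w in index[concept] if w >= min_weight]

store("dog", "IsA", "pet", weight=0.9)
store("dog", "CapableOf", "bark", weight=0.8)
store("dog", "AtLocation", "moon", weight=0.1)   # noise: filtered out below
print(lookup("dog"))
```

Filtering at retrieval time rather than at load time is a design choice: it keeps the noisy edges around in case a later, smarter consumer can use them.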

There are other components of AGI that are learned processes. Things like math, the directions to your house, or how to respond to a given input are taught to humans, often with much more patience than we would have when interacting with a chatbot.

Knowledge representation and retrieval is still to be solved if we are to create a more general artificial intelligence.

 

 
  [ # 8 ]
Merlin - Dec 17, 2012:

Ray Kurzweil (father of the Singularity discussion) has just joined Google as Director of Engineering.

Yes, I just saw that this morning, too. I considered posting it in the news section of this forum, but didn’t. Would you care to do us the honor?

Merlin - Dec 17, 2012:

Knowledge representation and retrieval is still to be solved if we are to create a more general artificial intelligence.

Exactly. Knowledge representation (KR) is probably the most critical missing piece of AI, and that’s the topic of another thread I plan to post, unless somebody beats me to it. With proper KR, those problems of training/learning you mentioned would almost entirely disappear, I believe, although likely they would be replaced by other problems, like maybe capacity or maybe 2D circuit limitations.

I was about to add an excerpt from some online page to support my claim about KR being probably the most critical unanswered question in AI, but discovered to my surprise that there are no good lists I could find anywhere online that list open problems of AI! One guy’s blog supposedly listed 21 of them, but that’s gone now. One list of three is common online, but is pretty stupid, I think. Sometimes I feel like this whole field is absolutely lost! People can’t decide on a definition of intelligence, people don’t realize the importance of analog processing, people keep talking about quantum computers, no science seems to want to list first principles anymore, nobody can even agree if we’re still in “AI Winter” or not, and to be honest, I haven’t seen a single good, new idea in AI in 20 years, no exaggeration. The five current directions I see in AI (chatbots, agents/swarms, data mining, semantic web, Bayesian nets) all date back to the ‘80s or even earlier, some *much* earlier!

Not even the following list of unsolved problems in neuroscience mentioned KR:
http://en.wikipedia.org/wiki/List_of_unsolved_problems_in_neuroscience

Drat. If you want something done right, you gotta do it yourself… :-(

 

 

 
  [ # 9 ]

Ah yes, Mr Ray Kurzweil - has anyone here read his latest book?

Image Attachments
ray_kurzweil_article.jpg
 

 