Is AI-research looking for a ‘quick fix’?
 
 
  [ # 16 ]
Carl B - Mar 29, 2011:

Strong AI will likely only be attained through extensive collaboration and standardization of the various components of AI (lexicon, NLP, memory, emotion, self, learning, reasoning, etc.).

The idea that a single person is so smart that only they can make the first strong AI is kind of old-fashioned in this age of cloud computing and social interconnectedness, which act as force (or brain) multipliers when applied to specific challenges.

Something like Google could be a good example of how it will eventually be achieved, imo: thousands of super-smart people focused on the individual processes and concepts, backed by Googilian $$$, but able to assemble and distribute it as something useful/practical to the general public.

So, back to my original thought, as individuals fiddling with isolated solutions, the little guy really benefits from the “easy” and open source components for pseudo machine intelligence, but no one individual will likely ever be the unitary father/mother of Strong AI.


Google or IBM’s Watson could continue to be tweaked or evolved until someday it spontaneously ignites into a flame of intelligence. On the other hand, having deep pockets and lots of resources does not guarantee the ability to create the next big thing. The tech landscape is littered with examples.

“There is no reason for any individual to have a computer in his home.”
- Ken Olsen, President, Digital Equipment, 1977

Sometimes disruptive breakthroughs happen due to the focused, singular vision of an individual or small team. Apple, Microsoft, and Google all had their roots in humble beginnings.

Think about the stories of Bill Gates, Steve Jobs and Walt Disney. There is nothing wrong with “Big Hairy Audacious Goals”.

 

 

 

 

 
  [ # 17 ]
C R Hunt - Mar 29, 2011:

I would agree with this. I think the reason isn’t necessarily that this generation of computer programmers/roboticists is not enthralled by the problem of strong AI, just that it isn’t a practical pursuit, both in terms of funding (the magic word) and the scope of the problem.

I do agree that money has (unfortunately) a lot to do with it. Monetizing research is effectively what drives research financially in the first place. However, there is a lot of information available pointing to the idea that there might be a lot of money in strong-AI, so funding for such a project should not be a big problem. Funding of course also requires a properly defined plan, and that, together with the simple fact that nothing has worked for the last 60-odd years, might be an important factor in the current situation.

C R Hunt - Mar 29, 2011:

The closest I’ve found is the MIT Computational Cognitive Science Group.

Thanks for the link, I’ll check it out smile

 

 
  [ # 18 ]
Merlin - Mar 29, 2011:

...IBM’s Watson could continue to be tweaked or evolved until someday it spontaneously ignites into a flame of intelligence.

Don’t hold your breath for it. The Watson team has stated in several press outings that Watson is NOT artificial intelligence; it’s a brute-force search machine that they now hope to sell to big corporations more or less ‘as is’ (i.e. making my point about the ‘quick fix’).

 

 
  [ # 19 ]
Hans Peter Willems - Mar 29, 2011:

... it seems that most official projects are NOT aiming at integration but instead at specific applications that have little to do with strong-AI. Most seem to be aimed at expert systems, robotic solutions that don’t need real AI, etc.

This is where I would go back to the Google model, where coordinated brute force is being applied to numerous individual issues: the “Google AI” that drives the “instant” feature, Google Voice (both voice recognition and TTS), the indexing of the entire interwebs (the ultimate human and machine info db), etc.

Epic: http://www.youtube.com/watch?v=OQDBhg60UNI

While these could be dismissed as “specific applications that have little to do with strong-AI”, you could say the same thing about understanding only a neuron, a ganglion, the eye, the optic nerve, auditory sensing, etc., etc.: all interesting topics on their own, but needing to be reverse-engineered, understood, and/or combined to one degree or another until you figure out at what point you have made a “person” or Intelligence.

 

 

 
  [ # 20 ]

Carl, I do agree with you for the most part. However, where I differ is in the ‘and/or combined to one degree or another’; most projects (I know I generalize here, but it serves the purpose of the discussion) are NOT aiming at that ‘integration stage’. So while many projects are indeed working on a ‘part of the puzzle’, most are not working on making their ‘part’ actually fit ‘into the puzzle’.

 

 
  [ # 21 ]

Hi Hans,
Let me see if I understand your point with an example.

During the 1960s the US spent a considerable amount of resources going to the moon. With a vision to put a man on the moon…and a seemingly infinite supply of money…multiple organizations worked on designing, constructing, and testing individual pieces of a big project…delivery systems, command module, lunar module, recovery system, etc. It happened in 1969.

Now, to get to that point, technology had to develop. For example, these were all required: liquid-fuel rocketry (Goddard), radio transmission (Marconi), the invention of the transistor, navigation computers, the launching of satellites, putting a man into space, space suits, astrophysics, etc. I believe the timeline for all of this was maybe 70 years or so.

So, I think you are saying the AI community is content to create and sell “launching satellites” to those who want them. However, no one is making a concerted effort to get to the moon or Mars. Is that about right?

I do believe that many, if not most, of these smaller bits are in fact being worked on by various individuals, organizations, and academic institutions. The question is ‘who will seize the opportunity’ to start putting it all together? If there is an entrepreneur such as those who brought us the PC and operating systems, then the first significant AI program will still be found lacking in features…and maybe even give us a blue screen of death. =)

Wanna Solve Impossible Problems? Find Ways to Fail Quicker

I like that title. I backed up from working on my Walter bot (way too big and ambitious) to working with my much smaller XBot, since I needed to figure out all sorts of smaller problems quickly.

Thanks for the topic. I hadn’t thought about this very much.

Regards,
Chuck

 

 
  [ # 22 ]
Hans Peter Willems - Mar 29, 2011:

Let’s put another angle into this discussion; can anyone point me to any official projects that are working towards strong-AI, general-AI, AGI, or whatever you want to name it, either by trying to do it all or by integrating several other (smaller) projects together?

Cognitive Computing Research Group - University of Memphis
http://ccrg.cs.memphis.edu/index.html

The CCRG’s research revolves around the design and implementation of cognitive, sometimes “conscious,” software agents, their computational applications, and their use in cognitive modeling.


ConAg: A Reusable Framework for Developing “Conscious” Software Agents
http://ccrg.cs.memphis.edu/assets/papers/ConAg - a reusable framework for developing.pdf

 

 
  [ # 23 ]

http://en.wikipedia.org/wiki/Blue_Brain_Project

 

 
  [ # 24 ]
Chuck Bolin - Mar 30, 2011:

So, I think you are saying the AI community is content to create and sell “launching satellites” to those who want them. However, no one is making a concerted effort to get to the moon or Mars. Is that about right?

A good analogy indeed smile

Chuck Bolin - Mar 30, 2011:

I do believe that many, if not most, of these smaller bits are in fact being worked on by various individuals, organizations, and academic institutions. The question is ‘who will seize the opportunity’ to start putting it all together?

Yes, but there is a problem with that. Let’s get back to your analogy: the result (going to the moon) was reached in a fairly short time period, and I think that was because the overall goal was stated BEFORE everybody started to work on all the little bits and pieces. In AI-research many projects are working on bits and pieces that might or might NOT fit into a greater scheme. The problem that I see in this approach (as an engineer) is that there is no common ‘specification’ that the parts are to adhere to if they are to be implemented in a greater scheme. So I’m looking at this from an engineering point of view: you need to know what you are building to get the parts right. To me it seems that exactly that is (currently) missing in AI-research.
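
To make the ‘common specification’ point concrete, here is a minimal sketch (in Python, with hypothetical names, not taken from any existing project) of what such a shared contract could look like: every group building a ‘piece of the puzzle’ would implement the same interface, so the pieces can actually be assembled later.

from abc import ABC, abstractmethod

class CognitiveComponent(ABC):
    """Hypothetical common contract that every AI 'puzzle piece' would implement."""

    @abstractmethod
    def process(self, working_memory: dict) -> dict:
        """Read from and write to a shared working memory, then return it."""

class Parser(CognitiveComponent):
    def process(self, working_memory: dict) -> dict:
        # Toy stand-in for a real NLP component: just tokenize the input text.
        working_memory["tokens"] = working_memory.get("input_text", "").split()
        return working_memory

class Reasoner(CognitiveComponent):
    def process(self, working_memory: dict) -> dict:
        # Toy stand-in for a reasoning component: report on what the parser left behind.
        working_memory["analysis"] = "%d tokens seen" % len(working_memory.get("tokens", []))
        return working_memory

def run_pipeline(components, working_memory):
    # Because every component honours the same specification,
    # integration is just a matter of chaining them together.
    for component in components:
        working_memory = component.process(working_memory)
    return working_memory

print(run_pipeline([Parser(), Reasoner()], {"input_text": "go to the moon"}))

The toy components are beside the point, of course; what matters is that the interface is agreed on up front, and that is exactly the part I find missing in most current projects.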

 

 
  [ # 25 ]

Merlin, Gary, thanks for the links, I’ll check them out.

 

 
  [ # 26 ]
Gary Dubuque - Mar 30, 2011:

http://en.wikipedia.org/wiki/Blue_Brain_Project

Ah yes, I’ve read about this before, in an article about the heavy discussion among AI-researchers about the (in)validity of this approach. In my opinion (which I share with many AI-researchers), anything based on, or related to, neural networks is a dead end (sorry to Jan Bogaerts, that’s how I see it).

The press coverage on the Blue Brain project was also a bit hyperbolic; it stated that they had simulated a rat’s brain while in reality they had only simulated ONE cortical column (as can be read on the Blue Brain website).

To the point of this discussion, the problem with neural-net-related projects is that they are in fact NOT working towards a grand design that will/can implement all the small pieces. Instead these projects rest on the belief that everything will be solved by implementing it as a neural network. There is very little overlap (none I’ve found so far) between neural-net-related projects and (for example) NLP-research or ‘functional design’. Neural net researchers believe that everything will mystically emerge from modeling the human brain in software (instead of modeling the human brain’s BEHAVIOR).

So, to not derail this discussion and stick to the topic, the Blue Brain project is definitely not the kind of project I was referring to (a project that is aimed at a ‘grand design’ and to implement all the pieces).

 

 
  [ # 27 ]

Carl B,

In general, we don’t have mainframes at home yet.

 

 
  [ # 28 ]
Hans Peter Willems - Mar 30, 2011:

The problem that I see in this approach (as an engineer) is that there is no common ‘specification’ that the parts are to adhere to if they are to be implemented in a greater scheme. So I’m looking at this from an engineering point of view: you need to know what you are building to get the parts right. To me it seems that exactly that is (currently) missing in AI-research.

The problem I see (as a scientist tongue laugh ) is that you can’t set specifications at the exploratory phase of research. When you’re still testing new ideas, there needs to be the freedom to “build” in impractical ways. Once something works with duct tape and a prayer, you can pass it off to the engineers to standardize. wink

 

 
  [ # 29 ]
Hans Peter Willems - Mar 30, 2011:

[...] Neural net researchers believe that everything will mystically emerge from modeling the human brain in software (instead of modeling the human brain’s BEHAVIOR).

So, to not derail this discussion and stick to the topic, the Blue Brain project is definitely not the kind of project I was referring to (a project that is aimed at a ‘grand design’ and to implement all the pieces).

I don’t agree. I think the goal of re-creating the human brain is very much in line with the goal of creating strong AI. Whether or not you think such an approach will be successful is an entirely different question. smile The researchers in this case aren’t developing what we traditionally think of as a neural net. They are actually modelling neural cells themselves, with a neural net-type architecture emerging as a result (presumably).
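
To illustrate the distinction (this is my own toy sketch in Python, with made-up parameters, and has nothing to do with Blue Brain’s actual models): ‘modelling the cells themselves’ means simulating a membrane voltage over time, as in this minimal leaky integrate-and-fire neuron, rather than treating the neuron as an abstract weighted-sum unit.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=1.0):
    # Minimal leaky integrate-and-fire neuron; every constant here is an
    # illustrative round number, not a biological measurement.
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane voltage leaks back toward rest and is driven by the input.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:               # threshold crossed: the cell fires
            spike_times.append(step * dt)  # record the spike time (in ms)
            v = v_reset                    # reset the membrane after the spike
    return spike_times

# 200 ms of constant input current; prints the times at which the cell spikes.
print(simulate_lif([20.0] * 2000))

A project like Blue Brain works at a far more detailed biophysical level than this, but the basic idea is the same: any network-like behaviour is supposed to emerge from simulating the cells, not from wiring up an artificial neural net directly.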

The reason I think such a project will fail is that we aren’t just our neurons. We are everything our neurons are connected to. Recreating a human brain won’t automatically imbue it with intelligence—it must be trained, just as ours is. I’m not sure how possible that is to do with a simulated brain.

However, I am of the belief that if someone could reproduce all the neurons/neural connections/neurochemical activity in a single person’s brain accurately, one could recreate that person’s mind at the moment the connections were copied. If one could find a way to stimulate and read out the response to such a digitized brain, communication with it could be attempted.

I call this a “belief” because there’s still so much we don’t know about the human mind. Think about brain-damaged, vegetative people. For the most part, their brains are functioning correctly. We still don’t fully understand what’s going wrong. Would we realize how close we’d come if we reproduced a vegetative brain? Or would we think we’d taken a fundamental wrong step once the project didn’t produce a thinking mind?

 

 
  [ # 30 ]
8PLA • NET - Mar 30, 2011:

Carl B,

In general, we don’t have mainframes at home yet.

No, but we do have something even better: cloud computing and peer-to-peer networks.

Cloud computing in particular could enable anywhere-access (better than hauling around 30 cores in your trunk), whereas peer-to-peer could act as a multicore system.

 
