
Is AI research looking for a ‘quick fix’?
 
 
  [ # 31 ]
C R Hunt - Mar 30, 2011:

The problem I see (as a scientist tongue laugh )

smile

C R Hunt - Mar 30, 2011:

...is that you can’t set specifications at the exploratory phase of research. When you’re still testing new ideas, there needs to be the freedom to “build” in impractical ways. Once something works with duct tape and a prayer, you can pass it off to the engineers to standardize. wink

Yes, but by that you just declared the last 65-odd years as the ‘exploratory phase’ raspberry

C R Hunt - Mar 30, 2011:

I think the goal of re-creating the human brain is very much in line with the goal of creating strong AI.

But that was not my point; I stated that those projects (neural net related) are not aiming at integrating other types of AI research like NLP and functional design.

C R Hunt - Mar 30, 2011:

Whether or not you think such an approach will be successful is an entirely different question. smile The researchers in this case aren’t developing what we traditionally think of as a neural net. They are actually modelling neural cells themselves, with a neural net-type architecture emerging as a result (presumably).

My statement towards this might have been a tad generalizing, but you get the point wink

C R Hunt - Mar 30, 2011:

The reason I think such a project will fail is that we aren’t just our neurons. We are everything our neurons are connected to. Recreating a human brain won’t automatically imbue it with intelligence—it must be trained, just as ours is. I’m not sure how possible that is to do with a simulated brain.

Our views on this point are pretty much in sync.

C R Hunt - Mar 30, 2011:

However, I am of the belief that if someone could reproduce all the neurons/neural connections/neurochemical activity in a single person’s brain accurately, one could recreate that person’s mind at the moment the connections were copied. If one could find a way to stimulate and read out the response to such a digitized brain, communication with it could be attempted.

Again I agree with you. Reproducing the brain digitally might be the only way to get to ‘mind uploading’ in the way that Ray Kurzweil is predicting. But I see that as a whole different avenue of research apart from the development of ‘strong-AI’. In one case we are trying to ‘copy’ consciousness, in the other case we are trying to ‘create’ it.

 

 
  [ # 32 ]
Hans Peter Willems - Mar 30, 2011:
C R Hunt - Mar 30, 2011:

The problem I see (as a scientist tongue laugh )

smile

C R Hunt - Mar 30, 2011:

...is that you can’t set specifications at the exploratory phase of research. When you’re still testing new ideas, there needs to be the freedom to “build” in impractical ways. Once something works with duct tape and a prayer, you can pass it off to the engineers to standardize. wink

Yes, but by that you just declared the last 65-odd years as the ‘exploratory phase’ raspberry

Yeah, what’s your point? A lot of theories have been tested, and we’ve learned a lot over those years. Strong AI is probably the most complex endeavor of mankind.

The same applies to, say, physics. The equivalent of strong AI in physics might be the Grand Unified Theory (jump in and correct me, CR, if need be).

One could argue: for thousands of years we have studied physics, and damn it, we still don’t have the G.U.T. figured out… science failed, I guess? Wrong. It is still in its exploratory phase. No, we’re not at the holy grail of physics, but we have learned a hell of a lot, and have created many wonderful things with our knowledge so far. AI will develop the same way.

C R Hunt - Mar 30, 2011:

there needs to be the freedom to “build” in impractical ways.

 

I very much agree.  The invention of the semiconductor, the first one created in 1947 by John Bardeen, Walter Brattain, and William Shockley, was a lot larger than today’s transistors, but the principle was what mattered; then we refined them and made them smaller and faster, until we had integrated circuits and then the microprocessor.  Theory, test, adjust; theory, test, adjust.

The first version of my parser (in Perl) didn’t care about processing time.  I just wanted the functionality, and I wanted the proof of concept.  Then I sat down and figured out every possible way I could speed it up, and by v3 I had made about a 600X speed increase *and* dramatically decreased the amount of code.
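Victor doesn’t say which optimizations got him that 600X, so the following is only a generic, hypothetical sketch (in Python rather than Perl, with made-up names) of the cycle he describes: get a slow proof of concept working first, then optimize without changing its behavior. Memoization stands in as the example speed-up.

```python
# Hypothetical illustration of "make it work first, make it fast later".
from functools import lru_cache
import time

def slow_version(n):
    # Stand-in for the proof-of-concept code: correct but
    # exponential-time recursion.
    if n < 2:
        return n
    return slow_version(n - 1) + slow_version(n - 2)

@lru_cache(maxsize=None)
def fast_version(n):
    # Same functionality; caching intermediate results makes it linear.
    if n < 2:
        return n
    return fast_version(n - 1) + fast_version(n - 2)

t0 = time.perf_counter()
a = slow_version(25)
t_slow = time.perf_counter() - t0

t0 = time.perf_counter()
b = fast_version(25)
t_fast = time.perf_counter() - t0

assert a == b  # identical results, very different running times
print(f"speedup: ~{t_slow / max(t_fast, 1e-9):.0f}x")
```

The two versions return identical results; only the running time changes, which is the point of separating the proof of concept from the optimization pass.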

 

 

 
  [ # 33 ]

Actually… probably the T.O.E. (Theory of Everything) would be a better analogy (again, I’ll turn it over to you, CR).

 

 
  [ # 34 ]
Hans Peter Willems - Mar 30, 2011:
C R Hunt - Mar 30, 2011:

...is that you can’t set specifications at the exploratory phase of research. When you’re still testing new ideas, there needs to be the freedom to “build” in impractical ways. Once something works with duct tape and a prayer, you can pass it off to the engineers to standardize. wink

Yes, but by that you just declared the last 65-odd years as the ‘exploratory phase’ raspberry

I did, didn’t I? Sad. But I would argue that computing has changed so much in the meantime—not just the hardware available, but our very understanding of what information is—that we have come a long way. Long enough to understand the naivete of early speculation about the requirements and time scale necessary to create AI. Maybe long enough (for some) to see the futility in looking for one magic algorithm for intelligence. Certainly long enough for many to turn to other pursuits, alas.

Perhaps the problem is not that researchers aren’t looking to incorporate their work into a generalized AI, but rather that they see generalized AI as the work of a future generation. The works of today are but potential cogs in a machine not yet conceived, or perhaps even not yet conceivable. That is, we must wait until more pieces of the neurological/computational puzzle are found before we can hope to see the larger picture.

Hope that’s not true. We shall see.

 

 
  [ # 35 ]

Correction: above, replace ‘semiconductor’ => ‘transistor’.

 

 
  [ # 36 ]

Ah, hello Victor. We must be on at the same time. I like you’re analogy. Either proof of a GUT theory or a TOE works well. Although perhaps we can look to even less grand problems and hope to find a Mr. Higgs along the way. smile

The analogy actually extends further. There is such a glut of theories for unification and other beyond-the-Standard-Model physics that we have reached a saturation point. What the field really needs are a few killer experiments to clear the playing field and leave the strongest to fight it out. I think strong AI may prove to be a similar beast. Philosophers can wax on all day about what it means to be conscious or intelligent or a million different vaguely defined but emotionally evocative adjectives. But actually pointing to a program/animal/drunk guy on the subway and saying “There! Tell me what you think of that!” will help clarify the issue in ways mere speculative talk cannot. Just MHO, of course. smile

 

 
  [ # 37 ]

you’re—> your

Seriously, why don’t we have an edit button? I’ve never seen it abused. Was there a reason it was taken away??

 

 
  [ # 38 ]
C R Hunt - Mar 31, 2011:

Philosophers can wax on all day about what it means to be conscious or intelligent or a million different vaguely defined but emotionally evocative adjectives.

Very good.  Yes, philosophy has its uses, but when it turns into beating a dead horse or going into an endless pointless cycle as I’ve seen in these forums, it serves no purpose.

The idea that a program that could pass a TT is still not intelligent because… hmm… it doesn’t have this thing called X, when we can’t even DEFINE what X really *is*, is utter nonsense.

It’s like saying: hmm, we have supersonic jets, but you know what, I’m really going to put my philosopher hat on and say it’s not really flying… sure, it lifts off the ground, but it’s just missing something.  I don’t really know what that is, I can’t define it, but it just doesn’t have something, so I’m not impressed with supersonic jets; let’s write them off as not “TRUE FLYING”.

Edit button - yes, very annoying, isn’t it?

 

 
  [ # 39 ]

Oh, and there is no real test for X that everyone is happy with.  And the only way to know if an entity possesses X is to actually *BE* that entity.  But since I believe that your approach doesn’t have X (and even if it *does* have X, it had better implement it using *MY* approach), your system is not really understanding or intelligent.  I’m sure you know what X is! smile

 

 
  [ # 40 ]

My guess is that the majority of the world will declare a specific artificial entity to have strong AI when that entity can deal with the same level of generality and ambiguity, and has the same flexibility, as the human mind.  It will be a functional test; Alan Turing was correct.

The Chinese room thought experiment was JUST THAT - a thought experiment.  One that has been sufficiently beaten to death many times to my satisfaction (I’m with the ‘systems reply’).

Many people take thought experiments to be proofs; they are not.  I compare the Chinese room argument to the once-believed argument that the world could not possibly be rotating as fast as it does, because otherwise when we leaped into the air, the Earth would turn below us and we’d land a few feet from where we started. Intuitions and thought experiments cannot always be trusted.

 

 

 
  [ # 41 ]
Victor Shulist - Mar 31, 2011:

The Chinese room thought experiment was JUST THAT - a thought experiment.  One that has been sufficiently beaten to death many times to my satisfaction (I’m with the ‘systems reply’).

I don’t know all that much about linguistics, but here’s my question:  Is there a mechanical process for translating one *human* language into another?

 

 
  [ # 42 ]

The basis of all these discussions so far has been the notion of a shared reality.  Whether it be the symbols represented by language, which drill down into semantics and logical reasoning, or concept building from some intrinsic core to create a grand model of thought, these approaches rely on the icons of a collective consciousness of the species.  One of the binding elements we’ve considered is experience, which some may suggest is manifested as “results”.  I suspect we hold the hope that an extremely large corpus of writings or other artifacts can be mined for truth and even common sense.  That raises the questions the Chinese room thought experiment explores.  Is it all in the processing, like the data crunching of information technology?

Once psychologists interpreted dreams according to schemes that today we might consider akin to Tarot cards or astrology readings, much as we now attempt to map the brain activity our machines measure to the thoughts we suggest. Still, there are things documented that don’t fit into these theories.  For example, reincarnation has been proven by case studies.  Near death experiences have many, many times demonstrated out-of-body experiences.  Even physics promotes quantum theory, which defies understanding of why it is the way it is, but could be drawn into a thought experiment of entanglement between people’s “processing” leading to telepathy. So when using a collection of “transistors” to enact a philosophy of how things are done, we should keep an open mind that the secret to deep AI may not lie in fixing errors in what has been done in the past, but instead is more likely to come from what we don’t understand, what we don’t know now.

There seems something hollow and essentially missing in believing incrementally, bit by bit, we can assemble some small pieces to form that deep venture. Granted the whole can be more than the parts, gestalt, that is. Unfortunately this is not art that is absorbed, it is the artist which creates the art. This AI is not a story as in sci-fi, nor a mechanical actor such as exists at Disney World, not some device that falls into the “uncanny valley” which scares more than suspends disbelief.  We can’t trick ourselves into the illusion of intelligence like we do with other things. True AI research honestly seeks to answer “I think, therefore I am.”

There are some who don’t dwell in the abstract, but rather build the infrastructure (like an artificial brain) to experiment scientifically.  They may find the way to tap into universal knowledge by sympathetic computations harmonizing with their biological cousins faster than the clinical, abstract projections of rules and mathematics used in traditional AI endeavors. For they seek to create instead of refine, by clustering data into emergent “ideas”.

But chatting with a robot is another problem altogether. I still maintain that creative “story telling” is essential, and so far it is best deployed in transcripts like interactive plays.  With AI, making good stories should be easy.

 

 
  [ # 43 ]

Gary Dubuque - Mar 31, 2011:

For example, reincarnation has been proven by case studies.

Really? I’d like to see one that does. I’d also like to know how the hell they did that study.

Gary Dubuque - Mar 31, 2011:

Near death experiences have many, many times demonstrated out-of-body experiences.

 
Again, thin ice, my friend! There are simpler explanations that make much more sense (like brain activity slowly dying out, causing hallucinations).

 

 
  [ # 44 ]
Victor Shulist - Mar 31, 2011:

Yeah, what’s your point…

There was NO point, but there WAS a smiley.

Luckily CR did get it smile

C R Hunt - Mar 31, 2011:

I did, didn’t I?

wink

C R Hunt - Mar 31, 2011:

But I would argue that computing has changed so much in the meantime—not just the hardware available, but our very understanding of what information is—that we have come a long way. Long enough to understand the naivete of early speculation about the requirements and time scale necessary to create AI. Maybe long enough (for some) to see the futility in looking for one magic algorithm for intelligence. Certainly long enough for many to turn to other pursuits, alas.

I think that’s a fair assessment.

C R Hunt - Mar 31, 2011:

Perhaps the problem is not that researchers aren’t looking to incorporate their work into a generalized AI, but rather that they see generalized AI as the work of a future generation.

This does resonate with me, but I do think (as I stated in my initial post) that ‘leaving it up to future generations’ also sounds like ‘why bother with it NOW’ (and let’s make some money first).

Toby Graves - Mar 31, 2011:

Is there a mechanical process for translating one *human* language into another?

I don’t think so. It could theoretically work between languages that share a similar rule-base, but as soon as you move to iconic languages (like Chinese) all bets are off.

Gary Dubuque - Mar 31, 2011:

There seems something hollow and essentially missing in believing incrementally, bit by bit, we can assemble some small pieces to form that deep venture. Granted the whole can be more than the parts, gestalt, that is. Unfortunately this is not art that is absorbed, it is the artist which creates the art. This AI is not a story as in sci-fi, nor a mechanical actor such as exists at Disney World, not some device that falls into the “uncanny valley” which scares more than suspends disbelief.  We can’t trick ourselves into the illusion of intelligence like we do with other things. True AI research honestly seeks to answer “I think, therefore I am.”

Beautifully stated. I do think that building pieces without thinking about the ‘grand design’ will not bring us the real thing. I honestly think that ‘I think, therefore I am’ needs to be taken into the equation.

 

 
  [ # 45 ]

Toby Graves - Mar 31, 2011:

Is there a mechanical process for translating one *human* language into another?

Hans Peter Willems - Mar 31, 2011:

I don’t think so. It could theoretically work between languages that share a similar rule-base, but as soon as you move to iconic languages (like Chinese) all bets are off.

The current “mechanical” process is Google Translate. It takes “crowd-sourced” data as its original translation template and then applies it to new material.
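As a rough illustration of that crowd-sourced idea (a toy sketch, not how Google Translate actually works; the phrase table and example phrases are invented), previously translated phrase pairs can be reused on new material by greedy longest-phrase matching:

```python
# Toy "translation memory": reuse phrases seen in previously translated
# text. The table below is invented for illustration; real systems learn
# such mappings statistically from vast parallel corpora.
PHRASE_TABLE = {  # French -> English (hypothetical entries)
    "bonjour": "hello",
    "le monde": "the world",
    "tout le monde": "everyone",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily match the longest known phrase starting at word i.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown word: passed through as-is
            i += 1
    return " ".join(out)

print(translate("bonjour tout le monde"))  # -> "hello everyone"
```

Real systems add statistical language models to reorder and score candidate outputs; this sketch only shows why a large body of existing translations can generalize to sentences nobody has translated before.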

 
