

Genesis of ‘True’ Artificial Intelligence Assistant in the Making! (by 2012)
 
 
  [ # 16 ]

@CR: It’s never a good thing when my name is mentioned in these posts, lately… downer Am I the “Bad Guy”, now? raspberry

@Genesis: I’ve not been participating in the discussions here all that much, but that doesn’t mean I’m not interested. I find that even those with “bs claims” have ideas and viewpoints that can provide a positive contribution to the field of AI, even if they only give us “concepts” to consider. A great many times, thinking “outside the box”  ends up giving us more than just a bigger box. smile

@Carl: ~14 Billion is only 2.8 Million times longer than 5 Thousand, but other than that… raspberry

@Andrew: At least I see the humor in your remarks, my friend. As long as it doesn’t inflame others into emotional harangues and inciting a flame war, we’re ok. smile

@everyone: The seemingly “official” mantra around here of “claims require proof, and extraordinary claims require extraordinary proof” is a valid viewpoint to take, and I see nothing wrong with it, in general. However, let’s not start a witch-hunt just because someone’s claims aren’t backed up with reams of evidential data, or thousands of lines of code. Also, I don’t really care who first inserted the religious references, nor is it particularly relevant to the discussion. We have plenty to discuss here as it is, without tossing in “red herrings”. smile

Ok, I’ve put away my Forum Police Badge, and am now going back to quietly enjoying the discussions.

 

 
  [ # 17 ]
C R Hunt - Oct 10, 2011:

What Google is doing is clustering words according to how they have been used most often, guessing about your input based on its experiences. This seems rather similar to what you intend to develop. Heck, even the image search associates text clustered near images with the image itself. That’s as close as you can get to what you call concepts without being able to really “see” images. (And they can do that too now.) How are you going to teach your system to process images and recognize objects?

I tested the image search by taking 7 photos. Two are different types of cups, a cup with water, a Degree deodorant, a fork, a watch, and for good measure a picture of me drinking water. raspberry

It failed to recognize all but one of them, and that was the deodorant, which is only because the deodorant still had its label on it. If I had stripped the label, it wouldn’t have been able to recognize that it was a deodorant, giving it a score of 0/7. 0%. rolleyes

So no, this is nothing like what I’m trying to do. The image search is just like the text search: it’s just search and retrieval of something that matches the input. There is no intelligence at all. You are way too generous when you say: “guessing about your input based on its experiences.” The truth is, there is no guessing (prediction) or experience going on. It’s just a database of the most searched terms.

Just like I demonstrated, entering “what” into Google will bring up “what is my ip”.
Why? Because that is the most searched thing that starts with “what.” Type in “who is” and you will get “who is steve jobs”. Why? His death has caused a surge of searches for who he is. Therefore it gained a lot of points, bringing it up to the top of the search food chain. Google is basically based on a point system.
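A point system like the one described can be sketched as a frequency-ranked prefix lookup. This is only a toy illustration of the idea (the query log and counts are invented here), not Google’s actual implementation:

```python
from collections import Counter

# Hypothetical query log; the counts stand in for the "points".
query_log = Counter({
    "what is my ip": 900,
    "what time is it": 450,
    "who is steve jobs": 1200,  # surges after a news event
    "who is the president": 800,
})

def suggest(prefix, log, n=3):
    """Return the n most-searched queries starting with prefix."""
    matches = [(q, c) for q, c in log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:n]]

print(suggest("what", query_log))    # "what is my ip" ranks first
print(suggest("who is", query_log))  # "who is steve jobs" ranks first
```

No understanding is involved anywhere: the suggestion changes only when the counts change.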

How am I going to get my system to process images and recognize objects? The pattern recognition module will take care of processing initial inputs for ALL kinds of patterns, including visual ones (e.g. shape, depth, etc.).

C R Hunt - Oct 10, 2011:

And your second sentence is the exact reason why no one will believe that your bot is “True AI”, even if it does exactly what you’re proposing. One can always pull abstract words (“think”, “reason”) out of one’s hat and claim that a computer ain’t doing it.

Well, they would have to define what thinking and reasoning are.
My definition of thinking is to manipulate or simulate thought.
My definition of reasoning is to compare thoughts.

Ex:
A is equal to 1
B is equal to 2
C is not given

One of them is equal to 3, which one is it?

Thinking: What is A equal to? What is B equal to? What is C equal to?
Reasoning: since A is equal to 1 and B is equal to 2 that means C is the one equal to 3.
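Under these definitions, the example can be sketched as lookups (“thinking”) followed by elimination (“reasoning”). A toy version, with all names invented for illustration:

```python
known = {"A": 1, "B": 2}          # the given facts
candidates = {"A", "B", "C"}      # one of these is equal to 3

# "Thinking": retrieve what we know about each candidate.
thoughts = {name: known.get(name) for name in candidates}

# "Reasoning": compare the thoughts and eliminate every candidate
# whose value is already accounted for.
unknown = [name for name, value in thoughts.items() if value is None]
answer = unknown[0]  # the only candidate left must be the one equal to 3
print(answer)
```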

C R Hunt - Oct 10, 2011:

Well, there was this group that taught a computer Civilization II by giving it the manual to read. wink

But it doesn’t understand the game itself.

 

 
  [ # 18 ]

Oow, so not only will you solve AI on your own, while doing that you will also solve the image processing problem to recognize what exactly is on a photo? Let me email all those people who are working on that (pssh, only a couple) that they need to find another job wink

Ok, so far the sarcasm. Genesis, you have interesting ideas, but there are some problems with your posts that cause everybody to jump on them. First, claiming that it’s ‘true AI’ always sparks people around here. Secondly, you still use vague terms like `understanding’ and `reasoning’.

For example, in your last post you say `But it doesn’t understand the game itself’. But when does a system `understand’ a game? Hunt’s example paper uses the manual to create an internal representation of the game with its possible states and legal moves. With this representation, it can work out a good strategy. It can `think’ (by your definition) because it can simulate thought: it can simulate what its enemy can do and how it can counter its moves. It can `reason’ (by your definition) because it can compare multiple simulations and thoughts and find the best one.

Like Hunt, I admire your professionalism and your polite responses. But please, be more specific why your system differs from other systems.

 

 
  [ # 19 ]

Thoroughly entertaining thread. Haven’t had such a good read in a while.

@CR: remind me never to go into a discussion with you, you just know too much detail about too many things... respect.

 

 
  [ # 20 ]

But we don’t have an AI that can play chess, checkers, spades, hearts, tonk, tic-tac-toe, backgammon, etc. on the fly, whatever board/card game you present it with! Why? Mainly because we don’t have AIs that can actually “understand” the game, but rather AIs that work with predefined rules.

Check out AIXI.  It learns to play everything.  I say that with 5% sarcasm.

My personal feeling on the definition of “understanding” is that its meaning has been artificially elevated by humans.  Whichever cluster of neurons that gives us that little “ding” sound or a warm fuzzy feeling internally has biased the view of “understanding” toward human experience.

 

 
  [ # 21 ]
Mark tM - Oct 11, 2011:

Oow, so not only will you solve AI on your own, while doing that you will also solve the image processing problem to recognize what exactly is on a photo? Let me email all those people who are working on that (pssh, only a couple) that they need to find another job wink

Well I won’t say “solve AI”. I don’t believe AI is a puzzle to be solved. I believe there are various roads that can lead to the goal. It doesn’t have to be singular.

Also, we already have good algorithms for recognizing objects in pictures.

Exhibit A: http://www.youtube.com/watch?v=xPd6REexvyc
Exhibit B: http://www.youtube.com/watch?v=P9ByGQGiVMg#t=3m16s

What we don’t have right now is a way to map the data into memory in an intelligent way that is meaningful and can be recalled for later use.

Mark tM - Oct 11, 2011:

Ok, so far the sarcasm. Genesis, you have interesting ideas, but there are some problems with your posts that cause everybody to jump on them. First, claiming that it’s ‘true AI’ always sparks people around here. Secondly, you still use vague terms like `understanding’ and `reasoning’.

Well, that is the goal of the project. If it doesn’t end up as a True AI then I failed. I left myself no wiggle room.

Mark tM - Oct 11, 2011:

For example, in your last post you say `But it doesn’t understand the game itself’. But when does a system `understand’ a game? Hunt’s example paper uses the manual to create an internal representation of the game with its possible states and legal moves. With this representation, it can work out a good strategy. It can `think’ (by your definition) because it can simulate thought: it can simulate what its enemy can do and how it can counter its moves. It can `reason’ (by your definition) because it can compare multiple simulations and thoughts and find the best one.

Not quite, though I didn’t read the paper thoroughly; I just glanced through it.
But what I mean by understanding a game is this. Let’s take the game of chess. I can recall the time I learned about chess and how it works. It was in middle school. I sat there in class and observed people playing. I observed the pieces and found a pattern in their movements. From there I understood the mechanics of the game. I understood the patterns on the board and how they relate to the pieces. I understood that the smallest piece on the board (the pawn) could only move up one spot after its initial move.

The rules of the game should not have to be given to the AI, just like the rules of chess were not given to me. I self-generated rules based on the patterns I observed. Rules of games must come naturally to the AI. That’s the only way they can really stick. If they do stick, then they can be reused later.

That’s what I mean by understand. And because I understood the game of chess, when I first saw checkers, the first thought I had was “This is extremely similar to chess!”

Why? Because the seeds of chess were planted deep in my memory banks, so when I saw something similar, it activated. The AI can then observe checkers and, from its previous experience with chess, make general inferences about the pattern on the board and the pieces. It will form predictions about checkers as it observes someone playing it, and it will grasp it much faster.

If you ask any chess master, he will tell you that the reason they are so good is that they have seen so many plays of moves, so the next time one happens they can recall it. For example, if I moved around in a certain way, a pattern can be formed, so the next time I do it the AI would think, “hey, he did that before.” Any pattern can be recognized, including how a player moves and specific plays.

Some components of my theory are nothing new. 70% of the existing technology today wasn’t a new invention by the person who introduced the product. What they did is called innovation. The iPod touch and the iPhone didn’t have anything new in them; it’s the way they implemented several existing inventions together. There had been touch devices way before the iPod touch.

So yeah, the components of my theory are not something new. What’s different is the way I’m implementing them. My theory is based solely on how the “everyday” usage of the mind works. I will try to go a bit more into detail as time goes on.

 

 
  [ # 22 ]
Dave Morton - Oct 11, 2011:

@CR: It’s never a good thing when my name is mentioned in these posts, lately… :downer: Am I the “Bad Guy”, now? :raspberry:

Nah, you’re the peacekeeper. It’s convenient to invoke you because who could be offended by Dave? tongue wink

Genesis - Oct 11, 2011:

I tested the image search by taking 7 photos.  [...] It failed to recognize all but one of them, and that was the deodorant, which is only because the deodorant still had its label on it. If I had stripped the label, it wouldn’t have been able to recognize that it was a deodorant, giving it a score of 0/7. 0%. :rolleyes:

You’re right, that’s a pretty poor showing. How many top computer scientists would you say Google dedicated to this project? What about your strategy will make you more successful than they were? And where in the world will you find the time to achieve this goal while completing the others you’ve mentioned? And in a year, no less!

Dave Morton - Oct 11, 2011:

The seemingly “official” mantra around here of “claims require proof, and extraordinary claims require extraordinary proof” is a valid viewpoint to take, and I see nothing wrong with it, in general. However, let’s not start a witch-hunt just because someone’s claims aren’t backed up with reams of evidential data, or thousands of lines of code.

Good advice. And like Genesis said,

Genesis - Oct 11, 2011:

If I make any claims and don’t demonstrate it, then disregard it and forever classify me as a looney. I can’t even assure you that my theory works till I’m able to complete phase one.

Nobody wants more loonies on the forum and nobody is eager to dump you in that category. I think people would maintain a more positive and engaged attitude if, rather than claiming right out of the gate that you’ve thought up an extraordinary, unique scheme for “True AI”, you instead laid out in particular what you hope to achieve and how you intend to approach those goals. You have done the latter only in a general sense, and frankly the breadth of what you are attempting and the timeline you’ve set up are suspect. They seem to speak to inexperience rather than an informed foundation for your project.

But no matter whether this is proven to be the case or not, I’m still interested in seeing your project develop, and look forward to updates. If you want to discuss concepts (“vaporthreads”, as Carl aptly dubbed them), I’m sure members will have a lot of fun doing that, as long as they aren’t touted as anything else. I’m curious to see your reply to Mark’s most recent comments particularly.

Jan Bogaerts - Oct 11, 2011:

@CR: remind me never to go into discussion with you, you just know to much detail about to many things,... respect.

Ha ha, too bad it’s knowing a lot about a few specific things that leads to new advances. smile I have nothing but the highest respect for your chatbot work. It’s inspiring stuff.

 

 
  [ # 23 ]

Whoops, looks like I posted before seeing your reply. Reading it now…

 

 
  [ # 24 ]
C R Hunt - Oct 11, 2011:

... You have done the latter only in a general sense, and frankly the breadth of what you are attempting and the timeline you’ve set up are suspect. They seem to speak to inexperience rather than an informed foundation for your project.

I heard something on a TV show yesterday that seems to fit here:

If you’re confident (or even hopeful) about the outcome, it’s quite likely that you haven’t studied the problem well enough.

Now granted, the comment was with regard to something completely unrelated to this discussion, but I think that it may apply here. The field of AI has been one of intense study for over a half century (nearly a full century, if you go back to the esteemed Dr.{Mr.?} Turing), so setting a time frame of only one year seems just a little ambitious to me. Don’t get me wrong here; I’m absolutely the last person to dissuade anyone from their course, and I’m willing to do anything in my power (which is, admittedly, limited) to help folks keep on track, but there have been teams of people working on this for a number of decades now, so…

Ok, shutting up now. smile

 

 
  [ # 25 ]
Toby Graves - Oct 11, 2011:

But we don’t have an AI that can play chess, checkers, spades, hearts, tonk, tic-tac-toe, backgammon, etc. on the fly, whatever board/card game you present it with! Why? Mainly because we don’t have AIs that can actually “understand” the game, but rather AIs that work with predefined rules.

Check out AIXI.  It learns to play everything.  I say that with 5% sarcasm.

My personal feeling on the definition of “understanding” is that its meaning has been artificially elevated by humans.  Whichever cluster of neurons that gives us that little “ding” sound or a warm fuzzy feeling internally has biased the view of “understanding” toward human experience.

the PacMan agent is hampered by partial observability. PacMan is unaware of the maze structure and only receives a 4-bit observation describing the wall configuration at its current location. It also does not know the exact location of the ghosts, receiving only 4-bit observations indicating whether a ghost is visible (via direct line of sight) in each of the four cardinal directions. In addition, the location of the food pellets is unknown except for a 3-bit observation that indicates whether food can be smelt within a Manhattan distance of 2, 3 or 4 from PacMans location, and another 4-bit observation indicating whether there is food in its direct line of sight. A final single bit indicates whether PacMan is under the effects of a power pill.

http://www.youtube.com/watch?v=RhQTWidQQ8U
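The observation format quoted above adds up to exactly 16 bits (4 wall bits, 4 ghost bits, 3 smell bits, 4 food-sight bits, 1 power-pill bit). A sketch of how such an encoding might be packed into a single integer, with the field order invented here for illustration:

```python
def encode_observation(walls, ghosts, smell, food_sight, power):
    """Pack PacMan's partial observation into one 16-bit integer.

    walls, ghosts, food_sight: 4-tuples of 0/1 (one bit per direction)
    smell: 3-tuple of 0/1 (food within Manhattan distance 2, 3, 4)
    power: 0/1 (under the effects of a power pill)
    """
    bits = list(walls) + list(ghosts) + list(smell) + list(food_sight) + [power]
    obs = 0
    for b in bits:
        obs = (obs << 1) | b  # shift in one bit at a time
    return obs

obs = encode_observation((1, 0, 1, 0), (0, 0, 0, 1), (1, 0, 0), (0, 1, 0, 0), 1)
assert obs < 2 ** 16  # 4 + 4 + 3 + 4 + 1 = 16 bits total
```

The point of the quote stands out clearly this way: the agent’s whole world is 16 bits per step, nothing like the full maze a human sees.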

Well, they got it to play PacMan, but that’s it. Regardless, it didn’t play PacMan like humans do. Just like any other algorithm (neural net, etc.), it’s reinforcement learning with penalties and rewards, and it’s PacMan-specific, so it can’t be extended to other games.

A simple definition of understanding would be “seeing the full picture”. Since that implementation of AIXI doesn’t have the capacity to see the full picture, it won’t be able to play other games.

Think of it like playing a new game with a blindfold on and having someone give you a 1 if you are doing well and a 0 if you are doing badly. You may eventually get good at the game after playing 1000 games (which is the number of games a neural net usually needs to become a good checkers player), and if you had the memory of a computer you could remember all your moves and their respective scores. But you will never understand what you are actually doing until you take the blindfold off.
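The blindfold analogy maps directly onto tabular reinforcement learning: the agent only ever sees an opaque state label and a 0/1 score, never the board. A minimal sketch, with the state space and reward function invented here:

```python
import random

# The "blindfolded" learner: it sees opaque labels and 0/1 rewards,
# never the game itself. (Toy reward: only "good" at "start" pays off.)
def blindfold_reward(state, action):
    return 1 if (state, action) == ("start", "good") else 0

values = {}  # remembered average score for every (state, action) tried
random.seed(0)
for _ in range(1000):  # roughly the game count mentioned for checkers
    action = random.choice(["good", "bad"])
    reward = blindfold_reward("start", action)
    n, avg = values.get(("start", action), (0, 0.0))
    # incremental running average of the observed rewards
    values[("start", action)] = (n + 1, avg + (reward - avg) / (n + 1))

# The learner ends up preferring "good" without ever seeing why it works.
best = max(values, key=lambda k: values[k][1])
print(best)
```

That is the whole objection in code: the table of scores grows, but nothing in it resembles “the full picture”.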

 

  [ # 26 ]
Dave Morton - Oct 11, 2011:
C R Hunt - Oct 11, 2011:

... You have done the latter only in a general sense, and frankly the breadth of what you are attempting and the timeline you’ve set up are suspect. They seem to speak to inexperience rather than an informed foundation for your project.

I heard something on a TV show yesterday that seems to fit here:

If you’re confident (or even hopeful) about the outcome, it’s quite likely that you haven’t studied the problem well enough.

Now granted, the comment was with regard to something completely unrelated to this discussion, but I think that it may apply here. The field of AI has been one of intense study for over a half century (nearly a full century, if you go back to the esteemed Dr.{Mr.?} Turing), so setting a time frame of only one year seems just a little ambitious to me. Don’t get me wrong here; I’m absolutely the last person to dissuade anyone from their course, and I’m willing to do anything in my power (which is, admittedly, limited) to help folks keep on track, but there have been teams of people working on this for a number of decades now, so…

Ok, shutting up now. smile

I understand the gravity of this project and those like it. But AGI is not about being complex. It’s simply a system capable enough to learn just about anything, and whatever input we feed into the system will define it.

What I’m trying to do is program an AI system in its simplest form, in order for it to grow naturally. We have to start from the absolute basics and go from there.

When we think about AGI, we have this picture of a humanoid robot that is conscious, knows everything about the world, and can accomplish any task. But that can’t be true, because, remember, we were born with intelligence and yet we have to go through an average of 22 years of schooling from birth in order to attain even the knowledge we have today. There is no way a robot will be able to compress that time frame into several orbits of the moon around the earth.

So it’s important to note that in order for it to work, we need the simplest version of the system. Think of a single cell: so insignificant, yet so intelligent.

Think of a simple system that is able to learn as we humans do (through patterns). It would be able to learn checkers and then use that knowledge to learn chess and every other card or board game we input to it.

 

 
  [ # 27 ]

Isn’t it time we had a forum topic especially for wackos and nut-cases?

Genesis is yet another “you’re-doing-it-wrong and i-know-all-the-answers” nut job who has apparently been struck by divine inspiration and is here to make the world right again. It would be entertaining if he had anything to say that hadn’t already been said dozens of times by similarly deluded individuals (and we have so many here), but he has not got anything novel to say, let alone anything that has been properly thought through yet.

Therefore, rather than stringing such folks along, wouldn’t it be kinder to relegate their posts to a special topic where they could see immediately from the other threads that they do in fact have a lot to learn and nothing much worth saying yet, barring a few pleasantries and a question or two? At the very least, they might be prompted to resume taking their medication and become productive again, instead of just a figure of fun for a few days.

There are a lot of very talented and inspiring people on this forum, and I’m quite certain that Genesis could become one of them if he were to knuckle down and get to work on his ideas, instead of just making wild claims about what he’ll be capable of in the future. Even when he finds out he was wrong this time, it won’t be failure, it will be progress.

Keep moving forward.

 

 
  [ # 28 ]

Ok, let’s talk specifics.

When phase one is completed, Ariadne will be able to:

- Recognize words

Do you intend to use available databases of words (such as WordNet) or to build from scratch? If from scratch, how will the bot store words? How will it form an idea of whether a word represents an object/action/idea/etc. without the visual component of your bot? Or will there be some internal physics engine you will link words to?

- Recognize sentences

Again, do you intend to use statistical parsing methods, hard grammar rules, or what? If you intend the bot to figure out grammar rules on its own, then the same questions as before: how will it know what each word represents? How will it learn what objects take actions without some other input to relate to? (Or even, how will it learn that objects are even capable of actions?)

- Recognize punctuation and capitalization.

Do you mean the significance of these symbols? How will it do this?

- Recognize when a user is typing and stops typing to take a break.

Is its sense of time innate or learned as well? I would argue a person’s sense of time is not learned.

- Recognize when a user mistypes a word.
[...]
- Makes word predictions as you type.

Either you can approach this the google way (statistics) or by having some sort of internal representation of the situation at hand and guess at intent that way. If you have an internal representation, it sounds like you will need some sort of physics engine. How do you intend to approach this?
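For reference, the “google way (statistics)” option can be sketched as a bigram predictor. The corpus here is invented and absurdly small, just enough to show the mechanism:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure statistics, no representation.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Guess the next word from frequency alone."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it followed "the" most often
```

The internal-representation route would instead need a model of the situation being described, which is a far harder problem.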

- Recognize patterns in words.

What do you mean? Word stemming? Levenshtein distances? Please elaborate.
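For what it’s worth, Levenshtein distance (the second option named above) is the standard dynamic-programming way to catch mistyped words. A textbook sketch:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# A transposed "ie" counts as two single-character edits.
assert levenshtein("recieve", "receive") == 2
```

A bot could flag an input word as a likely typo whenever some dictionary word sits within distance 1 or 2 of it.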

 

 
  [ # 29 ]
Andrew Smith - Oct 11, 2011:

Isn’t it time we had a forum topic especially for wackos and nut-cases?

Genesis is yet another “you’re-doing-it-wrong and i-know-all-the-answers” nut job who has apparently been struck by divine inspiration and is here to make the world right again. It would be entertaining if he had anything to say that hadn’t already been said dozens of times by similarly deluded individuals (and we have so many here), but he has not got anything novel to say, let alone anything that has been properly thought through yet.

Therefore, rather than stringing such folks along, wouldn’t it be kinder to relegate their posts to a special topic where they could see immediately from the other threads that they do in fact have a lot to learn and nothing much worth saying yet, barring a few pleasantries and a question or two? At the very least, they might be prompted to resume taking their medication and become productive again, instead of just a figure of fun for a few days.

There are a lot of very talented and inspiring people on this forum, and I’m quite certain that Genisus could become one of them if he were to knuckle down and get to work on his ideas, instead of just making wild claims about what he’ll be capable of in the future. Even when he finds out he was wrong this time, it won’t be failure, it will be progress.

Keep moving forward.

That is exactly what I set out not to do. If you ever took a class on programming, or even writing, you would know the first thing to do is to outline every aspect of the program and what the finished work is going to look like, or to write a rough draft before the essay. Sharing some of my rough draft and how the finished product might work is not a crime. You surely can consider me a nut job; I’m okay with that. Does it worry me? No, but if you believe so, then bless your heart.

It’s an amazing thing that all the technological inventors in history were considered nut jobs when they first birthed their ideas. When cell phones were being discussed by physicists, the media laughed and said they needed a reality check. The same goes for every major technological advancement. Flying? Well that’s impossible… oh wait. rolleyes

So I’m quite honored, Andrew, and I hope to become the one out of a thousand whose weird and radical notion succeeds. But one thing you should know: I don’t believe there is a singular path to AI. There are numerous paths, and it’s only a matter of time till we attain it. OpenCog, for example, is one contender that could also reach that status.

You can go read their roadmap here: http://opencog.org/roadmap/

But I guess Ben is a nut job as well, right, since he gives speeches and has a projected time frame of what his project will be capable of doing if accomplished? I’m glad to be a part of a nice crew tongue wink

I also made it clear I was about 20% into phase one; this week I will be working on spatial and sequential pattern recognition.

 

 
  [ # 30 ]
Genesis - Oct 11, 2011:

That is exactly what I set out not to do. If you ever took a class on programming, or even writing, you would know the first thing to do is to outline every aspect of the program and what the finished work is going to look like, or to write a rough draft before the essay.

You should not confuse engineering with research. When you set out to build something that is well defined, then you can make a plan like that. But when you set out to discover something new, you have to first determine everything that is old. Once you know and understand best practice you will be in a position to improve on it, but until then you will just be blathering randomly. You have to understand the rules before you try to break them.

Don’t get me wrong, I think you have a lot of potential. However, I am not willing to waste your time or mine by pandering to your ego or by trying to find any sense in the repetitive, ignorant drivel which seems to be the current limit of your capabilities.

 

 
