
Genesis of ‘True’ Artificial Intelligence Assistant in the Making! (by 2012)
 
 
  [ # 31 ]

Incidentally, Pacman can be played by following certain patterns for each screen. No intelligence is required. These patterns are freely available on the internet.

http://www.math.montana.edu/~hyde/pacman/

 

 
  [ # 32 ]
Steve Worswick - Oct 11, 2011:

Incidentally, Pacman can be played by following certain patterns for each screen. No intelligence is required. These patterns are freely available on the internet.

Yet learning those patterns requires intelligence. Who would bother if they didn’t understand the benefits of doing so?

I saw a documentary recently which showed a troupe of monkeys that have a particular dietary preference. They eat the large nuts of a certain tree, but they can only get at the contents of the nuts by pounding them in a certain way with specially shaped rocks. It takes each individual monkey about 8 years to learn how to make the rocks and use them to open the nuts.

Go figure.

 

 

 
  [ # 33 ]

I’ve heard the “extraordinary claims…” bit many times, and all that is really needed is some proof, not extraordinary proof.

And I wouldn’t mind seeing something like this:

“I’ve been working on natural language for a long time, and I finally noticed something that was overlooked.  I made a few demos of my code implementing the overlooked idea, and it seems to be fixing many problems that have been plaguing us.  Check it out and let me know what you think, other opinions would be great. “

Now the tone of that would keep me reading.

 

 
  [ # 34 ]

I absolutely agree with that, Toby, but to my way of thinking, your example doesn’t exactly fall under the heading of “extraordinary claims”. smile  I usually reserve that particular category for things like broad, sweeping statements of complete success, unrealistically short time frames, or other, seemingly far-fetched statements of that nature. I also try to avoid demanding “proof”, unless claims get to the point of being extremely outlandish, at which point I usually try to diplomatically point out to the claimant that it’s time to “put up or shut up”. smile

 

 
  [ # 35 ]

I absolutely agree with that, Toby, but to my way of thinking, your example doesn’t exactly fall under the heading of “extraordinary claims”.

It was supposed to be an example of the opposite of an extraordinary claim.

 

 
  [ # 36 ]

Oops? My bad. smile

 

 
  [ # 37 ]
Dave Morton - Oct 12, 2011:

Oops? My bad. smile

Heh.  No worries.  smile

 

 
  [ # 38 ]

Genesis, if you do not mind me asking- what is your relationship, if any, with Rhonda Software?

 

 
  [ # 39 ]
C R Hunt - Oct 11, 2011:

-Recognize words-
Do you intend to use available databases of words (such as WordNet) or to build from scratch? If from scratch, how will the bot store words? How will it form an idea of whether a word represents an object/action/idea/etc. without the visual component of your bot? Or will there be some internal physics engine you will link words to?

It will be built from scratch and fed with Basic English literature. It will store inputs and their patterns respectively. Say I launched the program with a clean slate of memory and typed “who is this” into it. The pattern recognition function looks for sequential and spatial patterns in the input. When it finds a pattern, it maps it into the memory. It will also map each element of the input (in this example, the letters and spaces) into its memory as an object (as I explained on page 1).

Later, when an input is streamed to the memory, objects similar to it will activate. For example, if I entered “who”, the objects “w”, “h” and “o” will activate. Each time those three objects activate together, their relational link gets stronger. After a while the objects will pass a threshold and a new concept will be born, which is the word “who”.

This way the program will grow naturally.
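For what it’s worth, the mechanism described here (letter objects that co-activate, strengthening a link until a word concept is born) can be sketched in a few lines of Python. This is a toy illustration only, not Genesis’s code; the class names and the THRESHOLD value are my own assumptions:

```python
from collections import defaultdict

THRESHOLD = 3  # assumed: co-activations needed before a link becomes a concept

class Memory:
    def __init__(self):
        self.links = defaultdict(int)   # letter group -> co-activation strength
        self.concepts = set()           # groups that passed the threshold

    def perceive(self, text):
        for token in text.split():
            group = tuple(token)        # the letter objects that activate together
            self.links[group] += 1      # strengthen their relational link
            if self.links[group] >= THRESHOLD:
                self.concepts.add("".join(group))  # a new concept is born

mem = Memory()
for _ in range(3):                      # repetition drives concept formation
    mem.perceive("who is this")
print(mem.concepts)  # {'who', 'is', 'this'}
```

After seeing the same input three times, every word’s link crosses the threshold and becomes a concept.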

C R Hunt - Oct 11, 2011:

-Recognize sentences-
Again, do you intend to use statistical parsing methods, hard grammar rules, or what? If you intend the bot to figure out grammar rules on its own, then the same questions as before: how will it know what each word represents? How will it learn what objects take actions without some other input to relate to? (Or even, how will it learn that objects are even capable of actions?)

No grammar rules, no statistical parsing, no pre-programmed inputs. Everything is self-discovered. Grammar is a learned thing, so the AI has to generate grammar rules on its own.

In phase one the AI is not built to recognize visual input, because it’s not intended to understand the meaning of the words, but rather the concepts of text in itself and how one text character or group of characters relates to another.

Phase one’s intention is to build the simplest version of the system and make sure all the components of the system are working before I scale it up.

But as far as sentences go, the AI will recognize a pattern of, say, the punctuation mark “.” and the capitalization of the first word after it. It may then recognize the pattern of the starting input beginning with a capitalized word, and so come to the conclusion that the capitalization of the first word and the punctuation mark “.” mark the beginning and end of a collection of words, and make a new concept representing that.
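The boundary pattern just described (a “.” followed by a capitalized word) is easy to sketch. This is a hypothetical illustration, not the actual implementation:

```python
import re

def boundary_pattern_count(text):
    """Count occurrences of the '. ' + capital-letter pattern the AI
    would notice as a candidate sentence boundary (toy sketch)."""
    return len(re.findall(r"\.\s+[A-Z]", text))

sample = "The dog ran. It was fast. Then it stopped."
print(boundary_pattern_count(sample))  # 2
```

Seen often enough, that recurring pattern would cross a threshold and become a “sentence boundary” concept in the scheme described above.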

C R Hunt - Oct 11, 2011:

-Recognize punctuation and capitalization.-
Do you mean the significance of these symbols? How will it do this?

By recognizing their patterns. Capitalization usually happens after a “.” That is a pattern.

C R Hunt - Oct 11, 2011:

- Recognize when a user is typing and stops typing to take a break.-
Is its sense of time innate or learned as well? I would argue a person’s sense of time is not learned.

It’s innate, and you are right, a person’s sense of time is not learned. There is a discovery on this by physicist Michio Kaku, from the BBC.

C R Hunt - Oct 11, 2011:

- Recognize when a user mistypes a word. -
- Makes word predictions as you type. -
Either you can approach this the google way (statistics) or by having some sort of internal representation of the situation at hand and guess at intent that way. If you have an internal representation, it sounds like you will need some sort of physics engine. How do you intend to approach this?

If you watch someone type on a screen, you will notice that you are able to predict what word he is currently typing before he completely types it. Some of that is influenced by the context of the writing, but some is not.

Without context: if you were typing “what is her nam”, I’m able to predict to a high degree of accuracy that you meant “name”. But let’s say you mess up and type “what your naeme”. You going back and deleting the “e” to correctly spell the word is a pattern. This pattern can be mapped into the AI.

As far as future word prediction goes, it’s based solely on knowing the context of the current writing or recognizing a series of inputs.

Watch closely: “With great power…” The concept that holds the rest of that statement just fired in your brain. It didn’t just fire for the statement; it fired the rest of the statement, “comes great responsibility”. This is why people are able to finish other people’s sentences.

This is how it will work in the AI as well.
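The “With great power…” effect (a stored phrase’s beginning activating its continuation) could be sketched like this. All names here are made up for illustration; this is not the project’s code:

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy sketch: once a phrase has been observed, any prefix of it
    activates the most frequently seen continuation."""
    def __init__(self):
        self.continuations = defaultdict(Counter)

    def observe(self, sentence):
        words = sentence.lower().split()
        for i in range(1, len(words)):
            prefix = tuple(words[:i])
            self.continuations[prefix][" ".join(words[i:])] += 1

    def complete(self, fragment):
        prefix = tuple(fragment.lower().split())
        options = self.continuations.get(prefix)
        return options.most_common(1)[0][0] if options else None

mem = SequenceMemory()
mem.observe("with great power comes great responsibility")
print(mem.complete("with great power"))  # comes great responsibility
```

Storing continuations keyed by prefix is just one possible design; a link-strengthening memory like the one described earlier in the thread could serve the same role.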

C R Hunt - Oct 11, 2011:

- Recognize patterns in words.-
What do you mean? Word stemming? Levenshtein distances? Please elaborate.

Let’s say the AI got inputs like “jumped”, “wanted”, etc. It will recognize the pattern of the added “ed” and map it as a concept. Next time the pattern occurs, say in an input it was not familiar with, like “amused”, the concept of “ed” activates.

Understanding the meaning of words and giving concepts like “ed” their actual meaning is not part of phase one. Phase one is solely to demonstrate that the system can recognize and intelligently map these patterns.
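A toy sketch of the “ed” suffix pattern described above, with an assumed suffix length and threshold (not the project’s actual code):

```python
from collections import Counter

SUFFIX_LEN = 2
THRESHOLD = 3  # assumed: activations needed before the suffix becomes a concept

seen = ["jumped", "wanted", "played", "walked"]
suffix_counts = Counter(word[-SUFFIX_LEN:] for word in seen)
suffix_concepts = {s for s, n in suffix_counts.items() if n >= THRESHOLD}
print(suffix_concepts)  # {'ed'}

# An unfamiliar word still activates the learned concept:
novel = "amused"
print(novel[-SUFFIX_LEN:] in suffix_concepts)  # True
```

A real system would have to discover the suffix boundary itself rather than fix it at two characters, but the thresholding idea is the same.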

Phase two will deal with acquiring the meaning of words and concepts, and it won’t be by attaching a camera to the program. That is way too sophisticated. Since intelligence is not about complexity, we can improvise.

 

 
  [ # 40 ]
Carl B - Oct 12, 2011:

Genesis, if you do not mind me asking- what is your relationship, if any, with Rhonda Software?

None. I just find their algorithms fascinating, though they are not doing anything really special. There are variants of these types of algorithms that are openly available.

Steve Worswick - Oct 11, 2011:

Incidentally, Pacman can be played by following certain patterns for each screen. No intelligence is required. These patterns are freely available on the internet.

http://www.math.montana.edu/~hyde/pacman/

Andrew has it on point. There are patterns for every game and everything. Discovering these patterns and intelligently mapping them into memory for later use is what intelligence is.

For example: Chess patterns (new link)

 

 
  [ # 41 ]

Interesting thread so far smile

Ok Genesis, let me try to summarize phase 1. You will try to find patterns in combinations of letters, words and sentence fragments. These will mostly be N-grams, so a certain combination of letters or words, correct? If such an N-gram is perceived often enough, it becomes a ‘concept’ as you call it, but basically it means that the chance of perceiving this N-gram has passed a certain threshold.

One problem I spot here is data storage. Storing all possible combinations of letters, characters and words plus their chance of occurrence is a humongous job.

But let’s continue. After doing this, and feeding it training data to teach the system all the N-grams and chance, then what? You now have a database which tells you what kind of combinations occur with what chance. As Hunt already said, this is actually exactly what Google does. You say it’s different, because it does not understand words such as `Howesedfaboutttyrlnowfrt’ but that is because it is not trained to do so. It was not created to extract patterns and words from words like this. However, and this is the important part, with all the data that is in Google’s database, they could do this, because all the required information is in the database. It’s only a matter of statistics and calculating likelihoods of events.
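Mark’s storage concern can be made concrete: the number of possible letter strings of length n over a 26-letter alphabet is 26^n, so storing every combination explicitly blows up very quickly. A quick back-of-envelope check:

```python
# Exponential growth of the space of possible letter strings.
for n in range(1, 9):
    print(n, 26 ** n)
# length 8 alone already allows 208,827,064,576 distinct strings
```

This is why any practical scheme has to store only the combinations actually observed (or their statistics), not the full combinatorial space.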

 

 
  [ # 42 ]

Genesis, When you say you are at 20% of phase 1, does that mean, coded or concept definition?

 

 
  [ # 43 ]
Mark tM - Oct 12, 2011:

Interesting thread so far smile

Ok Genesis, let me try to summarize phase 1. You will try to find patterns in combinations of letters, words and sentence-fragments. These will mostly be N-grams, so a certain combination of letters or words, correct?

Correct.

Mark tM - Oct 12, 2011:

If such an N-gram is perceived often enough, it becomes a ‘concept’ as you call it, but basically it means that the chance of perceiving this N-gram has passed a certain threshold.

Yep.

Mark tM - Oct 12, 2011:

One problem I spot here is data storage. Storing all possible combinations of letters, characters and words plus their chance of occurrence is a humongous job.

Not quite, the memory component of the system is not a typical storage database. It won’t store all combinations of letters. It will only store 26 objects, one for each letter of the alphabet. The combinations are links that connect an object to the other objects it’s related to. So a link can be generated for the object “w” that connects to the objects “o”, “r” and “d”.

Each time those combination of letters occur, the link is strengthened. If it passes a threshold then a new concept is created based on the link as “word”. If it doesn’t pass the threshold then it will eventually be forgotten.

The combination of words (concepts) work in similar fashion. Words used together often are brought closer in the memory to form a cluster of words with each word in the cluster probably having its own cluster of words.

So we have clusters of concepts within clusters of concepts. Think of it like the universe, which has galaxies, which in themselves are clusters of stars, and clusters of galaxies (galaxy groups). What brings these concepts together? The memory manager (see pg. 1). Think of it as an outside gravitational force, pulling concepts together and also capable of pulling them back apart. This clustering will prove to be useful.

And it’s really not about the chance of occurring, it’s how often a pattern occurs. You learn things by repetition. If something happens once, you will likely forget it. If something happens multiple times, it sticks somewhere deep in the recesses of your mind.
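The repetition-versus-forgetting behavior described here (links strengthened by repetition, weakened and eventually dropped when unused) might be sketched like this; DECAY and THRESHOLD are arbitrary assumed values, and the class is purely illustrative:

```python
DECAY = 0.5        # assumed: per-cycle weakening of an unreinforced link
THRESHOLD = 2.0    # assumed: strength needed to promote a link to a concept

class Link:
    def __init__(self):
        self.strength = 0.0

    def reinforce(self):
        self.strength += 1.0    # repetition strengthens the link

    def tick(self):
        self.strength -= DECAY  # unused links fade over time
        return self.strength > 0  # False once the link is forgotten

once = Link()
once.reinforce()                # seen a single time...
for _ in range(2):
    alive = once.tick()
print(alive)                    # False: forgotten after two idle cycles

often = Link()
for _ in range(3):              # seen repeatedly
    often.reinforce()
print(often.strength >= THRESHOLD)  # True: strong enough to become a concept
```

The same decay-versus-reinforcement trade-off would apply to the word clusters described in the preceding paragraphs.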

Mark tM - Oct 12, 2011:

But let’s continue. After doing this, and feeding it training data to teach the system all the N-grams and chance, then what? You now have a database which tells you what kind of combinations occur with what chance. As Hunt already said, this is actually exactly what Google does. You say it’s different, because it does not understand words such as `Howesedfaboutttyrlnowfrt’ but that is because it is not trained to do so. It was not created to extract patterns and words from words like this. However, and this is the important part, with all the data that is in Google’s database, they could do this, because all the required information is in the database. It’s only a matter of statistics and calculating likelihoods of events.

The memory doesn’t deal with chance or statistics; it deals solely with how strong a concept is and how close it is to other concepts and objects. Chance and statistics is itself a modeled concept in the brain, which is learned. But we can agree to disagree about what Google does and how the data in its DB is stored.

But there are other components. Google just does search and retrieval: “what” brings up “what is my ip”.

There is no thinking (which is using the results to search the memory again for additional results). You could call it genuine inspiration. Your mind continuously does this; we create and perceive in our minds simultaneously, and our mind does this so well that we are not even aware that it’s happening.

Or generating inputs to query the memory with. I used this just yesterday on my structured design test. I was stuck on a problem that asked for the definition of a word (“_____ check”). I interrogated my mind by thinking “check…check…check…something check does…”

One result that activated was “desk checking”. Then I queried my mind for the meaning, which resulted in the sentence meaning activating in my mind. But it didn’t match the definition of the word I was looking for. So I discarded it and started another query, making it apparent in the query to discard anything that deals with “desk checking”.

But unfortunately I was unable to retrieve the answer. Why? I think it’s because I didn’t study that part of the test. But since it was a bonus question, it doesn’t hurt me anyway.

But this is how our mind works. I will get into that later.

Nor is there reasoning (comparing a concept with another concept).

Jan Bogaerts - Oct 12, 2011:

Genesis, When you say you are at 20% of phase 1, does that mean, coded or concept definition?

Coded. I have about 305 lines of code, which is good. There are 3 major parts of phase one:

1) Pattern recognition (which makes up 40%)
2) Thinking (which makes up 20%)
3) Reasoning (which makes up 40%)

I have only completed about 20% of the pattern recognition module. It can successfully map objects of letters and concepts of words in a controlled environment at the moment.

But there is a long way to go to get it where it should be. I will be working on the spatial and sequential parts of the recognition all day today. I will also be programming three layers of the pattern recognition. Depth is a necessity when you want to detect varieties of patterns.

 

 
  [ # 44 ]
Genesis - Oct 12, 2011:
Mark tM - Oct 12, 2011:

One problem I spot here is data storage. Storing all possible combinations of letters, characters and words plus their chance of occurrence is a humongous job.

Not quite, the memory component of the system is not a typical storage database. It won’t store all combinations of letters. It will only store 26 objects, one for each letter of the alphabet. The combinations are links that connect an object to the other objects it’s related to. So a link can be generated for the object “w” that connects to the objects “o”, “r” and “d”.

How many bytes of memory do you think this will need? 26?

Do you even know how to calculate it?

Have you ever even done any computer programming before?

If so, in what language(s) and on what projects?

It still sounds like you haven’t got a clue what you are talking about.

 

 

 
  [ # 45 ]
Andrew Smith - Oct 12, 2011:
Genesis - Oct 12, 2011:
Mark tM - Oct 12, 2011:

One problem I spot here is data storage. Storing all possible combinations of letters, characters and words plus their chance of occurrence is a humongous job.

Not quite, the memory component of the system is not a typical storage database. It won’t store all combinations of letters. It will only store 26 objects, one for each letter of the alphabet. The combinations are links that connect an object to the other objects it’s related to. So a link can be generated for the object “w” that connects to the objects “o”, “r” and “d”.

How many bytes of memory do you think this will need? 26?

Do you even know how to calculate it?

Have you ever even done any computer programming before?

If so, in what language(s) and on what projects?

It still sounds like you haven’t got a clue what you are talking about.

Of course I have no clue what I’m talking about. I would be disappointed if you said otherwise. If I had a dollar for every negative remark, downright false assumption, and conceited point of view in your post, I would be rich by now wink

But it’s really amusing. I get a chuckle out of it. LOL

But yes, I have programmed before. I have been programming since I was 12. I program in C++, VB, and PHP. I was also a part of a team that was building a driverless vehicle for the DARPA Urban Challenge. Just to give you a sense of the complexity of the project: Stanley, the car that won the Grand Challenge, had millions of lines of code (and that was the Grand Challenge, not the Urban!). Unfortunately, I had to drop out of the team because I was losing concentration on my school work.

Here are some of the multi-player card games I made when I was 15-16:

http://i156.photobucket.com/albums/t39/Bladerskb/gamescreen1.png
http://i156.photobucket.com/albums/t39/Bladerskb/Untitled-2-1.png
http://i156.photobucket.com/albums/t39/Bladerskb/decl.png

 
