
ASTRID: Analysing Symbolic Tags Relates to Intelligent Dynamics
 
 
  [ # 46 ]

When you start thinking about bot memories you need to think about how the AI will be used/implemented. If you need to do a more in-depth process that is adjusting the bots on-line memory (like parsing and loading/compressing a database) you may need a sleep mode. Computer architectures allow for a number of different strategies depending on how fast access is needed and how much space is required.

Caching algorithms might give you some food for thought.
http://en.wikipedia.org/wiki/Cache_algorithms
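To make the idea concrete, here is a minimal sketch of one of the strategies from that page, an LRU (least-recently-used) cache, in Python. The class and capacity here are purely illustrative, not part of anyone's bot:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the entry untouched the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes "b" the eviction candidate
cache.put("c", 3)  # capacity exceeded: "b" is evicted
```

The same promote-on-access idea scales up to a bot's working memory, whatever the backing store.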

Hans said, “I’ve just started mapping out the basics for the ‘mind operating system’ that will eventually run my model. It is as basic as possible and has some functional similarities to the human brain. So far I’m mapping out memory, perception, interaction and behaviour. When the design of this mind-OS [...]”

other prior art:
http://www.goertzel.org/papers/SingularityPath_files/image006.jpg

http://mind.sourceforge.net/diagrams.html

 

 
  [ # 47 ]

Merlin, thanks for the links.

The caching won’t be a complex issue because we are talking very long time spans (computer-wise), i.e. hours or even days. So persisting information in a structured way over that period can be handled pretty simply in a database.

As for the ‘prior art’ you linked to; I seriously don’t believe that the answer is going to be found in these kinds of complexity. I’ve mentioned before that adding complexity to a model always implies that you need even more complexity to handle that complexity. In my own design, I’m constantly looking to simplify the model even while adding things to it.

 

 
  [ # 48 ]
Merlin - Feb 26, 2011:

[...]other prior art:
http://www.goertzel.org/papers/SingularityPath_files/image006.jpg

http://mind.sourceforge.net/diagrams.html

Merlin, thanks for the link to the “diagrams” page. Site Meter clued me in that someone had followed your given link, and I arrived here by backtracking the linkage.

It sounds like Hans Peter Willems has a very ambitious project going on.

Last night I was very busy with my Mentifex AI project. A year and a half ago I bought iForth (Intel-Forth) over the ‘Net from Marcel Hendrix in The Netherlands, and I had to download the iForth package over Wi-Fi onto an Acer netbook back then. Last night I verified that my MindForth AI code ran in both 32-bit Win32Forth and in the potentially 64-bit Dutch iForth. I had to haul my Acer netbook into a Wi-Fi tavern yesterday just to work with the iForth codebase. It went really well, and at http://www.scn.org/~mentifex/mfpj.html I wrote up my “lab notes”. Well, back to the salt mines now. Just thanking you for the link. -Arthur

 

 
  [ # 49 ]
Hans Peter Willems - Feb 26, 2011:

Merlin, thanks for the links.

The caching won’t be a complex issue because we are talking very long time spans (computer-wise), i.e. hours or even days. So persisting information in a structured way over that period can be handled pretty simply in a database.

Depending on how you envision the implementation of your theories (embedded in a cell phone or a robot with limited resources versus running in the cloud or a PC), you might want a “sleep mode (non-realtime)” where the volatile data is sifted/compressed and reformatted to run in restricted resources. Would everything that an AI knows need to be stored in local resources, or could it just pull things down as needed from the net? When I implemented Skynet-AI, I chose the second approach. The AI runs locally but anything that is not explicitly required is pulled in as needed or referenced to a web service.
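The pull-as-needed approach could be sketched like this (illustrative Python only; the real Skynet-AI internals are not public, and the dictionary here merely stands in for a web service):

```python
class KnowledgeStore:
    """Keep a small resident core; fetch everything else on demand."""
    def __init__(self, core_facts, fetch_remote):
        self.local = dict(core_facts)     # always in local resources
        self.fetch_remote = fetch_remote  # callable standing in for a web service

    def lookup(self, key):
        if key in self.local:
            return self.local[key]
        value = self.fetch_remote(key)    # pull down only when needed
        if value is not None:
            self.local[key] = value       # cache locally for next time
        return value

# A stub "web service" for illustration
REMOTE = {"capital of France": "Paris"}
store = KnowledgeStore({"my name": "Skynet"}, REMOTE.get)
```

After the first remote lookup the answer is resident, so the AI only pays the network cost once per fact.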

Hans Peter Willems - Feb 26, 2011:

As for the ‘prior art’ you linked to; I seriously don’t believe that the answer is going to be found in these kinds of complexity. I’ve mentioned before that adding complexity to a model always implies that you need even more complexity to handle that complexity. In my own design, I’m constantly looking to simplify the model even while adding things to it.

I would agree. Simplicity is best. Some of your discussion on synthetic senses/perception reminded me of elements in Arthur’s view of an AI mind.

 

 

 
  [ # 50 ]
Arthur T Murray - Feb 27, 2011:

It sounds like Hans Peter Willems has a very ambitious project going on.

Arthur, thanks for your reply. Of course I agree that my project is ambitious.

I don’t want to discredit your work and/or approach, by the way. Your schematics will give me some more insight into how others (you, in this case) perceive the working of a brain. It will certainly give me some things to think about and probably use in my own model. Validating your own model (before building a working model and testing it) is one of the hardest things to do, and looking at work from others might help to put things into perspective. So for that, at least, thanks for your efforts.

 

 
  [ # 51 ]
Merlin - Feb 27, 2011:

Depending on how you envision the implementation of your theories (embedded in a cell phone or a robot with limited resources versus running in the cloud or a PC), you might want a “sleep mode (non-realtime)” where the volatile data is sifted/compressed and reformatted to run in restricted resources. Would everything that an AI knows need to be stored in local resources, or could it just pull things down as needed from the net?

Point well taken. I have not yet been thinking about implementation technicalities. The first implementation will be developed and run on a fairly standard PC. The way I see it, and in line with my ideas on the ‘necessity of simplicity’, processing power is not that important but data-storage IS.

I’m also convinced that in a few years the computing power in a mobile device will surpass anything and everything we currently have available in consumer computing. There is already a test going on to control a whole satellite (in space) from a single Android phone (that is embedded inside the satellite).

 

 
  [ # 52 ]
Hans Peter Willems - Feb 27, 2011:

Point well taken. I have not yet been thinking about implementation technicalities. The first implementation will be developed and run on a fairly standard PC. The way I see it, and in line with my ideas on the ‘necessity of simplicity’, processing power is not that important but data-storage IS.

I have found that there is a hierarchy of “memory” that an AI/bot needs. For example a list of every noun or definition in English would take up a lot of space, but most would not come up in day to day conversation and may not be an efficient use of resources. I try to create the equivalent of human, short term (stuff needed all the time) vs long term (stuff needed infrequently) memory.
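A two-tier memory like that might be sketched as follows (illustrative Python; the promotion threshold and tier names are assumptions, not a description of any actual bot):

```python
class TieredMemory:
    """Short-term tier for the hottest items; everything else stays long-term."""
    def __init__(self, short_term_size=100):
        self.short_term = {}   # small and fast: stuff needed all the time
        self.long_term = {}    # large and slower: stuff needed infrequently
        self.hits = {}         # access counts drive promotion
        self.short_term_size = short_term_size

    def learn(self, key, value):
        self.long_term[key] = value

    def recall(self, key):
        if key in self.short_term:
            return self.short_term[key]
        value = self.long_term.get(key)
        if value is not None:
            self.hits[key] = self.hits.get(key, 0) + 1
            # Promote an item once it has proven itself frequently needed
            if self.hits[key] >= 3 and len(self.short_term) < self.short_term_size:
                self.short_term[key] = value
        return value
```

The point is only that the data structure, not the hardware, decides what is "short-term".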

My philosophy is best summed up by this clip from The Matrix.

Hans Peter Willems - Feb 27, 2011:

I’m also convinced that in a few years the computing power in a mobile device will surpass anything and everything we currently have available in consumer computing. There is already a test going on to control a whole satellite (in space) from a single Android phone (that is embedded inside the satellite).

I agree that any problem that is computer resource bound will eventually yield to Moore’s Law. In fact, there is enough compute power now to run on most platforms. Skynet-AI has run on everything from smart phones to web TV (just tested yesterday) to multi-processor PCs. Skynet, though, has a very small memory footprint.

 

 
  [ # 53 ]

I have found that there is a hierarchy of “memory” that an AI/bot needs. For example a list of every noun or definition in English would take up a lot of space, but most would not come up in day to day conversation and may not be an efficient use of resources. I try to create the equivalent of human, short term (stuff needed all the time) vs long term (stuff needed infrequently) memory.

Completely agree with you. If you want to keep the system as responsive with 100,000 ‘knowledge bytes’ as with 10, you need some levels in your memory model.

 

 
  [ # 54 ]
Merlin - Feb 27, 2011:

I have found that there is a hierarchy of “memory” that an AI/bot needs. For example a list of every noun or definition in English would take up a lot of space, but most would not come up in day to day conversation and may not be an efficient use of resources.

Research has shown that an English-speaking human has an average vocabulary of between 10,000 and 20,000 words, and uses only a fraction of that in daily conversations. In terms of data storage this is a pretty small data-set to handle. Even with my symbolic tagging added (several to sometimes tens of tags per concept) this is still not much in comparison to the current state of data-storage and processing power.

However, as several queries will probably have to be made to formulate a response, some optimization should be useful. So I’ll keep your suggestions in mind.

 

 
  [ # 55 ]
Hans Peter Willems - Feb 27, 2011:
Merlin - Feb 27, 2011:

I have found that there is a hierarchy of “memory” that an AI/bot needs. For example a list of every noun or definition in English would take up a lot of space, but most would not come up in day to day conversation and may not be an efficient use of resources.

Research has shown that an English-speaking human has an average vocabulary of between 10,000 and 20,000 words, and uses only a fraction of that in daily conversations. In terms of data storage this is a pretty small data-set to handle.

Morti’s spelling/substitution routines follow this model as well, to some extent. If I were to look up every word of input in a database list of over 7,000 commonly misspelled words, it would cause a fairly serious performance reduction. So what I did was create an array of the 500 most commonly used words in the English language, and check against that list first. In PHP, checking an array is significantly faster than searching a database, so using this method speeds up response time greatly over checking every word. As I migrate Morti from an AIML architecture to one that uses some form of NLP, I intend to use other, similar methods of data abstraction to improve performance, and optimize response times.
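That fast-path idea translates directly to other languages; a Python sketch (a `set` gives the same constant-time membership test as a PHP array key lookup; the tiny word lists here are stand-ins for the real 500-word and 7,000-entry lists):

```python
# Tiny stand-ins for the real lists: ~500 common words, 7,000+ misspellings
COMMON_WORDS = {"the", "and", "you", "that", "have"}
MISSPELLINGS = {"teh": "the", "adn": "and", "recieve": "receive"}

def correct_word(word):
    """Fast path: skip the expensive lookup for the most common words."""
    w = word.lower()
    if w in COMMON_WORDS:          # O(1) membership test; no database hit
        return w
    return MISSPELLINGS.get(w, w)  # stands in for the slower database query
```

Since the 500 most common words dominate real input, most lookups never reach the expensive branch at all.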

 

 
  [ # 56 ]

A small update on my progress so far.

The following human traits are now handled inside my design:

- Storing of knowledge based on ‘concept mapping’ (conceptual perception).
- Storing of experience based on ‘overloaded symbolic tags’. This is the basis for the learning system.
- Handling ‘emotions’ using the PAD-model.
- Handling sensory input (virtual sensors mapped to concepts).
- Handling planning and prediction based on reversed causality.
- Handling ‘intention’ and ‘ambition’ based on core-concepts, emotions and experience.

Stuff that is partially handled but not yet completely defined:

- Sense of self, identity.
- Behavior based on instinct, emotional state, social references.
- Reasoning; evaluation and rationalism, linked to knowledge and experience.
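For readers unfamiliar with it, the PAD model mentioned above (Mehrabian's Pleasure-Arousal-Dominance model) scores an emotional state on three continuous axes. A minimal sketch in Python; the octant-to-label mapping is purely illustrative and not part of Hans's design:

```python
from dataclasses import dataclass

@dataclass
class PADState:
    """Emotional state on three axes, each in [-1.0, 1.0] (Mehrabian's PAD model)."""
    pleasure: float   # positive vs negative affect
    arousal: float    # energized vs calm
    dominance: float  # in control vs controlled

    def label(self):
        # Crude illustrative mapping from an octant to a named emotion
        if self.pleasure > 0 and self.arousal > 0 and self.dominance > 0:
            return "exuberant"
        if self.pleasure < 0 and self.arousal > 0 and self.dominance < 0:
            return "anxious"
        return "neutral"

state = PADState(pleasure=0.6, arousal=0.4, dominance=0.3)
```

Because the state is just three numbers, blending, decaying, or comparing emotions becomes ordinary arithmetic.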

The further I get, the more things start to fall into place and make sense. Well at least to me, as several discussions have already shown that some of this is seemingly hard to explain. But feel free to ask and I’ll try to explain.

 

 
  [ # 57 ]

Are the Boolean logic operators AND, OR and NOT considered core, or are they learned? I imagine if it knows those 3 basics, it could learn XOR.
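For example, the composition itself is a one-liner:

```python
def xor(a: bool, b: bool) -> bool:
    # XOR built from the three primitives: "a or b, but not both"
    return (a or b) and not (a and b)
```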

 

 
  [ # 58 ]
Victor Shulist - Mar 10, 2011:

Are the Boolean logic operators AND, OR and NOT considered core, or are they learned? I imagine if it knows those 3 basics, it could learn XOR.

Currently those are not among the core-concepts, as I think they can be built upon other (core) concepts. But I’m not yet convinced myself that I have everything covered in the eight core-concepts that I have now. I’m working on a validation-model to check the core-concepts for being consistent, complete and necessary.

FYI, the basic model consists of three levels:

1. Core-definition of our reality, the basic concepts that define the multi-dimensional space around us, and our interaction with this multi-dimensional space. This definition is very abstract.

2. First layer of concepts that build upon the ‘core’; this layer adds ‘world views’ to the abstraction of the ‘core’. It describes how we perceive the abstract concepts in the core-definition in ‘real world’ concepts that we can use to describe ‘other’ concepts.

3. The second layer describes ‘implementations’ of the ‘world-views’ that are in the first layer. The ‘implementations’ are the starting-point for adding ‘context’ to the concepts that are described in the first layer.

The core-concepts will not expand much further than where I am now, because just about everything seems to be covered by them. The first layer is defined for maybe 20% or so (probably even less) and the second layer has only a few concepts defined so far, just enough to define the scope of that layer. So I’m pretty sure that the logic concepts you point out will at least sit somewhere in those primary layers of the model.

 

 
  [ # 59 ]

I just checked, and Boolean Logic is already in the model, in the first layer that’s built on the core-concepts :)

 

 
  [ # 60 ]

Some people here have voiced their opinion about my use of the word ‘concept’. At first that made me think that maybe I was using the wrong word to describe what I meant. However, talking to people outside this forum showed me that my use of the word and the meaning I give to it in my model, is consistent with the accepted vocabulary in AI research (and in philosophy for that matter).

Since then I’ve found several research papers that corroborate my views in this perspective (I linked one of those in several discussions). I mentioned David Chalmers several times, who seems to be one of the major proponents of this view.

I’ve also been charged with ‘not describing my use of -concept- properly’, so just now I thought of looking up ‘concept’ on Wikipedia and here is the result:

A concept (abstract term: conception) is a cognitive unit of meaning—an abstract idea or a mental symbol sometimes defined as a “unit of knowledge,” built from other units which act as a concept’s characteristics. A concept is typically associated with a corresponding representation in a language or symbology such as a single meaning of a term.

There are prevailing theories in contemporary philosophy which attempt to explain the nature of concepts. The representational theory of mind proposes that concepts are mental representations, while the semantic theory of concepts (originating with Frege’s distinction between concept and object) holds that they are abstract objects. Ideas are taken to be concepts, although abstract concepts do not necessarily appear to the mind as images as some ideas do. Many philosophers consider concepts to be a fundamental ontological category of being.

The meaning of “concept” is explored in mainstream information science, cognitive science, metaphysics, and philosophy of mind. The term “concept” is traced back to 1554–60 (Latin conceptum - “something conceived”), but what is today termed “the classical theory of concepts” is the theory of Aristotle on the definition of terms.

http://en.wikipedia.org/wiki/Concept

 
