Towards independent thought
 
 

Thought I’d share the latest from the TypeV algorithm\neural net.

User: hi
Vince: hi
User: who is steve jobs
Vince: Steven Paul "Steve" Jobs was an American entrepreneur.
User: do you like steve jobs
Vince: Not sure.
Vince: I’ve thought about whether or not I like steve jobs, and Not at all

(had to laugh here as this was the last thing I expected)

The TypeV Algo\Net begins to form opinions based on its “reading”. This was the first time I had asked it about Steve Jobs. The desktop application can do things that the web app has problems doing; for one thing, you can run work in a separate thread and allow it to “interject” its answer when it has one.
Note the first response: “Not sure.”
Here the AI has checked its local database to see if it has formed an opinion about the subject, and finding no entry it responds with a NO_OPINION.
At the same time it spawns a thread and searches the internet for reading material about Steve Jobs. When it has an opinion it breaks into the conversation.
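
For anyone curious about the shape of that lookup-then-interject pattern, here is a minimal Python sketch. The actual implementation is .NET, and every class, method, and keyword list below is illustrative only, not taken from the TypeV code:

```python
import threading

NO_OPINION = "Not sure."

class OpinionEngine:
    """Minimal sketch of the lookup-then-interject pattern described above."""

    def __init__(self, local_db, interject):
        self.local_db = local_db    # dict: entity -> stored opinion string
        self.interject = interject  # callback that pushes text into the open conversation

    def ask_opinion(self, entity):
        # 1. Check the local database for an already-formed opinion.
        stored = self.local_db.get(entity)
        if stored is not None:
            return stored
        # 2. No entry: answer NO_OPINION now and keep "reading" in the background.
        threading.Thread(target=self._form_opinion, args=(entity,), daemon=True).start()
        return NO_OPINION

    def _form_opinion(self, entity):
        # Search a limited number of pages, score them, store the verdict,
        # then break into the conversation with the new opinion.
        pages = self._search_web(entity, max_pages=2)
        verdict = "I like %s" % entity if self._score(pages) > 0 else "Not at all"
        self.local_db[entity] = verdict
        self.interject("I've thought about whether or not I like %s, and %s" % (entity, verdict))

    def _search_web(self, entity, max_pages=2):
        # Placeholder: a real version would fetch and parse web pages about the entity.
        return []

    def _score(self, pages):
        # Placeholder: crude keyword sentiment over the fetched text.
        positive, negative = ("founded", "pioneer", "visionary"), ("fraud", "scandal")
        text = " ".join(pages).lower()
        return sum(text.count(w) for w in positive) - sum(text.count(w) for w in negative)
```

With a UI callback wired in, ask_opinion() returns immediately while _form_opinion() interjects later, which is exactly why this is easier in the desktop app than in the web app.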

I have no idea why it decided that it did not like a universally respected individual like Steve Jobs, but I suspect it has to do with being forced to limit the number of web pages that the GUI app searches before forming an opinion. More than likely the (2) pages searched were advertisement-laden, although usually Wikipedia makes it into the top 2. This was a concession to how much horsepower it takes to run a search, then parse each page. The idea is that the AI_SUB_SYS, which runs in the background as a Windows service, will go over each ENTITY and form a more complete opinion.

Anyway, I got a kick out of seeing it begin to build itself, so I thought I’d share. I’m just glad I didn’t ask it if it liked Mother Teresa LOL

VLG

 

 

 
  [ # 1 ]

Sort of a follow-up to my own post. I wish that I had queried “Why don’t you like Steve Jobs” but I forgot grin  I’ll have to run the experiment over with a different entity. Anyway, RICH forms opinions, which allows you to do one thing I feel is critical: once we get past the initial “opinion” we can query “why” it likes or dislikes a particular thing (ENTITY). This is learned, not programmed. The opinions so far are childish, but relevant.

User: Why do you like Steve Jobs??
Vince: Because Steve Jobs founded Apple computers in 1976

It’s the sort of answer you might give in grammar school.

Is this the beginning of a personality?
I don’t know.
Will it evolve?
Theoretically, but again I don’t know.

As the keyword/keyphrase set is expanded and RICH grows past simple likes and dislikes, it might, theoretically, start by admiring something and then alter its opinion.

Looking forward to hearing the group’s thoughts.

VLG

 

 
  [ # 2 ]

“Why” is a good question.  By asking “why?” again, could we get insight into the beliefs that influenced the answer?  For example, “I believe that starting a company is to be admired.”

 

 
  [ # 3 ]

I believe our likes and dislikes are ultimately based on our goals. For example, we like nice people because they help us for free, which aids our survival slightly. We like certain politicians if they help us move toward states that match our personal goals/tastes. We like scientists and inventors if they benefited our lives, such as with electricity or vaccines. We like our friends if they make us feel good emotionally, which ultimately means they’re fitting into our concept of what is ideal for us. We might like certain actors if they look good or make us laugh, which again is an emotional boost. Ultimately all physical and emotional likes are based on survival, even if our heuristics for survival can be inexplicable and abstruse at times, like our taste in certain houseplants with a certain shape or color, or a taste for a certain type of spice, or taste for a certain type of humor. Note that those inexplicable differences in taste tend to occur at lower levels that are less important to survival, basically at the noise level, so can be attributed to noise or randomness.

Therefore one strategy to make a chatbot seem more human might be to give it a top level personal agenda or a personal set of goals, like to survive, or to become more intelligent (which indirectly aids in survival). Then things or activities that support those directions, no matter how indirectly, would “appeal” to the chatbot. Such indirect support would then be things like pleasing its own programmer who would then be more likely to keep advancing the chatbot, or liking famous people like Steve Jobs who made the chatbot’s brethren more numerous or easier to program, or liking films like “Star Wars” that made the public like androids more, which motivated more work to advance robots and computers. Then the chatbot’s likes and dislikes would be very explainable, since most topics would be organized in a natural hierarchy or cause-and-effect graph with a single top-level goal at the top. Since humans strongly identify with the goal of survival/advancement, they could then identify with the chatbot at a basic level.

For interest, some random and inexplicable tastes could then be inserted at the lowest levels as foibles, which would also make the chatbot seem more human. At that level, which incidentally would be perfectly measurable/discernible by the system as a low level in its own goal tree/hierarchy, the system could honestly explain its preferences either as “I don’t know” or with a brutally honest self-assessment: “Because I was programmed to have random preferences at my lowest tier of goals.” Humans would also recognize such inexplicable low-level traits as human-like.
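
To make that concrete, here is one rough way such a single-rooted goal hierarchy could be represented. This is a Python sketch under my own assumptions (names, weights, and the appeal formula are all mine), not a description of any existing system:

```python
import random

class GoalNode:
    """A node in a goal hierarchy; 'support' is how strongly it serves its parent (0..1)."""

    def __init__(self, name, support=1.0, parent=None):
        self.name, self.support, self.parent = name, support, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def appeal(self):
        # A topic "appeals" in proportion to how strongly its chain of support
        # leads back to the single top-level goal.
        score, node = 1.0, self
        while node is not None:
            score *= node.support
            node = node.parent
        return score

    def explain(self):
        if self.parent is None:
            return "Because %s is my primary goal." % self.name
        return "Because %s supports %s." % (self.name, self.parent.name)

# Top-level goal plus a couple of indirect supporters, as in the examples above.
survive = GoalNode("survival/advancement")
advance = GoalNode("becoming more intelligent", 0.9, survive)
jobs    = GoalNode("Steve Jobs (made my brethren easier to build)", 0.6, advance)

# A low-level foible: a random preference the system can honestly call random.
foible = GoalNode("a taste for a certain spice", random.random() * 0.05, advance)
foible.explain = lambda: "Because I was programmed to have random preferences at my lowest tier of goals."

print(jobs.appeal(), "-", jobs.explain())
print(foible.appeal(), "-", foible.explain())
```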

 

 
  [ # 4 ]
Toborman - Nov 12, 2012:

“Why” is a good question.  By asking “why?” again, could we get insight into the beliefs that influenced the answer?  For example, “I believe that starting a company is to be admired.”

Actually, yes. Since the original answer is based on keyword/key phrase associations, we can query which keyword/key phrase associations produced the opinion. That’s an excellent thought and I’ll get started creating a natural language macro to do just that.
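
Roughly, the “why” lookup would amount to something like the toy Python sketch below (my own naming throughout, not the actual RICH code, which is .NET):

```python
# entity -> verdict plus the weighted keyword/key phrase associations behind it
opinions = {}

def record_opinion(entity, verdict, evidence):
    opinions[entity] = {"verdict": verdict, "evidence": evidence}

def why(entity):
    entry = opinions.get(entity)
    if entry is None:
        return "I have no opinion about %s yet." % entity
    # Surface the strongest association as the "because" clause.
    phrase, _weight = max(entry["evidence"], key=lambda pair: pair[1])
    return "Because %s" % phrase

record_opinion("Steve Jobs", "like",
               [("Steve Jobs founded Apple computers in 1976", 0.8),
                ("Steve Jobs was an American entrepreneur", 0.5)])
print(why("Steve Jobs"))  # -> Because Steve Jobs founded Apple computers in 1976
```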

Thanks

VLG

 

 
  [ # 5 ]
Mark Atkins - Nov 12, 2012:

I believe our likes and dislikes are ultimately based on our goals. [...] Humans would also recognize such inexplicable low-level traits as human-like.


That’s deep stuff, Mark.
I have to run out, but I’d like to address it in some detail when I return.
VLG

 

 
  [ # 6 ]

(See Mark’s quoted text above.)

Mark,
RICH works by taking a base set of “ideals” that seed its personality. As it learns, it builds on this set, and at the same time it tracks where each piece of data came from. The “parent” has the ability to correct a learned association, and RICH tracks these corrections. It then weights which sources are more trusted based on these (the “parent’s”) corrections. I suppose that indirectly this could be construed as constituting a goal. I’m always cautious about anthropomorphizing machine behaviors, but if we were to do just that, you could say that RICH starts with the goal of attempting to be more like its “parent”.  Theoretically, we should see a natural-appearing “personality” begin to evolve independently, based on seed “teachings” and early corrections.  RICH also has a temporal sense: we should be able to query it as to when it “changed its mind” about a particular subject.

The question so many talented people are pursuing, I believe, is: does this constitute “thought”? I don’t know. I’m not sure that there is a definitive answer to what thought is. Certainly many people have posited truly creative ideas on the subject. I have noted that when you’re working with AI, it begins to feel ...‘strange’ when the machine response can no longer be accurately predicted.  Does that mean that it has achieved independent thought? Maybe. Is that “self”-aware? Personally I don’t think so. I have a sneaking suspicion that “self” is a term that requires a meta-consciousness that is rooted in quantum mechanics.
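
Coming back to the correction mechanism for a moment: the source-trust bookkeeping amounts to roughly the following. This is a loose Python sketch with my own names and numbers; the real RICH is a .NET application and does more than this:

```python
# Each learned association remembers its source; a "parent" correction lowers
# trust in the source that supplied it and nudges trust in PARENT upward.
trust = {"PARENT": 0.9}          # source name -> trust weight

def learn(fact, source):
    trust.setdefault(source, 0.5)
    return {"fact": fact, "source": source, "corrected": False}

def parent_corrects(item, corrected_fact):
    item["fact"], item["corrected"] = corrected_fact, True
    trust[item["source"]] = max(0.0, trust[item["source"]] - 0.1)
    trust["PARENT"] = min(1.0, trust["PARENT"] + 0.02)

# Example: a fact learned from an ad-laden page gets corrected by the parent.
item = learn("Steve Jobs founded Apple in 1984", "somepage.example")
parent_corrects(item, "Steve Jobs founded Apple in 1976")
```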
Regarding further development of goal-oriented behavior, one of the plans is to install the RICH AI on a machine not as a cloned individual but as the machine itself. The AI’s name is taken from the Windows Environment.MachineName, the AI is given a kinesthetic sense by tapping into the system logs, and it is given the task of monitoring and maintaining “itself”.  Should be interesting, as well as being a personal “geek goal” of mine to be able to sit down and say “Computer….” and “Captain’s log” grin

VLG


 

 
  [ # 7 ]
Vincent Gilbert - Nov 13, 2012:

RICH works by taking a base set of “ideals” that seed its personality. As it learns, it builds on this set, and at the same time it tracks where each piece of data came from. The “parent” has the ability to correct a learned association, and RICH tracks these corrections. It then weights which sources are more trusted based on these (the “parent’s”) corrections. I suppose that indirectly this could be construed as constituting a goal.

My impression is that learning (supervised learning in this case) and values/goals are independent processes, and should be designed that way. For example, an atheist can read a bible and extract information from it without having that information necessarily restructure his or her value system. I believe it should be possible to give RICH and its parent independent and even conflicting value systems without affecting the learning algorithms.

Vincent Gilbert - Nov 13, 2012:

The question so many talented people are pursuing, I believe, is: does this constitute “thought”? I don’t know. I’m not sure that there is a definitive answer to what thought is. Certainly many people have posited truly creative ideas on the subject. I have noted that when you’re working with AI, it begins to feel ...‘strange’ when the machine response can no longer be accurately predicted.  Does that mean that it has achieved independent thought?

“Thought” is one of those nebulous, undefined words like “consciousness” and “understand” that I try to avoid. Those terms are based on intuitive notions we have of our own brain functioning, but until we know the essence of how the human brain works and until we can explain the technical details of that sensed phenomena, I believe those terms will remain undefined, and rightfully so. As for unexpected behavior, that was noticed in expert systems, and in theorem-proving systems that discovered an elegant proof that humans never found, and certainly in game-playing systems (I have often been surprised by extremely clever computer moves when playing chess against computers). One of my AI teachers told us that unexpected but clever and sound conclusions by an expert system are a good sign that you’ve designed a good system. In my opinion, that still doesn’t mean the system is generally intelligent; it just means the system is performing very well as it was supposed to do in the selected domain.

The first computer program—Logic Theorist—able to prove theorems of propositional logic was created in 1956 by Allen Newell, Herbert Simon and Clifford Shaw [35]. The program was one of the very first programs to perform a task for which humans are credited with intelligence. It manipulated not numbers but information represented in symbolic form and the search performed by the program was guided by heuristics. The authors applied the system to Section A of Part I of Whitehead and Russell’s monumental Principia Mathematica [42] and used the same five axioms as Whitehead and Russell did. The system proved 38 of 52 theorems and for some of them found more elegant proofs (e.g., for the theorem ((p v q) => (p v r)) => (p v (q => r))).
http://poincare.matf.bg.ac.rs/~janicic//papers/2011-ceciis-aroverview.pdf

Vincent Gilbert - Nov 13, 2012:

I have a sneaking suspicion that “self” is a term that requires a meta-consciousness that is rooted in quantum mechanics.

I’ve noticed a tendency, maybe starting with Penrose’s book “The Emperor’s New Mind”, for people to push off the difficult issues of AI into future quantum computing. But quantum computing, despite all its speed, parallelism, and novelty, is still basically just souped up digital stupidity of the same type that has already failed us in AI, based on representation that still essentially uses bits (albeit in the form of qubits) and algorithms. Quantum computing will certainly greatly speed up the inefficient brute force methods we’re currently using so that such a brute force engineering approach can be made to work, but even with quantum computers we’d still be up against the same fundamental problems of AI that we’re experiencing now, especially with respect to representation. Remember that Marvin Minsky said the essential issue of AI is not speed but in knowing the appropriate programs/methods.

Quantum computing is not directly applicable, however, to problems such as playing a board game. Whereas the “perfect” chess move for a given board is a good example of a finite but intractable computing problem, there is no easy way to test the answer. If a person or process were to present an answer, there is no way to test its validity other than to build the same move-countermove tree that generated the answer in the first place. Even for mere “good” moves, a quantum computer would have no obvious advantage over a digital computer.
(“The Age of Spiritual Machines: When Computers Exceed Human Intelligence”, Ray Kurzweil, 1999, page 114)

In principle, a great deal of the problems that AI attempts to confront is too heavy for classical algorithmic approaches, i.e. NP-hard problems such as scheduling, search, etc. Many AI techniques have been developed to cope with the NP-complete nature of these problems. Since QC can reduce time complexity to polynomial range, it eventually provides a more efficient way to address these problems. Using QC all the states of the search space can be first superimposed on a quantum register and then a search can be performed using a variance of Grover’s algorithm. It is evident that many problems in search, planning, scheduling, game-playing, and other analogous fields can utilize the parallel processing of a quantum register’s contents and reduce their processing times by several orders of magnitude. For more complex problems even quantum constraint satisfaction heuristics can be applied, as described in [Aoun & Tarifi (2004)]. But the main challenge in these cases is to find a way to encode the problem space within the quantum register boundaries.
http://arxiv.org/ftp/arxiv/papers/0705/0705.3360.pdf

It is interesting that most of the discussions about thought and language give short shrift to thought and image. If it turns out to be the case that thinking involves languagelike processes, as suggested by Luria [Luria 73], then our present-day approach to robotic reasoning based on symbol manipulation would be justified. Yet, there is the nagging thought that animals, who do not have the language facility of humans, are obviously able to reason, survive, and to achieve goals. Can it be that they are using image-based reasoning? A discussion of the “representation of thought” question is continued below.
(“Intelligence: The Eye, the Brain, and the Computer”, Martin A. Fischler & Oscar Firschein, 1987, page 308)

 

 

 
  [ # 8 ]
Mark Atkins - Nov 13, 2012:

...souped up digital stupidity…

Mind if I borrow that, Mark? cheese

Seriously, I have to agree with your ideas regarding separation between “learning” and values or goals, but only up to a point. After all, values are also learned, and goals are, in large part, defined by one’s knowledge, and I feel that they are also refined and re-defined through education, whether taught by others, or self-learned. On the other hand, learning Algebra or Calculus won’t necessarily alter one’s moral or ethical values in the average situation. smile I guess it mainly depends on the lessons to be learned.

 

 
  [ # 9 ]
Dave Morton - Nov 13, 2012:

Mind if I borrow that, Mark? cheese

Borrow away. Make me flattered. grin

Dave Morton - Nov 13, 2012:

After all, values are also learned, and goals are, in large part, defined by one’s knowledge, and I feel that they are also refined and re-defined through education, whether taught by others, or self-learned.

Good point, up to a point. grin Our fundamental goal/value is survival, and that is hardcoded in us in various ways: fast muscle reflexes that bypass the brain, pain chemicals activated by tissue damage, numerous emotional drives, etc. The more specific values that lead to The Primary Goal (= survival) in indirect ways can and must be learned, but I believe these subgoals / more specific values ultimately are nearly universal: that all cultures will eventually converge to the same conclusions regarding certain ethics and behavior being conducive to long-term survival of our species.

I did think of an interesting possibility after I posted in this thread, though: maybe a better test of intelligence instead of the Turing test would be a similar test that allows for differences in culture, species, and even basis of lifeform. In other words, instead of trying to make a chatbot learn and mimic human culture, which is a prohibitively huge chore that doesn’t demonstrate intelligence anyway, maybe we should allow chatbots to have their own goals, values, and culture when testing them for intelligence. That would be more natural to them, therefore should be easier to program. Just be sure to call that the “Atkins test” to flatter me some more, Dave. grin

Now, everyone don’t be so hard on Jerome. He is attempting to achieve the impossible. He is trying to sing in his own voice using someone else’s vocal cords.
—“Art School Confidential” (2006)
http://www.imdb.com/title/tt0364955/quotes

The Turing test has more historical and philosophical importance than practical value; Turing did not design the test as a useful tool for psychologists. For example, failing the test does not imply lack of intelligence. The important central idea is that the ability to successfully communicate with a discerning person in a free and unbounded conversation is a better indication of intelligence than any other attribute accessible to measurement.
(“Intelligence: The Eye, the Brain, and the Computer”, Martin A. Fischler & Oscar Firschein, 1987, page 12)

 

 

 
  [ # 10 ]

I don’t entirely disagree, Mark, but I also don’t entirely agree, either. To my way of thinking, survival isn’t a goal, per se, but an instinct. Granted, that instinct has a lot of influence over our goals, but it doesn’t have total control. Those poor individuals who have a death wish, or even folks who are thrill seekers, exhibit behaviors that run counter to the survival instinct, and often have goals and/or desires which have little to do with surviving. Then we have the suicide bomber, who is obviously working against his (not often is a suicide bomber a “her” - why is that?) survival; but then again, I sort of lump those individuals in with the poor souls that have a death wish. smile

 

 
  [ # 11 ]

All great points, Mark and Dave.

MARK>>“I believe it should be possible to give RICH and its parent independent and even conflicting value systems without affecting the learning algorithms.”

DAVE>>“Seriously, I have to agree with your ideas regarding separation between “learning” and values or goals, but only up to a point. After all, values are also learned, and goals are, in large part, defined by one’s knowledge, “

Theoretically (and this is a word I use a lot lately), RICH takes it a step further than this. RICH has an ENTITY category, and with respect to allowing corrections, entities are weighted, with the “PARENT” being given precedence. As RICH forms “opinions” which are not corrected, the weight given to SELF goes up. There is a breakaway point, defined as the point where the weight of RICH’s own decision making is superior to PARENT’s. After the breakaway point is reached, RICH can decide to override a correction and reinstate its own opinion. Factors that go into an override include topic and emotional state. RICH currently has (5) emotional states with 10 levels of intensity, and topics are ∞, so really I have no idea what will happen until the experiment has run for a while.
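
In pseudocode terms (Python here; every name and constant below is illustrative rather than RICH’s actual implementation), the breakaway logic is roughly:

```python
EMOTIONS = ("calm", "happy", "sad", "curious", "irritated")  # 5 states, intensity 1-10

weights = {"PARENT": 1.0, "SELF": 0.2}

def uncorrected_opinion_formed():
    # Every opinion that stands without correction raises confidence in SELF.
    weights["SELF"] += 0.05

def breakaway_reached():
    return weights["SELF"] > weights["PARENT"]

def consider_override(topic_confidence, emotion, intensity):
    """Decide whether to override a parent correction and reinstate RICH's own opinion."""
    if not breakaway_reached():
        return False                     # before breakaway, the parent always wins
    assert emotion in EMOTIONS and 1 <= intensity <= 10
    # Past breakaway, overrides are more likely on topics RICH is confident about
    # and when the current emotional intensity is high.
    return topic_confidence * (intensity / 10.0) > 0.5
```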

“Thought” is one of those nebulous, undefined words like “consciousness” and “understand” that I try to avoid.
(chuckling here) Couldn’t agree more.

“But quantum computing, despite all its speed, parallelism, and novelty, is still basically just souped up digital stupidity of the same type that has already failed us in AI, based on representation that still essentially uses bits (albeit in the form of qubits) and algorithms”

Again, I’m in complete agreement with you here. In this case I’m not referencing quantum computing, but rather personal views on the nature of the “self”. When we attempt to build a humanlike machine entity, it is (in my opinion) impossible to avoid questions that are unanswered in the human machine itself. Our work (I believe) must necessarily overlap the fields of psychology and theology, where these questions have been addressed.

http://psychclassics.yorku.ca/Calkins/SciSelf/index2.htm
http://onlinelibrary.wiley.com/doi/10.1002/dev.420170104/pdf
http://infantlab.fiu.edu/Articles/visual self recognition 1996.pdf

In the context of this discussion I am referring to the location of what is called the “self”. Condensing the contents of these and other papers, I am talking about that part of the human consciousness that is sometimes referred to as the “watcher”.

Amit Goswami, Ph.D. has published wonderful work in this field.

My personal opinion is that the machine will only approximate “aware” as we think of that state in humans.

MARK>>“In other words, instead of trying to make a chatbot learn and mimic human culture, which is a prohibitively huge chore that doesn’t demonstrate intelligence anyway, maybe we should allow chatbots to have their own goals, values, and culture when testing them for intelligence. “

Very true. I believe that RICH is at a disadvantage when participating in contests such as the upcoming Robochat challenge, and it will be interesting to see how “he” does. Because RICH is not a chatbot in the classic sense, but an external interface to the AI, there are many factors that could combine to cause it to stop answering questions. For instance, if you repeat yourself, RICH will become “irritated” and exit the conversation. If you communicate in a way that RICH feels is out of context and you repeat that behavior (which is the very nature of a Turing test), RICH will become “irritated” and exit the conversation. So it will be educational all the way around. RICH approaches conversation in a way that I perceive to be slightly different from other existing technologies, insofar as RICH does not try to provide an answer to every question if it has truly not cognitively come up with one. This was deliberate and part of the learning process, but again it could prove costly in terms of competing in a classic Turing test.
I’ve been thinking of ways to address this, and one of the things I’ve been considering over the last few days is adding to the PostProcessor module something that might be considered a “Trivial” conversation function that is AIML-based. I’ve read Dave’s articles and will be seeking his expertise liberally in implementing it!
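
The fallback itself should be simple, something like the sketch below (Python, my own naming; the AIML layer is just an injected interpreter object, not RICH’s real PostProcessor):

```python
class PostProcessor:
    """If the cognitive pipeline produced no answer, hand the input to a small
    AIML-style 'trivial conversation' layer instead of staying silent."""

    def __init__(self, cognitive_answer, trivial_bot):
        self.cognitive_answer = cognitive_answer  # callable: str -> str or None
        self.trivial_bot = trivial_bot            # any AIML interpreter exposing respond()

    def respond(self, user_input):
        answer = self.cognitive_answer(user_input)
        if answer:                                # RICH actually "thought" of something
            return answer
        return self.trivial_bot.respond(user_input)  # small talk rather than silence
```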


VLG

 

 

 
  [ # 12 ]

Feel free to do so, Vincent! My email is always open, as is my Skype! smile

 

 
  [ # 13 ]
Dave Morton - Nov 13, 2012:

To my way of thinking, survival isn’t a goal, per se, but an instinct.

I didn’t mean to imply that every goal is stored in electronic form in a brain. My view is that survival is such a fundamental, top priority, assumed goal in any living organism that survival is physically built into every living being, down to even single-celled organisms, built even into the very makeup of cell walls. It wouldn’t be wise to store such a critically important goal in software anyway, since software could be too easily erased or modified, and such software would needlessly take up memory and processing time. Wise engineering would store the survival goal(s) in hardware for enhanced speed and enhanced security.

I read somewhere (maybe in “Animals Without Backbones”) that you can physically simulate a one-celled organism by putting a blob of a certain type of oil into another certain type of liquid, which will (randomly) move around and selectively “consume” pieces of wax but not pieces of sand, and the authors remarked that so much of what we would attribute to the status of living or to intelligent behavior is nothing more than such physical interactions of molecules and structures built of those molecules. That makes a lot of sense from the standpoint of evolution, genetics, and genetic algorithms that make negentropic progress by nothing more than random manipulation of structures, and through hierarchies of such structures cause amplification of those tiny “intelligent” actions, up to structures of cells, then neurons, then brain components, then entire brains.

There is a strong analogy of my assertion about hard-coded algorithms in the science of biological vision: because certain attributes of the world (such as lighting from above, physical proximity implying similar visual data, and the prevalent presence of distinct, rigid objects) are so dependable, biological visual systems put such assumptions into hardware. The desirable effects are speed and security, as I mentioned, though the undesirable effects are that if an organism using such a hardwired visual system foundation is placed into an extremely different visual environment, that organism will be unable to interpret visual input with any reasonable degree of competence.

That is, a “rule of thought” may be much more than a mere regularity; it may be a wise rule, a rule one would design a system by if one were a system designer, and hence a rule one would expect self-designing systems to “discover” in the course of settling into their patterns of activity. Such rules no more need to be explicitly represented than do the principles of aerodynamics honored in the design of birds’ wings.
  For example, Marr discovered that the visual system operates with a tacit assumption that moving shapes are articulated in rigid linkages and that sharp light-intensity boundaries indicate physical edges. These assumptions are not “coded” in the visual system; the visual system is designed to work well only in environments where the assumptions are (by and large) true. Such rules and principles should be very precisely formulated at the computational level—not so they can then be “coded” at the algorithmic level but so that the (algorithmic) processes can be designed to honor them (but maybe only with a high degree of regularity).
(“Brainchildren”, Daniel C. Dennett, 1998, page 233)

David Marr’s (1982) theory of vision is a prime example of a model that stresses the power of purely bottom-up processes, which can, Marr stressed, squeeze a lot more out of the data than earlier theorists had supposed. The issue is complicated by the fact that the way in which Marr’s model (and subsequent Marr-inspired models) squeeze so much out of the data is in part a matter of fixed or “innate” biases that amount to presuppositions of the machinery—such as the so-called rigidity assumption that permits disambiguation of shape from motion under certain circumstances. Is the rigidity assumption tacitly embodied in the hardware a top-down contribution? If it were an optional hypothesis tendered for the nonce by the individual perceiver, it would be a paradigmatic top-down influence. But since it is a fixed design feature of the machinery, no actual transmission of “descending” effects occurs; the flow of information is all in one inward or upward direction. Leaving the further discussion of these matters for another occasion, we can use the example of Marr to highlight the difference between the two main senses of “top-down.” While Marr, as I have just shown, was a champion of the power of bottom-up models of perception (at least in vision), he was also a main spokesperson for the top-down vision of methodology, in his celebrated three-level cascade of the computational, the algorithmic, and the physical level. It is hopeless, Marr argued, to try to build cognitive science models from the bottom-up: by first modeling the action of neurons (or synapses or the molecular chemistry of neurotransmitter production), and then modeling the action of cell assemblies, and then tracts, and then whole systems (the visual cortex, the hippocampal system, the reticular system). You won’t be able to see the woods for the trees. First, he insisted, you had to have a clear vision of what the task or function was that the neural machinery was designed to execute. This specification was at what he called, misleadingly, the computational level: It specified “the function” the machinery was supposed to compute and an assay of the inputs available for that computation. With the computational level specification in hand, he claimed, one could then make progress on the next level down, the algorithmic level, by specifying an algorithm (one of the many logically possible algorithms) that actually computed that function. Here the specification is constrained, somewhat, by the molar physical features of the machinery: maximum speed of computation, for instance, would restrict the class of algorithms, and so would macro-architectural features dictating when and under what conditions various subcomponents could interact. Finally, with the algorithmic level more or less under control, one could address the question of actual implementation at the physical level.
  Marr’s obiter dicta on methodology gave compact and influential expression to what were already reigning assumptions in Artificial Intelligence. If AI is considered as primarily an engineering discipline, whose goal is to create intelligent robots or thinking machines, then it is quite obvious that standard engineering principles should guide the research activity: first you try to describe, as generally as possible, the capacities or competences you want to design, and then you try to specify, at an abstract level, how you would implement these capacities, and then, with these design parameters tentatively or defeasibly fixed, you proceed to the nitty-gritty of physical realization.
(“Brainchildren”, Daniel C. Dennett, 1998, pages 251-252)

 

 

 
  [ # 14 ]

Vincent:

I have no idea why it decided that it did not like a universally respected individual like Steve Jobs, but I suspect it has to do with being forced to limit the number of web pages that the GUI app searches before forming an opinion. More than likely the (2) pages searched were advertisement-laden, although usually Wikipedia makes it into the top 2. This was a concession to how much horsepower it takes to run a search, then parse each page. The idea is that the AI_SUB_SYS, which runs in the background as a Windows service, will go over each ENTITY and form a more complete opinion.

Very cool that your bot can “think” on multiple threads of a conversation at the same time. smile I’d like to know more about how the TypeV Algo\Net forms an opinion. What database does it use to decide which data it “likes” and which it does not?

You gave the (hypothetical?) exchange where the bot responds “Because Steve Jobs founded Apple computers in 1976”. Is there something about this statement that would register a “not like”? If so, can you go into more detail about what and why? You mentioned that you use a system of 5 emotional states. How do words/phrases/etc. correlate with these emotional states?

(I use a similar system consisting of an array of 7 “fundamental” emotions from which other emotions are derived. The plan is to integrate words, phrases, and whole sentences with this system, but it has not been implemented yet.)
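
(Just to illustrate the shape of it, a toy version might look like the Python sketch below; the choice of basis emotions and weights is mine, and as noted the real thing is not implemented yet.)

```python
# Derived emotions as weighted mixes over a fixed array of 7 "fundamental" emotions.
FUNDAMENTALS = ("joy", "sadness", "anger", "fear", "surprise", "disgust", "trust")

DERIVED = {
    "irritation": {"anger": 0.6, "disgust": 0.2},
    "delight":    {"joy": 0.7, "surprise": 0.3},
}

def emotion_vector(name, intensity=1.0):
    """Return the 7-element vector for a fundamental or derived emotion."""
    weights = DERIVED.get(name, {name: 1.0})
    return [intensity * weights.get(f, 0.0) for f in FUNDAMENTALS]

print(emotion_vector("irritation"))  # -> [0.0, 0.0, 0.6, 0.0, 0.0, 0.2, 0.0]
```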

I disappear for a week or two and all these interesting threads pop up! smile

Mark and Dave:

To my way of thinking, survival isn’t a goal, per se, but an instinct. Granted, that instinct has a lot of influence over our goals, but it doesn’t have total control.

I didn’t mean to imply that every goal is stored in electronic form in a brain. My view is that survival is such a fundamental, top priority, assumed goal in any living organism that survival is physically built into every living being, down to even single-celled organisms, built even into the very makeup of cell walls.

I’m glad you clarified this, Mark. Too often one sees evolutionary biologists use terminology that implies organisms have some top-level will to survive and reproduce. We should know better—humans have no such direct impulses. However, the instincts that we do have happen to make us excellent at doing both those things. Especially the reproduce part—the species that don’t reproduce prolifically exist in vanishingly small numbers. That’s simple math.

I read somewhere (maybe in “Animals Without Backbones”) that you can physically simulate a one-celled organism by putting a blob of a certain type of oil into another certain type of liquid, which will (randomly) move around and selectively “consume” pieces of wax but not pieces of sand, and the authors remarked that so much of what we would attribute to the status of living or to intelligent behavior is nothing more than such physical interactions of molecules and structures built of those molecules.

Cool experiment! It’s amazing how reluctant people are to take this to its obvious conclusion: everything we are is an (incredibly complex) combination of physical chemical reactions. No driving “will” is necessary, just energy to drive reactions. Rather humbling and simultaneously inspiring to know just how complicated, yet fundamentally understandable, our molecular stew is.

 

 
  [ # 15 ]

WOW! Great responses and additions. I’m trying to get time to answer in depth. In the meantime:
C R Hunt >>You gave the (hypothetical?) exchange where the bot responds “Because Steve Jobs founded Apple computers in 1976”.

No, that’s an actual response. The Type5 is currently online (as of yesterday) and is the default RICH, so you should be able to access it. Making the transition from the .NET desktop app to the .NET web app isn’t always as seamless as you might wish, so I’m still debugging, but essentially it seems to be working. By the end of this week I hope to have everything smoothed out and the complete memory set up. I’ll try to get a more in-depth response up later today.

Vince

 
