

Relationship : Language & Knowledge
 
Poll
The relation of language to knowledge, I believe, is...
They are one and the same! 0
One can communicate pieces of knowledge about the world to another using language 2
language is for communication, but doesn’t communicate knowledge 1
language has no bearing on knowledge 1
other.... 5
Total Votes: 9
 

Hans, I’ve kicked off this thread, since I wanted to leave the other thread (“Philosophy or Results”) to its core idea.

 

 
  [ # 1 ]

Ooh… tough one. I think language is nothing more than a result of the form of our brain. Just like a bird can’t help but sing a song, we can’t help but talk. But does it have to be something more? For most people, I guess, yes. But I wouldn’t necessarily hard-link it to knowledge. I mean, I’ve uttered lots of things in my life which, when sober, don’t really add up to ‘knowledge’. At other times, I talk to myself, so there isn’t much communication with others involved. Perhaps, I’d guess, I’d go for ‘other’, and maybe sometimes ‘communication’.

 

 
  [ # 2 ]

And yet another poll to get people to vote on a premise. This is not what discussion is about, so I’ll pass.

Besides, this subject is under active discussion in other topics already.

 

 
  [ # 3 ]
Jan Bogaerts - Mar 15, 2011:

Ooh… tough one. I think language is nothing more than a result of the form of our brain. Just like a bird can’t help but sing a song, we can’t help but talk. But does it have to be something more? For most people, I guess, yes. But I wouldn’t necessarily hard-link it to knowledge. I mean, I’ve uttered lots of things in my life which, when sober, don’t really add up to ‘knowledge’. At other times, I talk to myself, so there isn’t much communication with others involved. Perhaps, I’d guess, I’d go for ‘other’, and maybe sometimes ‘communication’.

Jan,

Yes, exceptions to every rule, as the saying goes. Language is certainly not the basis of some forms of knowledge. For example, the knowledge of how to ride a bike: no language involved there.

But in declarative knowledge, language plays a central role, IMHO.

Procedural knowledge?  Sure, there are other systems at work, but I think that language helps immensely in teaching someone procedural knowledge.

 

 
  [ # 4 ]

I think in English, but my knowledge does not stop there. Pattern recognition, creativity and insight happen in a different process.

What language do you think in? Or is your thought process in pictures, or some other form?

 

 
  [ # 5 ]
Merlin - Mar 15, 2011:

I think in English, but my knowledge does not stop there. Pattern recognition, creativity and insight happen in a different process.

Agreed.

Merlin - Mar 15, 2011:

What language do you think in? Or is your thought process in pictures, or some other form?

I think visually for the most part.

 

 
  [ # 6 ]

My thought processes (or, more accurately, the ones that I notice) are usually a combination of “images”, or “videos” that I “see”, and an “internal running dialog” that I “hear” within my head, either recreating past conversations, or outlining, step by step, the procedures or processes involved in whatever task I’m currently working on, or “narrating” any text that I may be reading. Thus, I would have to say that, in part, I “think” in Swahili! raspberry Ok, ok. English. smile

 

 
  [ # 7 ]

Quite right; I also mentally review past conversations, or content from videos I have watched.

 

 
  [ # 8 ]

Ah yes, so how important are things like introspection and reflection for AI? I think, at least for strong AI (in the vein of ‘artificial consciousness’), these things are in fact part of the puzzle.

 

 
  [ # 9 ]
Hans Peter Willems - Mar 15, 2011:

Ah yes, so how important are things like introspection and reflection for AI? I think, at least for strong AI (in the vein of ‘artificial consciousness’), these things are in fact part of the puzzle.

Of that, I have no doubt, Hans Peter. There almost has to be some sort of internal self-evaluation function or routine that continually reviews “known” data (e.g. memories, “values” or morals, etc.) in the background, that acts as a sort of “subconscious” component. It should be made to have little, if any, impact on the current “conscious train of thought”, perhaps, but it should still exist, constantly refining things, and making “adjustments” to the bot’s personality, based on the sum total of accumulated knowledge and experience. Or maybe not. smile
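Dave’s idea of a background “subconscious” routine that reviews known data and slowly adjusts personality could be sketched, very roughly, like this. Everything here is illustrative (the class name, the single `optimism` trait, and the sentiment-averaging rule are my own assumptions, not anyone’s actual implementation):

```python
import threading

class SubconsciousReviewer:
    """Background 'subconscious' routine: periodically re-reads the
    accumulated memories and nudges a personality trait toward their
    average sentiment, without blocking the 'conscious' main loop."""

    def __init__(self, memories, personality, interval=1.0):
        self.memories = memories        # list of (text, sentiment) pairs
        self.personality = personality  # e.g. {"optimism": 0.5}
        self.interval = interval
        self._stop = threading.Event()

    def review_once(self):
        # Drift 'optimism' a small step toward the average sentiment
        # of everything experienced so far.
        if not self.memories:
            return
        avg = sum(s for _, s in self.memories) / len(self.memories)
        self.personality["optimism"] += 0.1 * (avg - self.personality["optimism"])

    def start(self):
        # Daemon thread: runs quietly in the background, as Dave suggests,
        # with little impact on the conscious train of thought.
        def run():
            while not self._stop.is_set():
                self.review_once()
                self._stop.wait(self.interval)
        threading.Thread(target=run, daemon=True).start()

    def stop(self):
        self._stop.set()
```

The small step size (0.1) is what keeps the adjustment gradual, so the “personality” changes over the sum of experience rather than lurching with each new memory.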

 

 
  [ # 10 ]
Hans Peter Willems - Mar 15, 2011:

Ah yes, so how important are things like introspection and reflection for AI? I think, at least for strong AI (in the vein of ‘artificial consciousness’), these things are in fact part of the puzzle.

EXTREMELY IMPORTANT! 

Central in fact.

One of the first topics of discussion for Grace will be... the CLUES engine! And the updating of her own Data-Base (and Logic-Base).

Also, down the road will be topics like what Raymond Smullyan talked about in his incredible book “Satan, Cantor, and Infinity”.

A bot that can reason through some of those puzzles SHOULD BE considered somewhat intelligent!

 

 
  [ # 11 ]

Dave, you just gave me a new idea: I have already been thinking about this reflection, introspection, or even daydreaming (looking at a mental movie). But your hint of a subconscious ‘adjustment’ process got me thinking that maybe this is all the same thing. Maybe when our brain goes into freewheel mode, this subconscious process lifts into the conscious process and gives us daydreams and such. This would also address the AI-related problem that human thought processes are continuous, and how to emulate this in AI.

 

 
  [ # 12 ]

Another good question, although a bit off topic, is...

Can a bot have the equivalent of “mind / body” separation like a human has? (No, Hans Peter I won’t make a poll on it smile )

But, I am certainly going to give it a try myself.  In fact, right now, when Grace has a problem executing, for example if a file doesn’t exist (that should exist, to pull data from perhaps), I can create it, and respond ‘ok, try now’, and she determines that she should retry.  So the discussion can be about the topic at hand… and even turn into talking about things that have happened in her process of dealing with the input.

The mind/body thing, I think, IS possible in a bot.

The ‘mind’ will be the core processing engine that determines what to do; the ‘body’, the things it uses to accomplish it.

Now you always need some kind of ‘resources’, like a) a database connection, b) files, c) an internet connection, d) a knowledge base.

Whatever those resources or external sources are, even rules, all of them can be considered the ‘body’ portion of the system, and the engine that figures out which combination of those external resources to use to satisfy its goals is like the ‘mind’.
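Victor’s mind/body split might be sketched as an engine (‘mind’) choosing among resource objects (‘body’). The resource names and the simple predicate-based planning below are purely illustrative assumptions:

```python
class Resource:
    """Part of the 'body': something the mind can use (a database,
    files, an internet connection, a knowledge base, ...)."""
    def __init__(self, name, can_answer):
        self.name = name
        self.can_answer = can_answer  # predicate: topic -> bool

class Mind:
    """The 'mind': a core engine that decides which combination of
    resources can satisfy a given goal (here, answering a topic)."""
    def __init__(self, resources):
        self.resources = resources

    def plan(self, topic):
        # Pick every resource that claims it can help with this topic.
        return [r.name for r in self.resources if r.can_answer(topic)]

# Hypothetical 'body' for a chat bot:
body = [
    Resource("knowledge_base", lambda t: t in ("birds", "bikes")),
    Resource("internet", lambda t: True),
    Resource("user_files", lambda t: t == "documents"),
]
mind = Mind(body)
```

Asking `mind.plan("birds")` would select the knowledge base and the internet; the point is only that the engine and its resources are cleanly separated, as Victor describes.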

It will be an interesting experiment, if anything!

 

 
  [ # 13 ]
Victor Shulist - Mar 15, 2011:

No, Hans Peter I won’t make a poll on it smile

smile wink

Victor Shulist - Mar 15, 2011:

Whatever those resources or external sources are, even rules, all of them can be considered the ‘body’ portion of the system, and the engine that figures out which combination of those external resources to use to satisfy its goals is like the ‘mind’.

I agree, ‘something’ has to be defined to be the ‘body’, but in my model I’m using sensors that are virtualized; hence they are just symbols that have a running value assigned to them. In reference to the discussion about feelings, it’s the signal (into the software) that counts, not specifically the hardware (the sensors) that generates that signal.
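Hans Peter’s virtualized sensors — symbols with a running value, where only the signal matters — might look something like this minimal sketch (the smoothing factor and the `warmth` sensor are my own illustrative assumptions):

```python
class VirtualSensor:
    """A virtualized sensor: just a named symbol with a running value.
    The software reacts to the signal, regardless of what hardware
    (if any) produced it."""
    def __init__(self, symbol, value=0.0):
        self.symbol = symbol
        self.value = value

    def update(self, signal):
        # Exponentially smooth incoming signals into a running value,
        # so the symbol carries a recent-history summary of the signal.
        self.value = 0.8 * self.value + 0.2 * signal

# Feed a constant signal into a hypothetical 'warmth' sensor:
warmth = VirtualSensor("warmth")
for signal in (1.0, 1.0, 1.0):
    warmth.update(signal)
```

Whether the signal originates from a thermometer, a simulation, or a test harness makes no difference to the software, which is exactly the point being made about feelings.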

 

 
  [ # 14 ]

There almost has to be some sort of internal self-evaluation function or routine that continually reviews “known” data (e.g. memories, “values” or morals, etc.) in the background, that acts as a sort of “subconscious” component. It should be made to have little, if any, impact on the current “conscious train of thought”, perhaps, but it should still exist, constantly refining things, and making “adjustments” to the bot’s personality, based on the sum total of accumulated knowledge and experience.

This is one of the next things I have planned to implement (after a proper output algorithm, and some other additions). Initially it will be very simple, though: for example, when you say ‘the eyes of a human can be blue, red or green’, and it hasn’t processed this info yet, it can ask whether ‘all’ animals have this property, and then further refine its data.
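Jan’s refinement step — store a new property, and if it’s the first time the property has been seen at all, ask whether it generalizes — could be sketched roughly like this (the dict-of-dicts knowledge base and the wording of the question are illustrative assumptions, not Jan’s actual design):

```python
def learn_property(kb, species, prop, values):
    """Store a property for a species in the knowledge base. If this
    property has never been seen for any species, return a follow-up
    question asking whether it applies to all animals."""
    question = None
    if not any(prop in props for props in kb.values()):
        question = f"Do all animals have the property '{prop}'?"
    kb.setdefault(species, {})[prop] = values
    return question

# First mention of 'eye color' triggers the generalization question;
# later mentions of the same property do not.
kb = {}
q = learn_property(kb, "human", "eye color", ["blue", "red", "green"])
```

The answer to the returned question would then decide whether the property is stored at the ‘animal’ level or kept species-specific.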

 

 
  [ # 15 ]

Interesting. Yes, that makes sense for your project, since you are focusing on more than a conversational agent (my bot’s purpose for the foreseeable future will be a text chat system for the purposes of IR, chatting, problem solving, some learning, and entertainment).

Given the fact that you will have these sensor inputs, it is probably much easier for you to really define what is body and what is mind.  I’m going to have to be creative, but I do have several ideas.

For example, in a lot of AI literature, they always speak of :

a) the bot itself

b) its environment

c) the goal of the bot (its “utility function”)

d) a feedback system for positive or negative (‘good boy’, ‘bad boy’)

I guess a) the bot, could be broken down into its core engine and also its knowledge base, rules, etc.

For your project it is probably easier to define, but for me, I think what I will do is:

a) the bot itself
      right now this is clues.cpp, the main core engine that parses input and figures out how to evaluate each parse tree, trying to pick the one with the most semantic relevance.

b) its environment - well, this could be the current state of the dialog with the user; it could also be all the parse trees it has to work with, its knowledge base, and the previous talks it had with the user. Maybe even its connections to 3rd-party databases, the internet, Facebook, Wikipedia, whatever.

c) the goal of the bot ... to figure out what the user is asking for, or what they are commenting on (basically, what they are talking about), and figure out what combination of ‘resources’ to use in order to work out a plan, execute that plan, and give a response to the user.

d) later, when I make a GUI, the user may click a “Good work” button as positive feedback (or a “Bad work” button if it failed to come back with a good response).

So the goal, (c), is to do the parsing, the semantic inference, and the usage of its resources, in order to interpret the input properly and find a response that fits.

In your system, I believe you want your code to be put inside a physical robot later, perhaps? If that is the case, it could have many “utility functions”; one, of course, would be ‘self-preservation’. For my bot, its only goal will be to find good responses... that is, the most relevant responses, responses that take into consideration the most of what was said in the conversation so far, to give a very useful, holistic conversation smile
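The a)-through-d) decomposition above is essentially the classic agent loop from the AI literature, and one cycle of it can be sketched in a few lines. Every name here (the toy response generator, the lowercase-preferring utility, the feedback score) is a made-up illustration, not Grace’s actual code:

```python
def agent_step(bot, environment, utility, feedback=None):
    """One cycle of the agent loop: the bot (a) observes its
    environment (b), picks the candidate response its utility
    function (c) scores highest, and folds in any 'good boy' /
    'bad boy' feedback (d)."""
    percept = environment["current_input"]
    candidates = bot["generate_responses"](percept)
    best = max(candidates, key=lambda r: utility(r, environment))
    if feedback is not None:
        bot["score"] += 1 if feedback == "good" else -1
    return best

# A toy bot: two candidate responses; the utility prefers lowercase text.
bot = {"generate_responses": lambda p: [p.upper(), p.lower()], "score": 0}
environment = {"current_input": "Hello"}
utility = lambda response, env: sum(c.islower() for c in response)
best = agent_step(bot, environment, utility, feedback="good")
```

The interesting part, as the post says, is entirely in the utility function: a chat bot scores responses by conversational relevance, while an embodied robot would add terms like self-preservation.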

 

 
