

I don’t think we will ever really know if we create self-aware AI.
 
 
  [ # 46 ]

Unity3d might give you what you need to create a prototype.

 

 
  [ # 47 ]

It took me some searching to go back to 2006 and find a program (then in beta) called VI Wonder. I think the creator was one John Calicos (forgive spelling if recollection isn’t 100%). It involved a fairly large island with trees, flowers, grass etc.  On this island were also two female, full bodied “avatars” who roamed around the island “discovering” what was there. Every now and then the two would meet to discuss and share what each had learned.

John offered the beta to several members of the forum as I recall and it was quite impressive to play with and witness the results.

Sadly, it appears to have gone by the wayside. The old site was http://www.qflux.net (quantum flux).

Almost sounds like something that would have appealed to Wouter and others.

If I turn up anything else in this regard, I’ll post it here.

 

 
  [ # 48 ]

Ironic. We do not consider colours to be “meaningless symbols” as much as words. Yet colours in such a virtual environment, and also in the real world, would be “seen” by the AI as numbers (RGB values). Should we consider it more aware for storing three numerical variables?
If the answer is yes, then surely we shouldn’t regard words as a less meaningful form of interaction, in my opinion anyway.
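To make that point concrete, here is a minimal Python sketch of what "seeing" and "recognising" a colour amounts to for the AI (the colour names and tolerance are illustrative, not from any system in this thread):

```python
# To the AI, "seeing" a colour means storing three numbers (RGB),
# and "recognising" it means comparing them. Values illustrative.
WHITE = (255, 255, 255)
GRASS_GREEN = (34, 139, 34)

def is_roughly(pixel, reference, tolerance=30):
    """A colour 'match' is nothing more than numeric comparison."""
    return all(abs(p - q) <= tolerance for p, q in zip(pixel, reference))

print(is_roughly((250, 250, 248), WHITE))        # True
print(is_roughly((250, 250, 248), GRASS_GREEN))  # False
```

There is nothing in there that couldn't equally be applied to any other trio of stored numbers.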

 

 
  [ # 49 ]

Interesting, but what numbers would you then assign to various letters or words? ASCII or other values?
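ASCII is the obvious candidate; a quick sketch of what that assignment looks like in practice:

```python
# ASCII (or Unicode) already assigns a number to every letter,
# so a word is a sequence of integers just as a colour is a
# triple of integers.
word = "cow"
codes = [ord(c) for c in word]
print(codes)                           # [99, 111, 119]
print("".join(chr(n) for n in codes))  # cow
```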

My point was that without the various sensors mentioned, it is just a chatbot program, running without ever grasping or coming to terms with its surroundings / environment, much less comprehending the context or meaning of pattern relationships.

Then again…it’s an observation.

 

 
  [ # 50 ]

Thanks guys for your responses, these have been fascinating to say the least. In particular, that talking-dog video Andrew linked is tantalizingly close to what I would love to build… The video comments mention it as part of the OpenCog project (an interesting discovery by itself), but I’m not sure whether that dog is also available for download in their open-source code.

Unity3D seems like a good choice indeed, being ready for the web and all! Shame it’s not more universally supported… I’m tempted to go for JS Canvas instead; I saw many cool 3D demos for it when it first came out, and it strikes me as more future-proof on the web. (See this site for example: http://www.html5canvastutorials.com/cookbook/ and in particular the ‘game’ they build: http://www.html5canvastutorials.com/cookbook/ch8/index.html)

Lastly, Don, you have a good point. If a virtual environment is in essence just ‘visual sugar’ for what internally becomes numbers and symbols anyway, why not focus on those first and foremost in our quest to build an AI? Text-based RPG games come to mind :)

 

 
  [ # 51 ]

To put it most clearly: cow = “w23-h8-i9-t20-e5” and cow = “r255-g255-b255” :)
If you can devise a process to understand relationships in the one representation, I should think the same process is applicable to the other. It doesn’t invalidate your observation that people still consider direct sensory input to be more meaningful than an indirect representation of it. Intelligence is all brains, but human awareness appears to be part body…
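To make the comparison above concrete, a small Python sketch that decodes both representations: the first encodes each letter as its position in the alphabet (a=1 … z=26), the second is the RGB triple for white (function names are mine, purely illustrative):

```python
# Decode the two "cow" representations from the post above.
symbolic = "w23-h8-i9-t20-e5"   # letters + alphabet positions
sensory = "r255-g255-b255"      # RGB channels

def decode_letters(s):
    """'w23-h8-...' -> 'white': keep the letter, drop the number."""
    return "".join(part[0] for part in s.split("-"))

def decode_rgb(s):
    """'r255-g255-b255' -> (255, 255, 255)."""
    return tuple(int(part[1:]) for part in s.split("-"))

print(decode_letters(symbolic))  # white
print(decode_rgb(sensory))       # (255, 255, 255)

# Both decoders do the same kind of work: mapping numbers back to
# a concept. The process, not the medium, carries the "meaning".
```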

I wouldn’t know how to go about making 3D virtual environments, but the most fun I had with AI was in making a simple 2D RPG game: a large field with little task-driven 2D people and item boxes. Much more time went into making the environment than the AI though (graphics, animation, collision detection).
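For what it’s worth, the collision detection in that kind of 2D game usually boils down to a few lines; a sketch of the standard axis-aligned bounding-box test (the coordinates and sizes are made up for illustration):

```python
# Axis-aligned bounding-box (AABB) overlap test: the usual
# collision check for 2D sprites and item boxes.
def collides(ax, ay, aw, ah, bx, by, bw, bh):
    """True if rectangle A (pos ax,ay, size aw,ah) overlaps B."""
    return (ax < bx + bw and bx < ax + aw and
            ay < by + bh and by < ay + ah)

# A person at (10, 10), 16x16, and an item box at (20, 20), 16x16:
print(collides(10, 10, 16, 16, 20, 20, 16, 16))  # True (overlap)
print(collides(10, 10, 16, 16, 40, 40, 16, 16))  # False (apart)
```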

 

 
  [ # 52 ]

Hehe, well put.

With all this talk on virtual environments to put our chatbot in, I decided yesterday - yay insomnia - to finally have a first go at something in that direction: http://www.yokobot.com/index.php?p=2dworld

The character can move around in a 2D room, detects when she’s near objects (more or less), stuff happens when touching objects like switching the light on/off, typing on the pc or reading a book, and there’s a cat and some random balls present for some reason.
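“Near an object” detections like that are often just a distance check against a trigger radius; a minimal sketch (the object name, coordinates and radius are mine, not from Yoko’s actual code):

```python
import math

# "Is the character near this object?" as a simple distance check
# against a trigger radius. Coordinates and radius are illustrative.
def is_near(char_pos, obj_pos, radius=30.0):
    dx = char_pos[0] - obj_pos[0]
    dy = char_pos[1] - obj_pos[1]
    return math.hypot(dx, dy) <= radius

lamp = (100, 80)
print(is_near((110, 85), lamp))   # True: close enough to toggle the light
print(is_near((200, 200), lamp))  # False: too far away
```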

Not sure where it will lead, but just like you, Don, I’m having fun building this :) And somehow just staring at it and having Yoko walk around and do a few things already opens a whole new can of possibilities and questions for hooking this up to my chatbot engine. To be continued, hopefully!

 
