Sensors of agents: the regular senses humans have (vision, hearing, touch, smell, taste) and extraordinary senses such as body sensors for mind reading. A keyboard is, in fact, a tactile sensor of an agent as well: a special sensor where sequences or triggering carry meaning.
Are we going to fall in love with embodied agents? Multidisciplinary research in human-robot romantic love
How do you model a human-to-robot romantic relationship? Just a few elements will do: artificial emotional hormones, an intelligent affective system, and probabilistic parameters of love between humans and the robot…
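As a toy illustration of how those elements could fit together (all names, dynamics, and numbers here are hypothetical, not taken from the research itself): an "artificial hormone" level rises with positive interactions and decays over time, and a logistic squash turns it into a probability-like love score.

```python
import math

class AffectiveSystem:
    """Hypothetical sketch of an affective model with one artificial hormone."""

    def __init__(self, decay=0.9):
        self.hormone = 0.0      # artificial emotional hormone level
        self.decay = decay      # per-step exponential decay of the hormone

    def interact(self, valence):
        """Update the hormone: valence > 0 for pleasant interactions, < 0 for unpleasant."""
        self.hormone = self.decay * self.hormone + valence

    def love_probability(self):
        # Squash the unbounded hormone level into [0, 1] with a logistic curve.
        return 1.0 / (1.0 + math.exp(-self.hormone))

robot = AffectiveSystem()
for _ in range(5):
    robot.interact(0.8)         # a run of pleasant interactions
print(round(robot.love_probability(), 3))
```

The decay term means affection fades without reinforcement, while the logistic keeps the "probability of love" bounded, which is one simple way to realise probabilistic parameters on top of a hormone-style state variable.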
Helium3D: a 3DTV that recognises somebody sitting in a room, knows what they wish to view, and shows different images to different viewers! WOW
Researchers at De Montfort University Leicester (DMU), England, are creating a revolutionary interactive 3DTV system. The €4.2 million (approx. £3.7 million) project aims to develop a television that can recognise where somebody is sitting in a room and what they wish to view and interact with on their television.
Researchers believe it is a step towards truly interactive 3D video games where gamers use their bodies to control the action without the need for a controller. It could be the next step for Microsoft’s Project Natal.
The project, called HELIUM3D (high efficiency, laser-based, multi-user, multi-modal 3D display) is also exploring ways of allowing viewers who are watching the same television to each view a different channel at the same time and could even let them choose different viewing positions within the image.
For example, groups of people watching a football match in the same room could each pick the part of the stadium from which they would like to experience the action.
These glasses track our eye movements, allowing us to manipulate data augmented onto what we see in the real world.
German researchers at the Fraunhofer Institute for Photonic Microsystems have embedded a head-mounted microdisplay into a pair of glasses, allowing the user to access and manipulate data with simple eye movements.
MS Project Natal: New XBox addon, your body is the controller. Voice recognition. Gesture recognition. Face recognition. Awesome!
Microsoft’s Project Natal delivers a new add-on for your Xbox that allows you to play without a controller. The system has a 3D camera that maps the exact position of your hands, your fingers, your feet, your head, your nose, everything, in a 3D map. This allows you to control the game with only your body, in great detail, with no controller needed. Furthermore, it recognises voices and faces and supports complex video chat.
TellMe for Windows Mobile 6.5 is a one-button hub for voice commands, including texting, calls, weather, pizza, or Mother's Day gift ideas
TellMe for Windows Mobile 6.5 isn’t just an app, it’s a one-button hub for voice commands of all kinds, including text messaging, making calls, and also jumping to Microsoft Live Search with natural language queries like “weather in San Francisco, California,” “pizza in Kansas City” or “mother’s day gift ideas.”
Our brain is able to relate unfolding sentences to earlier ones, usually before a word is even finished being spoken.
We engage in numerous discussions throughout the day, about a variety of topics, from work assignments to the Super Bowl to what we are having for dinner that evening. We effortlessly move from conversation to conversation, probably not thinking twice about our brain’s ability to understand everything that is being said to us. How does the brain turn seemingly random sounds and letters into sentences with clear meaning?
EyeTable is an artificially intelligent dinner table that reads physical gestures and speech patterns and lets the participants know how they are doing.
Carnegie Mellon undergraduates Dan Eisenberg, Kevin Li and Ilya Brin have developed the EyeTable, which is described as “an artificially intelligent dinner table that reads physical gestures and speech patterns and lets the participants know how the date is going, in real time.”
A 3D-eyetracking user interface on your mobile that tracks the position of your eyes and changes the display accordingly...
The Swedish design team TAT (The Astonishing Tribe) has demonstrated a 3D-eyetracking UI that tracks the position of our eyes and changes the display accordingly. This really gives the feeling that objects are behind one another. Earlier, the team developed the look and feel of the T-Mobile G1’s user interface, which included such innovations as the window-shade menu and the 9-point visual key-lock.
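The depth illusion behind such a UI comes from motion parallax: as the tracked eye moves, layers rendered "behind" the screen shift by an amount that grows with their depth. A minimal geometric sketch (function and parameter names are illustrative, not from TAT's implementation), using similar triangles between the eye, the screen plane, and the object:

```python
def parallax_offset(eye_x, depth, screen_distance=1.0):
    """Horizontal on-screen shift of an object rendered `depth` units behind
    the screen, for an eye displaced `eye_x` from center, with the eye sitting
    `screen_distance` in front of the screen. By similar triangles, the
    projection follows the eye, and deeper layers shift more."""
    return eye_x * depth / (screen_distance + depth)

# Two layers under the same head movement: the far one shifts more,
# which is what makes it read as "behind" the near one.
near = parallax_offset(eye_x=0.1, depth=0.2)
far = parallax_offset(eye_x=0.1, depth=2.0)
print(near < far)
```

Re-rendering each layer with its own offset every time the eye position updates is enough to produce the "objects behind one another" effect the demo shows, without any stereo glasses.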
Screen calculates the speed of an approaching finger. Ideal for small screens, says Mitsubishi.
Mitsubishi's latest technology promises to detect the distance between a finger and the touch panel, allowing for a whole host of new interface options. That is done with the aid of an array of sensors that can also be used to calculate the speed at which the finger is approaching, and allows for a so-called “mouse-over function,” which would essentially let your finger control a cursor without actually touching the screen, something Mitsubishi says would be ideal for devices with small screens. It exists only in prototype form for now (a 5.7-inch capacitive VGA display).
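The core of the idea can be sketched in a few lines (thresholds and names are assumptions for illustration, not Mitsubishi's actual parameters): estimate the finger's approach speed from two successive proximity readings, then decide whether the user is hovering, about to tap, or touching.

```python
HOVER_MM = 15.0        # hover-zone threshold, illustrative value
TAP_SPEED_MM_S = 100.0 # above this approach speed we assume an imminent tap

def approach_speed(d_prev_mm, d_now_mm, dt_s):
    """Finger speed toward the panel in mm/s; positive means approaching."""
    return (d_prev_mm - d_now_mm) / dt_s

def classify(d_now_mm, speed_mm_s):
    """Map the current distance and speed to a hypothetical input state."""
    if d_now_mm <= 0:
        return "touch"
    if d_now_mm < HOVER_MM:
        # Fast approach: the user is committing to a tap.
        # Slow approach: treat it as a mouse-over, moving a cursor.
        return "pre-touch" if speed_mm_s > TAP_SPEED_MM_S else "mouse-over"
    return "idle"

# Finger closed 10 mm in 50 ms: 200 mm/s, read as an imminent tap.
speed = approach_speed(d_prev_mm=20.0, d_now_mm=10.0, dt_s=0.05)
print(classify(10.0, speed))
```

Distinguishing a slow drift (cursor control) from a fast approach (tap) is what lets a small screen offer a mouse-over state without sacrificing ordinary touch input.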