Soon we can expect tools that model #virtualhumans holding #ai dialogues with us. Here is a precursor.
Check out the video: just one of the tools for modeling anatomical virtual humans. Soon we’ll be able to hold conversations with virtual humans while ‘undressing’ them virtually: first taking off their clothes, which is hilarious enough on its own, but then also removing their skin and stripping out their organs. We would only need to turn off the ‘pain’ emotion, because otherwise the virtual human would respond like a human being scalped…
Make human-machine conversation interesting to increase the attention span
Earlier this year Gartner predicted that by 2015 “50 percent of online customer self-service search activities will be via a virtual assistant for at least 1,500 large enterprises”. Interestingly, one of the main challenges the release cited was not customers’ willingness to interact with an automated mechanism, but its ability to maintain an interesting dialogue.
Fred Roberts explains why psychology is important in natural language interaction (NLI) for creating a humanlike Teneo virtual assistant.
The goal of psychology is to predict and control behavior. It is the same goal we have as knowledge engineers. This may sound somewhat Orwellian, but it is not about sinister machinations. At Artificial Solutions, we want to do everything possible to help users of our Teneo virtual assistants (VAs) quickly and conveniently find their way to the information they need. Since a VA is typically designed to answer a specific set of queries, we have a clear idea of which content should be covered. For example, it’s reasonable to expect that a VA on a bank’s website will be asked questions about banking, hence it will need to be an expert on transactions such as opening accounts and transferring money, but it is not reasonable to expect it to advise you on which sofa to buy with the check you write. Anna knows every item that IKEA sells, but will be puzzled if you try to borrow money from her.
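The scoping idea above can be sketched in code. This is a toy keyword-based filter, not Teneo's actual mechanism; the topic list and function name are hypothetical, chosen only to illustrate how a banking VA might decide whether a query falls inside its domain of expertise.

```python
# Hypothetical set of topics a banking VA is scoped to cover.
BANKING_TOPICS = {"account", "transfer", "loan", "balance", "deposit", "money"}

def in_scope(query: str) -> bool:
    """Return True if the query mentions at least one banking topic keyword."""
    words = {w.strip("?.,!").lower() for w in query.split()}
    return bool(words & BANKING_TOPICS)
```

A real VA would use far richer language understanding, but the decision it makes is the same: answer in-scope questions as an expert, and deflect the sofa-shopping ones.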
Are we going to fall in love with embodied agents? Multidisciplinary research in human-robot romantic love
How do you model a human-robot romantic relationship? Just a few elements will do: artificial emotional hormones, an intelligent affective system, and probabilistic parameters of love between humans and the robot…
Awareness ability of 3D interactive conversational agents
Virtual agents’ reasoning and actions are tightly connected with their awareness ability, which helps them relate to the environment during a conversation. Researchers have combined virtual reality and artificial intelligence techniques to simulate lifelike conversational agents that are aware of themselves, the virtual world around them, and the other virtual beings existing in that environment.
Virtual humans being interviewed by mental health clinicians
Conversational virtual humans are now being applied in the fields of neuroscience, psychiatry, and psychology. The technology has evolved to the point where researchers can begin developing mental health applications that use virtual reality patients. Human patients with acute neurological illnesses are often confused and unable to cooperate with clinicians during medical consultations. Imagine simulated training with virtual humans that exhibit realistic mental disorders, are willing to talk to their doctors, and share their emotions!
Virtual agent that expresses its own opinions and is a sensitive listener
Virtual human Spike has its own beliefs and values. Additionally, it exhibits rude, pessimistic, and confrontational behavior. Even a very cheerful person cannot convince this virtual agent to chill out, relax, or adopt an optimistic outlook. Would you like Spike to become your conversational friend?