AI Zone Admin Forum


State of the Art
  [ # 31 ]

Suuuuure, she is, Andrew. LOL

I don’t think we’ll be recreating scenes from Westworld or Blade Runner any time soon, though. smile


  [ # 32 ]

Westworld. Yeah, those were the days:


  [ # 33 ]
Erwin Van Lun - Oct 18, 2010:

Westworld. Yeah, those were the days:

So you’re a fan of Yul Brynner, aren’t you, Erwin? LOL




  [ # 34 ]

Here is another significant application of the Vocaloid software, which was used to allow the HRP-4C robot to sing and dance.

Hatsune Miku is a virtual singer who is rapidly growing in popularity around the world. The only clue that her voice is not that of a real human singer is that it covers a range that no human being could achieve. She does not have a physical body and her live concerts are fronted by a stunning, larger than life, 3D hologram.

Watching the way she moves the crowd in this video I am reminded of what may well have been the first (evil) ChatBot ever created—“Maria” in Fritz Lang’s 1927 movie “Metropolis”—which was a charismatic robot created to influence the workers and leaders of the city and bring about its downfall.

The following video goes into considerable detail about how Hatsune Miku was created:

So far the performances of both HRP-4C and Hatsune Miku are carefully planned and choreographed, but I imagine it is only a matter of time before this technology can be used to create spontaneous, ad hoc interaction with both individuals and crowds.


  [ # 35 ]

The second video that I linked to contains details about how the voices are composed. There is even a brief scene with the woman who provided the base “model” for Hatsune Miku’s voice, though of course she couldn’t sing like that herself.

The technology for “tuning” arbitrary sounds has been around for decades. I can remember hearing tunes played by shifting the frequencies of a dog’s barking while I was still at university. However, the Vocaloid software being used by this new generation of avatars achieves a whole new level of sophistication.
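
Just to illustrate the crudest form of that decades-old frequency-shifting trick (this is my own minimal sketch, not how Vocaloid works — real systems use far more sophisticated phase-vocoder-style processing that preserves duration):

```python
import numpy as np

def shift_pitch(signal, factor):
    """Naive pitch shift by resampling: read the waveform back at
    `factor` times the original rate, so the pitch rises by `factor`
    (and the duration shrinks by the same factor)."""
    n = len(signal)
    idx = np.arange(0.0, n - 1, factor)          # fractional sample positions
    return np.interp(idx, np.arange(n), signal)  # linear interpolation

def estimate_freq(signal, sample_rate):
    """Rough frequency estimate from the zero-crossing count."""
    crossings = np.sum(np.diff(np.sign(signal)) != 0)
    return crossings * sample_rate / (2.0 * len(signal))

# Demo: shift a 440 Hz tone up one octave.
sr = 44_100
t = np.arange(sr) / sr                    # one second of audio
tone = np.sin(2 * np.pi * 440 * t)
shifted = shift_pitch(tone, 2.0)

est_orig = estimate_freq(tone, sr)        # ~440 Hz
est_shifted = estimate_freq(shifted, sr)  # ~880 Hz
```

The same resampling applied to a recording of a barking dog would raise or lower the bark’s pitch to hit the notes of a melody — exactly the effect described above, minus all the refinement.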

For one thing, it is able to create “emotional” variations so that a song won’t necessarily sound the same in every performance, just as it wouldn’t with a human artist. For another, it emulates the effects of the singer’s body movement and posture during the performance, including pauses for breathing. The latter effect was explained in more detail in one of the videos about HRP-4C.

Did you happen to see any of the videos of HRP-4C dancing, by the way? The original video that I posted here was just of the robot standing still and singing, but since then there have been videos of her dancing being posted all over the net:

Just for some real irony, check out the videos of HRP-4C dressed as Hatsune Miku during some of her appearances.


  [ # 36 ]

The Actroid-F is the latest version of the Geminoid project developed by Japanese roboticist Hiroshi Ishiguro. Currently it is the most life-like humanoid robot in existence, and in much of the following video it is indistinguishable from a real human being. Although not quite perfect yet, it seems poised to become the first robot to overcome the “uncanny valley” effect through its ability to reproduce the entire range of human facial expressions.

Being able to copy facial expressions is only one side of the equation though. Actroid-F does so by observing and analyzing the behavior of real humans, making it sensitive to the non-verbal cues which make up roughly seventy percent of the content of interpersonal communication.


  [ # 37 ]


I’ve been away for a few days, and now you’re posting all of this. All these videos are AWESOME!!!!!!

They actually deserve much more attention on our home page (but we don’t provide that option yet); separate articles in our business tab and RSS feed are what they need!!


  [ # 38 ]

I think Watson, IBM’s Jeopardy prodigy, looks like it’s finally ready to win the big bucks:

Can’t wait to see how this turns out!


  [ # 39 ]

Cool! The more that is written about this, the more exciting our area will become! It’s all still in its infancy wink


  [ # 40 ]

CR: Yes, I am going to mark my calendar! Feb 14 - 16.


  [ # 41 ]

One of the most amazing sequences in the Pixar movie “Wall-E” is when the robot transforms from an inanimate machine to a conscious “living” being. Like countless artists before them, the animators at Pixar must have already known what the following study has been able to quantify, bringing us one step closer to overcoming the uncanny valley effect.

“Now, a new study shows just how important the eyes really are when we judge whether a face is that of a living person or an inanimate object. And that ability, the researchers say, is key to our survival, enabling us to quickly determine whether the eyes we’re looking at have a mind behind them.”


  [ # 42 ]

Thanks, Andrew! The posting prompted me to start a new thread to discuss the uncanny valley effect, based on an article in The Economist:


  [ # 43 ]

AT&T has just announced dual-core smartphones that can replace laptop PCs. A docking accessory called a webtop provides a larger screen; no computing power actually comes from the dock, however. The dock has an 11.6-inch display, an 8-hour battery life, and weighs just 2.4 pounds.

It will be interesting to see Skynet-AI running on it.


  [ # 44 ]

@Merlin: is this state of the art in AI? Or would it be better to start a separate hardware thread on this?


  [ # 45 ]

It is not specifically AI related, except that it bears on what you might be able to run on a cell phone. An AI that could previously run only on a server might now run on your phone. It will also help with rendering, helping to cross the virtual “uncanny valley”.

