A new virtual world promises seamless social interaction. Samantha Murphy is the first to speak to High Fidelity’s creator, Philip Rosedale, from the inside
I’M NOT wearing any clothing, but it’s not the first time I’ve shown up that way for an interview. Luckily, my interviewee doesn’t seem to notice, or mind. There’s a reason – he created me and everything around us.
I am one of the first people to experience High Fidelity, the prototype virtual world designed by Philip Rosedale, the creator of Second Life. He’s the humanoid avatar; I get to be a cute robot with big blue eyes. He and his 12-person team have recently raised $2.5 million to develop the world, and because it is Rosedale behind it, the project has created quite a buzz.
It’s easy to forget that Second Life, a free-roaming virtual world, was a big deal when it launched in 2003. Users could go anywhere, do anything. Reuters even had a dedicated Second Life reporter for a while. It still boasts a million active monthly users according to Rod Humble, the former CEO of Linden Lab, which owns the game. But despite this it has never really lived up to its promise.
Jeremy Bailenson, director of Stanford University’s Virtual Human Interaction Lab, thinks he knows why: the lag caused by slow graphics rendering, and the lack of naturalistic avatar tracking. Both have been addressed in High Fidelity, says Bailenson, who has also had a sneak preview.
I had a few issues with Second Life. I often conducted interviews there, but my painstakingly created avatar would often appear half-naked, with an arm stuck awkwardly above its head or facing a wall. So when Rosedale invited me to be the first to interview him in High Fidelity, I jumped at the chance.
When I log in to prepare for the interview, I am greeted by an adorable white robot in the upper left corner of my screen and a block-filled landscape reminiscent of Minecraft. As I figure out how to control my view and explore the world a bit, the robot seems to be smiling at me, tilting its head, blinking and opening its mouth as if to speak. It dawns on me that the robot is mimicking my facial expressions, watching them via webcam. The robot is me. Suddenly, Rosedale’s avatar materialises.
Unlike my robot’s simple face, Rosedale’s avatar is human (see picture, left). As we face each other, the most obvious difference between Second Life and High Fidelity is immediately apparent. Although it is still in early testing, High Fidelity already has the social presence that was lacking in Second Life.
As Rosedale speaks to me, he demonstrates tricks like building structures in the air with a wave of his hand, and shows how the audio changes with the location and distance of his avatar. His avatar displays all his typical facial mannerisms as he speaks: eyebrows raising, lips flexing, cheeks lifting into smiles. His hands gesture effortlessly, pointing and rolling to emphasise points.
The facial mannerisms are achieved using a 3D camera fitted with the same chip as in Microsoft’s Kinect. His hand movements are captured using a Razer Hydra games controller. Both allow him to speak to me as if he’s sitting across a table instead of in front of a computer.
The key to High Fidelity’s realism is how it minimises lag. Just 100 milliseconds pass between Rosedale doing something and his avatar doing it on my screen – roughly half the delay of video chat services such as Skype. Bailenson was impressed. “I used the system to interact with a person in real time and it felt like he was in the room with me,” he says.
Rosedale has also teamed up with neuroscientist Adam Gazzaley at the University of California, San Francisco, to find out more about how to enhance face-to-face experiences online. At this year’s SXSW festival in Austin, Texas, they used a neural cap fitted with EEG sensors to create a model of one person’s brain after an MRI scan had provided the underlying structure. People were then able to tour that person’s brain using the Oculus Rift virtual reality headset as it showed live brain activity.
This work could feed into High Fidelity, says Rosedale. He thinks that seeing someone else’s brain activity may become a part of communication in the virtual world, with people able to see and respond to changes in someone else’s brain as they chat. He has other ideas too. For example, instead of relying on external servers he plans to let people offer their computer downtime to help power the game in exchange for in-game currency.
The experience is so close to reality that I can’t help wondering whether we are teetering on the edge of the Uncanny Valley – the creepy feeling an artificial human elicits in a real one when its expressions and movements are almost, but not quite, lifelike. “We like to say we are crossing the Uncanny Valley and getting away with it,” says Rosedale.