Direct v. Indirect Manipulation in VR
The ways in which we interact with objects in the virtual world and the real world have always been an interesting point of discussion in the design of both hardware and software. Indirect manipulation refers to manipulating virtual objects by means of a proxy object or controller, like a mouse: we use the mouse to move a cursor around the screen, pick up files and drag them to the trash. Direct manipulation, on the other hand, refers to using our bodies to act upon virtual objects, as when we use a smartphone to press buttons, drag items around or perform any other task. The key here is that we are acting upon the object directly, not through another tool.
When it comes to Virtual Reality (VR), things are changing incredibly fast in hardware, software, motion tracking and controllers. It feels like every two months an innovation is announced that opens fantastic opportunities for deeper engagement with the virtual world. After spending some time in a VR research lab in Sydney a couple of weeks ago, my initial thoughts were not at all about my perception of reality in VR, but about how to exist seamlessly in those realities; specifically, how to act upon objects that don’t exist.
The HTC Vive’s controllers were fantastic and seamless to use. When I saw them in VR, even with my hands missing, they created a sense of presence in the world that is hard to match. After a few months in which Rift owners lacked this sort of interaction, Oculus released the Oculus Touch controllers, which had a similar feel.
These controllers are fantastic because they allow real-time tracking of your hands’ location within virtual space, but you’re still using a controller. Yes, there are differences between them in how they feel, what sensors are built into the buttons, and the types of buttons, but overall they provide a very similar experience.
At present, these are the best ways to interact with VR environments, because even when they are out of view they are still being tracked. The Leap Motion Controller is an amazing piece of technology that can track our hands and fingers in space. Unfortunately, it relies on cameras, IR LEDs and software to do so, which means that if you perform a fine finger movement the cameras can’t see, the tracking won’t pick it up reliably.
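The occlusion problem can be sketched in a few lines. This is a toy illustration under assumptions of my own (the function name, the confidence values and the threshold are all invented), not Leap Motion’s actual software; it just shows the general idea that camera-based trackers fall back on their last confident estimate when a finger is hidden from view:

```python
# Illustrative sketch only, NOT the real Leap Motion API: when a finger is
# occluded, camera-based tracking loses confidence in its estimate, and one
# common fallback is to hold the last confident pose rather than guess.

def filter_pose(observed_pose, confidence, last_good_pose, min_confidence=0.5):
    """Keep the observed pose only if the sensor saw it clearly enough."""
    if confidence >= min_confidence:
        return observed_pose   # cameras had a clear view; trust the new data
    return last_good_pose      # finger hidden from the cameras; hold the pose
```

A controller like the Vive wand sidesteps this entirely, since its sensors don’t depend on line of sight to your fingers.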
Indirect or Direct
If you haven’t tried VR yet, it’s an amazing but quite strange experience. And here I’m not talking about Gear VR, 360 videos or Cardboard; I’m talking about VR that costs over $1,000: the Oculus Rift or HTC Vive. In these experiences, especially with the controllers above, there is direct tracking of movement, but still no direct manipulation, because we cannot fully act upon virtual objects: they have no weight, volume or texture. If anything, I’d call what we can do now indirect, body-mapped manipulation. So let’s run through different experiences and unpack them.
First, I got my ass kicked playing ping pong. I felt that the ping pong paddles were very close to reality. Of course, I didn’t feel the ball hit the paddle, but the use of a proxy object made the experience very immersive, because just like in the real world, I was using indirect manipulation to act upon the ball.
Next I played Valve’s The Lab, specifically the robot repair shop, which mapped a visual controller (one that looked different from the Vive’s) in 3D space. This was a good example of a proxy controller: I used it to open drawers, explode schematics and observe cake. It essentially felt like using prosthetics to interact with the world. The traditional way we pick up objects, by enveloping them with our hand, squeezing and lifting, is still not possible with current controllers, so this indirectly tracked proxy-object method is simply how it will be for a while (probably just a few months).
What I found most interesting is that when I played the zombie game Arizona Sunshine, virtual hands were present, and I found it less intuitive to manipulate objects in that world. In a nutshell, the game is a traditional FPS where you shoot zombies; nothing else to report. Of course, a ping pong paddle is a much easier object to use than a semi-automatic pistol, but there was something about seeing hands that threw me off.
Seeing hands, I automatically equated them with my own, but the fingers didn’t work, and no matter how much I squeezed the controller as if trying to pick something up, it just didn’t connect in my brain. There’s plenty of research showing that observed hands are processed neurologically in nearly the same way as our own (see work on brain plasticity and the mirror neuron system for more), so this disconnected experience made sense. I expected the virtual hands I saw to act as my own do, and when they didn’t, it was like interference in my brain. As zombies approached, I saw the virtual hands and thought I should be able to simply pick up the magazines on the ground, and it took me a while longer to figure out that the controllers I couldn’t see were the way to accomplish this. What’s interesting is that when I had the gun loaded and was shooting the undead, I felt far more immersed in the experience, but when I had to crouch down to pick up more magazines, put them on my belt or reload my gun, I noticed a slowdown in my ability to understand what I was doing and, more importantly, how to manipulate the objects I saw.
Is Direct Manipulation Possible in VR?
What’s clear, and something I’ve written about previously, is that when we go into VR, we bring with us knowledge about how the world works and how to interact with it, and if those prior experiences don’t align with what we see and do, the experience will not be as intuitive as it could be. I found it incredibly interesting that when the experience was set up around proxy objects (like the controllers or ping pong paddles), it felt more real because I couldn’t see hands. When I did see virtual hands, I made far more errors when trying to interact with virtual objects.
Moving forward, I think about the possibilities of direct manipulation in VR. Below is a prototype controller for the Vive, and it looks very promising. The ability to sense a pointing gesture and to sense the grasping of an object alleviates that cognitive misalignment, so virtual hands can map much better onto my own. And while there may never be a way to pick up objects that don’t exist, the prototypes shown below skirt the issue by keeping any virtual object you might want to pick up in your hand at all times, attached to the hand itself, ready to be grasped whenever you choose.
With this controller, leaning down to pick up my gun’s magazines wouldn’t require squeezing two buttons on the side of a wand-like controller; I would simply grab the magazine, and when I want to let go, I just let go. These controllers offer a way to interact with virtual objects that is as close to direct manipulation as you can get, since virtual objects will have volume AND weight.
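To make the contrast with button-based grabbing concrete, here is a minimal sketch of grip-driven grab-and-release logic. Every name and threshold here is an assumption of mine for illustration; it isn’t the API of any real controller or game engine, but it captures the "just grab it, just let go" interaction described above:

```python
# Hypothetical grab/release loop: the hand closing around a nearby object
# attaches it; the hand opening releases it. Names and the 0.7 threshold
# are illustrative assumptions, not a real VR SDK.

GRIP_THRESHOLD = 0.7  # normalized grip strength (0.0 open .. 1.0 fully closed)

class Hand:
    def __init__(self):
        self.held_object = None

    def update(self, grip_strength, nearby_object):
        """Grab when the hand closes near an object; release when it opens.

        Returns the released object, if any, so the caller can re-enable
        its physics (e.g. let a dropped magazine fall to the floor).
        """
        if self.held_object is None:
            if grip_strength >= GRIP_THRESHOLD and nearby_object is not None:
                self.held_object = nearby_object   # attach object to the hand
        elif grip_strength < GRIP_THRESHOLD:
            released = self.held_object            # hand opened: detach it
            self.held_object = None
            return released
        return None
```

The point of the sketch is what’s absent: no button mapping and no mode to learn, just the same close-and-open motion we already use on real objects.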
These new controllers, if they ever come to market, are not perfect. Until we get a Star Trek-style holodeck, there won’t be a perfect solution, but this gets us one step closer to interacting with objects using our hands, moving away from proxy controllers and indirect manipulation toward something more natural, intuitive and immersive. As HTC, Valve, Oculus, Leap Motion and other companies continue to work in this space, I have no doubt that usability and immersion will keep improving at their current blistering pace.