So we're working on a computer. What tasks are we hoping to accomplish? In 2013, we can break this question down into two areas:
- Keyboard tasks
- Spatial tasks
These have been the norm ever since graphical user interfaces (GUIs) arrived almost 30 years ago. Now, as the rise of touch and haptic technologies starts to replace the mouse, do we need a third category? I don't think so. We still directly manipulate objects in 2D space; we've just swapped a virtual finger (the mouse-driven arrow) for an actual finger. Gestures add another layer, but beyond that we're still moving things around and interacting with them in 2D space. Even with technologies like the Leap Motion controller, we use 3D space to accomplish tasks in 2D space, which is itself a little strange.
Keyboard tasks will still be around for at least a couple more years. I'm writing this post on a keyboard, not with voice recognition software, though we're almost there. Technologies such as Siri and the Google Speech API let me speak to my computer and convert my speech to readable text. The obvious difference is that when I type, I have a different voice, a more formal one, which I can keep consistent and accurate, whereas when I speak I throw in various "ums" and "ahs" that don't translate well to the written word.
With that said, I want to start thinking about how we actually accomplish these spatial tasks. How do we know what we're supposed to do? Navigating a file system is something we have to learn. We don't automatically know how to copy and paste files, or drag them to the trash or to other folders. I envision a world that moves away from the conventions of skeuomorphism, that does away with files and folders and lets us work on tasks without much need for training. Sure, we can type, but we should be getting far more visual cues from our operating systems. Want to start a new word processing document? We shouldn't have to find and launch the right program first; our devices should just know that's what we want to do and open the appropriate app.
Conventions surrounding the pointer arrow and the hyperlink hand will be gone in about ten years, and we'll need to invent new conventions to help us know what to tap on with our fingers. Skeuomorphic properties in UI, such as embossed buttons, fake physical relief, and drop shadows, will be a thing of the past once we get used to the idea that our computers should be innately easy to use.
This is more of an introductory post to a few more articles I’ll be writing on the above topics. More to come…