When I was growing up it was DOS and then Windows 3.1. The ways in which we interacted with these systems were so abstract and so disconnected because they were…well…disconnected.
We clumsily grabbed hold of our mice and moved a pointer around the screen. The pointer didn’t even appear as a hand until we got the World Wide Web in the early 1990s, and at that point it became clear what we were actually doing. Or did it?
We could click anywhere we wanted but would it do anything? Was it a productive move?
The idea of conventions (those actions we have to learn how to perform to get a desired result) and affordances (the things we are able to do with an object or interface) within GUIs (Graphical User Interfaces) is nothing new, but when you get into the idea of interface cues, things become a bit murkier.
Sure, with Windows, we had learned conventions that told us we could click on an ‘X’ or on words that gave us clues as to what we could do. This made sense given the separation between our interfaces and our bodies.
The problem here is that these are still conventions that we must learn; they are not visual cues.
With the advent of touch screens in the mid-2000s, nothing really changed. We swapped the mouse for our finger, dropped the need for a cursor (with some of us holding on for dear life), and to this day the idea of our interfaces being able to talk to our own bodies is something I haven’t really read about yet; hence the idea behind Evolutionary Interaction Design. The simple idea here is that whatever interfaces we design, whether in 2D space or, very soon, in 3D VR or AR space, need to conform to a set of rules that govern how we interact with them.
When designing for 2D or 3D interfaces, cues should alert the user to affordances through biologically analogous visuals.
- If a virtual object is reminiscent of a real world object, the user should interact with it the same way they do in the real world.
- If an object is abstract and virtual, then it must be clear to the user how their bodies can interface with it.
Watching the Microsoft Hololens demo, I really didn’t see this concept in practice. The images we saw on screen were 2D projections in 3D space, either mapped to a wall or floating in space.
The way in which these objects were manipulated, though, made me sad. This same ‘my finger replaces a mouse cursor’ paradigm hasn’t changed since I was a kid, and the ways in which we will use this device (according to the demo) will continue to be abstract.
If we are seeing the real world and interacting with an Augmented Reality overlaid on that 3D world, why don’t we interact with it as such? Instead, we will pinch in the air.
Really? Who pinches in the air in the real world?
OK, fine.
Taking a step back to 2D, we don’t really see this same principle applied in Android or iOS interfaces. While it’s true that some apps do give us interface cues based on the shapes of our fingers and hands, it is a far less popular design choice than it should be. Instead, we still cling to top-bar menus reminiscent of the File menu and use buttons that look like the ones we used 30 years ago. Our choices haven’t really expanded with the times.
In recent years we moved from the abstract and disconnected to the haptic interface. We now touch and feel the devices we use, and this is only progressing further to the point where our physical and virtual worlds are overlapping.
While it’s true we have developed conventions for interacting with these devices, these conventions miss the mark. Hamburger menus, back buttons, swipes, pinches and other such gestures and conventions don’t really feel natural. They feel like an intermediate step between the abstract mouse and the natural manipulation of virtual objects.
As I sat at my favorite Japanese restaurant the other night, my brain immediately went back to the wonderful book The Design of Everyday Things, and how, when we design, we must think about the intended audience and what it is we’d like them to do with the thing we’re making. When a baby has a ball, they explore it. They manipulate it in their hands, they squeeze it, throw it, bounce it, and do whatever they can with it (affordances).
Door handles tell us they can be pulled by their shape.
Spoons tell us they can dig, just by their appearance.
Cups tell us they can hold stuff by the way they look.
The above products’ shapes and the innate knowledge we have about our own bodies form a perfect pairing that allows us to see the affordances before we even touch them. We don’t need to rely on written language or learned symbology. We just use them.
Now the challenge is to apply the same principles of affordance to UX design both in the 2D and 3D space.
I’ll be writing more about this concept in the future, and tying it to concepts such as Evolutionary Educational Psychology and Cognitive Load theory to provide examples and best practices in this area.