Some Notes on 10/GUI
Robert Clayton Miller’s 10/GUI desktop multi-touch idea wafted out of the ether towards me last week, and I’ve been giving it some thought after watching the video a few times.
10/GUI is unusual in that Miller describes himself as a graphic designer. Unlike people such as Jeff Han, he is not approaching the issues from a traditional HCI-led, computer-science, or industrial-design perspective. I think that’s a good thing in some ways. Multi-touch implementations have tended to have rather more to do with ivory towers and Hollywood than is really good for them, and we need some more practical thought. 10/GUI seems a good shot in that direction.
The following are some notes on Miller’s idea, in no particular order, made as I watched the video.
Much is made of “the vast potential of multitouch” without actually demonstrating what that might be. I kept thinking: if what was being demonstrated is supposed to be vast potential, why could all of it be achieved just as easily with a conventional mouse? The only interaction in the video that would not be possible with a present-day mouse is shown at 00:35 – the simultaneous manipulation of four vertical sliders. Hardly a very common operation unless you’re mixing down a music recording, even assuming you can keep track of the simultaneous effects of doing something like that.
I can think of four main problems with multitouch in the context of day-to-day desktop computer interaction. It’s great to see Miller point out the first two: muscle strain in the arms or the neck, and the occlusion of targets underneath the hands. But what about the other two? Third, multitouch tends to lack an indicated state for screen objects, because once your fingers leave the touch surface the system knows nothing about them. Fourth, and as a consequence, objects can be triggered unintentionally when the fingers come back down or move into areas where objects are in other states.
The lack of a robust indicated state is a serious problem for multitouch generally, and 10/GUI possibly compounds the unwanted-triggering issue because it assumes the existence of a separate keyboard (Miller doesn’t explain why he didn’t combine the two, but it seems sensible not to). In contrast, one of the great strengths of the mouse is the clear distinction between its modes: until you click, the cursor can only signal the indicated state (aka mouse-over or hover) and – crucially – the system (and usually the user) always knows where the cursor is on the screen. Clicking on something in the indicated state requires a conscious action. Indeed, you can even pick up the mouse and throw it away and usually nothing on the screen will be affected. With multitouch, the potential for mistakes is much higher because you have no prior confirmation that the thing you are clicking on is the thing you intended to click on. 10/GUI must therefore infer what your fingers are doing: a light resting pressure indicates a “passive” action, and a “press” or a “tap” indicates an activation. Too bad if you take your fingers off the canvas to do some typing, then return them and trigger something by accident when they land a bit too heavily, or if you drop your fingers and skid slightly while moving across the canvas and register a click you did not intend (operating multi-touch UIs in a moving vehicle is therefore probably never going to be easy). 10/GUI apparently recognises some of this problem when it imagines a ridge on either side of the canvas, over which only clicks are registered, but the potential for problems when moving to and from the main canvas remains.
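The pressure-inference scheme described above could be sketched as a simple classifier with a dead band between “resting” and “pressing” pressures – the thresholds, state names, and hysteresis here are entirely my invention for illustration, not anything 10/GUI actually specifies:

```python
# Hypothetical sketch of pressure-based touch classification, assuming
# normalized pressure readings in the range 0.0-1.0. The thresholds
# below are invented; 10/GUI publishes no such values.

REST_MAX = 0.3    # at or below this: finger is merely resting (indicated)
PRESS_MIN = 0.6   # at or above this: deliberate press (activated)
                  # readings between the two fall in a dead band

def classify(pressure, previous_state):
    """Map one pressure sample to a touch state.

    The dead band between REST_MAX and PRESS_MIN provides hysteresis,
    so a finger that lands slightly too hard after typing, or skids
    across the surface, does not flicker between "indicated" and
    "activated" on every sample.
    """
    if pressure <= REST_MAX:
        return "indicated"
    if pressure >= PRESS_MIN:
        return "activated"
    return previous_state  # ambiguous pressure: keep the last state

# A finger landing a bit heavily (0.5) after typing stays merely
# "indicated" rather than registering an accidental click:
state = "indicated"
for sample in [0.2, 0.5, 0.4, 0.7, 0.3]:
    state = classify(sample, state)
```

Even with hysteresis like this, the underlying ambiguity the mouse avoids remains: the system is guessing intent from pressure, where a mouse button makes intent explicit.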
The new Apple laptops have multi-touch pads (and Apple has recently launched a multi-touch mouse). The hardware for 10/GUI therefore seems to be here already – so what about the 10/GUI desktop software model? Miller’s own site has an oddly similar navigational feel to the 10/GUI desktop (I can’t link to it because it’s Flash), so he’s obviously keen on the idea of linear progressions. Good to see he also shares my dislike of ideas like 3D windowing environments (I once tried SphereXP and it was nothing if not confusing). But again, there is nothing here that a conventional mouse and keyboard combination could not do – and, with some modality and keyboard shortcuts, do better in some cases.
Overall then, something of a damp squib. If we want to improve current human-computer interaction models, what we need is perspiration over inspiration. What – if anything – about the current WIMP interface model is actually bad? Can anything really be improved? Is it in fact just good enough? I still believe multitouch has a place, but it’s not on the desktop. I think 10/GUI confirms that belief.
For what it’s worth, my own prescription (although not my own idea) for improving WIMP is CLUI (command-line user interface) integration. Just don’t use the word “paradigm.”