Quinthar

Why are future user interfaces so lame?

What is it with futuristic user interfaces being so pointless?  Whether it's g-Speak or Microsoft Surface (even the spherical one), all the applications are absurd -- after years of touch screen technology, everybody *still* shows off that stupid photo-organization application.

Seriously: when is the last time you dumped a bunch of pictures in a pile and manually sorted them?  I honestly don't know if I've done it once in my life; this isn't solving a problem I have.  Every time I see that application I think "you *still* haven't figured out anything useful to do with that thing?"

Everybody is trying so hard to build the next-generation UI that they've forgotten a critical lesson: *this* generation of UI is really pretty good.  It's been refined through thousands of iterations of tweaks and real-world experimentation.

Take the multi-touch fad that has generally fizzled out.  Honestly: how often do you use multi-touch on your iPhone?  At the end of the day, most actions work perfectly fine with one finger: just because you *can* use two fingers doesn't mean doing so adds any value.

So I'm going to go out on a limb and predict that the user interface of tomorrow is going to look a helluva lot like the UI of today.  And that's because the UI of today looks an awful lot like the desktop of yesterday.

The key is continuity: making small, gradual steps from what you know to what you don't.  And all these fancy "next generation operating systems" keep managing to stay in the distant future because everyone else is too busy living in the now.

Which reminds me of a surprising lesson I learned when moving from Windows XP to Ubuntu (Vista was the last straw).  That lesson was just how awesome the command line is, and how splitting the window in vi as needed is so much more powerful than managing a bunch of stupid windows.

Indeed, the lesson was that all the "advances" in UI technology provided by generations of Windows have actually *reduced* my productivity.  But like the boiling frog, I just never noticed.

At the end of the day, most data is not amazingly beautiful 3D graphics projected in a virtual cave environment.  Most of it is really boring: rows of numbers, to-do lists, log files, email.  Most of it is text.  And I don't see that changing anytime soon.

So I'm looking forward to the UI of the future.  But I bet the primary mode of interaction will still be a keyboard, and its amazing feature will not be sorting piles of random photos, but sorting, processing, and generating text and numbers in amazing ways.

Personally, I'm hoping for LCARS.

-david


3 comments:

Curtis Chambers said...

I actually use MultiTouch on the iPhone a lot. Granted, the only MultiTouch part of the iPhone is zooming in and out, which is actually extremely useful for Google Maps and for showing people photos on such a small screen. On a laptop with a large screen, zooming in and out is less useful.

However, the new MacBooks have much more in terms of MultiTouch capabilities that I'm slowly starting to get used to. Three-finger swipe left/right to go back and forward in a web browser. Four-finger swipe to see the desktop, all your windows, or the application switcher, depending on which direction you swipe. It's really convenient in certain situations. If I'm typing, I do all those things with the keyboard, but if I'm mousing I can just do the action without having to move my right hand. So really it's just a complementary set of functions to me, not a replacement for my existing actions. However, I never use the two-finger zoom/rotate actions on my laptop because I just don't need to do that.

I'll bet that for people who aren't keyboard-savvy like the nerds are, gestures will become more and more mainstream as time goes on, especially since the iPhone is training them to expect it.

In fact, I could even see MultiTouch as one of the "small, gradual steps" that you mention that leads up to the Minority Report style interface many years down the road. We started with one finger, now it's multiple fingers, next it's your whole hand, then who knows what. Things like the Wii are starting to get your whole body involved when it comes to controlling machines.

Actually by the time we get to that, I figure we'll just have direct brain implants and laugh at people that use their hands, like those punk kids in Back to the Future 2.

Tyler Karaszewski said...

"Future" anything is like that. Look at "tomorrowland" stuff from the 50's, and what they thought we'd have in 2000. It's all flashy stuff that makes you go "wow" for the five minutes you're reading the article or watching the demo, but very little of it translates into real-world usefullness, so it doesn't go anywhere and become a commercial product.

50's examples: Flying cars? Turns out that people are bad enough at driving in two dimensions at 75mph; putting them all in planes would get everyone killed. I'm sure we could build a small, car-sized flying machine; we just couldn't actually fly it (not enough people could fly one to make it practical to produce).

Other 50's magic: You've seen all the various demos of automatic ovens and self-clearing tables and such, right? It turns out that people who have enough money to automate mundane housework tasks can just go out to a restaurant for dinner instead.

An interesting side effect of capitalism is that it has a sort of built-in "survival of the fittest" filter for products. These "future" products that get demoed generally fail when put through this filter, because they fail the cost/benefit test that really determines the "survival fitness" of a consumer product.

People will constantly be developing things that seem like a cool idea but turn out to be too difficult to do correctly, or more trouble than they're worth, and I think that's a good thing. Most of these ideas will fail, but occasionally someone will come up with an idea like, "hey, wouldn't it be cool if all these computers at different universities could send messages to each other?" And then you have genuine progress.

Carpe Web said...

I have to comment on your observation about using 3-d graphics to present data, because that has been a pet peeve of mine for years. Not only does 3-d fail to enhance the information, it generally reduces its quality. Take the ubiquitous "3-d" bar chart as the prime example. When graphics add a "depth" effect to the bars, that effect makes it more difficult (sometimes impossible) to judge the precise height of each bar. Of course, the height of the bar is the specific piece of information the bar is supposed to convey.

Unless the third dimension in a "3-d" graph is actually used to represent another dimension of the data, it's useless at best. Even in the case where it does represent a unique dimension, it's usually better to keep it simple, of course.

Best regards,
Jim
