Ars Technica posted a short video demonstrating some of the very impressive technology driving the new Kinect, which is part of the Xbox One, introduced yesterday. I was particularly impressed with the amount of 3D detail the system can process and interpret in real time, and with the startling amount of ambient noise it can filter out. Check it out. It might just give you a new way of looking at the future of user interface design.
After some initial resistance, Microsoft is now permitting hackers to create novel applications for its Kinect hands-free game controller, and less than three weeks after the device’s release, some fascinating projects are already starting to appear. An article in today’s NY Times lays out some of the early ideas. This video gives you a small sense of what’s possible. The author, Oliver Kreylos, has extracted what he calls the depth image and the color image from two of the device’s cameras and uses them to reconstruct video that can be moved and reshaped in 3D space. In this video, Mehmet Akten uses the box to do some crude in-the-air drawing with his hands. And in this one, designers Theo Watson and Emily Gobeille use the device (apparently connected to a Mac) to make a projected puppet track hand movements. Not bad for a couple of short weeks! This technology may or may not be precise enough for useful work, but I’d sure like to see somebody try connecting it to an editing interface.
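If you’re curious how a depth image and a color image combine into something you can reshape in 3D space, the core idea is back-projection through a pinhole camera model: each pixel’s depth value is pushed out along its viewing ray to get a 3D point, which is then paired with the matching color pixel. Here’s a minimal Python sketch of that idea — not Kreylos’s actual code. The camera intrinsics (FX, FY, CX, CY) are illustrative stand-ins, not calibrated Kinect values, and a real pipeline would also need to register the depth and color cameras to each other.

```python
import numpy as np

# Illustrative pinhole-camera intrinsics (assumed values, not calibrated
# Kinect parameters): focal lengths and principal point, in pixels.
FX, FY = 594.0, 591.0
CX, CY = 320.0, 240.0

def depth_to_point_cloud(depth_m, color_rgb):
    """Back-project a depth image (in meters) into 3D, pairing each
    point with its color. Returns (N, 3) points and (N, 3) colors."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX          # viewing-ray geometry
    y = (v - CY) * z / FY
    valid = z > 0                  # depth of zero means "no reading"
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = color_rgb[valid]
    return points, colors

# Toy example: a flat 4x4 "wall" one meter from the camera, mid-gray.
depth = np.ones((4, 4))
color = np.full((4, 4, 3), 128, dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, color)
print(pts.shape)  # (16, 3)
```

Once you have colored points in 3D, rotating or reshaping the "video" is just a matrix transform on the point array, which is why this kind of hack came together so quickly.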
You’re going to be hearing a lot about Microsoft Kinect. This add-on to the Xbox game console was released yesterday, and it’s getting a lot of positive press. David Pogue, writing for the NY Times, called it “astonishing.” Ars Technica was a bit more restrained, saying that it’s a “cool piece of tech.” The system recognizes multiple people in front of it, tracks 48 different points on their bodies in 3D, and mimics their movements on screen. It also understands voice commands. There’s no physical controller at all. Pogue described a typical first-time experience as “a crazy, magical, omigosh rush.”
Editing is mostly stuck in the UI metaphors of the ’80s and ’90s. Mouse-driven, designed to make one adjustment at a time, and built around the cycle we all know too well: adjust something, press play to see what you did, stop, make another adjustment, play again.
Some applications work differently. In Pro Tools, for example, you can be playing in one place in the timeline and editing or adjusting levels further down. When you get there, you’ll be playing the changes you just made. Sony’s Vegas editing app is live, too. Even iTunes can play music while you do other things.
Avid, Apple and Adobe have been battling it out, of course, and the competition is good for all of us. But are any of them willing to jump off into hyperspace and change the paradigm? There have been many rumors about a new version of Final Cut, but precious little actual information.
We’re going through a big paradigm shift as we move to fully file-based environments. But the changes that will affect us as artists have to do with the way we interact with our tools — how well they respond to our creative choices in real time. One day, editing is going to feel a lot more like playing a musical instrument. Kinect will help catalyze those changes, putting development money and sales volume behind new interaction models. The same thing happened with high-powered, low-cost video boards, originally created for gaming and now powering editing applications.
But here’s the twist — we still need buttons. The Ars Technica review ended with a caveat, comparing the button-less interface of the Kinect to its less sophisticated competitors from Sony and Nintendo. “The Move and the Wiimote can do so much more when it comes to controlling games, and that’s because of one thing: buttons.” That applies even more to editing. The UI of the future is going to need both — buttons and gestures. And the ability to do more than one thing at a time.
Avid’s new Smart Tool promises a more intuitive, drag-and-drop approach to timeline editing and is designed to compete head-on with Final Cut and Premiere. But for many long-time Avid editors, the first response is, “how do I turn it off?”
The dilemma is a classic one and goes to the heart of how we learn to use any tool. For newbies, an interface wants to be immediately obvious and welcoming. But power users want speed. The best interface combines elements of both and is malleable enough to grow with you as your needs evolve.
I’m just finishing a show on MC5, and have tried several approaches to the Smart Tool. Here’s the setup that I’ve settled on (so far):
- Upgrade to one of the recent patch releases. Trim mode in these builds will be much more familiar to long-time Media Composer users. Then enter trim mode by lassoing, or by hitting the Trim Mode button (not the Smart Tool). This gives you something closely approximating old-style trim mode. For details, see this post.
- Go to the Edit tab of Timeline settings and select the following. You may also want to select “Clicking the TC Track or Ruler Disables Smart Tools.”
- Then activate only one Smart Tool — the keyframe tool. Leave all the others off. This gives you permanent access to audio keyframes, which matches past behavior. But more important, because you are leaving one tool on all the time, the tool palette won’t reset itself when you start up MC. It’ll come back as you left it when you quit. (If a tool is on when you quit, that’s the way the system will start up. If nothing is on, the tool resets itself.)
- Assign the segment tools to your keyboard and turn them on and off from there, as needed. (By default, you’ll find them on Shift-A and Shift-S.)
You’ll probably have to do some experimentation to get things to work for you, but those are the key ingredients in creating a more familiar, version 4-style editing experience.
The UI designer’s hardest task is to create an interface that is at once simple and powerful, which says “come play with me” to the beginner while offering maximum power to the sophisticated.
Here’s an object lesson: I’ve had the same little microwave oven for a decade. It has exactly one control — a dial for cook time. You close the door and turn the dial to the time you want. You don’t even press start.
This microwave died recently and I replaced it with its modern equivalent, from the same manufacturer. The guts seem to be the same. But the control panel now features a total of nineteen buttons, four of which serve multiple functions. It is impossible to use this thing without referencing the manual, which I now have to keep handy.
Does it do more? Yes and no. The old one didn’t allow you to set a power level. That was okay with me because it was only used to heat things up and didn’t have much power anyway. But mainly, what all those buttons do is make the thing look cool.
For example, there’s now a dedicated “popcorn” button. But it doesn’t change much. With either microwave, you’ll initially have to do a bit of experimentation to find the right setting for your brand. With the old oven that setting was a number — how many minutes you want to cook. With the new one, it’s also a number — how many times you hit the popcorn button! But you’ll have to remember that every additional punch of the button reduces cooking time rather than adding to it.
For me, the new oven is not much more capable, but far more complicated, than the old. Maybe that’s a principle of UI design — complexity accretes like barnacles and doesn’t go away until you blow everything up and start over.
We just don’t spend money on simplicity. We spend it on the impression of power and complexity. We want to know that our tiny little microwave can make a souffle, even if all we ever do with it is heat up leftovers.
John Underkoffler is one of the great visionaries of UI design, and he’s just posted his talk about 3D spatial interfaces from the TED conference this year. This is the Minority Report UI (which he helped design) as it is being implemented — in reality — now. I had the great privilege of sitting in on his class at USC recently, where they were prototyping an editing application. My reaction at the time — it’s a slam dunk. The details don’t really matter. If we could have it, we’d use it. Take a look at his video (at TED or on YouTube) and start thinking about what computer interaction might be like sometime soon. And tell me that you don’t want it now.
Meanwhile, touch interfaces just got a lot more real for post-production with the release of iMovie for the HD-capable iPhone. Apple has made it possible to shoot and edit on one small device and to do the whole thing via touch (and for a measly $5). It’s not for pros, of course, but it points the way.
MC5 will be released in a couple of days, and for the moment, things are pretty exciting in the world of non-linear editing. But these applications point to a different, more fundamental transformation — toward natural interfaces. Just when you thought things couldn’t get more interesting, the world shifts on its axis, and everything you know is wrong.
I had occasion to become familiar with Lightroom recently. A friend loaned me a Canon 5D and I wanted to look at the raw files I was making. Adobe is running a public beta for Lightroom 3 and I downloaded it. What a slick piece of work. The first time you use it, overlays appear above the interface explaining how it’s organized and giving you enough information to get started. Large buttons for basic functions point to Adobe’s confidence that they know what you want to do — and in my case they were right. And the program itself? Fast, stable, powerful — and beautiful. And still in beta. I was able to go through my images, rate them, color correct them and create a nice web site, without so much as cracking a manual.
Adobe is doing some really nice work. I’m using InDesign to create my upcoming book, and it is a model for clean, attractive and powerful software design. And they’re really focused on educating the user base. The help system, for example, links you to online videos that explain various features in a consistent interface, many of them produced by independent designers and trainers.
As more and more powerful software gets into more and more creative hands, a focus on training like this becomes essential for users and companies alike.