Archive for the ‘User Interface’ category

Software that Says Come Play With Me

May 11, 2010

I had occasion to become familiar with Lightroom recently. A friend loaned me a Canon 5D and I wanted to look at the raw files I was making. Adobe is running a public beta for Lightroom 3 and I downloaded it. What a slick piece of work. The first time you use it, overlays appear above the interface explaining how it’s organized and giving you enough information to get started. Large buttons for basic functions point to Adobe’s confidence that they know what you want to do — and in my case they were right. And the program itself? Fast, stable, powerful — and beautiful. And still in beta. I was able to go through my images, rate them, color correct them and create a nice web site, without so much as cracking a manual.

Adobe is doing some really nice work. I’m using InDesign to create my upcoming book, and it is a model for clean, attractive and powerful software design. And they’re really focused on educating the user base. The help system, for example, links you to online videos explaining various features, presented in a consistent interface but produced by independent designers and trainers.

As more and more powerful software gets into more and more creative hands, a focus on training like this becomes essential for users and companies alike.

The Zen of Trim

April 20, 2010

I had an interesting debate with a music editor friend the other day. Frustrated (as we all are) with the hoops you have to jump through to move material back and forth between Media Composer and Pro Tools, he suggested that MC simply start using Pro Tools as its audio engine. And not just the engine — the whole UI. MC just synchronizes with PT. End of story.

I completely agree with the need to move sequences/sessions and media back and forth without conversion. But much as I envy the Pro Tools toolset (as I describe in this post), I don’t want to rely on the PT interface. Why? Because I’d have to give up Trim Mode.

My friend wouldn’t have it. “I can do anything you can do,” he insisted. I tried to explain that he can’t trim while watching picture or listening to sound. He said he didn’t need that — he just drags things around and hits play to check the work. Or he trims using PT’s trim tool. No problem staying in sync.

I wasn’t making any progress, so I finally pulled out the laptop and made a single dialog cut. My point was this: Most of my cuts are overlapped. When I adjust picture, I usually want to adjust sound somewhere else to stay in sync. With MC, I can trim all parts of an overlap while playing and watching any one of them. When I stop, I’m done. The ability to see real-time video while the cut is made, and to observe what’s happening at any part of the cut, audio or video, a-side or b-side, while keeping everything else in sync, is something I can’t get anywhere else. Not to mention the ability to do asymmetrical trimming, or trim two heads or tails, slip or slide, etc., all while watching, or listening to, any portion of the cut.

I had to show it to him three times. Each time he scratched his head, thought for a minute, and said, “well, I can do that, too.” And I kept insisting that he couldn’t. Finally, on the third go-round, came the reply, “Lemme see that again.” And then, finally, “Wow — I guess that IS pretty cool.”

His conclusion? Digi should add a trim mode and then Avid could merge the two UIs. My conclusion? Video and audio editors need different tools.

The discussion also gave me new insight into the other major editing applications, and how difficult it is to explain the power of Avid’s trim model to somebody who’s never really used it. I know, I know — plenty of people have switched from MC to FCP and have never looked back. But for a lot of us, trim mode is the holy grail. If I had to, I could work without it — I just don’t ever want to.

Editing on an iPad, Anyone?

March 24, 2010

Call me slow, but I finally watched Steve Jobs’ iPad keynote last night (it’s now available on Apple’s home page, or here). The iPad looks like it’ll be a very nice way to watch movies or read digital books, and Jobs offered a typically masterful demo of those capabilities. But what I didn’t expect was the focus on content creation. That came from Phil Schiller, who showed Pages, Keynote and Numbers.

Apple made a radical decision with the iPad, focusing entirely on a touch interface. That may seem like a natural extension of the iPhone, but you’re going to do different things with an iPad, and your fingers work differently than a mouse. A mouse is way more accurate, but it gives you a single point of contact, with only one active region at a time. With multi-touch, you lose precision but you gain the ability to track gestures and activate multiple contact points. In terms of human-machine bandwidth, it’s probably a wash, but to make touch work you need an interface that’s tweaked differently. So Apple has quietly redesigned all of its core applications with bigger buttons and new interaction models that let you quickly do what you want with your fingers. There’s a focus on presenting you with exactly and only the tools you need for any particular task, and that ain’t as easy as it looks.

Watch, for example, how Schiller selects multiple slides and moves them around as a group (at about 1:01:00). Or how he matches the size of two images by touching them simultaneously. Or does live wrapping of text around an image (at 1:05:00). Or moves columns of figures, or uses a soft keyboard with just the symbols you need.

There’s no version of iPhoto for the iPad yet — editing an image will certainly take some unique UI work — but it seems clear that we’ll see one soon.

And so, we come to the question of post production. Would the iPad work for heavy-duty editing? Unlikely. The screen is way too small, and there’s no disk interface, no Finder. But for putting together home movies while on vacation and uploading them directly to YouTube? It seems like a natural.

The question is what might happen when pros start playing with an interface like that. As the song says, once they’ve seen gay “Paree” — how ya gonna keep ’em down on the farm?

The Mouseless Interface

January 25, 2010

Some of you would probably kill for the user interface that Tom Cruise employs in “Minority Report,” with big images displayed on transparent screens and a gestural language that interprets your body movements. My sense is that an editor could get pretty tired working that way all day, but the giant canvas and the sheer flexibility and organic quality of it are very compelling, to say the least.

Until now, interfaces like that required the user to wear motion capture gloves that are seen by cameras installed in the ceiling. But Microsoft is working on an add-on for Xbox 360 that uses a single camera under the monitor. I was pretty skeptical about what this could do, but an article in this month’s Scientific American made me think again. The system, called Project Natal, is remarkably sophisticated, watching your body in three dimensions at 30 fps, and matching the movements of your skeletal joints to a database of biometric data they’ve developed.

Of course, we’re not playing video games in our editing rooms. And the demos Microsoft has come up with aren’t exactly my idea of an editing interface. But games mean sales volume and volume drives down costs. I could easily imagine a more focused incarnation of this technology based on the motion of your hands working in a more confined space — say the area above your keyboard. That might get pretty interesting as a way to interact with a machine.
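To make the idea concrete, here is a minimal sketch of how a system like that might turn tracked hand positions into an editing command. Everything here is invented for illustration: the sampling format, the function name, and the thresholds are assumptions, not anything Microsoft has published.

```python
# Hypothetical sketch: classifying a horizontal hand swipe from a stream
# of tracked 3-D positions, as a camera system sampling at 30 fps might
# report them. Units, names and thresholds are all invented.

def classify_swipe(samples, min_distance=0.15, max_duration=0.5, fps=30):
    """Classify a swipe from a list of (x, y, z) hand positions in
    meters. Returns 'left', 'right', or None if it isn't a swipe."""
    if len(samples) < 2:
        return None
    duration = (len(samples) - 1) / fps
    if duration > max_duration:          # too slow to read as a swipe
        return None
    dx = samples[-1][0] - samples[0][0]  # net horizontal displacement
    if abs(dx) < min_distance:           # didn't travel far enough
        return None
    return 'right' if dx > 0 else 'left'
```

In an editing context, a recognized 'right' swipe might map to "jump to next cut," with the confined region above the keyboard acting as the active tracking volume.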

Sony says that its similar “Motion Control” technology will be a primary interface for the PlayStation 3. And other companies are working on the idea, too, including Canesta, Hitachi, GestureTek and Oblong Industries (they were technology advisors on “Minority Report”).

Video games have been a big driver in pushing down the price of graphics processors, which in turn has helped empower our editing applications. With competition between Sony and Microsoft heating up development, this technology might work the same way. The mouse has served us well for a long time now, much longer than its developers at the Stanford Research Institute probably imagined, but it can’t be the best we can do.

Multi-Touch Gets Cheaper

January 4, 2010

Computer scientists from N.Y.U.’s Media Research Lab have formed a company called Touchco to make a new kind of touch panel that will cost less and be more powerful than the ones you’re familiar with. The new technology allows for unlimited touch points, compared to the current capacitive technology that maxes out at five. And it’s pressure sensitive, so an appropriate application can respond to not only the position of your fingers but how hard they are pressing on the panel. Best of all, it’s cheap — about $10 a square foot.

Think of it as a big, infinitely configurable editing controller. Scrubbing by moving your fingers over a surface, dragging to control multi-speed playback, adjusting visual effects by touching — these are the first things that come to my mind, but I’m sure you can think of other possibilities. Check out this video on YouTube or the many videos on the Touchco home page and imagine it working in your favorite editing application.
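As a thought experiment, the multi-speed playback idea above might look something like this. The event format and scaling constants are my own assumptions, not anything from Touchco; the point is just that pressure gives you a second control axis on top of position.

```python
# Hypothetical sketch: mapping a pressure-sensitive drag to a signed
# playback-speed multiplier, like leaning into a jog wheel. The gain
# constants and value ranges are invented for illustration.

def playback_speed(dx_per_sec, pressure, max_speed=8.0):
    """Map horizontal drag velocity (in panel-widths per second) and
    pressure (0.0 to 1.0) to a playback speed multiplier. Pressing
    harder scales the response; negative velocity plays backward."""
    speed = dx_per_sec * 4.0 * (0.5 + pressure)   # pressure scales gain
    # clamp to the range the player can actually sustain
    return max(-max_speed, min(max_speed, speed))
```

A light-pressure drag gives fine scrubbing near 1x, while a hard fast drag pins playback at the 8x ceiling — one surface, two behaviors, no mode switch.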

Details and pictures are in this NY Times Blog Post.

Year End Wrap Up

December 31, 2009

Looking back, maybe we could say 2009 was the year the playing field got a little more level. It was the year competition seemed to return, the year some of the hoopla subsided and people settled in around the idea that no single application is perfect and that each has its strengths and weaknesses.

For my money that’s a very healthy development, one that can only improve our tools. Competition drives innovation, and our applications are anything but finished. Most of us are eager to see improvements in simplicity, transparency and responsiveness.

In my dreams I hope for an interface like that in Minority Report, with a huge canvas on which to work and a lot of physical feedback. But even without such a fundamental re-imagining, there are big strides that could be made in terms of an interface that’s always on, so you could make changes and continue to work without stopping video playback. And I look forward to what’s been described as a “common timeline,” where all our tools operate on the same sequence and where exporting and importing, when needed, are so transparent that you barely notice them and collaboration becomes a lot simpler.

So, here’s to a brighter year, where our newly flattened playing field will result in significant innovation. I hope you all have a happy, healthy and safe New Year.