Does No Controller Mean More Control?
You’re going to be hearing a lot about Microsoft Kinect. This add-on to the Xbox game console was released yesterday, and it’s getting a lot of positive press. David Pogue, writing for the NY Times, called it “astonishing.” Ars Technica was a bit more restrained, saying that it’s a “cool piece of tech.” The system recognizes multiple people in front of it, tracks 48 different points on their bodies in 3D, and mimics their movements on screen. It also understands voice commands. There’s no physical controller at all. Pogue described a typical first-time experience as “a crazy, magical, omigosh rush.”
Editing is mostly stuck in the UI metaphors of the ’80s and ’90s. Mouse-driven, designed to make one adjustment at a time, and built around the cycle we all know too well: adjust something, press play to see what you did, stop, make another adjustment, play again.
Some applications work differently. In Pro Tools, for example, you can be playing in one place in the timeline and editing or adjusting levels further down. When you get there, you’ll be playing the changes you just made. Sony’s Vegas editing app is live, too. Even iTunes can play music while you do other things.
Avid, Apple and Adobe have been battling it out, of course, and the competition is good for all of us. But are any of them willing to jump off into hyperspace and change the paradigm? There have been many rumors about a new version of Final Cut, but precious little actual information.
We’re going through a big paradigm shift as we move to fully file-based environments. But the changes that will affect us as artists have to do with the way we interact with our tools — how well they respond to our creative choices in real time. One day, editing is going to feel a lot more like playing a musical instrument. Kinect will help catalyze those changes, putting development money and sales volume behind new interaction models. The same thing happened with high-powered, low-cost video boards, originally created for gaming and now powering editing applications.
But here’s the twist — we still need buttons. The Ars Technica review ended with a caveat, comparing the button-less interface of the Kinect to its less sophisticated competitors from Sony and Nintendo. “The Move and the Wiimote can do so much more when it comes to controlling games, and that’s because of one thing: buttons.” That applies even more to editing. The UI of the future is going to need both — buttons and gestures. And the ability to do more than one thing at a time.
November 6, 2010 at 4:45 am
I was checking out this Kinect thingy, and although it looks quite impressive, it has quite a long latency in recognizing your moves. Have you checked Sony’s answer to it? The glowing balls are a bit odd, but the precision is quite remarkable.
Take a look. Watch it to the end, when he starts creating interfaces and moving windows.
November 6, 2010 at 7:57 am
Very impressive. And yes, I know that Kinect has latency issues. I didn’t mean to say that it is better or worse than Sony’s Move. Only that the development effort that is going into all this “beyond-the-mouse” technology will, hopefully, translate into new editing interfaces. The video is another fascinating indication of how that might happen.
Thanks for sharing!
Steve
November 6, 2010 at 3:03 pm
This is super impressive! Clearly we are headed toward some version of this (with buttons). Though today I am more concerned with our ability to do more than one thing at a time, as you cover here. I wonder, with all its recent developments, when Avid will allow us to play and work simultaneously like Pro Tools. With FCP, we can do things like move bins around, turn audio tracks on and off, etc., while playing.