08-05-2010, 02:07 PM  #32
Bang-bang! Down-down!
OVR: 28
Join Date: Mar 2004
Location: Pensacola, FL
Posts: 16,781

Re: IGN: Kinect seems like the lame donkey in the race.
Exclusive: How does Microsoft Xbox Kinect work?
Movement Tracking
Voice Recognition
The Motor
On the mic array:
Gizmodo: Deep Inside Kinect
Quote:
Raghu Murthi, the general manager for Natural User Interface Hardware, is holding a Kinect, stripped naked, as a dozen people gawk at its innards. The exposed metal seems cold. He's telling us about the optical system, how it sees with the three holes in its head that seem like eyes. Without the plastic housing they look like they're bulging out. We're at the beginning of a day-long tour of Kinect, gathered in the Great Room, the living room you wish you had, tucked behind a sliding wall inside one of the many food courts on Microsoft's sprawling campus. 3D sensing has been around for 15 years, Raghu explains. What Microsoft has done, he says, is take 3D depth-mapping technology that typically costs $10,000 to $150,000 and make it at volume, for cheap.
Interview: Andrew Oliver, Blitz Games
Quote:
Now, with the new software libraries, if you're sitting down on the sofa, it works. OK, so one big thing people were questioning was whether you could sit on the sofa. The new libraries handle that, but there are certain things, like in our fitness game where you sit on the floor, where it kind of gets confused. But even the most expensive motion capture systems you can get out there, probably Vicon, you can break those as well, and that's why you employ clean-up animators to go and fill in all the little gaps. So we don't have the luxury of that offline clean-up ability; we have to do it live.
But then, what is it that you're doing live? For example, in Biggest Loser the skeleton doesn't work when you're lying on the floor, so what we had to do was look at it in another way. They've given us the software library, and it can't cover all cases, but we can look at the silhouette and see that the player is currently doing a press-up. You can actually see that their bum's sagging and they're bending their back. Then I would need to write a software algorithm that works that out; it's just a bit of image processing. So they've given you a generic piece, which is actually pretty impressive and covers most cases - certainly all the standing up, and now sitting down. If you want to go further than that, then do it yourself in software.
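The "bit of image processing" Oliver describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not anything from the real Kinect libraries: it assumes a tiny binary silhouette grid derived from the depth buffer, and the grid size, sag threshold, and every function name here are invented.

```cpp
#include <array>
#include <cassert>

// Hypothetical sketch of a press-up posture check: given a small binary
// silhouette grid (1 = player pixel, rows grow downwards like screen
// space), decide whether the player's hips are sagging.
constexpr int W = 12, H = 8;
using Silhouette = std::array<std::array<int, W>, H>;

// Topmost occupied row in a column, or H if the column is empty.
int topRow(const Silhouette& s, int col) {
    for (int r = 0; r < H; ++r)
        if (s[r][col]) return r;
    return H;
}

// True when the middle of the body hangs noticeably below the straight
// line joining the leftmost and rightmost occupied columns (roughly the
// shoulders and feet in a press-up).
bool hipsSagging(const Silhouette& s, int sagThreshold = 2) {
    int left = 0, right = W - 1;
    while (left < W && topRow(s, left) == H) ++left;
    while (right >= 0 && topRow(s, right) == H) --right;
    if (left >= right) return false;
    int mid = (left + right) / 2;
    // Expected height by linear interpolation between the endpoints.
    double expected = topRow(s, left) +
        (topRow(s, right) - topRow(s, left)) *
        double(mid - left) / double(right - left);
    return topRow(s, mid) - expected >= sagThreshold;
}
```

A flat plank holds the silhouette's top edge level, so the mid-column sits on the endpoint line; a sagging back pushes the mid-columns lower in the frame, which trips the threshold.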
Quote:
The first wave of games covers some predictable genres. Is that an expression of what developers feel comfortable creating right now, or of what publishers are comfortable funding?
It's a bit of both. They're the kind of obvious big motion capture genres. For launch titles, developers got pretty much a year. It was an internal secret at Microsoft, I believe, until E3. Once it had been announced last year at E3, it was up to developers to talk to their publishers and get them to agree contracts. Development kits are handmade, difficult to come by and expensive. They only gave them out to developers who were working on 'proper games', so developers who had a signed game with a publisher could have a kit.
For example, Harmonix are known for karaoke and Guitar Hero, so Dance Central is a natural fit. It's completely obvious that if you have motion capture you can make a fantastic dance game, so they could very easily get that all agreed as a launch game. People have to do what's obvious: the developer has to know with absolute confidence that it's going to work, that they can do it in a year, and that there's going to be a market. And the publisher has to believe that it's going to work and make them money. But some of the ideas we've had since - some in development, and some going round internally now - are just, like, so out there.
Quote:
There are floor moves in that game though, which Kinect might struggle with...
No. Because some will use skeletal tracking, and if floor moves are important - which, in that game, you can argue they are - it's possible. We've proved it's possible - Kinect knows when the skeleton hits the floor. So what you have to do is jump to your own routines that work out what's now happening on the floor. It's absolutely possible. You just have to look at what your game design needs, and then work out what you have to write. These things are definitely coming; it just hasn't been done yet. Games programmers aren't used to image analysis, so that's what we're all learning, which is why we have a lot of programmers working in completely new areas. But interesting areas.
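The hand-off he describes - skeletal tracking telling you the skeleton has hit the floor, then your own routines taking over - can be sketched like this. The joint struct, the hip index, the floor plane, and the tolerance are all assumptions made up for illustration, not a real Kinect SDK API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of detecting "the skeleton hits the floor".
// Joint positions are in metres, y axis up, floor plane at y = 0.
struct Joint { float x, y, z; };

// The player counts as on the floor once the hips drop within a small
// tolerance of the floor plane; at that point a game would stop
// trusting the stock skeleton and jump to its own floor-move routines.
bool onFloor(const std::vector<Joint>& skeleton, int hipIndex,
             float floorY = 0.0f, float tolerance = 0.25f) {
    return skeleton[hipIndex].y - floorY < tolerance;
}
```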
What kinds of resources are these additional software algorithms taking up on the Xbox 360 hardware?
Well, that's interesting, because obviously if you're trying to run your game and look at these huge depth buffers and colour buffers, that's a lot of processing. And it's actually processing that a general CPU is not very good at. So you could seriously lose half your processing if you were to do it that way. We've found that it all comes down to shaders, but turning a depth buffer into a skeleton is pretty hardcore shader programming. What you tend to do is write all your algorithms, get it all working in C++ code, and then work out how to write that in shaders.
By shaders you mean that it's running on the GPU?
Exactly. The GPU on the Xbox is very powerful but we've all only been using it for glossy special effects. A really good example of this is Kinectimals, as the most intensive thing that you can do on a GPU is fur rendering. So that GPU is doing all the fur rendering, and I can guarantee that it's also doing a lot of image processing too. It's brilliant that the Xbox has a really good GPU and can handle both these things, but actually writing that shader code to do image analysis is hardcore coding at its extreme!
__________________
Go Noles!!! >>----->