Tech Beat Column with Video: From kinesiology class to Gangnam Style: motion capture technology at Michigan 3D Lab
The directions were coming fast and furious.
“Roll your head!”
“Twist your wrists! Forearms! Whole arms!”
Wearing a skintight black suit studded with 53 nib-like reflective markers, I stood in a circle of cameras ringed with bright red lights, trying to concentrate and follow the instructions.
“Now twist your whole body! Good! Now left leg: front, back, side! Now the right!”
I was being “guided” through this series of motions by Steffen Heise, the 3-D Lab’s motion capture and visualization hardware specialist.
Each of the eight cameras around me was trained to pick up the red light reflecting off my markers, using those points of light to take a “picture” of me that the system combined with the seven other views to create a three-dimensional image.
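The math behind combining those views is triangulation: each camera sees a marker as a 2-D dot, and once the cameras are calibrated, the dots from two or more views pin down one 3-D position. The sketch below is a minimal illustration of that idea (direct linear triangulation), not the lab's actual software; the toy camera matrices and marker position are made up for the example.

```python
import numpy as np

def triangulate(projections, points_2d):
    """Recover one 3-D point from two or more calibrated camera views.

    projections: list of 3x4 camera projection matrices
    points_2d:   list of (u, v) image observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # Homogeneous least-squares solution: the right singular vector
    # belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back to ordinary 3-D coordinates

def project(P, X):
    # Where camera P sees the homogeneous 3-D point X.
    x = P @ X
    return x[:2] / x[2]

# Two toy cameras looking at a marker at (1, 2, 10).
marker = np.array([1.0, 2.0, 10.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])  # a shifted camera

obs = [project(P1, marker), project(P2, marker)]
print(triangulate([P1, P2], obs))  # recovers approximately [1. 2. 10.]
```

With only two cameras a marker hidden from either one is lost, which is part of why the lab rings the subject with eight: extra views keep every marker visible and average out noise.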
Once I was through with the light calisthenics, an animated character built from that image would appear on the screen, and I would be able to control his motions.
“Now squat! Aaaand back to T-Pose!”
Just like that, I was calibrated.
For the next 20 minutes I proceeded to shimmy, do the dance from “Gangnam Style,” do the running man, and otherwise make a fool of myself while watching my real-time avatar do the exact same thing.
The motion capture system is one of several at U-M, but it’s the only one that, like everything else in the 3-D Lab, is available to any student or faculty member.
“It’s really open to the general student population,” Heise said.
(Photo: Melanie Maxwell | AnnArbor.com)
Uses for the technology at the university have ranged across disciplines; anyone attempting to capture a distinct movement could benefit from the system.
“We’ve had dancers in here before,” lab manager Eric Maslowski said.
“If there’s a certain dance move that someone wants to learn, right now they can either learn from video or from an older dancer. But that older dancer is going to have put their own little variation on it. By capturing exactly where each joint is supposed to go, we can help people learn exactly what they want to know.”
Maslowski said musical conductors also have used the technology to teach precise joint motion that can be difficult to examine using two-dimensional video.
For my purposes, I was more interested in seeing what kind of crazy things I could make my animated character do. The “person” representing me on screen, who happened to be wearing only underwear, was part of the Universal Character System that Maslowski developed.
Interactive imaging and production specialist Stephanie O’Malley, who has a background in video game design, created my particular character right at the lab. You could tell he was developed at U-M because his briefs were maize and blue.
“We’re developing different body types so people can see how different bodies can be affected by different motion,” she said.
One of O’Malley’s next projects is putting more clothes on the characters. She has already created one model wearing high heels so that a class could examine the shoes’ effect on the rest of the body’s joint movements.
The system wasn’t perfect. My animated self’s wrists were constantly cocked at odd angles, one of his/my feet was consistently flipped upside down (which looks really weird, trust me), and twice the computer became confused as to which side of my pelvis was the front and which was the back.
This mix-up morphed me into an odd ultimate limbo character, with my rear grotesquely sticking out in front of me and my body bent over backwards.
Even with the minor glitches, there is a wide range of uses for the technology, from perfecting my free-throw motion (keep that elbow in!) to interacting with virtual worlds.
For the time being, however, I was content to make my avatar extremely fat and dance around waving my arms, doing my best Tevye impression while singing “If I Were a Rich Man” a little too loudly.
- University of Michigan students who want access to the motion capture technology can contact the lab through its website to set up a time and location for a test capture.