Our Animation department is currently looking into the possibility of purchasing a "room-less" mocap system. My understanding of the term "room-less" in this context is that the mocap solution would not require a permanent installation in a dedicated room; instead, the mocap hardware is contained (more or less) entirely on the system's mocap suit(s). That said, we haven't absolutely ruled out systems that require a more permanent installation.
I would love to hear any recommendations or really any kind of feedback that you all might have to offer on the subject.
I did a pretty long evaluation of a few different systems. One of the ideas was: if we couldn't find a dedicated space, could we get a system that took half a day or less to set up and still get good data? We settled on an OptiTrack system that, with practice, we could set up in half a day and tear down in less.
[QUOTE=Sariel;30001]I did a pretty long evaluation of a few different systems. One of the ideas was: if we couldn't find a dedicated space, could we get a system that took half a day or less to set up and still get good data? We settled on an OptiTrack system that, with practice, we could set up in half a day and tear down in less.[/QUOTE]
We recently set up with OptiTrack/Motive. It's cheaper than Vicon for the equivalent hardware. They also have some pretty cool ultra-wide cameras (Prime 13W) that cover moderate/compact spaces and don't need any focus adjustment (which would otherwise add to calibration/setup time). A single license of Blade is incredibly expensive, and so far Motive does everything we need in terms of labelling, reconstruction, and cleanup.
If I have any advice to give, being new to this myself, it's don't skimp on suits and markers: x-markers break, and suits get sweaty and need to be washed constantly. Having extras of every size is essential.
Get your suits from here. I can't say for sure, but I believe they're the same supplier OptiTrack uses, with better quality for the same price.
I have a Perception Neuron and it's pretty amazing: for 1500 bucks you get pretty decent quality mocap, and the best part is that you get finger mocap as well. The Avalanche folks had some good success with the Xsens suits too, but those are 10x the price of the Neuron.
I can vouch for the PN suit as long as all you need is previs-quality data (feet sliding, snapping to the floor) for a single person. I was able to get some production-level quality out of it when the specs fit that very narrow range. The finger data is very good for the price. If you're operating at any kind of scale, though, you still might want to buy and set up a proper mocap suit: the PN straps can slide pretty easily and cause calibration issues. Their data processing is currently very lacking, as are the help forums. Also, Axis Neuron isn't very nice to work with from a TD/dev perspective, whereas most other systems have capture software that allows better integration into your pipeline. I'd expect these things to improve over time, though.
If you need to capture multiple performers and/or props, require higher-fidelity motion, or are expected to deliver a lot of data, just go with an optical system. Vicon and OptiTrack are probably your best options. On the processing side, you should also consider that you can now do marker cleanup/solving in Maya with PeelSolve and https://github.com/mocap-ca/cleanup in addition to the existing toolsets in MotionBuilder, Blade, and Motive. All of these options offer acceptable opportunities for pipeline integration and automation.
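On the pipeline-integration point: most of the capture tools mentioned in this thread can export motion in plain-text interchange formats such as BVH, which makes it easy to script your own checks and batch processing. As a rough sketch of what that looks like, here's a minimal, self-contained BVH reader (the `SAMPLE_BVH` data, the `parse_bvh` helper, and its return shape are all my own illustrative choices, not part of any of these tools' APIs):

```python
# Minimal BVH reader sketch: pulls joint names, per-joint channel counts,
# and raw frame data out of a BVH text export. Illustrative only; a real
# pipeline tool would also build the joint hierarchy from the OFFSETs and
# convert channel values into transforms.

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 10.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.0333333
0.0 90.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 90.1 0.0 0.0 1.0 0.0 0.5 0.0 0.0
"""

def parse_bvh(text):
    joints = []   # (joint_name, channel_count) in file order
    frames = []   # one list of floats per captured frame
    lines = iter(text.splitlines())
    name = None
    # HIERARCHY section: note joint names and how many channels each owns.
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            name = tokens[1]
        elif tokens[0] == "CHANNELS":
            joints.append((name, int(tokens[1])))
        elif tokens[0] == "MOTION":
            break
    # MOTION section: frame count, frame time, then raw channel values.
    frame_time = None
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "Frames:":
            pass  # frame count is implied by the data lines below
        elif tokens[0] == "Frame" and tokens[1] == "Time:":
            frame_time = float(tokens[2])
        else:
            frames.append([float(t) for t in tokens])
    return joints, frames, frame_time

joints, frames, frame_time = parse_bvh(SAMPLE_BVH)
print(joints)       # [('Hips', 6), ('Spine', 3)]
print(len(frames))  # 2
```

Even a small script like this is enough to sanity-check exports (right joint set, expected frame rate, no truncated frames) before data goes any further down the pipeline.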
Thanks for taking the time to weigh in, Ikruel and mkapfhammer et al. You've provided our animation team and me with first-hand, expert (and I'm sure hard-won) information and advice that's going to be invaluable to us as we choose a solution. You guys rock!
We ended up going with a couple of Xsens MVN systems. The company offers both a more traditional "wetsuit" system and a strap-based solution; both make use of the company's inertial sensors. For us the strap-based approach made more sense, as it was more flexible in terms of body size and type and friendlier to custom props and limbs. I don't have much direct experience with the system myself, but our animation director and our riggers and animators seem to like it, citing the modularity, the fast setup time (~15 min in most cases), the flexibility of the system, and the general quality of the data as major selling points.
One of our animators did note that the system sometimes struggles with planted limbs, producing bad data, but generally these issues can be solved with a quick recalibration (which I believe can be completed in less than a minute).