Improving Motion Capture
MIT News, Lauren Hinkel, April 29, 2022
Massachusetts Institute of Technology (MIT) and IBM researchers developed the rendering invariant state-prediction (RISP) neural network pipeline to sidestep the pitfalls of motion capture by inferring environmental factors, actions, physical system characteristics, and control parameters. MIT's Tao Du said the method can "reconstruct a digital twin from a video of a dynamic system," which requires researchers "to ignore the rendering variances from the video clips and try to grasp the core information about the dynamic system or the dynamic motion." RISP converts differences in images (pixels) into differences in the system's states, making it generalizable and agnostic to rendering configurations. RISP outperformed other techniques in simulations of four physical systems of rigid and deformable bodies (a quadrotor, a cube, an articulated hand, and a rod) and can accommodate imitation learning.
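The core idea, mapping pixel-space differences to state-space differences so that nuisance rendering parameters drop out, can be sketched in a toy one-dimensional setting. The sketch below is illustrative only and is not the RISP architecture: the `render` function, the brightness nuisance, and the linear least-squares readout are all stand-ins (RISP trains a neural network on randomized renderings), and the per-image normalization stands in for the invariance that RISP learns.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 64                       # width of a 1-D "image"
xs = np.arange(W, dtype=float)

def render(state, brightness):
    """Toy renderer: a Gaussian bump centred at `state`, scaled by a
    nuisance rendering parameter (`brightness`). Illustrative only."""
    img = brightness * np.exp(-0.5 * ((xs - state) / 3.0) ** 2)
    return img / img.sum()   # normalisation stands in for learned invariance

# Training set: pairs of states rendered under RANDOMIZED rendering
# configurations, labelled with the true state difference.
n = 2000
s0 = rng.uniform(10.0, 54.0, n)
s1 = s0 + rng.uniform(-3.0, 3.0, n)
b = rng.uniform(0.5, 2.0, n)                      # nuisance varies per sample
pix_diff = np.stack([render(a, k) - render(c, k)
                     for a, c, k in zip(s1, s0, b)])
state_diff = s1 - s0

# Linear least-squares readout from pixel differences to state differences
# (a stand-in for RISP's neural network).
M, *_ = np.linalg.lstsq(pix_diff, state_diff, rcond=None)

# The readout transfers to a brightness never seen during training.
pred = (render(31.5, 3.0) - render(30.0, 3.0)) @ M
print(f"predicted state difference: {pred:.2f}")  # close to the true 1.5
```

Because the readout is fit to the state difference rather than to any particular rendering, it recovers the motion even under an out-of-distribution brightness, which is the rendering-agnostic behaviour the article describes.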