
Saturday, July 11, 2020

Combining iPhone Videos for 4D Viz

Another example of gathering more data than a sensor would normally capture, then combining that data to make it useful beyond typical human-visual needs, including creating manipulated and illusory scenes. People now worry that this promotes 'fake' scenarios, but isn't that exactly what cinema does? Film, too, plays on the difference between reality and illusion. What is described below is a manipulated view for better results; those results and goals can always be turned to good or bad purposes.

Carnegie Mellon University School of Computer Science
Byron Spice
July 1, 2020

Carnegie Mellon University (CMU) researchers combined iPhone videos shot "in the wild" by separate cameras to produce four-dimensional (4D) visualizations that allow viewers to watch action from various vantage points, or even delete people or objects that temporarily occlude sight lines. CMU's Aayush Bansal and colleagues employed up to 15 iPhones to capture various scenes, then used scene-specific convolutional neural networks to compose different parts of scenes. The system can restrict playback angles to make incompletely rebuilt areas invisible, maintaining the illusion of three-dimensional imagery. The method also could be used to record actors in one setting, then insert them into another. Bansal said, "The point of using iPhones was to show that anyone can use this system. The world is our studio."
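One detail above is that the system restricts playback angles so incompletely reconstructed regions stay invisible. The idea can be sketched as clamping a requested viewing angle to the arc actually covered by the capture cameras. This is a minimal illustrative sketch only; the function, margin value, and camera layout are assumptions, not the CMU system's actual method or API.

```python
# Toy sketch of angle-restricted playback: all names and numbers here
# are illustrative assumptions, not the CMU system's real interface.

def clamp_view_angle(requested_deg, camera_angles_deg, margin_deg=10.0):
    """Clamp a requested viewing angle (degrees) to the arc covered by
    the capture cameras, plus a small interpolation margin. Outside this
    arc the scene is incompletely rebuilt, so playback is restricted to
    preserve the 3D illusion."""
    lo = min(camera_angles_deg) - margin_deg
    hi = max(camera_angles_deg) + margin_deg
    return max(lo, min(hi, requested_deg))

# Hypothetical setup: 15 phones spread evenly over a 120-degree arc,
# matching the "up to 15 iPhones" capture described in the article.
cameras = [i * (120.0 / 14) for i in range(15)]
print(clamp_view_angle(150.0, cameras))  # request past the arc is clamped to 130.0
print(clamp_view_angle(60.0, cameras))   # request inside the arc passes through: 60.0
```

A real system would restrict views based on where the neural reconstruction is dense enough, not on raw camera angles, but the clamping logic is the same shape.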
