Video synthesis to supplement real-world data.
IBM’s AI generates new footage from video stills
Kyle Wiggers @KYLE_L_WIGGERS in VentureBeat
A paper coauthored by researchers at IBM describes an AI system, Navsynth, that generates both videos seen during training and novel, unseen videos. While this in and of itself isn't new (video synthesis is an area of acute interest for Alphabet's DeepMind and others), the researchers say their approach produces higher-quality videos than existing methods. If the claim holds water, their system could be used to synthesize videos on which other AI systems train, supplementing real-world data sets that are incomplete or marred by corrupted samples.
As the researchers explain, the bulk of work in the video synthesis domain leverages GANs: two-part neural networks consisting of a generator that produces samples and a discriminator that attempts to distinguish the generated samples from real-world samples. GANs are highly capable but suffer from a phenomenon called mode collapse, in which the generator produces a limited diversity of samples (or even the same sample) regardless of the input. ...
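To make the generator/discriminator setup concrete, here is a minimal GAN training loop in PyTorch. This is an illustrative sketch only, not Navsynth or DeepMind's work: the layer sizes, learning rates, and data dimensions are all assumptions chosen for brevity.

```python
# Minimal GAN sketch (PyTorch). Illustrates the two-part setup the article
# describes; all architecture choices and hyperparameters are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64   # size of the noise vector fed to the generator (assumed)
DATA_DIM = 128    # size of a flattened real sample (assumed)

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = generated).
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    ones = torch.ones(batch_size, 1)
    zeros = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real from generated samples.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake = G(noise).detach()  # detach so this pass does not update G
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, LATENT_DIM)
    loss_g = bce(D(G(noise)), ones)  # G wants D to label its output "real"
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Mode collapse, the failure mode mentioned above, is when G learns to emit
# only a few outputs (or one) that reliably fool D, regardless of the noise.
```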