Experimented with something like this: we took a single image of part of a machine or process, applied images of specific known maintenance issues, and estimated the likelihood of each issue under specific contexts, then showed the result to a human expert for analysis. NOT the same thing (we integrated much more information), but I can see this method being adapted for broader use, say, deriving a short video of the maintenance issue. I'm also thinking about other uses of such constructed animation in further 'derived' animation. 'Image learning'? What else could help derive a fuller image? ...
UW Researchers Can Turn a Single Photo into a Video, By University of Washington News
A new deep learning method can convert a single photo of any flowing material into an animated video running in a seamless loop. University of Washington (UW) researchers invented the technique, which UW's Aleksander Holynski said requires neither user input nor additional data.
The system predicts the motion that was occurring when a photo was captured, and generates the animation from that information. The researchers used thousands of videos of fluidly moving material to train a neural network, which eventually was able to spot clues to predict what happened next, enabling the system to ascertain if and in what manner each pixel should move.
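A rough sketch of the per-pixel motion idea described above: the network predicts a single, static velocity field from the photo, and the animation advances each pixel along that field over time. The function below is an illustrative assumption, not the researchers' code; it simplifies to the special case where the predicted velocity is constant in space, so integrating over t frames reduces to scaling the field by t.

```python
import numpy as np

def integrate_motion(flow, num_frames):
    """Accumulate per-pixel displacements over time.

    flow: (H, W, 2) array holding one constant (dx, dy) velocity per
    pixel, as predicted once from the still photo.
    Returns a (num_frames, H, W, 2) array of total displacements.

    NOTE: hypothetical sketch. With a velocity field that is constant
    in both space and time, Euler integration collapses to t * flow.
    """
    t = np.arange(1, num_frames + 1).reshape(-1, 1, 1, 1)
    return t * flow[None]

# Toy example: every pixel drifts one pixel to the right per frame.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
disp = integrate_motion(flow, num_frames=3)
```

In the general case the field varies spatially, and each step would sample the velocity at the pixel's current displaced position rather than its origin.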
The team's “symmetric splatting” method forecasts both the future and the past of an image, then blends the two into a single seamless animation.
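The blending step can be pictured as a time-weighted cross-fade: a frame warped forward from the photo is mixed with one warped backward from the end of the loop, so the first and last frames coincide. The helper below is a minimal sketch of that idea under assumed linear weights; the function name and signature are illustrative, not the authors' API.

```python
import numpy as np

def symmetric_blend(forward, backward, t, total):
    """Cross-fade a forward-warped and a backward-warped frame.

    forward:  frame t obtained by warping the photo forward in time
    backward: frame t obtained by warping it backward from the loop end
    Linear weights make frame 0 equal the forward warp and frame
    `total` equal the backward warp, closing the loop seamlessly.

    NOTE: hypothetical sketch, assuming a simple linear schedule.
    """
    alpha = t / total
    return (1.0 - alpha) * forward + alpha * backward

# Toy example with single-value "frames": halfway between 0 and 10.
mid = symmetric_blend(np.array([0.0]), np.array([10.0]), t=5, total=10)
```

In practice the weighting also has to account for pixels that one warp leaves uncovered (holes), which the opposite-direction warp can fill.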
From University of Washington News ...