Ever since seeing early versions of holographic images at Disney World, I have thought about their uses, both in interaction with consumers and as a means to graphically interact with complex analytic data. In the 90s I participated in a SIGGRAPH panel regarding their uses, especially their use as a means of very engaging packaging and advertising. Here, an excellent update.
Holograms on the Horizon? By Chris Edwards
Communications of the ACM, November 2021, Vol. 64 No. 11, Pages 14-16, DOI: 10.1145/3484998
Researchers at the Massachusetts Institute of Technology (MIT) have used machine learning to reduce the processing power needed to render convincing holographic images, making it possible to generate them in near-real time on consumer-level computer hardware. Such a method could pave the way to portable virtual-reality systems that use holography instead of stereoscopic displays.
Stereo imagery can present the illusion of three-dimensionality, but users often complain of dizziness and fatigue after long periods of use because there is a mismatch between where the brain expects to focus and the flat focal plane of the two images. Switching to holographic image generation overcomes this problem; it uses interference in the patterns of many light beams to construct visible shapes in free space that present the brain with images it can more readily accept as three-dimensional (3D) objects.
"Holography in its extreme version produces a full optical reproduction of the image of the object. There should be no difference between the image of the object and the object itself," says Tim Wilkinson, a professor of electrical engineering at Jesus College of the U.K.'s University of Cambridge.
Conventional holograms based on photographic film can capture interference patterns that work over a relatively wide viewing range, but cannot support moving images. A real-time hologram uses a spatial-light modulator (SLM) to alter either the amplitude or phase of light, generally provided by one or more lasers, passing through it on a pixel-by-pixel basis. Today's SLMs are nowhere near large or detailed enough to create holographic images that can be viewed at a distance, but they are just good enough right now to create near-eye images in headsets and have been built into demonstrators such as the HoloLens prototype developed by Andrew Maimone and colleagues at Microsoft Research.
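To make the pixel-by-pixel modulation concrete, here is a minimal Python/NumPy sketch of how a computed complex hologram field might be quantized into drive levels for a phase-only SLM. The function name and the 8-bit depth are illustrative assumptions, not tied to any particular device.

import numpy as np

def phase_only_slm_pattern(field: np.ndarray, bit_depth: int = 8) -> np.ndarray:
    """Quantize the phase of a complex hologram field to SLM drive levels.

    A phase-only SLM discards amplitude and retards each pixel's phase by
    one of 2**bit_depth discrete levels; 'field' is a hypothetical complex
    hologram computed elsewhere.
    """
    phase = np.angle(field)                 # per-pixel phase in [-pi, pi]
    phase = (phase + np.pi) / (2 * np.pi)   # normalize to [0, 1]
    levels = 2 ** bit_depth
    return np.round(phase * (levels - 1)).astype(np.uint8)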
A major obstacle to a HoloLens-type headset lies in the computational cost of generating a hologram. There are three algorithms used today to generate dynamic holograms, each of which has drawbacks. One separates the field of view into layers, which helps reduce computation time but lacks the ability to fine-tune depth. A scheme based on triangular meshes, like those used by games software that renders 3D scenes onto a conventional two-dimensional (2D) display, helps cut processing time, although without modifications to handle textures it lacks realism. The point-cloud method offers the best potential for realism, although at the expense of consuming more cycles. In its purest form, the algorithm traces the light emanating from each point to each pixel in the SLM's replay field. "Light from a single point can diverge to a very wide area. Every single point source creates a sheet of diffraction in the replay field," says Wilkinson.
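The cost Wilkinson describes is easy to see in a naive point-cloud implementation. The following Python/NumPy sketch sums a spherical wavefront from every scene point over every SLM pixel, so its work grows with the number of points times the number of pixels; the geometry, units, and parameter names are illustrative assumptions.

import numpy as np

def point_cloud_hologram(points, amplitudes, slm_shape, pitch, wavelength):
    """Naive point-cloud hologram: accumulate each point's spherical
    wavefront over every SLM pixel, O(points * pixels)."""
    k = 2 * np.pi / wavelength
    ny, nx = slm_shape
    ys = (np.arange(ny) - ny / 2) * pitch   # pixel coordinates on the SLM plane
    xs = (np.arange(nx) - nx / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(slm_shape, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += (a / r) * np.exp(1j * k * r)   # spherical wave from one point
    return field

Every point contributes to every pixel, which is exactly the "sheet of diffraction" spreading across the replay field that makes the pure method so expensive.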
A drawback of the point cloud is that light from every point will not reach every pixel in the target hologram, because it will be blocked by objects in front of it. That calls for software to remove the paths that should be occluded, which increases the number of branches in the code. Though this removes the need to map the light from every point onto every pixel in the SLM, the checks and branches slow down execution. Photorealistic holograms intended for use as codec test images, created using a method developed by David Blinder, a post-doctoral researcher at Belgium's Vrije Universiteit Brussel, and colleagues, take more than an hour to render using an Nvidia Titan RTX graphics processing unit. However, numerous optimizations have been proposed that reduce arithmetic precision and the number of steps required, trading some loss of quality for real-time performance on accelerated hardware.
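A rough Python sketch of the kind of per-path visibility test involved shows where the extra branches come from; spherical occluders are assumed here purely for brevity, where a real renderer would test against actual scene geometry.

import numpy as np

def visible(point, slm_pixel, occluders, eps=1e-6):
    """Return False if any occluder sphere blocks the ray from a scene
    point to an SLM pixel. The per-path check and early exit are the
    branches that slow down point-cloud hologram generation."""
    p = np.asarray(point, dtype=float)
    q = np.asarray(slm_pixel, dtype=float)
    d = q - p
    length = np.linalg.norm(d)
    d /= length
    for center, radius in occluders:        # occluders: [(center_xyz, radius), ...]
        t = np.dot(np.asarray(center, dtype=float) - p, d)  # closest approach along ray
        if eps < t < length - eps:
            miss = np.linalg.norm(p + t * d - np.asarray(center, dtype=float))
            if miss < radius:
                return False                # path blocked: skip this contribution
    return True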
The MIT approach uses several approximations and optimizations built around a deep neural network (DNN) made up of multiple convolutional layers that generate the image from many subholograms. This involves far fewer calculations than trying to map a complete point cloud directly to a final complete hologram. In conventional optimizations, lookup tables of diffraction patterns can help build those subholograms more quickly, but it is still an intensive process.
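As an illustration of the lookup-table idea, the following Python sketch precomputes one Fresnel zone-plate kernel per depth and then "stamps" it into the hologram for each point, rather than recomputing the wavefront from scratch. Kernel size, depth quantization, and edge handling are simplifying assumptions.

import numpy as np

def build_lut(depths, kernel_size, pitch, wavelength):
    """Precompute one sub-hologram kernel per depth. kernel_size is
    assumed odd so kernels center cleanly on a pixel."""
    k = 2 * np.pi / wavelength
    half = kernel_size // 2
    ax = (np.arange(kernel_size) - half) * pitch
    X, Y = np.meshgrid(ax, ax)
    return {z: np.exp(1j * k * np.sqrt(X**2 + Y**2 + z**2)) for z in depths}

def stamp(hologram, lut, x, y, z, amplitude):
    """Accumulate a point's precomputed sub-hologram centered at pixel
    (x, y); assumes the point lies well inside the SLM, so edge clipping
    is omitted for brevity."""
    kern = lut[z]
    h = kern.shape[0] // 2
    hologram[y - h:y + h + 1, x - h:x + h + 1] += amplitude * kern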
The DNN allows a more progressive approach to assembling the final image, which results in fewer calculations, particularly as the network can handle occlusion. The team trained the model on images of partially occluded objects and their subhologram patterns. The resulting algorithm can deliver images at a rate of just over 1 Hz using the A13 Bionic accelerators in the iPhone 11 Pro. Without the computational optimizations provided by the DNN, the researchers suggest processing would take at least two orders of magnitude longer. ... '
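For flavor, here is a minimal PyTorch sketch of a fully convolutional network of the general kind described, mapping an RGB-D image to a two-channel (real and imaginary) hologram field. The layer count and widths are placeholders, not the published MIT architecture.

import torch
import torch.nn as nn

class HologramCNN(nn.Module):
    """Sketch of an RGB-D-to-hologram convolutional network: 4 input
    channels (RGB + depth), 2 output channels (Re + Im of the field)."""
    def __init__(self, width=24, layers=8):
        super().__init__()
        blocks = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        blocks.append(nn.Conv2d(width, 2, 3, padding=1))
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):        # rgbd: (N, 4, H, W)
        return self.net(rgbd)       # (N, 2, H, W) hologram field

model = HologramCNN()
holo = model(torch.rand(1, 4, 192, 192))    # toy input, not real training data

Because every layer is convolutional, each output pixel depends only on a local neighborhood of the input, which fits the sub-hologram decomposition: the network effectively learns occlusion-aware diffraction kernels instead of tracing every point-to-pixel path.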