Deep Learning Enables Real-Time 3D Holograms On a Smartphone (ieee.org)
An anonymous reader quotes a report from IEEE Spectrum: Researchers at MIT have developed a new way to produce holograms nearly instantly -- a deep-learning-based method so efficient it can generate holograms on a laptop in the blink of an eye. They detailed their findings, from work funded in part by Sony, this week in the journal Nature. Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram. Using lookup tables is like memorizing a set of frequently used chunks of hologram, but this sacrifices accuracy and still requires the combination step.
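For readers curious what that brute-force physics simulation looks like, here is a minimal sketch of the classic point-source method: every 3D scene point contributes a spherical wave to every hologram pixel, and the contributions are summed. This is my own illustration (the wavelength, pixel pitch, and scene points are made up), not code from the paper, but it shows why naive CGH scales so badly.

```python
# Minimal point-source CGH sketch (illustrative only, not the paper's method).
# Each scene point contributes a spherical wave at every hologram pixel, so the
# cost is O(points x pixels) -- this is the slow step deep learning replaces.
import numpy as np

wavelength = 520e-9            # green light, in meters (assumed)
k = 2 * np.pi / wavelength     # wavenumber
pitch = 8e-6                   # hologram pixel pitch, in meters (assumed)
H = W = 512                    # hologram resolution

# Hologram-plane pixel coordinates, centered on the optical axis.
ys, xs = np.mgrid[0:H, 0:W]
xs = (xs - W / 2) * pitch
ys = (ys - H / 2) * pitch

# A few hypothetical scene points: (x, y, z, amplitude); z is the distance
# from the hologram plane in meters.
points = [(0.0, 0.0, 0.05, 1.0), (1e-3, -5e-4, 0.07, 0.8)]

field = np.zeros((H, W), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)  # point-to-pixel distance
    field += amp * np.exp(1j * k * r) / r                   # spherical wave term

phase_hologram = np.angle(field)   # phase-only encoding, as an SLM would display
```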
[...]
The researchers first built a custom database of 4,000 computer-generated images, each of which included color and depth information for every pixel, along with a corresponding 3D hologram. Using this data, the convolutional neural network learned how best to generate holograms from the images. It could then produce new holograms from images with depth information, which comes standard with typical computer-generated imagery and can be calculated from a multi-camera setup or from lidar sensors, both of which are built into some new iPhones. The new system requires less than 620 kilobytes of memory and can generate 60 color 3D holograms per second at a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The researchers also ran it on an iPhone 11 Pro at 1.1 holograms per second and on a Google Edge TPU at 2 holograms per second, suggesting it could one day generate holograms in real time on future virtual-reality (VR) and augmented-reality (AR) mobile headsets.
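As a rough sketch of the kind of model the summary describes -- a small fully convolutional network mapping a 4-channel RGB-D image to a same-resolution phase map -- consider the following. The layer count and widths here are my guesses for illustration, not the architecture from the Nature paper; the point is that a network this shape stays tiny.

```python
# Hedged sketch of an RGB-D -> phase-hologram CNN. Layer sizes are illustrative
# guesses, not the published architecture.
import torch
import torch.nn as nn

class HologramNet(nn.Module):
    def __init__(self, width=24, depth=6):
        super().__init__()
        layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]   # RGB + depth in
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]             # phase map out
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd):                           # rgbd: (N, 4, H, W)
        return torch.pi * torch.tanh(self.net(rgbd))   # squash phase into (-pi, pi)

model = HologramNet()
rgbd = torch.rand(1, 4, 270, 480)      # a downscaled RGB-D frame for a quick test
with torch.no_grad():
    phase = model(rgbd)                # (1, 1, 270, 480) phase hologram
print(sum(p.numel() for p in model.parameters()))    # parameter count stays small
```

At float32, the roughly 22,000 parameters in this sketch come to under 100 KB, comfortably consistent with the sub-620-KB figure quoted in the summary.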
Misleading Title (Score:4, Interesting)
Unless the definition of real-time has added relativity to it, I'd argue 1 frame a second isn't exactly real-time. Maybe add the word "potentially" in there, since this requires a Crysis-level GPU? Also, can we get a consumer-standard definition of hologram? When I hear this word, I'm thinking depth as it relates to MY position, but this journal entry has defined it similarly to all of those AR demos on mobile phones.
Re:Misleading Title (Score:4, Interesting)
At least the title uses Deep Learning and not AI.
Re: (Score:1)
I always thought a hologram was a projected representation of something in 3D space, not on a screen.
Didn't you know?? Apparently we're calling 3D models "holograms" now.
Re: (Score:1)
Look at the video.
They used a laser to generate a 3D projection of both rendered and captured images, then captured the projection again with a camera to collect results data.
This is real 3D projection holography they're doing. And while they only did the green channel for proof of concept, they make the point that RGB projection is easily doable.
I'd say 1 fps using a phone GPU is pretty significant.
Re: Misleading Title (Score:2)
"Deep Learning" I'll take it where I can get it I suppose. But projection into 3D space is in consideration to my positioning, as it is everyone's I suppose so maybe I should have written "relative to any viewer's position at all times." I've always thought our obsession with holograms utilizing 2D images is in the wrong place. If a 4th dimensional object casts a 3D shadow, why not look at ways to use light to cast the shadow instead of trying to use a 2D shadow to guesstimate the 3D shape. Reminds me of al
Re: (Score:3)
A hologram is a lower-dimensional (in this case 2D) recorded interference pattern that recreates a higher-dimensional (3D) light field when it's illuminated. The word also refers to the recreated light field. You take a light source, shine it through or bounce it off the hologram, and it creates a pattern of light that is (ideally) indistinguishable from the original.
https://en.wikipedia.org/wiki/... [wikipedia.org]
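For what it's worth, the recording step behind that definition is plain interference. With object wave O and reference wave R, the film (or sensor) records the intensity I = |O + R|^2 = |O|^2 + |R|^2 + O·R* + O*·R. Re-illuminating the recorded pattern with R reproduces a term proportional to |R|^2·O, i.e., a copy of the original object wave, which is why the reconstruction looks genuinely 3D.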
Re: (Score:2)
You can generate holographic images from a 2D screen if you can force the resolution stupidly high and put a coherent light source behind it. This is deep learning to drive a display that no one has managed to manufacture outside of a laboratory yet.
Re: (Score:1)
Unless the definition of real-time has added relativity to it, I'd argue 1 frame a second isn't exactly real-time.
That depends entirely on the frame delay.
Re: (Score:3)
60 fps on a consumer-grade GPU is real time. Considerably better, actually.
I'm not sure what your complaint about the definition of hologram is. It has a quite precise one, and has had since Gabor invented the technique in the late 1940s.
Re: Misleading Title (Score:2)
Re: (Score:2)
The paper is titled "Towards real-time ...". The article author (and whoever from IEEE Spectrum posted the video on YouTube) is being intentionally misleading, as "towards" doesn't sound as exciting as "IT'S HERE NOW!"
Re: (Score:2)
https://countryeconomy.com/nat... [countryeconomy.com]
Google Edge TPU? (Score:3)
Re: (Score:2)
And why is Google Edge made of Thermoplastic Polyurethane?
Trailblazing tech (Score:4, Funny)
Hologram? (Score:1)
Re: (Score:2)
Same confusion here. Are they saying Deep Learning is powerful enough to turn 2D displays into 3D ones?
Re: (Score:2)
Looks like they're generating but not displaying holograms.
Re: (Score:2)
I don't see where you got that from. They're generating holograms, and displaying them.
The first holograms were recorded on film and then reconstructed by shining a light through the film. Today you can replace the film, for recording, with a CCD, and for display, with a "spatial light modulator," which is essentially a super expensive LCD.
So what they're doing is taking a synthetic 3D scene, using a convolutional neural network to turn it into a hologram, displaying the hologram on the SLM, shining a light through it, and voila, a projected hologr
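If you want to sanity-check what a phase hologram reconstructs when you shine light through it, numerically propagating the field is the standard trick. A minimal angular-spectrum sketch (my own illustration, assuming a unit-amplitude plane-wave source and made-up wavelength and pixel pitch; not the authors' evaluation code):

```python
# Angular-spectrum propagation: simulate illuminating a phase hologram with a
# plane wave and look at the intensity a distance z away (illustrative sketch).
import numpy as np

def propagate(field, z, wavelength=520e-9, pitch=8e-6):
    H, W = field.shape
    fy = np.fft.fftfreq(H, d=pitch)[:, None]        # spatial frequencies, 1/m
    fx = np.fft.fftfreq(W, d=pitch)[None, :]
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # propagating components only
    transfer = np.exp(1j * kz * z) * (arg > 0)      # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

phase = np.random.uniform(-np.pi, np.pi, (512, 512))  # stand-in phase hologram
field = np.exp(1j * phase)               # plane wave modulated by the SLM phase
image = np.abs(propagate(field, z=0.05)) ** 2         # intensity 5 cm downstream
```

One common way to train networks like this, as I understand it, is to propagate the predicted hologram exactly like this and penalize the difference from the target scene.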
Re: (Score:2)
TL;DR: the guys take the famous royalty-free Big Buck Bunny video source, which is just 2D, extract depth from it, and create a 3D representation, which is effectively a holographic one.
So, holography (and holograms) in the context of the paper is the process of generating 3D representations from 2D sources (in this case, 2D sources with depth data, or from which depth can be extracted and/or extrapolated). Part of creating a hologram involves that process, because even current 3D scanning involves reco
Nearly instantly (Score:2)
Needing 24 seconds to create 1 second of video is not exactly "nearly instantly."
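(For scale: at the quoted iPhone rate of about 1.1 holograms per second, one second of 24 fps video takes roughly 24 / 1.1 ≈ 22 seconds of compute, which is presumably where a figure like 24 seconds comes from. The single-GPU rate of 60 holograms per second is the one that clears real time.)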