Cellphones AI

Deep Learning Enables Real-Time 3D Holograms On a Smartphone (ieee.org) 25

An anonymous reader quotes a report from IEEE Spectrum: Researchers at MIT have developed a new way to produce holograms nearly instantly -- a deep-learning-based method so efficient it can generate holograms on a laptop in the blink of an eye. They detailed their findings, which were funded in part by Sony, this week in the journal Nature. Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram. Using lookup tables is like memorizing a set of frequently used hologram chunks, but this sacrifices accuracy and still requires the combination step.
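To make those "chunks" concrete: the brute-force physics approach treats every scene point as a source of spherical waves and sums each point's contribution at every hologram pixel. Below is a minimal NumPy sketch of that idea -- an editorial illustration, not code from the paper, with all parameter values chosen arbitrarily:

```python
# Brute-force point-cloud holography: every 3D point contributes a spherical
# wave to the hologram plane, and the contributions are summed per pixel.
import numpy as np

WAVELENGTH = 520e-9          # green light, metres (illustrative)
PITCH = 8e-6                 # hologram pixel pitch, metres (illustrative)
K = 2 * np.pi / WAVELENGTH   # wavenumber

def point_cloud_hologram(points, amplitudes, res=(256, 256)):
    """Sum the spherical-wave 'chunks' of all 3D points on the hologram plane."""
    ys, xs = np.indices(res)
    x = (xs - res[1] / 2) * PITCH
    y = (ys - res[0] / 2) * PITCH
    field = np.zeros(res, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += (a / r) * np.exp(1j * K * r)   # one chunk per scene point
    return field

# One point 10 cm behind the plane. The cost scales as points x pixels, which
# is why lookup tables -- and now neural networks -- are used instead.
field = point_cloud_hologram([(0.0, 0.0, 0.1)], [1.0])
```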
[...]
The researchers first built a custom database of 4,000 computer-generated images, which each included color and depth information for each pixel. This database also included a 3D hologram corresponding to each image. Using this data, the convolutional neural network learned how to calculate how best to generate holograms from the images. It could then produce new holograms from images with depth information, which is provided with typical computer-generated images and can be calculated from a multi-camera setup or from lidar sensors, both of which are standard on some new iPhones. The new system requires less than 620 kilobytes of memory, and can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The researchers could run it an iPhone 11 Pro at a rate of 1.1 holograms per second and on a Google Edge TPU at a rate of 2 holograms per second, suggesting it could one day generate holograms in real-time on future virtual-reality (VR) and augmented-reality (AR) mobile headsets.
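As a rough sketch of the kind of model the summary describes -- a hedged illustration, not the paper's actual architecture -- a small fully convolutional network can map an RGB-D frame to an amplitude-and-phase hologram. All layer counts, widths, and the output encoding below are assumptions:

```python
# Hypothetical fully convolutional RGB-D -> hologram network (PyTorch).
import torch
import torch.nn as nn

class HologramNet(nn.Module):
    def __init__(self, hidden=24, layers=6):
        super().__init__()
        blocks = [nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU()]   # RGB + depth in
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU()]
        blocks += [nn.Conv2d(hidden, 2, 3, padding=1)]             # amplitude + phase out
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):                                  # (N, 4, H, W)
        out = self.net(rgbd)
        amp = torch.sigmoid(out[:, :1])                       # amplitude in [0, 1]
        phase = torch.pi * torch.tanh(out[:, 1:])             # phase in [-pi, pi]
        return amp, phase

net = HologramNet()
rgbd = torch.rand(1, 4, 384, 384)    # stand-in for a colour + per-pixel-depth frame
amp, phase = net(rgbd)               # one forward pass per hologram
```

A network this small (a few convolution layers, no physics simulation at run time) is at least consistent with the sub-620-kilobyte memory figure quoted above, though the real model's details differ.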


Comments Filter:
  • Misleading Title (Score:4, Interesting)

    by yeshuawatso ( 1774190 ) on Saturday March 13, 2021 @09:00AM (#61153758) Journal

    Unless the definition of real-time has added relativity to it, I'd argue 1 frame a second isn't exactly real-time. Maybe add the word "potentially" in there, since this requires a Crysis-level GPU? Also, can we get a consumer-standard definition of hologram? When I hear this word, I'm thinking depth as it relates to MY position, but this journal entry has defined it similarly to all of those AR demos on mobile phones.

    • Re:Misleading Title (Score:4, Interesting)

      by Fly Swatter ( 30498 ) on Saturday March 13, 2021 @09:05AM (#61153764) Homepage
      But this is a 3D hologram! As opposed to those fake 2D holograms... so a 3D hologram on a two-dimensional screen. I always thought a hologram was a projected representation of something in 3D space, not on a screen.

      At least the title uses Deep Learning and not AI.
      • I always thought a hologram was a projected representation of something in 3d space, not on a screen.

        Didn't you know?? Apparently we're calling 3D models "holograms" now.

        • by maynard ( 3337 )

          Look at the video.

          They used a laser to generate a 3D projection of both rendered and captured images, then captured the projection again with a camera to collect results data.

          This is real 3D projection holography they're doing. And while they only did the green channel for proof of concept, they make the point that RGB projection is easily doable.

          I'd say 1 fps using a phone GPU is pretty significant.

      • "Deep Learning" I'll take it where I can get it I suppose. But projection into 3D space is in consideration to my positioning, as it is everyone's I suppose so maybe I should have written "relative to any viewer's position at all times." I've always thought our obsession with holograms utilizing 2D images is in the wrong place. If a 4th dimensional object casts a 3D shadow, why not look at ways to use light to cast the shadow instead of trying to use a 2D shadow to guesstimate the 3D shape. Reminds me of al

      • by ceoyoyo ( 59147 )

        A hologram is a lower-dimensional (in this case 2D) recorded interference pattern that recreates a higher-dimensional (3D) light field when it's illuminated. The word also refers to the recreated light field. You take a light source, shine it through or bounce it off the hologram, and it creates a pattern of light that is (ideally) indistinguishable from the original.

        https://en.wikipedia.org/wiki/... [wikipedia.org]
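        As a small numerical companion to that definition, here is a NumPy sketch -- an editorial illustration with arbitrary parameters -- that records the interference of a point-source object wave with a plane reference wave, the classic in-line Gabor hologram:

```python
# Record the interference of a plane reference wave and a spherical object
# wave from a single point source; the resulting fringe pattern is a hologram.
import numpy as np

wavelength = 633e-9                 # red HeNe laser (illustrative)
k = 2 * np.pi / wavelength
pitch, n = 4e-6, 1024               # recording-plane sampling (illustrative)
z = 0.05                            # point source 5 cm from the plane

coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)

reference = np.ones((n, n), dtype=complex)        # plane wave at normal incidence
r = np.sqrt(x**2 + y**2 + z**2)
object_wave = np.exp(1j * k * r) / r              # spherical wave from the point

hologram = np.abs(reference + object_wave) ** 2   # what the film or CCD records
# Re-illuminating this pattern with the reference wave recreates the point's
# 3D light field, per the definition above.
```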

      • You can generate holographic images from a 2D screen, if you can force the resolution stupidly high and put a coherent light source behind it. But this is deep learning to drive a display that no one has managed to manufacture outside of a laboratory yet.
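        The "stupidly high" point follows from diffraction: a pixelated display can only steer light within a half-angle of arcsin(wavelength / (2 x pitch)), so wide viewing angles demand sub-micron pixels. A quick back-of-envelope check, as an editorial sketch with illustrative values:

```python
# Viewing half-angle of a pixelated holographic display vs. pixel pitch,
# using the grating limit sin(theta) = wavelength / (2 * pitch).
import math

wavelength = 520e-9   # green light
for pitch in (8e-6, 4e-6, 1e-6, 0.5e-6):
    theta = math.degrees(math.asin(wavelength / (2 * pitch)))
    print(f"pitch {pitch * 1e6:4.1f} um -> half-angle {theta:5.2f} deg")

# An 8 um SLM pixel gives under 2 degrees; roughly 0.5 um pixels (beyond any
# mass-produced panel) would be needed for about +/-30 degrees.
```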

    • Unless the definition of real-time has added relativity to it, I'd argue 1 frame a second isn't exactly real-time.

      That depends entirely on the frame delay.

    • by ceoyoyo ( 59147 )

      60 fps on a consumer-grade GPU is real-time. Considerably better, actually.

      I'm not sure what your complaint about the definition of hologram is. It has a quite precise one, and has had since Gabor invented the things in the 1940s.

    • Sounds like a one-pixel hologram - like those email trackers.
    • The paper is titled "Towards real-time ...". The article author (and whoever from IEEE Spectrum posted the video on YouTube) is intentionally misleading, as "towards" doesn't sound as exciting as "IT'S HERE NOW!"

  • by iamhassi ( 659463 ) on Saturday March 13, 2021 @09:53AM (#61153846) Journal
    Wonder why they included the Google Edge TPU as an example instead of more commonly recognized devices like a Samsung phone or a Snapdragon CPU?
  • by Plugh ( 27537 ) on Saturday March 13, 2021 @10:56AM (#61153962) Homepage
    When will there be 3D holographic real-time deepfake pr0n? I don't usually swing that way but Tom Cruise...
  • Someone help me out here. They're not making real holograms, and pictures of 3D objects don't require much power. Are they generating but not displaying holograms? Are they generating 3D objects and displaying them in 2D?
    • Same confusion here. Are they saying Deep Learning is powerful enough to turn 2D displays into 3D ones?

    • Looks like they're generating but not displaying holograms.

    • by ceoyoyo ( 59147 )

      I don't see where you got that from. They're generating holograms, and displaying them.

      The first holograms were recorded on film and then projected by shining a light on the film. Today you can replace the film, for recording, with a CCD, and for display, with a "spatial light modulator," which is a super-expensive LCD.

      So what they're doing is taking a synthetic 3D scene, using a convolutional neural network to turn it into a hologram, displaying the hologram on the SLM, shining a light through it, and voila, a projected hologram.
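      That final "shine a light through it" step can also be simulated numerically: propagate the SLM pattern with the angular spectrum method and the light field re-forms at the chosen depth. A self-contained editorial sketch, where the random phase stands in for a real network output and all parameters are illustrative:

```python
# Numerically "shine a light through" a phase pattern: angular spectrum
# propagation of the field leaving a phase-only SLM.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    kz2 = (1 / wavelength) ** 2 - fxx ** 2 - fyy ** 2   # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.clip(kz2, 0.0, None))   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

phase = np.random.uniform(-np.pi, np.pi, (512, 512))    # stand-in for a CNN output
slm_field = np.exp(1j * phase)                          # unit plane wave hits the SLM
reconstruction = angular_spectrum(slm_field, 520e-9, 8e-6, 0.05)
intensity = np.abs(reconstruction) ** 2                 # what a camera records at 5 cm
```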

    • TL;DR: the guys take the famous royalty-free Big Buck Bunny video source, which is just 2D, extract depth from that source, and create a 3D representation which is effectively a holographic one.

      So, holography (and holograms) in the context of the paper is the process of generating 3D representations from 2D sources (in this case 2D sources with depth data or from where depth can be extracted and/or extrapolated). Part of creating a hologram involves that process, because even current 3D scanning involves reco

  • Needing 24 seconds to create 1 second of video is not exactly "nearly instantly".
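    For reference, the arithmetic behind that complaint, using the rates quoted in the summary (the 24-second figure presumably pairs roughly 24 fps video with the phone's ~1 hologram per second):

```python
# Seconds of compute per second of 24 fps video at the quoted hologram rates.
phone_rate = 1.1      # holograms/s on an iPhone 11 Pro
desktop_rate = 60     # holograms/s on a consumer-grade GPU
video_fps = 24

print(f"phone:   {video_fps / phone_rate:5.1f} s per second of video")   # ~21.8 s
print(f"desktop: {video_fps / desktop_rate:5.2f} s per second of video") # 0.40 s
```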
