PAH! Four decades of Star Wars: No lightsabers, no palm-sized video calls

Sort of. Leia's a New Hope


Star Wars New Hope @ 40

When Lucasfilm recently unveiled its tribute reel to the late Carrie Fisher, one of the most memorable monologues in cinema sat right at its centre.

“General Kenobi. Years ago, you served my father in the Clone Wars... Now he begs you to help him in his struggle against the Empire...”

Reading those words, we can see the princess, tiny and shrouded in projected light.

Princess Leia hologram

At the time of the US release of Star Wars: Episode IV – A New Hope forty years ago, on May 25, that hologram princess seemed the purest movie magic, brought to life through the wizardry of special effects. In the decades since, Moore’s Law has ground away: a doubling every two years compounds to 2^20 over forty years, so our transistors are somewhere around a million times smaller, cheaper and faster than when Lucas was filming in the mid-1970s, and a hologram that seemed so fanciful then has now come within reach.

In that spirit, let’s put ourselves into Princess Leia’s position – needing to send a message via hologram. Where do we start?

Capturing the essence

A bit more than 20 years ago, a paper presented at the annual SIGGRAPH conference on computer graphics – where graphics boffins share their research – described a new technique to create “realistic three-dimensional models from a series of two-dimensional photographs.”

The basic idea was simple: take enough snaps from enough different angles, then leave it to the computer to “knit” all of the photos into a coherent multi-angle representation. After a fierce calculation, software would extrude 3D shapes from the photos. Take more photos, get a more accurate shape – at the cost of more computer time.
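If you want a feel for that calculation, here is a minimal sketch of the two-photo case in Python with OpenCV: matched features yield the relative camera pose, and triangulation extrudes the 3D points. The file names and camera intrinsics are invented for illustration – a real pipeline calibrates its cameras and knits many more views.

    # Minimal two-view photogrammetry sketch: recover relative camera
    # pose from matched features, then triangulate a sparse 3D cloud.
    # File names and intrinsics below are hypothetical.
    import cv2
    import numpy as np

    img1 = cv2.imread("angle_one.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("angle_two.jpg", cv2.IMREAD_GRAYSCALE)

    # Find distinctive features visible in both photographs
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Guessed pinhole intrinsics; real rigs calibrate these
    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])

    # The essential matrix encodes how the two viewpoints relate
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Projection matrices for both cameras, then triangulate matches
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    cloud = (pts4d[:3] / pts4d[3]).T    # N x 3 points, up to scale

More photos feed more matches into the same machinery, which is where the extra accuracy – and the extra computer time – comes from.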

Once difficult and expensive, this technique, known as photogrammetry, has become as cheap as chips – or at least as cheap as an HTC headset and a PC. Free software packages can now knit photographs together and whip out a 3D model for viewing on screen or in a VR system.

Earlier this month, Valve transformed its Steam VR environment into an open-to-all-comers photogrammetry platform, allowing all Steam VR users to explore these exquisitely detailed moments in time. On an HTC Vive – 1080x1200 pixels per eye, refreshed at 90Hz across a 110-degree field of view – these worlds are beautiful, but static. Statues in a museum.

Our princess, vitally alive, needs something that will capture both image and essence, a dynamic reflection of her plea. We need a medium of motion, something with a timeline.

It still takes seconds to transform a single captured moment into a photogrammetric model. Going to thirty frames a second represents a problem of an entirely new order. Moore’s Law has been good to us – but is it that good?

Early last year I learned it was.

At the Los Angeles offices of a startup called 8i, I watched a crusty old boxing coach as he went through the basics of stance, posture, and balance. This wasn’t video. He was right there, in front of me, tangibly filling space, and seeming very nearly real. Nothing uncanny about it, no tinge of the creepiness of a digital golem given unnatural life by an animator. This is videogrammetry. My first “hologram” possessed the same verisimilitude as a photograph or video – with volume and depth.

We need to be very careful of a confusion of terms. Holography is a three-dimensional imaging technique using coherent light beams (generated by lasers) that recreates an image from an interference pattern. These “holograms” created via videogrammetry are nothing of the sort. The only thing “holograms” have in common with holography is that both generate a three-dimensional representation.
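To make the distinction concrete, here is the textbook form of the recording step (standard optics, not anything from Lucasfilm): a holographic plate exposed to an object wave O and a coherent reference wave R records the intensity

    I(x, y) = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R

The cross terms O R^* and O^* R preserve the phase of the object wave – the depth information – which is what lets a re-illuminated plate reconstruct the scene. An ordinary photograph records only |O|^2 and throws that phase away.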

But the princess isn’t picky. These faux holograms are good enough – and they’re available today.

To make one of these videograms you start with a lot of cameras. A typical videogrammetry rig has upwards of forty high-definition cameras, each trained on the subject from a different angle, all of them cranking out images thirty times a second, generating terabytes of data every hour.
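A back-of-envelope check on that claim – the camera count, resolution and colour depth below are illustrative assumptions, not any vendor's published spec:

    # Rough data rate for a hypothetical 40-camera videogrammetry rig.
    # All figures are assumptions for illustration.
    cameras = 40
    width, height = 1920, 1080    # full HD
    bytes_per_pixel = 3           # 24-bit RGB, uncompressed
    fps = 30

    frame_bytes = width * height * bytes_per_pixel
    rig_per_second = cameras * frame_bytes * fps
    rig_per_hour_tb = rig_per_second * 3600 / 1e12

    print(f"{rig_per_second / 1e9:.1f} GB/s, {rig_per_hour_tb:.0f} TB/hour raw")
    # About 7.5 GB/s – roughly 27 TB every hour before any compression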

That data must be knit together, processed and composed into a hologram – photogrammetry’s already heavy computation, now applied to terabytes of full HD video arriving thirty frames a second. It’s the sort of problem even a data centre would struggle with.

Scott O’Brien, founder and CEO of videogrammetry startup Humense, reckons the princess’s ship would be involved. “If we have depth sensors that know about each other, and correct for each other, with processing going on in R2D2, we’d have the resources to capture videogrammetry.” R2D2 would utilise some of the computing capacity on board the Tantive IV (the princess’s captured transport), tapping into the vessel’s surveillance cameras to capture the bits of the princess facing away from the droid.
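A sketch of the fusion O’Brien describes: sensors that “know about each other” share calibrated poses, so each one’s depth points can be mapped into a common coordinate frame and merged into a single cloud. Every pose and point below is a hypothetical stand-in.

    # Multi-sensor fusion sketch: each depth sensor reports points in
    # its own frame; a calibrated rigid transform (rotation R plus
    # translation t) maps them into a shared world frame for merging.
    # All poses and points are hypothetical stand-ins.
    import numpy as np

    def to_world(points, R, t):
        """Apply a rigid transform to an N x 3 array of sensor points."""
        return points @ R.T + t

    sensor_a = np.random.rand(1000, 3)      # stand-in depth data
    sensor_b = np.random.rand(1000, 3)

    R_a, t_a = np.eye(3), np.zeros(3)       # sensor A defines the origin
    theta = np.pi / 2                       # sensor B sits 90 degrees round
    R_b = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
    t_b = np.array([1.5, 0.0, 1.5])         # metres from sensor A

    merged = np.vstack([to_world(sensor_a, R_a, t_a),
                        to_world(sensor_b, R_b, t_b)])
    print(merged.shape)                     # one cloud, both viewpoints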

With current tech, R2D2 would still be waiting for the progress bar to fill while Vader did his worst ... Pic by Shutterstock

It could be done – but how long would it take to post-process her message? Too long: Vader would capture the droids while R2 waited for the progress bar to fill. Quickly we go from New Hope to No Hope.

Recently, boffins at Microsoft Research showed how to combine Kinect-like depth-sensing cameras with photographic capture to generate videogrammetry in real time. A depth camera – already on Google’s Tango devices, and rumoured to be a feature of the next iPhone – provides much of the information generated by conventional photogrammetry calculations. Although it’s neither as pretty nor as convincing as compute-intensive videogrammetry, it’s here today – and fast enough for the princess to record her message, then dispatch R2D2 to Tatooine.
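The reason a depth camera is such a shortcut: every pixel already carries the distance photogrammetry must otherwise compute from multiple views, so turning one frame into a point cloud is a simple pinhole back-projection. The intrinsics below are invented for illustration.

    # Back-projecting a depth map into 3D with pinhole intrinsics.
    # Focal lengths and optical centre here are invented values.
    import numpy as np

    fx = fy = 525.0            # focal lengths in pixels (hypothetical)
    cx, cy = 319.5, 239.5      # optical centre of a 640 x 480 sensor

    depth = np.random.uniform(0.5, 4.0, (480, 640))  # stand-in depths, metres

    v, u = np.indices(depth.shape)     # pixel row and column coordinates
    z = depth
    x = (u - cx) * z / fx              # pinhole camera model, inverted
    y = (v - cy) * z / fy
    cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    print(cloud.shape)    # 307,200 points from a single frame, no knitting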
