
Nowhere to run to, nowhere to hide, muaha... Boffins build laser-eyed intelligent cam that sorta sees around corners

Guess we can't escape our future Terminator overlords

Artificial intelligence with frickin' laser beams attached can see objects hidden around corners, according to a study published in the journal Optica on Thursday.

Academics, led by boffins at Stanford University, hope the technology will one day allow self-driving cars to spot potential hazards, such as pedestrians crossing at busy intersections or parked cars.

“Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense,” said Felix Heide, a computer vision professor from Princeton University and co-author of the study. “Our work takes a step toward enabling its use in a variety of such applications.”

Here’s how it works: first, a laser beam is steered to strike a wall where it is reflected onto an object hidden around the corner. The wall is referred to as a virtual source. The laser light from the hidden object is bounced back onto part of the wall that is being watched by a camera connected to a machine-learning system.

The reflection on the wall appears in the form of a speckle pattern, created by the interference of the light leaving the laser and the beam reflected from the hidden object. This pattern contains information about the overall shape of the object. The pattern is observed by the aforementioned camera and fed into a convolutional neural network (CNN), which decodes it to reconstruct an image of the object.
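The interference step can be sketched in a few lines of NumPy. This is a one-dimensional toy, not the paper's actual optics: the camera records the squared magnitude of the reference field plus the object's return, and it is the interference cross-term of that sum that carries information about the hidden shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "wall" with 256 observation points (illustration only; the
# real system records a 2-D speckle field).
n = 256

# Reference field: laser light scattered directly off the rough wall,
# modelled as a unit-amplitude wave with random phase.
ref = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))

# Hidden object: a small reflective shape, here a bar of width 20.
obj = np.zeros(n)
obj[100:120] = 1.0

# Field returning from the object, propagated crudely as a Fourier
# transform (a far-field approximation, assumed here for simplicity).
obj_field = np.fft.fft(obj) / n

# The camera records intensity of the summed fields -- the speckle pattern.
intensity = np.abs(ref + obj_field) ** 2

# Expanding |a+b|^2 = |a|^2 + |b|^2 + 2*Re(a* b): the cross-term is the
# part of the measurement that encodes the hidden object.
cross_term = intensity - np.abs(ref) ** 2 - np.abs(obj_field) ** 2
print(np.allclose(cross_term, 2 * np.real(np.conj(ref) * obj_field)))  # True
```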


A schematic describing how the imaging system works. The wall used to reflect the light to and from the object is referred to as a virtual source and a detector. Image credit: Prasanna Rangarajan, Southern Methodist University

The CNN was trained to analyse spatial correlations in the speckle image to estimate the amount of light reflected off the hidden object, so that it can piece together its outline. To test their imaging system, the team reconstructed images of a series of letters and numbers measuring a centimeter tall that were placed around a corner. The camera was placed 0.8 metres from the reflecting wall.
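Those "spatial correlations" can be made concrete with a plain autocorrelation, computed here via the Wiener–Khinchin theorem on a stand-in random frame. This is a hand-rolled statistic for illustration only; the team's CNN learns its own features rather than applying this formula.

```python
import numpy as np

def spatial_autocorrelation(img):
    """2-D spatial autocorrelation via the Wiener-Khinchin theorem:
    the inverse FFT of the power spectrum of the mean-removed image."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    return np.fft.fftshift(ac)  # put zero lag at the centre

rng = np.random.default_rng(0)
speckle = rng.random((64, 64))   # stand-in for a captured speckle frame
ac = spatial_autocorrelation(speckle)

# The zero-lag correlation is always the maximum and, after fftshift,
# sits at the centre of the map.
peak = np.unravel_index(np.argmax(ac), ac.shape)
print(peak)  # (32, 32)
```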

The system was able to reconstruct images of the one-centimeter-tall letters and numbers hidden behind a corner, using an imaging setup about one meter from the wall. With an exposure length of a quarter of a second, the approach produced reconstructions with a resolution of 300 microns.


An example of an image reconstructed from a hidden object. From left to right: the hidden object, a letter 'F'; the reconstruction of the letter from a speckle image; and the processed reconstruction, which retrieves the final outline of the letter 'F'.

"The biggest issue with our technology is that it currently has a very narrow field of view; only a couple inches or so," Christopher Metzler, a postdoctoral researcher at Stanford University and first author of the study, told The Register. "This means it can image small objects very well like numbers on a license plate, but can't make out large objects like cars or people.

"Fortunately, we can pair it with other non-line-of-sight systems [developed] by many researchers, that can capture low resolution images of larger scenes. These systems work by exploiting the fact that light travels with a fixed and finite speed; relying on the time-of-flight information to recover obscured object information. Our system instead treats light as a wave that causes interference and exploits the structure of this interference to reconstruct an image of the hidden object."

The experimental setup obviously isn't large or complex enough to mirror real-life scenarios for self-driving cars. But the researchers believe their technique is promising, since it copes well with noise and doesn't require a long exposure to capture a decent image.

“Compared to other approaches for non-line-of-sight imaging, our deep learning algorithm is far more robust to noise and thus can operate with much shorter exposure times,” said Prasanna Rangarajan, an assistant professor of engineering from Southern Methodist University.

“By accurately characterizing the noise, we were able to synthesize data to train the algorithm to solve the reconstruction problem using deep learning without having to capture costly experimental training data.” ®
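Rangarajan's point about synthesizing training data is standard practice: simulate clean measurements, then corrupt them with a noise model matched to the camera. The model below — Poisson shot noise plus Gaussian read noise — is a common choice and an assumption on our part; the article doesn't spell out the team's actual calibrated characterization.

```python
import numpy as np

def add_camera_noise(clean, photons=500, read_sigma=2.0, seed=0):
    """Corrupt a clean simulated frame with shot noise (Poisson, scaled
    to a photon budget) and Gaussian read noise. The parameters here are
    hypothetical, not the paper's calibrated values."""
    rng = np.random.default_rng(seed)
    scaled = clean / clean.max() * photons        # expected photon counts
    shot = rng.poisson(scaled).astype(float)      # shot noise
    return shot + rng.normal(0.0, read_sigma, clean.shape)  # read noise

rng = np.random.default_rng(1)
clean = rng.random((64, 64))    # stand-in for a simulated clean speckle frame
noisy = add_camera_noise(clean)
# (noisy, clean) pairs like this can be generated in bulk to train the
# reconstruction network without capturing experimental data.
```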
