HPC

Real-time drone videos get GPU-tastic

Swimming in sensors, drowning in data


HPC blog Here at the GPU Technology Conference (GTC 2012), you see a lot of things that you didn’t think were quite possible yet. Case in point: cleaning up surveillance video.

The standard scene in “24” or any spy thriller is of agents poring over some grainy, choppy, barely-lit video that’s so bad you can’t tell whether it’s four humans negotiating an arms deal or two bears having an animated conversation about football. In the Hollywood version, the techno geek says, “Let me work on this a little bit,” and suddenly things clear up to the degree that not only can you see the faces clearly, you can tell when the guys last shaved.

Cleaning up and enhancing video is a tall order, compute-wise – and doing it in real time? Hella hard. But I just saw a demo of exactly that in a GTC12 session run by MotionDSP. Their specialty is processing video streams from mobile platforms (think drones and airplanes) on the fly. We’re talking full-motion, 30-frames-per-second video streams that are enhanced, cleaned up, and made readily analysable in real time.

The amount of processing they’re doing is incredible. Lighting is corrected, edges are sharpened, jitter is taken out, and the on-screen metadata (time, location, speed, etc) is masked. Again – all in real time.
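For the curious, here is a very rough, hypothetical sketch of what those individual steps can look like – crude frame-to-frame stabilisation, adaptive lighting correction, and unsharp-mask edge sharpening – written as plain CPU-side Python with OpenCV. It is emphatically not MotionDSP’s pipeline (their GPU implementation is what makes the real-time part possible); the function names and parameters below are just standard OpenCV calls chosen for illustration.

```python
# Hypothetical, much-simplified sketch of per-frame video clean-up:
# stabilisation, lighting correction, and edge sharpening. NOT MotionDSP's
# method -- just a CPU-side illustration of the kinds of steps described above.
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # adaptive contrast

def enhance_frame(prev_gray, frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Crude stabilisation: estimate the frame-to-frame shift/rotation from
    #    tracked corner features and warp the new frame back onto the old one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    if pts_prev is not None:
        pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
        good = status.ravel() == 1
        if good.sum() >= 3:
            m, _ = cv2.estimateAffinePartial2D(pts_cur[good], pts_prev[good])
            if m is not None:
                frame = cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]))

    # 2. Lighting: contrast-limited adaptive histogram equalisation on the luma channel.
    y, cr, cb = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb))
    frame = cv2.cvtColor(cv2.merge([clahe.apply(y), cr, cb]), cv2.COLOR_YCrCb2BGR)

    # 3. Edges: unsharp masking (subtract a blurred copy to boost high frequencies).
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    frame = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

    return gray, frame
```

A caller would decode the stream and feed each new frame, along with the previous frame’s greyscale image, through enhance_frame; the metadata-masking step is left out of this sketch entirely.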

The effect is profound. In the demo, what was once just a vague gray ship (which seemed to be vibrating like a can in a paint-shaker) was clarified so that you could easily see what kind of ship it was and also see two suspicious figures milling around on deck. To me, it looked like there were enough pixels to enhance the video even further – to the point where we could identify the figures.

As the folks from MotionDSP explained, processing at this speed simply isn’t possible without using GPUs. Cleaning up a single stream of video to that degree takes 160 gigaflops of processing power. A single GPU card (Fermi-based, presumably) can handle two simultaneous HD streams or four to six standard-definition streams.
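As a back-of-envelope sanity check on those stream counts (my own arithmetic, using an assumed peak figure for a Fermi-era Tesla card, not anything MotionDSP quoted):

```python
# Rough sanity check on the stream counts above. The per-stream figure is from
# the talk; the card's peak rate is an assumed ballpark for a Fermi-era Tesla
# (the C2050-class part is rated at roughly 1 teraflop single precision).
PER_HD_STREAM_GFLOPS = 160        # from the MotionDSP session
FERMI_PEAK_SP_GFLOPS = 1030       # assumed: Tesla C2050-class peak, single precision

hd_streams = 2
required = hd_streams * PER_HD_STREAM_GFLOPS
print(f"{required} GFLOPS sustained, about {required / FERMI_PEAK_SP_GFLOPS:.0%} of peak")
# -> 320 GFLOPS sustained, about 31% of peak: a plausible achievable fraction for
#    memory-heavy image processing, which is why two HD streams per card is the ceiling.
```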

Not surprisingly, MotionDSP's biggest customers are various branches of the US government (Air Force, Naval Special Warfare Group, and lots of other secret acronym agencies). In fact, the “swimming in sensors, drowning in data” quote is from a general (I think) talking about their struggle to take advantage of the masses of data provided by their sensor platforms.

Check out the demo videos on the MotionDSP website; it’s interesting stuff for sure. While the early applications are typically military surveillance, how far off is the day when we’ll see this technology used to make other kinds of video clearer?

I’m thinking about the typical YouTube video shot from a helmet cam worn by some kid on a bike at the top of a huge mountain. What’s always detracted from my viewing experience is the way the video gets so shaky and distorted after he loses his balance and starts to tumble down the mountainside. Sure, the first hit is clear, and maybe the first loop, but once he picks up speed, there’s just too much distortion. Hopefully, MotionDSP will release an edition at a price scaled to the amateur stunt man. ®
