FPGAs for AI? GPUs and CPUs are the future, shrugs drone biz Insitu

Unmanned surveill-o-plane firm goes under the hood with El Reg

Interview "It's just too hard to maintain all of those threads," eye-in-the-sky drone firm Insitu told The Register, explaining its move away from FPGAs to commercial off-the-shelf compute hardware for its AI and machine learning tech.

The firm's chief growth officer, Jon Damush, was answering our questions about the tech underneath the bonnet, so to speak. We know all about the aeronautical hardware but what's the state of play with its real special sauce, its image analysis software?

Damush – formerly a gros fromage at image-crunching firm 2D3 Sensing, which had a significant AI/ML-focused base in Oxford, until that company was snapped up by Insitu a few years ago – told us: "Most of the tools that are deployed today [on Insitu's gear] are thin client tools that run on local computers, so dedicated metal if you will, which the customer likes because it's in their secure environment. But in the last two years we've been investing in building a cloud-based data management system so that we can store and move data globally with the use of secure cloud."

Insitu builds light unmanned aerial vehicles (UAVs) along with its own software for flying them and crunching the imagery data collected by the craft's onboard cameras and sensors. Most of its customers are military (including the UK) but the firm is steadily making inroads into the commercial world.

An Insitu Scaneagle drone being launched from a Royal Navy warship. Crown copyright

"In the future we will be investing more in making the air vehicles themselves capable of onboard compute," continued Damush. "What we're doing today is exploring a variety of architectures from the thin client world to cloud to potential ways in which we can write code once and deploy in just about any of those environments... Think of wide area data collects where you're trying to produce some kind of derivative product, like a map. That further underscores why we have to use generic off-the-shelf compute... it's just too hard to maintain all of those threads. What we really have to do is focus on the algo and the collection methodologies of our systems and then use whatever we can to build out the rest."

He explained that while Insitu has traditionally stuck to FPGAs for its in-flight compute, with the advances being made in GPU and CPU tech by firms such as Nvidia and Intel, it no longer makes commercial sense for the firm to effectively keep rolling its own.

In cloudy terms, Insitu is working "primarily with both Microsoft and Amazon", Damush said, with the data architecture for Insitu's commercial software being "built on top of Amazon".

"And we're using some of their cutting edge capabilities, some pre-release stuff," he boasted. While he wouldn't be drawn further on that he did say that "we have used Snowball in a pretty novel way". Snowball is AWS's shift-your-cloud box, which can move up to 50TB from one bit barn to another.

Machines, rising

Compute and storage architecture aside, Insitu's work on practical applications of AI and ML tech is mature. The company's Oxford-based AI brainboxes have "become, effectively, our AI and ML experts inside the company" in Damush's words. "Where the team has done some pretty novel work is in the automatic training of data sets. So building a corpus of information from a relatively limited input set and then working through the user interface concerns to make it easy for a human to provide additional supervised learning. We're really capitalising on that virtuous cycle of machine learning with occasional human input.
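That "virtuous cycle of machine learning with occasional human input" is, in general terms, an uncertainty-routing loop: the classifier keeps the labels it is confident about and punts the rest to a person. Insitu hasn't published its implementation, but a minimal, hypothetical sketch of the idea looks like this (all function and parameter names here are our own invention):

```python
# Hypothetical sketch of a human-in-the-loop labelling cycle: the
# classifier auto-accepts confident predictions and routes only the
# uncertain ones to a human for supervised correction.

def uncertainty(prob):
    """Distance from a confident prediction: 0.0 = certain, 1.0 = coin flip."""
    return 1.0 - abs(prob - 0.5) * 2.0

def route_for_review(predictions, threshold=0.8):
    """Split predictions into auto-accepted labels and items for a human.

    predictions: list of (item_id, probability-of-positive-class) pairs.
    threshold: minimum model confidence required to skip human review.
    Returns (auto_labelled, needs_human).
    """
    auto_labelled, needs_human = [], []
    for item_id, prob in predictions:
        if uncertainty(prob) < (1.0 - threshold):
            auto_labelled.append((item_id, prob >= 0.5))  # keep the model's call
        else:
            needs_human.append(item_id)  # too close to call; ask a person
    return auto_labelled, needs_human
```

The human-corrected labels then go back into the training set, which is what makes the cycle "virtuous": each pass leaves fewer items below the confidence bar.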

"One of the capabilities we spent some time doing was taking relatively small samples of representative images, like a specific vehicle, and then using computer vision to effectively build a model of that and then using that model to generate a variety of other representative views that can then be used for machine learning classifiers... Most importantly, real people, not a sophisticated user, can capitalise on machine learning."
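Damush is describing data augmentation: mechanically expanding a small set of representative images into many training views. Insitu's version builds a model of the vehicle and renders novel viewpoints from it; as a much simpler, hypothetical illustration of the same principle, even basic geometric transforms multiply a sample set several-fold:

```python
# A minimal sketch of view generation via augmentation: each sample
# image (here a 2D list of pixel values) is expanded into four views
# using flips and rotations. Insitu's actual system renders views from
# a learned model of the target, which it has not detailed publicly.

def hflip(img):
    """Mirror an image left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(samples):
    """Expand each sample into four views: original, mirror, 90° and 180° turns."""
    views = []
    for img in samples:
        views.extend([img, hflip(img), rot90(img), rot90(rot90(img))])
    return views
```

A classifier trained on the expanded set sees the target from orientations the original handful of photos never covered, which is the point of the exercise.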

Classifiers are at the heart of any machine learning-based computer vision system. They are particularly critical for Insitu's tech, which is mostly deployed at sea on the back of government-owned ships (military and civilian) to survey everything from floating debris to drug smugglers. The tech is well regarded, with the US Coast Guard particularly proud of its drug busts made in part with the aid of the firm's Scaneagle drone.

An Insitu Scaneagle 2 pictured in its transit case at Mazagón, Spain

"We've been working pretty closely with a handful of specific customers to try to mature more of the machine learning capabilities on their data sets," continued Damush. "We've been able to work with Shell in Australia to produce some machine learning classifiers that are starting to automatically check on the needs of their infrastructure managers... automatically classifying whether or not an [oil rig] wellhead is open or closed using machine learning. We've been able to give a representative sample of what the various wellheads look like, we've been able to train classifiers and now we've been able to run that in real time on their data."
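The wellhead case is a binary image classification task: label each frame "open" or "closed" from a set of representative examples of each state. Insitu hasn't said which classifier it trained for Shell, but a nearest-centroid model over extracted feature vectors is a hypothetical, minimal sketch of the shape of the problem (feature extraction itself is out of scope here):

```python
# Hypothetical two-class nearest-centroid classifier: average the
# feature vectors of the "open" and "closed" training samples, then
# assign a new frame to whichever centroid it sits closest to.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(open_samples, closed_samples):
    """Build a model from representative samples of each wellhead state."""
    return {"open": centroid(open_samples), "closed": centroid(closed_samples)}

def classify(model, features):
    """Label a new frame by squared distance to the nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))
```

Scoring a frame is a handful of arithmetic operations per class, which is what makes this family of classifier cheap enough to "run in real time" on a customer's incoming data.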

Learning to learn

Gathering training data is a little trickier, however. As many of Insitu's customers are military, Damush admitted "that can be problematic because they don't want their data going anywhere else". In contrast, he said, in the commercial world, "I won't say they're loose with their data but it doesn't have the 'national secret' kind of classification, right?"

Insitu's approach to commercialising AI and ML is not to build something technically intriguing for the sake of it and then start hunting for someone who might want to invest in it or buy the standalone product. Instead, as Damush told us: "From a business perspective it makes more sense to be able to provide our customers the whole solution as opposed to separate line items on a contract with pieces of software they're going to buy. So our view when it comes to the application of AI and machine learning is that it's not just something that our Oxford team works on and then throws over... it has to be done collaboratively across the company."

Intriguingly, Damush was also clear that Insitu is working on making its ML tech end user-friendly, explicitly because "we don't necessarily want to force our customers to always have to come back to us to train their classifiers, which is why we're focusing our UK team's energies on how do we build training sets more efficiently and with easier user interfaces." He added: "We want to enable those customers to be able to take those capabilities and the entire system we've designed around satisfying the use case and be able to go train up something new."

"The cycle time for them to give us data, for us to build classifiers and train algorithms and then give it back to them, that's probably too long to be effective," he said. "So let's democratise the tools, let's give them the tools they need to build the classifiers on their own and they can build at the pace of the mission."

Damush also hinted that he wanted to see AI tech aboard Insitu's unmanned aircraft in around five years or so, the idea being that if the aircraft lost comms with its ground base, it could carry on autonomously to complete its assigned mission rather than turning around and going back to base. Proof-of-concept versions of this have already been tested by AI researchers, albeit on a very small scale. For now, though, humans will remain firmly in the loop. ®
