Boffins build smallest drone to fly itself with AI

Hand-sized quadrotor packs a neural network


A team of computer scientists has built the smallest completely autonomous nano-drone yet: a flying machine that can control itself without any need for human guidance.

Although computer vision has improved rapidly thanks to machine learning and AI, it remains difficult to deploy algorithms on devices like drones due to memory, bandwidth and power constraints.

But researchers from ETH Zurich in Switzerland and the University of Bologna in Italy have managed to build a hand-sized drone that flies autonomously while consuming only about 94 milliwatts (0.094 W) of power. Their efforts were published in a paper on arXiv earlier this month.

At the heart of it all is DroNet, a convolutional neural network that processes incoming images from a camera at 20 frames per second. It works out a steering angle, so that it can control the direction of the drone, and the probability of a collision, so that it knows whether to keep going or stop. Training was conducted using thousands of images taken from bicycles and cars driving along different roads and streets.
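
That description maps onto a familiar pattern: one shared convolutional trunk feeding two output heads. For the curious, here's a minimal PyTorch sketch of the idea; the layer sizes and the class name TwoHeadConvNet are illustrative assumptions, not the published DroNet architecture.

import torch
import torch.nn as nn

class TwoHeadConvNet(nn.Module):
    # Illustrative stand-in for a DroNet-style network: a shared
    # convolutional trunk feeding two heads, one regressing a steering
    # angle and one predicting a collision probability.
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(            # layer sizes are made up
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steer = nn.Linear(64, 1)          # steering angle (radians)
        self.collide = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        features = self.trunk(x)
        return self.steer(features), self.collide(features)

# One grayscale camera frame in; steering angle and collision
# probability out. At 20 frames per second, this loop runs 20 times
# every second.
frame = torch.rand(1, 1, 200, 200)
angle, p_collision = TwoHeadConvNet()(frame)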

DroNet was previously deployed on a Parrot Bebop 2.0, a larger commercial off-the-shelf drone.

In that earlier setup, the drone had to stay in radio contact with a laptop running DroNet on a high-powered processor. Now, all the number crunching is done directly on the PULP (Parallel Ultra Low Power) platform developed by ETH Zurich and the University of Bologna, using GAP8, a chip about the size of a five-eurocent coin built on the open-source RISC-V processor architecture.

“Computation is fully on-board, from state-estimation to navigation controls. This means, nano-drones are completely autonomous. This is the first time such a small quadrotor can be controlled this way, without any need of external sensing and computing. The methodology remains however almost unchanged using steering angle and the collision probability prediction [in DroNet],” Antonio Loquercio, one of the researchers, told The Register.
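
The article doesn't spell out the control law, but the two predictions suggest a simple mapping, sketched below in Python: throttle the forward speed down as the collision probability rises, and steer with a smoothed version of the predicted angle. The constants ALPHA and V_MAX are assumptions for illustration, not values from the paper.

ALPHA = 0.7   # smoothing factor for an exponential low-pass filter (assumed)
V_MAX = 1.0   # maximum forward speed in metres per second (assumed)

v_prev, yaw_prev = 0.0, 0.0

def control_step(steering_angle, p_collision):
    """Turn one (steering, collision) prediction into velocity commands."""
    global v_prev, yaw_prev
    # Slow down as the collision probability rises; stop as it nears 1.
    v_cmd = (1 - ALPHA) * v_prev + ALPHA * (1.0 - p_collision) * V_MAX
    # Smooth the steering prediction to avoid jerky yaw changes.
    yaw_cmd = (1 - ALPHA) * yaw_prev + ALPHA * steering_angle
    v_prev, yaw_prev = v_cmd, yaw_cmd
    return v_cmd, yaw_cmd

# A high collision probability cuts the forward speed sharply.
v, yaw = control_step(steering_angle=0.1, p_collision=0.8)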

DroNet is deployed by running the most computationally intensive kernels of the algorithm in parallel across eight of the chip's RISC-V cores. The newer system is slightly smaller and performs fewer computations to reach roughly the same performance.
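
The real on-chip code is firmware for GAP8's cluster cores, but the work-splitting idea can be sketched in Python: partition the output rows of a convolution evenly across eight workers and stitch the strips back together. This only illustrates the partitioning scheme (Python threads won't give a genuine eight-way speedup on this loop), and the function names here are our own.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

NUM_CORES = 8  # GAP8's cluster has eight RISC-V worker cores

def conv_rows(image, kernel, row_start, row_stop):
    """One strip of a valid 2D cross-correlation (a conv layer's core loop)."""
    kh, kw = kernel.shape
    out = np.empty((row_stop - row_start, image.shape[1] - kw + 1))
    for r in range(row_start, row_stop):
        for c in range(out.shape[1]):
            out[r - row_start, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

def parallel_conv(image, kernel):
    """Split the output rows evenly across workers, then reassemble."""
    out_rows = image.shape[0] - kernel.shape[0] + 1
    bounds = np.linspace(0, out_rows, NUM_CORES + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
        strips = pool.map(conv_rows, [image] * NUM_CORES,
                          [kernel] * NUM_CORES, bounds[:-1], bounds[1:])
    return np.vstack(list(strips))

feature_map = parallel_conv(np.random.rand(200, 200), np.random.rand(5, 5))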


The architecture of a GAP8 processor. Image credit: Palossi et al.

But it suffers from some of the same setbacks as the older model. Since it was trained on images captured in a single horizontal plane, the drone can only move horizontally and cannot fly up or down. Also, Loquercio reckoned it can fly for no longer than three to four minutes given the extra payload of the processor. "We did not test this thoroughly, though," he said.

Autonomous drones are desirable: if we're going to use them to do things like deliver packages, it would be grand if they could avoid obstacles rather than being restricted to known-safe routes. Autonomy will also help drones monitor environments, spy on people, and develop swarm intelligence for military use.

But experts have raised concerns about baking AI into drones, on grounds that they'll become better at delivering lethal payloads.


That won't be a problem with this drone: Loquercio told El Reg the prototype only works in limited experiments, where the surroundings and navigation tasks are similar to those in the training dataset. It won't fly very well in, say, forests or particularly challenging weather conditions.

But he expects things to improve. “In the future, I see them working similar to flies. Despite not [having] elegant flying patterns - flies crash a lot - they can reach any place they need.” ®
