We sent a Reg vulture to RSA to learn about the future of AI and security. And it's no use. It's bots all the way down

We'd call this blue-sky thinking but the sky is thick with swarms of drones


RSA In the future, AI algorithms will form and direct swarms of physical and virtual bots that will live among us... according to this chap speaking at the 2019 RSA Conference in San Francisco on Tuesday.

Thomas Caldwell, founder and president of the League of AI, a consultancy focused on security and robots, talked about the rapid rise of machine learning tools that can be used to develop systems with so-called swarm intelligence. He described this as a group of computer-controlled agents with the ability to “collaborate, communicate, and reach a consensus” in order to complete a specific task.
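
For the curious, “reach a consensus” needn't be anything exotic in code. Here's a minimal, hypothetical Python sketch – ours, not Caldwell's – in which a handful of agents each vote on what they've observed and the swarm only commits to a verdict once a quorum agrees:

```python
import random
from collections import Counter
from dataclasses import dataclass

@dataclass
class Agent:
    """One member of the swarm: observes something and casts a vote."""
    name: str

    def vote(self, observation: float) -> str:
        # Each agent independently classifies what it sees.
        # The 0.7 threshold is arbitrary, purely for this sketch.
        return "suspicious" if observation > 0.7 else "benign"

def swarm_consensus(agents, observations, quorum=0.6):
    """Tally every agent's vote and act only if a quorum agrees."""
    votes = Counter(a.vote(o) for a, o in zip(agents, observations))
    verdict, count = votes.most_common(1)[0]
    if count / len(agents) >= quorum:
        return verdict          # the swarm commits to one answer
    return "no consensus"       # otherwise keep observing

agents = [Agent(f"bot-{i}") for i in range(5)]
readings = [random.random() for _ in agents]   # stand-in sensor data
print(swarm_consensus(agents, readings))
```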

This digital collective smartness could be used to launch a wide offensive attack, or defend strategically against an incoming assault, physical or virtual: the agents might exist as real, physical robots, or virtual entities within devices.

“AI bots can live inside a robot, drone, be virtual in a cloud, or be resident on an edge device like a Raspberry Pi or an Nvidia TX2,” he gushed. “They could be the brains of new-age security management tools or they could apply attack vectors.”


He went on to describe a future in which robots could band together and work alongside humans. Those pack droids could patrol schools, or work alongside police officers. They could be decked out with cameras that identify suspicious items including guns and knives, or include microphones to listen out for gunshots and tell-tale screams. Sending in a swarm of these machines to tackle grim events like school shootings, locating or perhaps even protecting kids, might be a safer option than using humans.

When it comes to virtual bots, Caldwell pictured scenarios in which they could infiltrate systems by masquerading as benign software to evade detection, springing into action when needed. They could, as a group, attack AI models by poisoning input data, or training datasets, to trick the neural networks into performing the wrong actions. The agents could also band together to fuzz code for exploitable vulnerabilities, working in parallel and at scale.

(You can, of course, do this right now with regular software: it doesn't need to operate as a swarm, unless you're seeking to achieve some kind of huge parallel scale, perhaps.)
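
For a flavour of what fuzzing “in parallel and at scale” looks like in ordinary software, here's a rough, entirely hypothetical Python sketch: a pool of worker processes hurls random bytes at a stand-in parser and keeps anything that makes it fall over. The target_parser function, and everything else here, is our invention for illustration:

```python
import os
import random
from multiprocessing import Pool

def target_parser(data: bytes) -> None:
    """Stand-in for the code under test; chokes on certain malformed input."""
    if data.startswith(b"\xff") and len(data) > 16:
        raise ValueError("parser choked")   # the 'bug' we want to find

def fuzz_once(seed: int):
    """One attempt: generate random bytes, feed them in, record any crash."""
    rng = random.Random(seed)
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
    try:
        target_parser(blob)
    except Exception as exc:
        return (seed, repr(exc), blob.hex())   # crash found: keep the evidence
    return None

if __name__ == "__main__":
    # Many workers, many inputs: the 'swarm' here is just a process pool.
    with Pool(os.cpu_count()) as pool:
        crashes = [c for c in pool.map(fuzz_once, range(20_000)) if c]
    print(f"{len(crashes)} crashing inputs found")
```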

Researchers, we were told, are interested in creating roving network-inspecting bots that can study past cyber-attacks, identify patterns in the intruders' data accesses and methods, and use that knowledge to detect future network compromises. The agents would also, presumably, learn what's normal on the network to avoid falsely flagging up harmless connections, users, and applications.

Caldwell described the idea of “an oracle, able to look into the past, that analyzes the present to predict the future.” Swarms of virtual bots patrolling computer systems could look out for unexpected or previously seen malicious behavior to clock and thwart miscreants sneaking in, or rogue employees seeking to cause damage.
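
To make the learn-what's-normal, flag-what-isn't idea concrete, here's a rough sketch using scikit-learn's IsolationForest: fit a model on baseline “benign” connection features, then score fresh traffic against it. The features and numbers are made up for illustration and have nothing to do with any shipping product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy 'normal' traffic: columns might be bytes sent, duration, port entropy.
# A real deployment would learn from weeks of known-benign telemetry.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500, 2.0, 0.3], scale=[50, 0.5, 0.05], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)   # learn what 'normal' looks like

# New observations: one ordinary-looking connection, one obvious outlier.
new_flows = np.array([
    [510, 2.1, 0.31],       # looks like the baseline
    [9000, 45.0, 0.95],     # huge transfer, long-lived, odd ports
])
verdicts = detector.predict(new_flows)   # +1 = normal, -1 = anomalous
for flow, verdict in zip(new_flows, verdicts):
    print(flow, "anomalous" if verdict == -1 else "normal")
```

The hard part, of course, isn't the model: it's building a trustworthy baseline and keeping it current as the network changes, which is exactly where the false-positive problem lives.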

These swarms of software agents might, one day, even be able to communicate coherently with humans via Slack, Alexa, or other mobile apps.
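
If you're wondering what that plumbing might look like, the mundane version is just a webhook. Here's a minimal, hypothetical sketch using Slack's incoming webhooks – the URL below is a placeholder, and the agent doing the talking is imaginary:

```python
import requests

# Placeholder incoming-webhook URL, not a real endpoint.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def report_to_humans(message: str) -> None:
    """Post a swarm status update into a Slack channel via an incoming webhook."""
    resp = requests.post(WEBHOOK_URL, json={"text": message}, timeout=5)
    resp.raise_for_status()

report_to_humans("swarm-agent-7: anomalous outbound traffic flagged on host db-03")
```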

Your imagination is the limit, really. Just use your imagination. Because it's, right now, admittedly, mostly imagination. ®
