Video games were the 'Eureka!' moment that helped boffins simulate neural activity on a single commodity GPU

'Procedural generation' might offer a way forward for projects queueing up to use a supercomputer


Researchers at the UK's University of Sussex have developed a way to boost neural activity simulations without reaching for expensive and scarce supercomputing resources.

Taking inspiration from the games industry, research fellow James Knight and professor of informatics Thomas Nowotny have shown how a single GPU can be used to model a macaque visual cortex with 4 million neurons and 24 billion synapses, a feat only previously possible using a supercomputer.

The challenge in modelling brain activity is not just in the neurons, but in the synapses that connect these biological processing nodes, Knight told The Register.

"Synapses tend to outnumber neurons by a factor of 1,000 or even 10,000. So, if you have a model that's even relatively large, you have to have a lot of memory to store the synapses. This is why, typically, most people simulate models of scale in our paper using supercomputers. It's not the actual processing requirements; it's because they need to distribute the model across a distributed system to get enough memory."

Supercomputers are expensive and there is often a long queue of researchers waiting to run their models in these environments, putting limits on more widespread computational neuroscience.

Before going into academia, Knight was a games developer, having worked as a software engineer for Ideaworks Games Studio adapting Call of Duty for mobile platforms. As such, he thought the common technique of procedural content generation in games development might help address the memory problem in representing synapses. "From my background, I knew procedural content is a classic way of saving memory in your game," he said.

Knight started out by simply setting the model up on GPUs, something that can take a while with CPUs. But the work led to a light-bulb moment.

"We realised that you can do this on the fly so whenever those [synapse] connections are needed, you can regenerate them on the [16GB] GPU," Knight said. "It saves a vast amount of memory. This model would take factors 10 times more memory than it currently does if you did it the traditional way, and it wouldn’t fit on a CPU."

The model neurons are held on the GPU, but because they are "spiking neurons" – more closely related to biological neurons than their ML counterparts – they only transmit data via synapses when they have reached a certain level of activity, at which point the GPU regenerates the necessary synaptic connections on the fly rather than fetching them from memory.

"This is particularly well suited to GPU architectures," Knight said.

A similar approach was used by Russian mathematician and neuroscientist Eugene Izhikevich to simulate a large cortical model on a CPU cluster in 2005. But because that work was never written up in detail, it was difficult to know how he achieved the result, and the technique has not been applied to modern hardware, Knight said.

While "procedural connectivity" - as the researchers call it - vastly reduces the memory requirements of large-scale brain simulations, the GPU code generation strategies do not scale well with a large number of neurons. To address this second problem, the team developed a "kernel merging" code generator, as described in the researchers' paper in Nature Computational Science.

The neurological model used to demonstrate the power of this approach is a model of the macaque visual cortex developed by the Jülich Supercomputing Centre's SimLab. In the name of open science, it is available on GitHub.

A lot of attention in neuroscience has focused on grand projects, such as the Human Brain Project coordinated by the École Polytechnique Fédérale de Lausanne and largely funded by the European Union. It has a troubled history and critics have leapt on its lack of results.

Knight showed that researchers can effectively model neural activity on a commodity workstation fitted with a single off-the-shelf GPU – in this case an Nvidia Titan RTX, available for a few thousand pounds. The hope is that the development will allow more researchers to build and test a greater number of large models, improving our understanding of how brains work.

"There's a huge lack of really large-scale models," he said. "Some brain activity patterns only emerge when you have a suitably large model. Our hope is that [our work] will allow a wider range of computational neuroscience researchers to start experimenting with large brain models. The main people working on it right now are those with the expertise and the access to the supercomputers." ®

