Software engineers – the ultimate brain scientists?

Part I: Everything you know about AI is probably wrong

Guest Opinion Bill Softky is a scientist at the Redwood Neuroscience Institute, founded by Jeff Hawkins. He has worked as a software architect, visualization designer, and educator.

Can software engineers hope to create a digital brain? Not before understanding how the brain works, and that's one of the biggest mysteries left in science. Brains are hugely intricate circuits of billions of elements. We each have one very close by, but can't open it up: it's the ultimate Black Box.

The most famous engineering brain models are "Neural Networks" and "Parallel Distributed Processing." Unfortunately both have failed as engineering models and as brain models, because they make certain assumptions about what a brain should look like.

The trouble is that real problems such as robotic motion and planning, audio or visual segmentation, and real-time speech recognition are not yet well enough understood to justify any particular circuit design, much less "neural" ones. (A neuron is the brain's computational building block, its "transistor".) So the "neurons" and "networks" of those models are idealized fantasies designed to support mathematically elegant theories, and they have not helped to explain real brains.
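For concreteness, here is the kind of idealized unit those models assume: a weighted sum of inputs squashed through a smooth nonlinearity. This is a minimal sketch of the textbook abstraction being criticized here, not of anything a biological neuron is known to do; the weights and inputs are invented for illustration.

```python
import math

def idealized_neuron(inputs, weights, bias=0.0):
    """The textbook 'neural network' unit: a weighted sum of inputs
    passed through a sigmoid squashing function. Real neurons are far
    messier; this is the mathematical convenience, not the biology."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid output in (0, 1)

# Example: three inputs with hand-picked weights
print(idealized_neuron([1.0, 0.0, 0.5], [0.4, -0.2, 0.6]))  # ≈ 0.668
```

Everything interesting about such a unit lives in its weights, which is precisely the assumption the article questions: nobody knows whether real neurons compute anything like this.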

There is an abundance of research on brains: which areas light up when you solve certain problems, which chemicals sit inside and outside neurons, which drugs change one's moods. It amounts to thousands of research papers from thousands of researchers.

But there remain two huge mysteries: what the brain's neural circuit really is, and what it does.



Here's what we already know about brain circuitry.

We know that neurons are little tree-shaped cells with tiny branches gathering electrochemical input, and a relatively long output cable which connects to other neurons. Neurons are packed cheek-to-jowl in the brain: imagine a rainforest in which the trees and vines are so close their trunks all touch, and their branches are completely intertwined. A big spaghetti mess, with each neuron sending to and receiving from thousands of others.

It's not a total hash; like a rainforest, that neural tangle has several distinct layers. And fortunately, those layers look pretty much the same everywhere in the main part of the brain, so there is hope that whatever one layered circuit does with its inputs (say, visual signals), other layers elsewhere may be doing something similar with their inputs.

OK, so we know what a brain circuit looks like, in the same sense that we know what a CPU layout looks like. But for several reasons, we don't know how it works.

First, we don't know how a single neuron works. Sure, we know that in general a neuron produces more output pulses when it gets more inputs. But we don't know crucial details: depending on dozens of (mostly unknown) electrochemical properties, that neuron might be adding up its inputs, or multiplying them, or responding to averages, or responding to sudden changes. Or it may compute one function at some times, or on some of its branches, and other functions on other branches or at other times. For now, we can't measure brain neurons well enough to do more than guess at their input/output behavior.
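To see why those unknown details matter, consider how differently a cell could treat the very same inputs depending on which arithmetic it performs. A toy sketch, where the four functions are hypothetical stand-ins rather than models of real cells:

```python
inputs = [0.2, 0.5, 0.1, 0.8]

def summing(xs):
    """Output proportional to the simple sum of the inputs."""
    return sum(xs)

def multiplying(xs):
    """Output proportional to the product: one silent input kills it."""
    out = 1.0
    for x in xs:
        out *= x
    return out

def averaging(xs):
    """Output tracks the mean input level."""
    return sum(xs) / len(xs)

def change_detecting(previous, xs):
    """Output tracks only the change since the last moment."""
    return sum(xs) - sum(previous)

# The same inputs give four wildly different answers:
print(summing(inputs))                     # ≈ 1.6
print(multiplying(inputs))                 # ≈ 0.008
print(averaging(inputs))                   # ≈ 0.4
print(change_detecting(inputs, inputs))    # 0.0 (nothing changed)
```

Until experiments can tell these cases apart in a living neuron, any circuit built from "neurons" is built on a guess.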

Second, we can't tell how the neurons are connected. Sure, neurons are connected to neighboring neurons. But that isn't very helpful. It's like saying that chips in a computer are connected to neighboring chips. It doesn't explain the specific circuitry. The best biologists can do is trace connections between handfuls of neurons at a time in a dead brain, and if they're lucky, they can even record the simultaneous outputs from a handful of neurons in a live brain. But all the interesting neural circuits contain thousands to millions of neurons, so measuring just a few is hopelessly inadequate, like trying to understand a CPU by measuring the connections between - or the voltages on - a few random transistors.

Third, we don't understand neurons' electrical code. We do know that neurons communicate by brief pulses, and that the pulses from any one neuron occur at unpredictable times. But is that unpredictability a random noise, like the crackle of static, or a richly-encoded signal, like the crackle of a modem? Must the recipient neurons average over many input pulses, or does each separate pulse carry some precise timing?
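One way to picture that question: two spike trains can carry the same average rate while differing completely in their timing. If only the rate matters, they are interchangeable; if timing carries a code, they say entirely different things. A toy illustration with invented numbers:

```python
import random

random.seed(42)

DURATION = 1.0  # seconds

# Ten pulses at jittery, unpredictable times...
noisy = sorted(random.uniform(0.0, DURATION) for _ in range(10))
# ...versus ten pulses on a precise 100 ms clock.
precise = [0.1 * (k + 1) for k in range(10)]

def mean_rate(train, duration=DURATION):
    """The 'rate code' view: only the pulse count per second matters."""
    return len(train) / duration

print(mean_rate(noisy) == mean_rate(precise))  # True: identical rates
print(noisy == precise)                        # False: the timing differs
```

A rate-averaging recipient would treat these two trains as identical; a timing-sensitive one would not. Which kind of recipient a real neuron is remains unknown.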

Finally, we don't know how brains learn. We're pretty sure that learning mostly involves the changes in connections between neurons, and those connections form and strengthen based on local voltages and chemicals. But it's devilishly hard even to record from two interconnected neurons, much less watch the connection change while knowing or controlling everything affecting it. And what about the factors which create brand-new connections, or kill off old ones? Those circuit changes are even stronger, yet nearly impossible to measure.
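The best-known guess for how those connection strengths change is Hebb's rule: a connection strengthens when the neurons on both sides are active together. A one-line sketch of that idea, with an arbitrary learning rate and invented activity values; real synapses are certainly more complicated:

```python
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Hebb's rule: change the connection strength in proportion to
    the coincidence of pre- and post-synaptic activity."""
    return weight + learning_rate * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # both active: w grows to ≈ 0.6
w = hebbian_update(w, pre=1.0, post=0.0)  # post silent: w unchanged
print(w)
```

Even this simplest rule depends on quantities (the local pre- and post-synaptic activity at one synapse) that, as the paragraph above notes, are nearly impossible to measure simultaneously in a living brain.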

So here's what we don't know about brain circuitry: we don't know what a single neuron does, what code neurons use, how they are connected, or how the connections change with learning. Without such knowledge, we can't reverse-engineer brains to deduce their function from their structure.
