Design patterns for a Black Box Brain?

Pt.2

Guest Opinion Bill Softky is a scientist at the Redwood Neuroscience Institute, founded by Jeff Hawkins. He has worked as a software architect, visualization designer, and educator. Part One can be found here.

The bad news is that biologists are very far from figuring out the grand mystery of the brain. The good news is that software engineers might get there first.

As argued in a recent article, we have no idea how the brain circuit works, how it learns, or why its pieces do what they do. And horrific technical difficulties - like measuring tiny electrochemical fluctuations in microscopic, intertwined neurons - make it unlikely that we'll understand the biological circuit any time soon.

What we desperately need - and what software and signal processing can help us with - is a theory of what a brain ought to do. A human brain is a black box with a million wires coming in and half a million going out; a rat's brain is smaller, with fewer wires, but faces the same basic signal-processing problems: what kind of input patterns can it expect, and how can it deal with them?

Brains give us two big clues. One is that the final wiring is based on experience; the brain learns about vision from the eye inputs, about speech from the ears, and about movement from the muscles (and the eyes, and the skin)... it's not hard-wired. The other clue is that any given chunk of immature brain tissue is capable of doing any one of those tasks: the same primordial circuit can learn to analyze visual input or memorize words, and does so by making sense of the patterns and regularities in the outside world. For example, pixels in real-world video signals are clumped into contours, moving objects, shadows. And output to real-world muscles (or pistons) needs to be patterned and coordinated, like the specific contractions of walking, grasping, or throwing.

But we don't know how that slippery term "making sense of patterns" translates into mathematics or circuits. Forget for a moment how the brain does it. We don't even know how we might do it our own way, on digital computers, no-holds-barred. Suppose you needed to program your "black box brain" to discover those input and output patterns on its own, from experience. What would you do? If you had a staff of a thousand programmers, what would you tell them to program?

The software spec from hell

Clearly, writing a program which could either be a computer-vision system or a continuous-speech recognizer, depending only on its input signals, requires a very different kind of specification and programming style than software engineers are used to. It demands generic, abstract, statistical descriptions of both the inputs and their processing. But that's what has to happen. Below are some important issues in constructing such a system, and some of the tricks it might contain.

The first step in understanding perception is to make some measurements, or assumptions, about what kind of statistical "patterns" there are in the outside world's input. They're all just numbers, but what kind of numbers? Clouds of solitary points in a high-dimensional space, shaped like strings or ribbons or folded sheets? Streaks created by such points moving through time? Blobs and clusters corresponding to "objects" in the real world?
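
As a concrete (and deliberately toy) illustration, here is one such measurement in Python. The patch construction and the 95 per cent variance threshold are arbitrary choices for the example, not claims about the brain; the point is simply that structured input occupies far fewer dimensions than its raw channel count suggests.

    import numpy as np

    def effective_dimension(points, var_fraction=0.95):
        """How many principal components are needed to capture most
        of the variance in a cloud of high-dimensional data points."""
        centered = points - points.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)  # variance spectrum
        explained = np.cumsum(s**2) / np.sum(s**2)
        return int(np.searchsorted(explained, var_fraction) + 1)

    rng = np.random.default_rng(0)

    # "Structured" input: 8x8 patches that are smooth gradients,
    # secretly controlled by only two underlying parameters.
    xs, ys = np.meshgrid(np.arange(8.0), np.arange(8.0))
    basis = np.stack([xs.ravel(), ys.ravel()])
    structured = rng.normal(size=(1000, 2)) @ basis

    # Unstructured input: pure pixel noise of the same size.
    noise = rng.normal(size=(1000, 64))

    print(effective_dimension(structured))  # ~2: a flat sheet in 64-D space
    print(effective_dimension(noise))       # ~60: points fill the space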

What are the cues by which we learn that very different images or sounds really correspond to the same one thing? Do we learn the different views of, say, a bike from watching the same bike from different angles sequentially, or from associating each view separately with the spoken word "bike"?

Be forewarned: this task probably involves fancy math like hyperspace manifolds, Bayesian probabilities, and invariant transformations.
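
One plausible cue - offered here as an assumption, not established fact - is temporal continuity: whatever changes slowly across consecutive frames is a good candidate for "the same thing seen differently". That idea can be scored in a few lines of Python:

    import numpy as np

    def slowness(feature):
        """Mean squared frame-to-frame change, normalized by variance.
        A low score means the feature varies slowly over time - a hint
        that it tracks object identity rather than momentary appearance."""
        f = feature - feature.mean()
        return np.mean(np.diff(f)**2) / np.var(f)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 500)
    identity_like = np.sin(0.3 * t)  # "it's still the bike"
    view_like = np.sin(9.0 * t) + 0.1 * rng.normal(size=t.size)  # flickering viewpoint

    print(slowness(identity_like))  # small
    print(slowness(view_like))      # large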

But that might not be enough. The system needs to find patterns among millions of inputs (say pixels), then represent them with fewer signals (contours and shapes), and ultimately with just a few signals ("bike moving here, car parked there"). There are lots of computer-science methods, like information theory and signal compression, for squeezing down many channels into a few, but they only work when we know what kind of signals we are trying to squeeze, and what we want to do with the result.
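
A toy demonstration of the catch, using an off-the-shelf lossless compressor as a stand-in for that squeezing (the signals below are invented for the example): redundancy across channels is exactly what makes compression possible, and without it nothing can be squeezed.

    import zlib
    import numpy as np

    rng = np.random.default_rng(2)

    # Correlated channels: 64 copies of one shared byte stream.
    shared = rng.integers(0, 256, size=10_000, dtype=np.uint8)
    correlated = np.tile(shared, (64, 1))

    # Independent channels: 64 unrelated noise streams.
    independent = rng.integers(0, 256, size=(64, 10_000), dtype=np.uint8)

    for name, data in [("correlated", correlated), ("independent", independent)]:
        raw = data.tobytes()
        ratio = len(zlib.compress(raw)) / len(raw)
        print(f"{name}: squeezed to {ratio:.0%} of original size")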

Go with the workflow

Then we encounter a major flaw in most pattern-recognition systems, like "Neural Networks": they treat the input as a slide-show of isolated presentations. But real-world time flows smoothly. So to handle continuous real-time inputs, we need mathematical descriptions of inputs streaking through time, and the system must continuously accept new inputs and fit them into the context of preceding ones - say, by continuously updating its model of the present based on the last minute's worth of data.
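
A minimal sketch of that last idea, assuming nothing fancier than a leaky running average: each new frame is folded into a decaying summary of recent history, rather than being treated as an isolated slide.

    import numpy as np

    class RunningContext:
        """A leaky summary of recent input: new frames are folded in,
        old ones fade out, so each input is interpreted in the context
        of what came just before it."""
        def __init__(self, n_channels, decay=0.95):
            self.decay = decay
            self.state = np.zeros(n_channels)

        def update(self, frame):
            self.state = self.decay * self.state + (1 - self.decay) * frame
            return self.state

    rng = np.random.default_rng(3)
    ctx = RunningContext(n_channels=4)
    for _ in range(100):
        present = ctx.update(rng.normal(size=4))  # one timestep at a time
    print(present)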

The best existing technology for making sense of noisy, real-time input is Automatic Speech Recognition: such systems make various statistical guesses about which words might account for the recorded waveforms, but they only work by hard-wiring in information about sounds, nouns, verbs, and grammar. We need our brain-like system to figure out all those primitives and rules by itself, from scratch.

So on the one hand, we need a system which uses the fact that each moment's input is intimately related to the next. But on the other hand, there are far too many inputs into a brain - millions of channels, millions of timesteps - to deal with "all at once." A good approach would be to break them down into bite-sized chunks - say fifty inputs over a hundred timesteps, chunks overlapping in both space and time to preserve continuity - and then let the chunks interact with each other in simple, stereotyped ways. Just like good software design.
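
Here is what that carving-up might look like in code; the chunk sizes and the 50 per cent overlap are arbitrary illustrative choices.

    import numpy as np

    def overlapping_chunks(signal, n_ch=50, n_t=100):
        """Carve a (channels, timesteps) array into tiles that overlap
        by half in both space and time, so no boundary is ever sharp."""
        chunks = []
        for c in range(0, signal.shape[0] - n_ch + 1, n_ch // 2):
            for t in range(0, signal.shape[1] - n_t + 1, n_t // 2):
                chunks.append(signal[c:c + n_ch, t:t + n_t])
        return chunks

    signal = np.random.default_rng(4).normal(size=(200, 1000))
    chunks = overlapping_chunks(signal)
    print(len(chunks), chunks[0].shape)  # many overlapping (50, 100) tiles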

Pattern recognition

Wouldn't it be great if the same basic circuit element or algorithm (say a "pattern detection/memorization/encoding" module) could be re-used everywhere? The same algorithm could process either visual, tactile, or auditory information, and accept the output of other such modules as its own input. This is certainly what real brain modules look like, with the same stereotyped neural tissue everywhere. Maybe the brain actually works that way too (for example, there is recent evidence that the same brain tissue which does early-stage visual processing in sighted people will instead learn - in blind people - to memorize words. Same original wiring, vastly different ultimate use).

Again, standard software practice. It also helps if each little circuit element knows when it's doing a good job, and how to improve. For example, it might try predicting an upcoming signal, then measure the prediction error and pass it along to other modules as an indicator of reliability.
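
A sketch of such a self-scoring element, under the assumption that a simple LMS-style linear predictor is enough to make the point: the module predicts its own next input, and its running error doubles as the reliability signal other modules could consume.

    import numpy as np

    class PredictiveModule:
        """A generic element that predicts its own next input and keeps
        score on itself; its output could feed other modules' inputs."""
        def __init__(self, n_inputs, lr=0.01):
            self.lr = lr
            self.w = np.eye(n_inputs)   # start by predicting "no change"
            self.error_level = 1.0      # running mean squared error

        def step(self, current, upcoming):
            prediction = self.w @ current
            error = upcoming - prediction
            self.w += self.lr * np.outer(error, current)  # LMS-style update
            self.error_level = 0.99 * self.error_level + 0.01 * float(error @ error)
            return prediction, self.error_level

    # Feed it a predictable rotating signal; the error level should fall.
    angle = 0.1
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])
    x = np.array([1.0, 0.0])
    module = PredictiveModule(n_inputs=2)
    for _ in range(2000):
        nxt = rotation @ x
        _, err = module.step(x, nxt)
        x = nxt
    print(err)  # small: the module has learned the pattern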

Another good engineering principle is to specify what we really want - in this case, a truly generic processor for all different kinds of sensory data. If your only testbed is grayscale video of traffic, your circuit won't learn to do anything else. So the only way to build a robustly generic system is to design and test it on video and sound and tactile proprioception. To ensure full generality, you must specify full generality.
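
In testing terms, that specification might look like the harness below - everything in it (the modality names, the channel counts, the trivial baseline circuit) is invented for illustration. The one rule it enforces is the important part: the same factory must produce a working circuit for every modality, with no per-modality tuning.

    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-ins for three very different input streams (timesteps, channels).
    modalities = {
        "video": rng.normal(size=(5000, 64)),
        "sound": rng.normal(size=(5000, 16)),
        "touch": rng.normal(size=(5000, 32)),
    }

    class BaselinePredictor:
        """Trivial circuit: predict that each frame equals the last one."""
        def __init__(self, n_channels):
            self.n_channels = n_channels
        def score(self, data):
            return float(np.mean((data[1:] - data[:-1]) ** 2))

    def evaluate(build_circuit, streams):
        """One factory, every modality, no per-modality tuning allowed."""
        return {name: build_circuit(d.shape[1]).score(d)
                for name, d in streams.items()}

    print(evaluate(BaselinePredictor, modalities))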

Constructing an engineering solution to the brain's problem is hard - no one has gotten close to solving it yet - and there is no guarantee that it will produce a "brain", any more than designing a jetliner will produce a "bird". But it has several advantages over traditional biology as an approach.

Open Source meets Black Box

For starters, it's a big, unsolved, important problem, which may be amenable to a common-sense engineering approach: first understand the problem, then work on a solution. And any amateur can try it: all one needs is pure thought and a computer; no lab. If some lone genius comes up with a solution, it is immediately testable with cameras and microphones, and is immediately replicable worldwide, just like Open Source. Once some kind of half-baked solution is available, others worldwide can apply iterated improvement, indefinitely tweaking and tuning the code. And of course a good solution to this granddaddy of signal-processing algorithms is worth big bucks.

In summary, the bad news is that biologists are very far from figuring out the grand mystery of the brain: the neural circuit is impenetrable and the basic functions and problem-space are undefined (Artificial Intelligence and "Neural Nets" notwithstanding). But the good news is that software engineers - who have just the right skills to specify the abstract problems of perception and action, and to hack up systems to solve them - might get to unravel the mystery first.®
