Software engineers – the ultimate brain scientists?

Part I: Everything you know about AI is probably wrong

Neural behavior

But what about "forward engineering"? What about starting with the problem specification - what brains do - and saving the circuitry for later?

Again, we are overwhelmed with detail. We know a lot about what specific neurons do when exposed to specific sensory inputs. For example, we know that some brain neurons respond to small contours of light, some to small bits of motion, some to certain shapes, some to colors, and some to faces, and there are dozens of similar responses in the visual system alone. Likewise in sound: some neurons respond to chirps, some to hisses, some to tones, some to sounds suddenly starting or stopping. There are thousands of research papers detailing more specific neuron functions than you could ever want to know.

But two insights are missing from this mass of detail.

First, those hard-won neuronal recordings are not of brains doing what they usually do: interpreting and interacting with the real world. These recorded brains are instead exposed to highly artificial, constrained stimuli, chosen specifically to make a few neurons active enough to be measured. The dirty secret of neurophysiology is that under normal circumstances - viewing ordinary scenes, listening to ordinary sounds - neurons don't fire very much at all, and when they do fire, the cause is mysterious. That near-silence doesn't make interesting research papers, so scientists need to impose striking circumstances - like flashing high-contrast shapes at an animal in a darkened room - in order to make a neuron do anything measurable. If you want clear data, you have to give the animal some very weird inputs.

The second problem with all this neural data is that it comes from mature neurons which have already learned, somehow, to do whatever they do. But neurons aren't hard-wired: presumably, growing up with different inputs would have spawned different connections, teaching that neuron to produce a different response. In fact, it seems as if exposure to visual input makes a neuron learn a typical visual response, but exposure to auditory input makes it learn a typical hearing response. So we know something about what the responses are, but not why they got that way.

Grand theories to the rescue

So, despite copious data, we have no idea how the brain's circuitry works, how it learns, or why its pieces do what they do. Fortunately, there is one avenue left to make sense of this, and it isn't hamstrung by the difficulty of measuring tiny, intertwined cells in live animals.

The huge missing piece is a theory of what a brain ought to do. Think of a human brain as a black box, with about a million inputs (sensory nerves) and half a million outputs (to muscles). You can think of the inputs as TV-pixel or mechanical sensor signals, and the outputs as driving little motors or pistons. At a minimum, the black box needs some formulae by which it can discover patterns in the inputs and create useful patterns of outputs.
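To make that framing concrete, here is a minimal sketch in Python. Everything in it - the class name, the internal state, the exact input and output counts - is illustrative scaffolding, not a real model of anything; the formulae that belong inside step() are precisely the part nobody knows how to write.

```python
import numpy as np

N_INPUTS = 1_000_000   # roughly a million sensory nerves
N_OUTPUTS = 500_000    # roughly half a million motor outputs

class BlackBoxBrain:
    """Hypothetical black box: sensor signals in, motor commands out."""

    def __init__(self):
        # Whatever lives in here - connections, state, learning rules -
        # is exactly the part no one knows how to specify.
        self.state = np.zeros(128)

    def step(self, sensors: np.ndarray) -> np.ndarray:
        """One tick: read the sensors, return muscle commands."""
        assert sensors.shape == (N_INPUTS,)
        # Placeholder: a real answer would discover patterns in the
        # inputs and emit coordinated patterns of outputs.
        return np.zeros(N_OUTPUTS)
```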

We know that input from the outside world has lots of patterns and regularity. For example, pixels clump into contours, moving objects, and shadows. And output to the muscles needs to be patterned - coordinated - like the specific contractions of walking, grasping, or throwing. But suppose you needed to program the black box to discover those input and output patterns on its own, from experience. What would you do? If you had a staff of a thousand programmers, what would you tell them to program?
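For one narrow piece of that job - discovering input patterns from experience - classic unsupervised learning rules do exist. The sketch below is purely illustrative, not anything proposed in the article: a single model neuron updated with Oja's rule, a stabilised form of Hebbian learning, finds the strongest correlation pattern (the first principal component) in its input stream without any labels or supervision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "sensory" stream: 10-dimensional inputs whose variance is
# concentrated along one hidden direction, standing in for the
# regularity of real-world input.
hidden = rng.standard_normal(10)
hidden /= np.linalg.norm(hidden)

w = 0.01 * rng.standard_normal(10)   # one model neuron's input weights
lr = 0.01                            # learning rate

for _ in range(5000):
    x = hidden * rng.standard_normal() + 0.1 * rng.standard_normal(10)
    y = w @ x                        # the neuron's response
    w += lr * y * (x - y * w)        # Oja's rule: Hebbian growth + decay

# After training, the weights align (up to sign) with the hidden pattern.
print(f"alignment with hidden pattern: {abs(w @ hidden):.3f}")  # ~1.0
```

Rules like this only scratch the surface - one neuron, one pattern, one stream - but they give the flavour of the formulae the black box would need.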

Nobody knows the answer, but in the concluding part, we'll look at some of the tricks that are probably involved. ®

Related Link

Part 2: Design patterns for a black box brain
