AI pioneer Marvin Minsky dies at 88

Scientist, philosopher and maker of 'box with a switch on it'


Obituary Marvin Minsky, one of the founders of the field of artificial intelligence and an inspiration to generations of researchers, has died.

Minsky was a philosopher and a scientist, as well as an adored and decorated academic. Among these decorations were the Turing Award in 1969 and induction as a Fellow of the Computer History Museum in 2006 for "co-founding the field of artificial intelligence, creating early neural networks and robots, and developing theories of human and machine cognition."

Minsky built one of the first neural network learning machines, constructed from vacuum tubes, as well as the first head-mounted graphical display. Alongside his MIT colleague John McCarthy, he founded the institute's AI lab.

The great man's home page at MIT notes he earned his BA and PhD in mathematics at Harvard (1950) and Princeton (1954) respectively, after serving in the US Navy at the end of the Second World War.

As a graduate student working at Bell Labs in the '50s, Minsky was mentored by none other than ur-cryptographer Claude Shannon. While there, he invented what is arguably the most famous version of the "useless machine", which he dubbed the "ultimate machine": a seemingly banal box with a switch on it.

Once the switch is flipped on, the box opens and an arm extends from within to turn the switch off before retreating inside. The machine's sole function is to return itself to its initial state.
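
In software terms the device is a trivial state machine whose only transition undoes the only input. Here is a minimal Python sketch of the idea; the class and method names are illustrative, not anything Minsky wrote, and the real machine was of course electromechanical:

    # A minimal sketch, not Minsky's design: the real device was a physical box.
    class UltimateMachine:
        def __init__(self):
            self.switch_on = False  # initial state: switch off, lid closed

        def flip_switch(self):
            # A human turns the switch on...
            self.switch_on = True
            self._respond()

        def _respond(self):
            # ...whereupon the lid opens, an arm emerges to flip the switch
            # back off, and everything retreats: initial state restored.
            self.switch_on = False

    machine = UltimateMachine()
    machine.flip_switch()
    assert not machine.switch_on  # it has undone the only thing ever asked of it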

This 1981 New Yorker profile of Minsky is worth another look, especially its citation of the following extract from his paper "Matter, Mind and Models":

From Chapter 8, Free Will, of Matter, Mind and Models by Marvin L. Minsky

If one thoroughly understands a machine or a program, he finds no urge to attribute “volition” to it. If one does not understand it so well, he must supply an incomplete model for explanation. Our everyday intuitive models of higher human activity are quite incomplete, and many notions in our informal explanations do not tolerate close examination. Free will or volition is one such notion: people are incapable of explaining how it differs from stochastic caprice but feel strongly that it does. I conjecture that this idea has its genesis in a strong primitive defense mechanism. Briefly, in childhood we learn to recognize various forms of aggression and compulsion and to dislike them, whether we submit or resist. Older, when told that our behavior is “controlled” by such-and-such a set of laws, we insert this fact in our model (inappropriately) along with other recognizers of compulsion. We resist “compulsion,” no matter from “whom.” Although resistance is logically futile, the resentment persists and is rationalized by defective explanations, since the alternative is emotionally unacceptable.

Minsky died of a cerebral haemorrhage in his home on Sunday night at 88 years of age. ®

