Tuesday, 24 July 2012

A book by Marvin Minsky

Marvin Minsky's book The Society of Mind is a classic in the field of artificial intelligence.  It's beautifully structured, but it isn't organised into a single coherent narrative.  Instead, Minsky has divided the book into thirty chapters and an appendix, and subdivided those chapters into single-page essays.  The chapters tend to focus on a single theme, but they don't necessarily follow immediately on from one another.  They do build on one another over the long term, however: a chapter on memory is followed by one on recognising and classifying arches, and a later chapter on memory picks up the thread, linking back to the arches and everything in between.  It feels like it was created organically.  It's very easy to read and can be digested in bite-sized pieces, so there's no reason not to pick it up, even if you take your time over it.

The idea behind the book, first published in 1985, was to espouse the view that the human mind is divided into agencies that work together like a society to accomplish tasks.  This is a very naturalistic view, quite unlike the normal position most people have of being a single agent responsible for all of their actions.  Minsky, who trained as a mathematician at Harvard and Princeton and who has worked at MIT for decades, attempts to explain the illusions we have about our actions, and how all those simple sub-agencies - fundamentally reducing to simple switch-like processes turning on and off - build up into the complex set of processes involved in every human action.  He wrote the book so that its structure mirrors the ideas in it: each simple little essay works with the others to create the complex theory without perfect coordination between them, just as our sub-agencies work together, imperfectly coordinated, to produce our thoughts and actions.

Artificial intelligence is interesting on its own, but it is especially interesting to me (and others) in helping to resolve questions about people (as in that video, in which AI researcher Luc Steels explains how he has tried to create robots capable of symbolic communication).  In designing robots, researchers like Marvin Minsky found that they couldn't just plug in a few bits and pieces and expect a functioning agency, like a human's, to develop from them.  Each part of what robots do, and what we do, is the result of a complex chain of physical processes - the result, direct or indirect, of some kind of programming.  Minsky uses the act of a child building a tower out of toy blocks to demonstrate how complex the tasks we perform actually are at the cognitive level, by showing what it would take to programme a robotic arm with the same ability.  The notion that humans could simply learn to do this without some kind of pre-existing programming is an absurdity.  Minsky isn't satisfied by demonstrating the complexity of the task, of course, and proceeds to show how this task and many others can be learnt in the most parsimonious way possible.
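Minsky's block-building example can be sketched in miniature.  In the book, a high-level BUILDER agency knows nothing about grasping or seeing; it simply delegates to sub-agents (he names ones like ADD, FIND, GET, and PUT), which in turn delegate downwards.  The toy Python below is my own illustrative reconstruction of that hierarchy, not Minsky's actual specification - the agent names and structure are assumptions made for the sketch:

```python
# A toy sketch of Minsky's "society of agents" idea, loosely based on
# his BUILDER example.  Agent names and behaviours here are illustrative
# inventions, not Minsky's actual specification.

class Agent:
    """An agent either performs one primitive action or delegates
    to a fixed sequence of sub-agents; it never does both jobs."""
    def __init__(self, name, subagents=None, action=None):
        self.name = name
        self.subagents = subagents or []
        self.action = action

    def run(self, world):
        if self.action:                 # primitive agent: act directly
            self.action(world)
        for sub in self.subagents:      # compound agent: delegate
            sub.run(world)

# Primitive "muscle-level" agents (hypothetical actions on the world).
find = Agent("FIND", action=lambda w: w.append("found a block"))
get  = Agent("GET",  action=lambda w: w.append("grasped the block"))
put  = Agent("PUT",  action=lambda w: w.append("placed the block"))

# ADD knows only which lower agents to invoke, and in what order.
add = Agent("ADD", subagents=[find, get, put])

# BUILDER knows even less: it just asks ADD to run once per block.
builder = Agent("BUILDER", subagents=[add, add, add])  # a three-block tower

world = []
builder.run(world)
print(len(world))  # 9 primitive actions for a three-block tower
```

The point of the sketch is Minsky's: the top-level agency has no idea how grasping works, and the bottom-level agents have no idea a tower is being built, yet a competent-looking behaviour emerges from the hierarchy.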

Some understanding of AI is essential to philosophers of mind, and Minsky cites and is cited by Jerry Fodor, Dan Dennett, and many others.*  His work in artificial intelligence from the sixties onward was pioneering, and is recognised as such by researchers outside AI departments.  Not, of course, in anthropology departments; any form of cognitive science, AI included, is on the fringe of anthropology.  But I think this book, which is very easy to read and doesn't expect a high mathematical ability of its readers, is a fantastic way to think about how humans work - how we share so much in cognitive ability, how our brains have to work, how 'simple' tasks are performed.  If researchers in all of the human sciences shared a common background understanding of these things, then talking about higher faculties wouldn't require the insular approach we find in social science departments.

A comment on Jerry Coyne's website reminded me of the insular nature of anthropology departments, and all departments relying on "critical theory" and other junk thinking.  Reader DrDroid says:
I wonder if there is some kind of “science envy” going on? Academics who chafe at the tough standards of proof in the sciences and have set up a private tennis court where they can play with the net down?
That sounds like a good description of many anthropology departments: places where terrible ideas are given an easy ride.  If everyone had a background in the kind of thinking represented by wonderful books like Minsky's, or a scientific engagement with the literature on how people work in the basic sense, then I doubt we'd see such absurd fawning acceptance of Gilles Deleuze, Bruno Latour, or pseudo-scientific concepts like "embodied knowledge".  These ideas are incompatible with a scientific understanding of people, and fundamentally inferior to it, and it's a shame that so many people are unaware of this or deliberately avoid confronting it.

I'd recommend The Society of Mind to everybody.  It's a fantastic book, and anybody could read it.  The Amazon page on it has reviews written by children new to AI as well as professional researchers.  Here's a recent review of its basic outline and theories by Push Singh, another MIT artificial intelligence researcher; Minsky's book clearly stands the test of time: it's easy to read, and it shows a path to resolving some key problems in understanding how humans work.  Ignoring this book - or, more generally, ignoring work in artificial intelligence and cognitive science - is something researchers in the human sciences do at their peril.

* Addendum: In an MIT lecture I saw, Minsky mentioned Fodor and said something like, "Jerry's an interesting guy, because he's brilliant but nearly always wrong".

