Sunday, November 25, 2012

The Chinese room as seen from the dentist's chair

On Black Friday, I had two impacted wisdom teeth pulled. Our dentist is an artist: he puts his patients at ease, he's conscientious, he's fast, and like every good professional, he knows his limitations. The extractions were as close to painless as one could hope for.

Still, it was not completely painless. Squeamish person that I am, I didn't want to think about what was going on inside my mouth. 

Instead, I began thinking about (what else) chess.  One difference between human chess players and computer chess players: pull a card out of the computer's motherboard, and the computer generally doesn't say "Ouch!".  One computer chess player was famously afraid, but that was in the movies, so I don't think that counts (yet).

Half an hour earlier, on the walk to the dentist's office, I'd been listening to one of John Searle's 2011 lectures on consciousness. (The whole series can be downloaded online.) In the seventh lecture, Searle argued that Deep Blue's victory over Kasparov proved nothing about Deep Blue's understanding of chess. Theoretically, a human with no knowledge of the rules of chess could be locked in a room, given Deep Blue's algorithm for finding the best move, some scratch paper, and sharp pencils, and (after the arbiter slips Kasparov's move through the mail slot) replicate Deep Blue's decisions. (Granted, the human might need a couple of millennia to make certain moves, but hey, it's a thought experiment.) Would we award this hypothetical human the title of world's strongest chess player?

Searle argues that this hypothetical human has no understanding of chess whatsoever.  Since Deep Blue and Houdini 3 are doing nothing more than executing an algorithm, they too don't know what they're doing.  This is a version of Searle's famous "Chinese room" argument: as summarized by Wikipedia, "a program cannot give a computer a 'mind', 'understanding' or 'consciousness', regardless of how intelligently it may make it behave."  Passing the Turing Test alone doesn't make a computer self-aware or wise, doesn't let it enjoy victory subjectively or suffer defeat stoically, and doesn't give it the intuition of a Capablanca.

Human experts are not like computer experts.  A master dentist can approach problems from both intuitive and analytically rigorous perspectives.  Suppose the patient after me also had impacted wisdom teeth: my dentist might have been able to tell at a glance from the X-rays that this patient's molars would be much more difficult to extract, and referred that patient to an oral surgeon.  Being a professional is not just executing an algorithm; it's also an art.  (On the other hand, even the most conscientious artist can make an error: Daniel Kahneman discusses the limitations of snap judgments in Thinking, Fast and Slow.)

Chess masters make snap judgments all the time: how else could grandmasters play forty players at once successfully?  In some positions, a master "knows" at a glance what the right move must be.  Of course, not every snap judgment is correct—grandmasters do lose games in simultaneous exhibitions—but the quality of the grandmaster's snap judgment is much higher than the quality of our considered judgments.

(I remember consoling one well-known local master after a critical loss to Aleksander Stamnov: "The problem with playing Stamnov is that he makes good moves very quickly."  He replied, "Yes, and he also makes bad moves very quickly.")

Some recent chess books tell us to look at positions with "computer eyes," calculating all the forcing moves; others advise us to move first and think later.  What's a patzer to do?  Some positions demand brute-force calculation (we can never compete on an even level with chess engines in this sphere), others ask us to use our "feel" for the game. And most positions ask us to use some of each way of thinking.

These two ways of thinking (Kahneman calls the intuitive way "System 1" and the analytical way "System 2") actually occur in different parts of the brain.  Thinking in System 2 is hard work!  As we surf through the complications of each chess game, we have to toggle back and forth between the two modes of thinking.

I'll close this rambling (hey, it's a blog) with two observations:

Narrowly: you may never be able to beat Houdini, but you already understand much more about chess than Houdini ever will.  I remember a Scientific American article circa 1979 about programmers who hoped to make chess programs think in a more human fashion.  That turned out to be an absolute dead end, and "brute force" alpha-beta searches won.  It's true that useless branches of the analytical tree are pruned by the top engines, and it's true that evaluation functions have been improved, but still...chess engines are "merely" executing an algorithm, and executing an algorithm requires no understanding whatsoever.
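That "mere algorithm" really is short enough to write down. Here's a toy sketch of the alpha-beta idea in Python; the nested-list game tree is a made-up textbook example, not anything from Deep Blue, and a real engine adds move generation, evaluation, and many refinements on top of this skeleton:

```python
# Toy alpha-beta search over a hypothetical mini game tree.
# Leaves are static evaluations; interior nodes are lists of child positions.

def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the final decision."""
    if not isinstance(node, list):            # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # opponent won't permit this line,
                break                         # so prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A tiny three-ply tree: after seeing the first branch is worth 5, the search
# abandons the second branch early -- it can prove it is worse without
# looking at every leaf.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alpha_beta(tree))  # prints 5
```

Note that nothing in this procedure "knows" it is playing a game; it is pure bookkeeping over numbers, which is exactly Searle's point.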

Broadly: similar musings about the failure to date of the "strong AI" project and about how humans become expert at what they do are very much in the air right now.
