
The Chinese Room

When I started preparing for my upcoming diploma thesis about consciousness, the possibility of AI, and how it all fits together, I couldn’t help but stop at this thought experiment, which the famous and bright philosopher John Searle brought up in 1980.

Few papers are still so disputed 25 years on (although, this being philosophy we’re talking about, all papers remain somewhat disputed even after hundreds of years), and the subject matter is as current as, or more current than, it was then: The Chinese Room, presented in the article Minds, Brains, and Programs, published in the journal Behavioral and Brain Sciences.

In short, the thought experiment that my paper (link further down) tries to shed light on goes as follows; the synopsis is mine and maybe a tad jovial for the subject:

There is a thinking homunculus in a room who doesn’t understand Chinese, but has an elaborate set of rules governing his actions.

He receives pieces of paper with Chinese symbols on them from scientists on the outside, processes them according to his rules, and paints other Chinese symbols that he returns to the outside, without understanding a thing.

Now the scientists outside cackle with glee, for they think they have crafted a machine that, by merely following rules, can understand Chinese.

However, Searle argues, since the homunculus on the inside doesn’t understand Chinese, the whole machine can’t possibly understand Chinese (as there is no single part that does the understanding). From this, and from the similarity of this room to any other machine, he concludes that no machine can possibly have true understanding, and that therefore strong AI (which he defines as truly understanding) cannot exist.
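To make concrete just how purely syntactic the room’s procedure is, here is a tiny sketch of my own (the rulebook and its phrases are invented for illustration; nothing here is taken from Searle’s paper): a lookup-driven rule-follower that maps incoming symbols to outgoing symbols without any notion of their meaning.

```python
# A toy "Chinese Room": the rule-follower matches incoming symbols against
# a rulebook and returns the prescribed reply. Nothing in here translates,
# parses, or interprets anything; it only compares shapes.
# The rulebook is an invented stand-in for Searle's elaborate rule set.

RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "当然会",  # "Do you speak Chinese?" -> "Of course"
}

def homunculus(symbols: str) -> str:
    """Return the reply the rulebook prescribes for the given squiggles."""
    # Unknown input gets a stock reply: "Please say that again."
    return RULEBOOK.get(symbols, "请再说一遍")

if __name__ == "__main__":
    print(homunculus("你好吗"))  # prints 我很好, with zero understanding involved
```

To the scientists outside, the exchange looks like competent Chinese; inside, it is nothing but pattern matching.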

OK, so … is he right? Is (strong) AI research doomed?

There are of course several answers to this, quite different in both approach and consequences:

  • Yes, he’s right. Now, that would be boring, and if I believed this I wouldn’t have written this blog post, nor the paper it’s about.
  • We don’t even want strong AI research; what Searle deems “weak AI” is actually a huge feat and more than enough for all our wishes. Searle doesn’t deny this, by the way: he never meant to diminish the value of “weak AI” by calling it “weak” (which I can’t help but find a tiny bit funny), and he respects its advances.
  • Searle seems to underestimate the complexity of the required rules when he assumes that we can wrap our heads around a machine with millions of moving parts consisting of pipes and flows of water. While I think we can’t, because our mental capacity is too limited, I believe Searle is aware of that restriction of ours and, as he makes clear himself, argues from a matter of principle that doesn’t change with growing system complexity.
  • His argument is fundamentally flawed: looking for the smallest atomic bit of understanding (or, as he later specifies, for the non-reproducible biological-causal properties of the human brain, which a computer cannot have as a matter of principle and material) is the wrong approach in the first place. Searle basically dismisses this objection because he finds it ridiculous and contradictory to common sense.

I like to place myself in the last of those camps, particularly because I don’t believe that such a thing as “common sense” is actually common, or even a guideline towards truth.

Searle believes he has shown that all of those approaches to disproving him are wrong, and with most of his rebuttals I’ll have to admit that I agree wholeheartedly. However, there are a few points where I strongly disagree, despite all due respect for one of the greatest thinkers of our time.

And those points are where my paper on the subject comes into play:

The Chinese Chatroom

I try to figure out which arguments can be found in the various answers that the article provoked already at the time of its release (in particular those printed in the same issue of the journal), research what the Turing Test and, even more so, the Turing Machine are really about, and examine how the question and possibility of simulation versus replication of causal effects come into play.

I’m quite convinced that if the current perception of the Turing Machine as “a machine that can replicate all machines” is correct (essentially the Church–Turing thesis, which, I know, hasn’t been mathematically proven), there is no fundamental obstacle to us eventually getting to the point where we do, or at least can, have strong AI, with machines that are very similar in functionality (albeit probably necessarily much more powerful) to today’s computers.
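To illustrate what that perception amounts to, here is a minimal Turing machine simulator, again a sketch of my own with an invented toy transition table: the single run function, fed a suitable table, reproduces the behaviour of whatever machine that table describes, which is exactly the universality being appealed to.

```python
# A minimal Turing machine simulator. The same `run` function executes any
# machine handed to it as a transition table -- one machine replicating
# all (table-described) machines.

def run(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a machine given as {(state, symbol): (new_symbol, move, new_state)}."""
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Invented example machine: append one '1' to a unary number, i.e. add 1.
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 at the blank end, halt
}

print(run(INCREMENT, "111"))  # -> 1111
```

Swapping in a different transition table runs a different machine; run itself never changes, and that is the sense in which one machine can stand in for all of them.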

I should add that since I wrote this paper, I have read quite a bit more about some of the subjects that appear in it, and I could meanwhile add more references to philosophers (in particular Daniel Dennett) who share views similar to mine and have developed them much further than I have so far.