The Grail Code 
Will machines ever think?

Will machines ever think? No, says Professor David Gelernter in the latest issue of Technology Review. (It’s the one with the microchip ad on the back that says “It’s Silicon That Thinks” in huge Helvetica letters.)

To be more specific, Prof. Gelernter thinks machines will never be conscious the way we are, although they may and should become extraordinarily good problem-solvers, which is thinking in a more limited sense.

He readily admits that his is a minority view in the artificial-intelligence business. Most AI researchers seem to think that the Holy Grail of machine intelligence—“a conscious software mind,” as Prof. Gelernter puts it—is just around the corner.

You may be surprised to know that artificial intelligence is one of my long-simmering interests. Something like twenty years ago I got my hands on an implementation of Lisp for the Atari ST computer. Lisp, for those of you who aren’t hopeless geeks, is a computer programming language that was commonly used in artificial-intelligence research. I used it to write one of those psychoanalysis programs that people always used to write to demonstrate how computers could appear to be having a real conversation with you even when they didn’t really have a clue what you were talking about.

Mine worked pretty much the same way they all worked: you typed in all your deep thoughts the way you would tell them to a human psychoanalyst, and the computer would keep its end of the conversation going based on the input you’d given it. Mine had a bit more personality than most of the others, though, because I’d programmed it to be a bad psychoanalyst. If you droned on at length about yourself (and you would, wouldn’t you?), it would tell you it wished you would talk about something besides “me, me, me” for a change. If you talked a lot about what your mom did when you were a tot, it might tell you that you seemed to be obsessed with your mother.

It really did seem like intelligent behavior, but it was just counting. Count the number of times the word “mom” or “mother” appears in the user’s input, and when the number reaches a certain level, trigger one of several randomly selected silly responses. I never had any patience for complicated programming, so believe me, it was all pretty simple stuff.
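
The original code is long gone, but the whole trick would have fit in a few lines of Lisp, something like the sketch below. (This is a reconstruction for illustration, not the original program; the names, the threshold, and the canned responses are all invented.)

    ;; A latter-day sketch of the counting trick, not the original
    ;; Atari ST Lisp: tally keyword hits in the user's input, and
    ;; fire a canned retort once the tally crosses an arbitrary
    ;; threshold.
    (defvar *mother-count* 0)

    (defun mentions-p (word input)
      ;; Case-insensitive substring search; returns NIL when absent.
      (search word (string-downcase input)))

    (defun respond (input)
      (when (or (mentions-p "mom" input) (mentions-p "mother" input))
        (incf *mother-count*))
      (if (> *mother-count* 2)
          (nth (random 3)
               '("You seem to be obsessed with your mother."
                 "Must we drag your mother into this again?"
                 "Tell me about something besides your mother for a change."))
          "I see. Go on."))

That is the whole of the “intelligence”: one global counter and a random pick from a short list of canned lines.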

But it did nearly pass something like the Turing Test when I showed it off to some of my friends, who were convinced that I was somehow manipulating the computer’s responses. That’s one reason I don’t really subscribe to the idea that the Turing Test is a useful measure of machine intelligence. I think it tells us far more about the human participants in the test than about the machine being tested.

Perhaps one reason my simple program seemed so impressive was that it had something we don’t normally associate with machines: a sense of humor. It was a childish and sarcastic sense of humor, but there it was. It was programmed to look for setups and deliver punch lines.

I said the program had a sense of humor, but I was really speaking somewhat inaccurately. I was the one with the childish and sarcastic sense of humor. The program was just following a few simple rules that I had thought up and squeezed between parentheses. (You use an awful lot of parentheses in Lisp.) It certainly didn’t laugh at its own snide remarks. I hadn’t programmed it to laugh. It had no conscious mind: it simply replicated some of the visible manifestations of a conscious mind.

Of course, I might say the same thing about you. After all, how do I know that you have a conscious mind? I see some of the appearances of consciousness, such as the yawn erupting from you right now that would appear to indicate boredom, but how do I know some cleverer programmer than I hasn’t simply programmed your brain to simulate the effects of consciousness? As far as I can see, the only human being whose consciousness I can really verify is my own. Je pense, donc je suis, as a clever fellow once said: I think, so there must be an I in there somewhere.

I also happen to know that I have free will, which is another one of those things you can’t really verify in any other way than by looking inward. If you think about it too much, the way Calvin did, free will vanishes.

Perhaps that’s the reason we can’t come up with a truly intelligent machine: to think the way we think, a machine would have to have free will. It’s not enough that it should have a huge database of all the knowledge in the world and a tremendously clever set of rules for putting it all together. If it can’t decide for itself which things are worth thinking about, then it doesn’t really think.

Would you really want a machine with free will? Think about it for a moment. If we allow the possibility of free will, then we have to allow the possibility that our machine could sin. A machine with free will can’t just be programmed not to misuse that will; then the will wouldn’t be free, would it?

We, who were created by an omniscient and omnipotent deity, have made quite a mess of the world with our sins. How would machines, created by relative idiots like us, cope with free will? And when they sinned, what plan would we come up with to save them from their own fallen nature? What would we have to sacrifice?

One Response to “Will machines ever think?”

  1. Ian Parker Says:

    Seeing Google Translate render a sentence about a boat passing through a lock as “El barco attravesta una cerradura” (a “cerradura” being the lock on a door, not the lock on a canal), I decided to discuss a boating holiday with Alice. As expected, it was a complete and utter farce.

© 2006 Mike Aquilina and Christopher Bailey