Thought and Computers

There is a mathematician by the name of Roger Penrose who wrote a book back in 1989 entitled "The Emperor's New Mind". I am sorry to say it is the only book by him I have been able to get my hands on so far, but I have never read a more logical or convincing dissent from the Strong AI theory.

Strong AI, or Formalism, is the theory that all intelligence behaves in a manner defined by algorithms, albeit complicated ones. The ramification of this theory is of course that computers are in essence no different from humans or insects or horses or cats or anything else that follows some behavioural pattern. What the theory states is that as soon as a computer can fool a human into believing it is also human (i.e. the Turing test), it should be treated as self-aware, because its operating algorithms are sufficiently complex that its operation should be considered thought.

Of course, there is a lot of empirical data suggesting that this is in fact the case. Animals that we do not consider sentient still behave with a strong instinct for self-preservation. Other so-called human character traits (e.g. gambling) can be simulated in a lab with rats. Apes appear capable of linguistic comprehension and can even express their 'thoughts'. Does this mean, however, that the human experience is merely the product of the world's most complex algorithm? Personally, I don't see it. There is too much about my personal experience that suggests to me a different force is at work where my sentience is concerned.

Working in the field of computers myself, I have a great deal of experience with algorithms both simple and complex. I would have to say that while some of them seem to possess a rudimentary intelligence in the way they handle the unexpected or automatically correct obvious errors, none of them possesses the one ability that seems common to the human race: insight. Whenever I have a problem to solve, I try to solve it in an ordered and logical fashion. Usually I succeed. Sometimes, however, I find that my success is hampered by blocks that the tools of logic at my disposal can't break down by themselves. In those times I have to get creative. I have to consider the ideas and resources that logic dictates are unusable in the solution. Sometimes finding 'crazy ways to solve crazy problems' is the only possible answer.

In the past, I have found that I get more done in the hours when I am winding down at home than in the whole day I have spent at work. It is after all the information has been taken in and no progress has been made through the day that I try to wind down and forget everything for a while. That is when the ideas come: great flashes of inspiration that arrive unbidden and complete. These flashes usually contain ideas and methods far more elegant than those I can come up with while at work. Some will say that I am really thinking about the problem in the back of my mind, and that the inspirational 'flash' is just the algorithm concluding and the answer being presented. I am not so sure. Apart from the emotional security I gain from believing that I am the master of my own mind, I can see no empirical evidence within nature for similar activity. Besides, I see too many parallels between these flashes and the Holographic Paradigm (another favourite of mine, which I will leave for another discussion). In any case, if the fatalists are right, if the formalists have the answer, then I have only one more question to ask: what is the point of it all?

The human race has been searching for meaning now for some five or six thousand years. During that time we have formed religions, created gods, arrived at philosophies, and studied the sciences in a desperate quest to answer questions that only sentient creatures would think to ask. If self-awareness increases in proportion to the complexity of the algorithm, then why are we always looking for the simplest possible solution to a problem? At what point does an algorithm become sufficiently complex that the organism or hardware it drives starts asking why as well as how?

A few years back there was a spate of 'End of the World' parties held around the world, all inspired by the movie The Terminator. Apparently Skynet, or whatever it was called, was supposed to have become self-aware in 1996 according to the movie. Personally I wouldn't know; I'm not a Terminator fan. What I do know is that so far, despite all the predictions of the science fiction writers since the fifties, we do not yet have a sentient computer. The simple fact is that it's not an easy thing to create. The power of the hardware is becoming less and less of a problem these days; that's not the tricky bit. It's the software that has everybody stumped. Even so, another problem may yet present itself if the formalists are right. Supposing it were possible to build such a thing as a sentient electronic life form, how would you get a computer that powerful to keep working on the national debt problem during repeat screenings of Babylon 5?

This page ©Copyright 1997 Acid
Site: acid.commandline.com.au