Artificial Intelligence vs. the Mind

Fatih Gelgi

Apr 1, 2004

In the last fifty years, computer technology has given rise to a new discussion centered on Artificial Intelligence (AI) vs. the mind. The main aim of AI is to construct systems which behave as ‘logically’ as possible. While a hundred years ago the question, “Can a system with artificial intelligence be more advanced than the human mind?” could not even have been imagined, it has become one of the most frequently discussed subjects of recent years. AI supporters claim that in the near future there will be advanced systems with better decision-making and evaluation mechanisms than humans possess. On the other hand, many scientists think that this will not be possible.

The theorem published in 1931 by the 25-year-old Austrian scientist Kurt Gödel made a great impact on scientific circles. Not only did it destroy the hopes of many scientists, but it also initiated a new point of view concerning AI and the mind. It is one of the most important theorems proven in the twentieth century, ranking alongside Einstein’s Theory of Relativity and Heisenberg’s Uncertainty Principle; yet very few people know about it. In this article, we will examine in detail the effects of Gödel’s theorem on AI.

What Is Gödel’s Incompleteness Theorem?

As a formal definition, a proof is a sequence of well-formed formulas (wffs), each of which is either an axiom or a wff derived from preceding wffs. Gödel’s contemporary David Hilbert, one of the most famous mathematicians of the time, thought that all proofs in mathematics could be obtained in an automated way (within an axiomatic system), and he started to work on this project. He believed that if all the wffs of basic arithmetic could be derived from its own axioms, then all facts of mathematics could be derived using these axioms.
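In standard logical notation, this definition can be sketched as follows (a textbook-style formalization; the symbols F, φ, and ⊢ are introduced here for illustration and are not part of the original article):

```latex
% A textbook-style formalization (symbols introduced for illustration):
% a proof of a formula phi in a formal system F is a finite sequence
% of wffs ending in phi, each member being an axiom of F or derived
% from earlier members by a rule of inference. One then writes F |- phi.
\[
  F \vdash \varphi \;\iff\; \exists\, \varphi_1, \dots, \varphi_n\
  \bigl(\varphi_n = \varphi \ \wedge\ \forall i \le n:\
  \varphi_i \text{ is an axiom of } F, \text{ or } \varphi_i
  \text{ follows from } \varphi_j, \varphi_k\ (j, k < i)
  \text{ by a rule of inference}\bigr).
\]
```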

Unfortunately, Gödel demonstrated the impossibility of this. First, he found a method of translating the syntax of a formal system into arithmetic. Then he formulated in arithmetic the statement G: “This formula is unprovable in the system.” Using the same method, he also formulated the negation of G (“This formula is provable in the system”). Next, he showed that if the system could settle the truth value of G, it could also settle the truth value of the negation of G, causing a contradiction: if G were provable, the system would prove a false statement, and if the negation of G were provable, the system would prove that G is provable when in fact it is not. At the end of his analysis, Gödel arrived at two very important consequences:

1. If a formal system that contains minimal arithmetic is consistent, then it is incomplete.

2. Consistency of any formal system containing minimal arithmetic is not internally provable (by using the system’s own rules and formulas).
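In modern notation (a standard textbook reconstruction rather than Gödel’s original symbolism; Prov and Con denote the usual provability and consistency formulas), the construction and its two consequences can be summarized as follows:

```latex
% Diagonal construction: G "says" of itself that it is unprovable.
% Prov_F(x) formalizes "the formula with Godel number x is provable in F".
\[
  F \vdash G \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G \urcorner)
\]
% First incompleteness theorem: if F is consistent, F proves neither G
% nor its negation (the latter under the omega-consistency proviso):
\[
  F \nvdash G \qquad\text{and}\qquad F \nvdash \neg G
\]
% Second incompleteness theorem: a consistent F cannot prove its own
% consistency statement:
\[
  F \nvdash \mathrm{Con}(F)
\]
```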

Surprisingly, even if G were added to the system as a further axiom, a new Gödel sentence could easily be found for the enlarged system. In other words, no matter how many axioms we add, one can always find a Gödel sentence whose truth value the system cannot determine.
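Schematically, in the same notation as above, adding Gödel sentences as new axioms never closes the gap:

```latex
% Each extension of F by its own Godel sentence G_F yields a system
% that is consistent if F is, yet has a fresh Godel sentence of its own:
\[
  F_1 := F + G_F \;\Rightarrow\; F_1 \nvdash G_{F_1}, \qquad
  F_2 := F_1 + G_{F_1} \;\Rightarrow\; F_2 \nvdash G_{F_2}, \;\dots
\]
% No computably axiomatized extension of F escapes the theorem.
```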

What Does the Theorem Imply for Artificial Intelligence vs. the Mind?

By examining Gödel’s Theorem, one can derive very important consequences for artificial intelligence. The English mathematician Alan Turing described an abstract machine called the “Turing machine”: a machine which has an unlimited amount of storage space and which can go on computing forever without making any mistakes. Such a machine can carry out any algorithmic computation, and according to Turing’s analysis all computers are Turing equivalent. After proposing this, Turing went on to observe that some types of problems have no algorithmic solution. The most famous of these is the “halting problem”: the problem of deciding, for an arbitrary Turing machine and input, whether the computation will ever come to a halt. Its undecidability is a direct consequence of the same phenomenon underlying Gödel’s Incompleteness Theorem.

It has been proven that the halting problem is computationally insoluble. This leads us to an important conclusion: a computer cannot be the same as the human mind, because the non-computational physics of the mind is not available to Turing-equivalent machines, and the nature of algorithms is not compatible with the thinking process because of the halting problem.
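Why the halting problem is insoluble can be sketched in a few lines of Python. The function halts() below is purely hypothetical, assumed only for the sake of contradiction; the point of the argument is precisely that no such function can exist:

```python
def halts(program, arg):
    """Hypothetical decider: True iff program(arg) eventually halts.
    Assumed to exist only in order to derive a contradiction."""
    ...

def paradox(program):
    # Do the opposite of whatever halts() predicts for a program
    # that is run on its own source.
    if halts(program, program):
        while True:   # loop forever if halts() says "it halts"
            pass
    else:
        return        # halt immediately if halts() says "it loops"

# Now consider paradox(paradox):
#   - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
#   - if it is False, then paradox(paradox) halts.
# Either way halts() gives the wrong answer, so no total, correct
# halts() can exist: the halting problem is undecidable.
```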

The problems raised by this Gödelian case weighed heavily on AI supporters, and Gödel’s Theorem started a great debate between the supporters of AI and those of the human mind.

Reviews of the Theorem on AI vs. the Mind

Penrose claims that the human mind cannot be matched by artificial intelligence, and he bases this claim on Gödel’s Incompleteness Theorem. By appealing to the results obtained by Gödel (and Turing), he argues that mathematical thinking (and hence conscious thinking generally) is something that cannot be encapsulated within any purely computational model of thought. This is the part of Penrose’s argument that his critics have most frequently taken issue with. In addition, he states that there are certain classes of problems that do not have any algorithmic solution (R. Penrose, 1994, p. 29); in fact, Turing described this situation with the halting problem. Penrose gives the example of the completely deterministic but non-computable “tiling problem” (R. Penrose, 1994, pp. 30-33).

Penrose asserts that some mathematical relations require long chains of reasoning before they can be perceived with certainty. But the object of a mathematical proof is to provide a chain of reasoning in which each step can indeed be perceived as “obvious,” and he concludes that the endpoint of such reasoning must be accepted as true even though it may not, in itself, be at all obvious. One might imagine that it would be possible to list all possible “obvious” steps of reasoning once and for all, so that from then on everything could be reduced to computation. But what Gödel’s argument shows is that this is not possible: there is no way to eliminate the need for new “obvious” understandings. Thus, mathematical understanding cannot be reduced to blind computation (R. Penrose, 1994, p. 56).

Penrose claims that the results of Gödel’s theorem establish that human understanding and insight cannot be reduced to any set of computational rules (R. Penrose, 1994, p. 65). In the chapter of Shadows of the Mind entitled “The Gödelian Case,” Penrose supports this idea with Turing’s halting problem and gives sound examples of non-computability. At the end of the chapter he answers in detail the possible technical objections to his Gödel-based argument (R. Penrose, 1994, pp. 64-116).

Penrose believes that there is something beyond computation in the human mind. In Chapter 3 of Shadows of the Mind, he carefully examines the thinking process and non-computability in mathematical thought, using formal representations (R. Penrose, 1994, pp. 127-209). Gödel’s theorem states that in any sufficiently powerful consistent formal system there exists at least one statement that can be neither proved nor disproved within the system. Penrose believes that this limits the reasoning ability of any AI system: there will always be a statement that can be constructed which is unprovable by that system. Yet, Penrose argues, the human mind can somehow see the truth of such Gödel statements directly (R. Penrose, 1989).

Along the same lines as Penrose, Lucas believes that Gödel’s theorem proves the idea of “Mechanism” false; that is, minds cannot be explained as machines. He claims that Gödel’s theorem must apply to cybernetics, because the essence of being a machine is that it is a concrete instantiation of a formal system. It follows that for any given machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as true (i.e., a formula unprovable in the system but which we can see to be true). It follows that no machine can be a complete or adequate model of the mind, and that minds are essentially different from machines. This does not mean that a machine cannot simulate any piece of the mind; it only says that no machine can simulate every piece of the mind. Lucas adds that there may be deeper objections: Gödel’s theorem applies to deductive systems, and human beings are not confined to making only deductive inferences; it applies only to consistent systems, and one may doubt how far it is permissible to assume that human beings are consistent; and it applies only to formal systems, while there is no a priori bound to human ingenuity which rules out the possibility of our contriving some replica of humanity which is not representable by a formal system (J. Lucas, 1970).

Chalmers examines the situation in which we are given a formal system F that understands the consequences of Gödel’s Theorem. According to his claim, F may not be sound, in which case Gödel’s theorem cannot be applied. He specifies that the crucial point of the Gödelian argument is not whether “a formal system is sound,” but whether “we know that our system is sound.” It follows that we may well have a sound system, yet we cannot conclude that we know we have a sound system (D. J. Chalmers, 1995).

Like Chalmers, McCullough claims that not only artificial intelligence but also the human mind is tightly bound up with Gödel’s theorem. The Gödelian argument did not prove that human reasoning must be noncomputable; it only proved that if human reasoning is computable, then either it must be unsound, or it must be inherently impossible for a human both to know what his or her own reasoning powers are and to know that they are sound. He adds that Penrose dismisses the possibility that a human knows his or her own reasoning powers but does not know that they are sound. In his paper, McCullough also examines the applicability of Gödel’s theorem to non-computable systems and to the human mind; according to him, both are possible, and on this basis he asserts that Penrose’s conclusion is wrong. Nevertheless, McCullough agrees with Penrose that human reasoning cannot be formalized in some sense, because humans do not understand their own reasoning system well enough to formalize it. This limitation is not due to a lack of human intelligence, but is inherent in any reasoning system that is capable of reasoning about itself (D. McCullough, 1995).

As a short conclusion, it seems that the discussion of AI vs. the mind will last for a long time. But considering the present situation, AI has a long way to go in order to achieve the expected skills.

References

• Chalmers, D.J. (1995). “Minds, Machines, and Mathematics”. http://psyche.cs.monash.edu.au/v2/psyche-2-09-chalmers.html

• Gödel, K. (1931). “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”. http://www.ddc.net/ygg/etext/godel/

• Lucas, J.R. (1970). “Minds, Machines and Gödel”. The Freedom of the Will. Oxford: Oxford University Press. http://users.ox.ac.uk/jrlucas/mmg.html

• Maudlin, T. (1995). “Between The Motion And The Act...”. http://psyche.cs.monash.edu.au/v2/psyche-2-02-maudlin.html

• McCarthy, J. (1995). “Awareness and Understanding in Computer Programs”. http://psyche.cs.monash.edu.au/v2/psyche-2-11-mccarthy.html

• McCullough, D. (1995). “Can Humans Escape Gödel?”. http://psyche.cs.monash.edu.au/v2/psyche-2-04-mccullough.html

• Penrose, R. (1989). The Emperor’s New Mind. New York: Oxford University Press.

• Penrose, R. (1994). Shadows of the Mind. New York: Oxford University Press.

• Penrose, R. (1996). “Beyond the Doubting of a Shadow”. http://psyche.cs.monash.edu.au/v2/psyche-2-23-penrose.html

• Psyche (1995). An Interdisciplinary Journal of Research on Consciousness, Vol. 2, Symposium on Roger Penrose’s Shadows of the Mind. http://psyche.cs.monash.edu.au/psyche-index-v2.html