Can we make everything fit into one world? When we speak of ‘one world’, we mean that we have to cross the boundaries between the different aspects of our lives. Is this possible? Can we programme our world into another person’s mind? I shall discuss the frame problem, show how it is linked to the issue of programming the human mind, and then address the problems of chance and reductionism.
Firstly, when we speak of the ‘frame problem’ in the philosophy of science, we have to acknowledge that it is an epistemological problem which humans encountered when programming artificial intelligence. The AI was given a set of frame axioms (axioms specifying all the properties that an action does not change) and asked to perform a task. Yet the result was not as expected, even though the answers the AI was supposed to provide were simple and fairly obvious to any human. Therefore, new axioms were introduced and the experiment repeated, but the result was, yet again, very unsatisfactory. This process was repeated many times until the scientists who designed the experiment faced an epistemological dilemma: they had reached a point where so many axioms and explanations had to be fed into the AI that a complete specification became impossible.
Some of the obstacles lie simply in the many imperfections of our language, which are hard to explain to an AI. Our perception of the world is full of words with several meanings which we, as humans who use language every day, understand easily and very rarely confuse. But defining these for an AI, which is a kind of “blank slate”, is a completely different situation. In order to cover all variables and all possible situations, everything has to be explained to the AI, since it cannot reason for itself. Dreyfus states: “the frame problem arises precisely because the computer has no skills”. If we cannot describe our own world to the computer, then how are we going to ask it to conceive of other possible worlds?
In venturing into the human mind, with its behavior, emotion, and even spirit and metaphysics, scientists have been at a loss as to how these are to be described to a computer. We face the same problem if we attempt to describe values, because we cannot agree on what values are. If we attempt to do this through examples, the problem persists, because we have not reached a conclusion on which values are relevant to a topic and which are not. The ‘problem of complete description’ (or PCD) shows that even humans cannot give complete descriptions of everything, and that we do not know what a ‘complete description’ would mean. Many have tried to fix this problem by identifying everything in terms of its causes. Yet our minds often label some causes as obvious, even though they are not so to a computer. Therefore we are faced with the same problem, namely that we cannot give a complete description of causes. Some, like Pylyshyn, have even gone so far as to argue that we do not agree on what exactly the frame problem is.
Secondly, when we consider a situation like Searle’s Chinese room thought experiment, we can see the limits of programming a mind. Searle proposes that a man who does not know Chinese is put into a room and given instructions on how to match and write Chinese characters. After some time, the person manages to identify the characters and respond in Chinese. The man’s environment has been constrained without his knowledge, so that he can recognize only the characters made available to him. The question is whether the person knows Chinese or is simply programmed to recognize and reproduce Chinese symbols. The answer is clearly that he does not know the language, and the conclusion is that giving right answers is not the same as understanding. Therefore, even if we did manage to describe everything to an AI, this does not mean that it would ‘understand’ what the words signify.
Finally, we could try to circumvent the causality problem by attributing everything to chance. This sort of reductionism has been tried in the past, but in today’s world, dominated by physics, everything seems to happen because of a physical cause. Identifying the cause does not necessarily come first; there are cases where the effect is observed before its cause is known. Reducing every event to its causes does not work, so there are some things that we always leave to chance because of their high statistical improbability. Circumventing the causality problem would not yield an explanation, but rather an abandonment of method. If an AI could be programmed to identify causes by itself, this would be a breakthrough in creating an object that can think without outside support.
In the field of philosophy of science there have been relatively few breakthroughs when it comes to identifying scientific values, determining what natural kinds are, or integrating epistemology and science to ensure better theoretical discourse. Yet here we speak of programming these very values and emotions, of which we ourselves are not sure, into an AI or computer. Therefore, my conclusion as to how everything fits into one world is that we hold the key to making it fit, and in our minds it does fit. But transferring this innate ability to another being is one of the hardest tasks scientists have undertaken.