The idea that people may create devices that are conscious is known as artificial consciousness (AC). This is an ancient idea, perhaps dating back to the ancient Greek myth of Prometheus, in which conscious people were supposedly manufactured from clay, pottery being an advanced technology of the day. In modern science fiction, artificial people or conscious beings are described as being manufactured from electronic components.
The idea of artificial consciousness is an interesting philosophical problem in the twenty-first century because, with increased understanding of genetics, neuroscience and information processing, it may soon be possible to create an entity that is conscious.
The simplest way to create such a being would be to manufacture a genome that had the genes necessary for a human brain and to inject this into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Can the techniques used in the design of computers be adapted to create a conscious entity? Would it ever be ethical to do such a thing?
The nature of consciousness
Consciousness is described at length in the consciousness article in Wikipedia. According to naive realism and direct realism, we perceive things in the world directly and our brains perform processing. On the other hand, according to indirect realism and dualism, our brains contain data about the world, and what we perceive is some sort of mental model that appears to overlay physical things as a result of projective geometry (such as the point observation in René Descartes' dualism). Which of these general approaches to consciousness is correct has not been resolved and is the subject of fierce debate.
The theory of direct perception is problematic because it would seem to require some new physical theory that allows conscious experience to supervene directly on the world outside the brain. On the other hand, if we perceive things indirectly, via a model of the world in our brains, then some new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience.
If we perceive things directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid Ryle's regress, where internal processing becomes an infinite loop or recursion. The belief in direct perception also demands that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. As mentioned above, proponents of indirect perception suggest some phenomenon, either physical or dualist, to prevent the recursion.
If we perceive things indirectly, then self-awareness would result from the extension of experience in time described by Immanuel Kant, William James and Descartes. Unfortunately this extension in time may not be consistent with our current understanding of physics (see space-time theories of consciousness).
Information processing and consciousness
Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic; it could be mechanical or fluidic.
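As a minimal illustration of this idea, the sketch below encodes a small image on a carrier (here a Python list, standing in for electrons or steel balls) and runs a "program" of transformations over it. The image, the carrier and the two instructions are invented for this example and are not taken from any particular system.

    # A toy illustration: encode a state on a carrier,
    # then transform it with a "program".

    # The state: the geometry of a 2x2 black-and-white image.
    image = [[1, 0],
             [0, 1]]

    # The carrier: here a list of integers, though in principle it
    # could be a stream of electrons, steel balls or onions.
    carrier = [pixel for row in image for pixel in row]

    def invert(bits):
        # Exchange black and white at every position.
        return [1 - b for b in bits]

    def mirror(bits):
        # Reverse the encoding, mirroring the flattened image.
        return list(reversed(bits))

    # The program: an ordered series of transformations.
    program = [invert, mirror]

    state = carrier
    for instruction in program:
        state = instruction(state)

    print(state)  # the transformed encoding: [0, 1, 1, 0]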
Digital computers implement information processing. From the earliest days of digital computers, people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing. The Wikipedia article on artificial intelligence (AI) considers this problem in depth.
If technologists were limited to the use of the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of strong AI. The most serious problem is John Searle's Chinese room argument, which seeks to demonstrate that the contents of an information processor have no intrinsic meaning: at any moment they are just a set of electrons, steel balls and so on.
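The thrust of the argument can be sketched in code. The toy rulebook below is invented for illustration and is not Searle's own example; the point is that replies are produced purely by matching and copying symbols, with no understanding anywhere in the system.

    # A toy Chinese room: replies come from symbol-matching rules alone.
    rulebook = {
        "你好吗?": "我很好。",            # "How are you?" -> "I am fine."
        "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
    }

    def room(symbols):
        # Look the input symbols up and copy out the answer. Nothing here
        # attaches meaning to the symbols; they are matched and copied
        # exactly as a set of electrons or steel balls would be.
        return rulebook.get(symbols, "对不起。")  # default: "Sorry."

    print(room("你好吗?"))  # prints 我很好。 although nothing "understood" it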
Searle's objection does not convince those who believe in direct perception because they would maintain that 'meaning' is only to be found in the objects of perception, which they believe are the world itself.
It is interesting that the misnomer digital sentience is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. The term draws attention to the way that conscious experience is a state: it consists of things laid out in time and space, and is more than the simple processes that occur in digital computers.
Artificial consciousness beyond information processing
The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something non-physical about consciousness whilst physicalists hold that all things are physical.
Those who believe that consciousness is physical are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness is due to some other physical phenomenon. The eminent neurologist Wilder Penfield was of this opinion, and scientists such as Arthur Stanley Eddington, Roger Penrose, Hermann Weyl, Karl Pribram and Henry Stapp, amongst many others, have also proposed that consciousness involves physical phenomena that are more subtle than simple information processing. As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory, and no amount of information processing is likely to resolve this problem. It seems that new physical theory may be required, and the possibility of dualism is not, as yet, ruled out.
Testing for artificial consciousness
Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation.
The Turing test is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. In the Turing test, one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test once it had passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
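A minimal sketch of the test protocol follows, under stated assumptions: the judge object with its ask and guess methods is hypothetical, as are the reply callables and the choice of five rounds of questioning.

    import random

    def turing_test(judge, human_reply, machine_reply, rounds=5):
        # One trial of the imitation game: the judge questions a hidden
        # respondent and must guess whether it is the machine or the human.
        label, reply = random.choice([("human", human_reply),
                                      ("machine", machine_reply)])
        transcript = []
        for _ in range(rounds):
            question = judge.ask(transcript)        # hypothetical judge API
            transcript.append((question, reply(question)))
        guess = judge.guess(transcript)             # "human" or "machine"
        return guess == label  # True when the judge identifies correctly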
A cat or dog would not be able to pass this test, yet it is highly likely that consciousness is not an exclusive property of humans. Likewise, it is likely that a machine could be conscious and still not be able to pass the Turing test.
As mentioned above, the Chinese room argument attempts to debunk the validity of the Turing test by showing that a machine could pass the test and yet not be conscious.
Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness.
Indeed, for those who argue for indirect perception, no test of behaviour can prove or disprove the existence of consciousness, because a conscious entity can have dreams and other features of an inner life. This point is made forcefully by those who stress the subjective nature of conscious experience, such as Thomas Nagel, who, in his essay What is it like to be a bat?, argues that subjective experience cannot be reduced because it cannot be objectively observed, although subjective experience is not in contradiction with physicalism.
Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, the failure of any particular test would not disprove consciousness. Ultimately it will only be possible to assess whether a machine is conscious when a universally accepted understanding of consciousness is available.
Artificial consciousness as a field of study
Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.
The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language. Understanding a language does not mean understand the language you are using. Dogs may understand up to 200 words, but may not be able to demonstrate to everyone that they can do so.
Artificial consciousness has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers and science fiction authors have debated its meaning and possibility, and the question of what would constitute digital sentience.
The ethics of artificial consciousness
In the absence of a true physical understanding of consciousness, researchers do not even know why they would want to construct a machine that is conscious. If it were certain that a particular machine was conscious, it would probably need to be given rights under law and could not be used as a slave.
Artificial consciousness in literature and movies
Fictional instances of artificial consciousness:
- Vanamonde in Arthur C. Clarke's The City and the Stars
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and The Investment Counselor
- HAL 9000 in 2001: A Space Odyssey
- Data in Star Trek
- Robots in Isaac Asimov's Robot Series
- Andrew Martin in The Bicentennial Man
- The replicants in Blade Runner
- The machines in The Matrix
External links
- Are People Computers? Strong AI, The Simulation Argument and Naive Realism
- http://www.ph.tn.tudelft.nl/~davidt/consciousness.html
- Artefactual consciousness depiction by Professor Igor Aleksander (requires Microsoft PowerPoint to view)
- Proposed mechanisms for AC implemented by computer program: absolutely dynamic systems
- http://www.consciousentities.com
- David Chalmers
- Consciousness in the Artificial Mind (non-mainstream)
- Course notes/slides on Neurophilosophy
- Models of Consciousness - ESF Exploratory Workshop - Scientific Report