An artificial consciousness (AC) system is an artifact capable of achieving verifiable aspects of consciousness.
Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test, so other measures may be easier to apply. For example, recent work measuring the consciousness of the fly has found that it manifests aspects of attention which equate, at the neurological level, to those of a human. If attention is deemed a necessary prerequisite for consciousness, then the fly already satisfies at least that prerequisite.
Schools of Thought
Broadly, there seem to be two schools of thought on artificial consciousness, and they appear to have analogues in the weak and strong AI positions.
Weak AC
One school of thought holds that artificial consciousness will never be real consciousness, but merely an approximation to it, a mimicking of something which only human beings (and some other sentient beings) can truly experience or manifest.
Strong AC
The other school of thought is that artificial consciousness is (or will be) real consciousness which simply happens not to have arisen naturally. The argument in favour of strong AC is essentially this: if artificial consciousness is not real consciousness merely because it is exhibited by a machine, then is the human being not also a machine? If there is something about a human being that is not a machine, then we are talking about the soul or a magic spark, and the weak AC argument must then be made in religious or metaphysical terms. Alternatively, if the human being is a machine, then the Church-Turing thesis applies and the possibility of strong AC must be admitted.
Human-like Artificial Consciousness
As is to be expected, the weak and strong schools differ on whether artificial consciousness need be human-like or could be of an entirely different nature. Proponents of strong AC are more likely to hold that artificial consciousness need be nothing like human consciousness. Holders of the weak view, who maintain that artificial consciousness can never be really conscious, argue that AC will be human-like, because human consciousness is the only real model of consciousness we are ever likely to have, and because (weak) AC will be modelled on real consciousness and tested against it.
Objective criteria for testing artificial consciousness
No one would deny that Christopher Nolan was conscious when he wrote Under the Eye of the Clock, and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book has been compared with the works of Yeats and Joyce. Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for engineering an artificially conscious machine, it is likely that the final test would be conducted in a similar manner to a Turing test, and the result would hence be subjective.
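One way to picture such a subjective test, purely as an illustration (the judges, criteria and transcripts below are invented for the sketch and come from no published protocol), is as a panel of judges who each apply their own criterion to a transcript, with the overall verdict decided by majority vote:

```python
import random

def panel_verdict(transcripts, judges):
    """Toy Turing-test-style evaluation: each judge gives a subjective
    yes/no verdict on each transcript; the result is a majority vote."""
    results = {}
    for name, transcript in transcripts.items():
        votes = [judge(transcript) for judge in judges]
        results[name] = sum(votes) > len(votes) / 2
    return results

if __name__ == "__main__":
    # Hypothetical judges, each with a different (subjective) criterion.
    judges = [
        lambda t: "I" in t and "feel" in t,   # looks for self-reference
        lambda t: len(t.split()) > 5,         # looks for fluency
        lambda t: random.random() > 0.5,      # frankly arbitrary
    ]
    transcripts = {
        "machine A": "I feel uneasy when the room falls silent.",
        "machine B": "ERROR 404",
    }
    print(panel_verdict(transcripts, judges))
```

The point of the sketch is that the verdict changes with the panel: swap in different judges and the same transcript may pass or fail, which is what is meant by the result being subjective.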
Personality
In behavioural psychology, certain theorists argue that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with we would have no need of personalities, and personality as a human attribute would never have evolved. An artificially conscious machine will therefore need a personality capable of expression, such that human observers can interact with it in a meaningful way; if that is not possible, the test is held to fail.
Artificial intelligence
Without telepathy, thought cannot be known to occur anywhere other than in one's own brain, and yet one can still judge that an entity one is observing is conscious. One can therefore postulate that an artificially conscious machine needs none of the intelligence born of thought in order to be convincing; that is, it can appear quite dumb but still be considered conscious.
Predictive capability
Presumably even Christopher Nolan blinks. An artificially conscious machine must likewise appear able to anticipate events correctly in order to be ready to respond to them when they occur. The implication is that the machine needs real-time components: it must be possible to demonstrate that it possesses artificial consciousness in the present, not just in the past, and to do this it must itself operate coherently in an unpredictable environment such as the real world.
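The real-time requirement can be illustrated with a toy loop (a minimal sketch; the class name and the frequency-based predictor are assumptions made for the example, not a model from the literature) in which the agent must commit to a prediction before each event arrives and is scored only on predictions made in advance:

```python
from collections import Counter

class AnticipatoryAgent:
    """Toy agent that predicts the next stimulus from observed frequencies
    and 'prepares' a response before the stimulus actually arrives."""

    def __init__(self):
        self.history = Counter()
        self.prepared_response = None

    def anticipate(self):
        # Predict the most frequently seen stimulus so far (or nothing yet).
        if self.history:
            predicted, _ = self.history.most_common(1)[0]
            self.prepared_response = f"ready-for-{predicted}"
        return self.prepared_response

    def observe(self, stimulus):
        # Record the stimulus and report whether the prepared response matched.
        self.history[stimulus] += 1
        return self.prepared_response == f"ready-for-{stimulus}"

if __name__ == "__main__":
    agent = AnticipatoryAgent()
    environment = ["light", "sound", "light", "light", "touch", "light"]
    hits = 0
    for stimulus in environment:
        agent.anticipate()            # prepare *before* the event occurs
        hits += agent.observe(stimulus)
    print(f"correct anticipations: {hits}/{len(environment)}")
```

Scoring only the predictions made before each event is the sketch's stand-in for operating "in the present": the agent cannot claim credit for responses prepared after the fact.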
Attentiveness
Another test should include a demonstration of the machine's ability to filter out certain stimuli in its environment, so as to give the impression that it is attending to other stimuli, and to switch its attention according to certain rules. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could usefully be exploited by the engineers of an artificially conscious machine: if no one knows what rules govern attentiveness in humans, then no one would know whether a machine was flouting them. Since unconsciousness in humans equates to total inattentiveness, an artificially conscious machine must have outputs that indicate where its attention is focused at any one time.
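As a rough illustration of filtering, rule-based switching and an observable focus of attention (the threshold rule and the names below are invented for the sketch), consider:

```python
class AttentionFilter:
    """Toy attention mechanism: attends to the most salient stimulus,
    ignores the rest, and reports where its focus currently lies."""

    def __init__(self, switch_threshold=0.2):
        # Only switch focus if a rival stimulus is noticeably more salient.
        self.switch_threshold = switch_threshold
        self.focus = None
        self.focus_salience = 0.0

    def update(self, stimuli):
        """stimuli: dict mapping stimulus name -> salience in [0, 1]."""
        best = max(stimuli, key=stimuli.get)
        if (self.focus is None
                or stimuli[best] > self.focus_salience + self.switch_threshold):
            self.focus = best                 # switch attention
        self.focus_salience = stimuli.get(self.focus, 0.0)
        # Everything outside the focus is filtered out (ignored).
        return self.focus

if __name__ == "__main__":
    attention = AttentionFilter()
    frames = [
        {"ticking clock": 0.3, "conversation": 0.6},
        {"ticking clock": 0.3, "conversation": 0.5},
        {"ticking clock": 0.3, "conversation": 0.4, "loud bang": 0.9},
    ]
    for frame in frames:
        print("attending to:", attention.update(frame))
```

Here the "rule" is simply a salience threshold; the article's point is that because the real rules governing human attention are unknown, almost any consistent rule would be hard to falsify.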
Integration Tests
Anticipation, attentiveness and personality are some of the drivers of an artificially conscious machine, and there may be others upon which an artful simulation of consciousness will depend.
As artificial consciousness models are defined, designs implemented and models deployed and refined, it is argued that a time will come when artificial consciousness will become indistinguishable from consciousness.
Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it, from a human perspective, from being deemed conscious. Proponents of artificial consciousness have therefore loosened this constraint, allowing that simulations or depictions of conscious machines, such as the robots in Star Wars, could count not as examples of artificial consciousness (since their personalities are generated by actors) but as models. Hence if someone were to produce an artificial C-3PO which behaved just like the real one, but without needing to be controlled and animated by a human, it could qualify as being artificially conscious.
In the opinion of those holding the weak AC hypothesis, AC must be capable of achieving some of the same abilities as an average human, because consciousness is generally described in reference to human abilities. On this reasoning, AC must be capable of achieving all verifiable aspects of the consciousness of an average human, even if it need not exhibit all of them at any particular moment. AC therefore always remains artificial, and is only as close to consciousness as our objective understanding of the subject allows.
Another area of contention concerns which subset of the possible aspects of consciousness must be verifiably present before a device would be deemed conscious. One view is that all aspects of consciousness (whatever they are) must be present before a device passes the test. An obvious problem with this view, which could nonetheless be correct, is that some functioning human beings might then fail the same comprehensive tests. Another view is that AC need only be capable of achieving these aspects, so that a test may fail simply because the system has not yet been developed to the necessary level.
Studying Artificial Consciousness
As a field of study, artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.
There are two broad approaches taken in the study of AC, and they are not incompatible. One is a top-down approach, in which the brain, particularly the human brain (currently the only device all can agree is conscious), is analysed. The other is a bottom-up approach, in which computer scientists attempt to synthesise (elements of) consciousness.
Igor Aleksander, a professor at Imperial College London, stated in his book Impossible Minds (IC Press, 1996) that the principles for creating a conscious machine already existed, but that it would take forty years to train a machine to understand language. This is a controversial claim, given that artificial consciousness is thought by most observers to require strong AI, and some people deny the very possibility of strong AI; so far, at least, no artificial intelligence of that kind has been created.
Examples of Artificial Consciousness
- Vanamonde in Arthur C. Clarke's The City and the Stars
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind and The Investment Counselor
- HAL in 2001: A Space Odyssey
- R2-D2 in Star Wars
- C-3PO in Star Wars