
Artificial consciousness

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

This is an old revision of this page, as edited by Tkorrovi (talk | contribs) at 22:05, 30 March 2004 (Reverted to NPOV version, please edit from this version and maintain NPOV structure, this was an agreement compromise). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


This article is intended to follow the NPOV ("neutral point of view") principle of Wikipedia. Given the many different views of the subject, this article is organised so that the different views are described separately, without an attempt to change one to correspond to another.

Artificial consciousness is also referred to as digital sentience or simulated consciousness.

Description

  • An artificial consciousness (AC) is an artificial system theoretically capable of achieving all known objectively observable abilities of consciousness, where consciousness is defined as the state of being conscious: capable of thought, will, or perception (dictionary.com).
  • An artificial consciousness (AC) system is an artifact capable of achieving verifiable aspects of consciousness. Here the alternative Shorter Oxford English Dictionary definition, which does not include thought, might be preferred on some views if artificial consciousness is to be deemed realisable.

Abilities (aspects) of consciousness relevant to AC

  • Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test, so other measures may be easier. For example, recent work on measuring consciousness in the fly has found that, at the neurological level, it manifests aspects of attention equivalent to those of a human. If attention is deemed a necessary prerequisite for consciousness, then the fly would have an advantage.
  • Attentiveness. Another test should include demonstration of the machine's ability to filter out certain stimuli in its environment so as to give the impression that it is being attentive to other stimuli, and to switch its attention according to certain rules (a minimal sketch of such rule-based attention switching follows this list). The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could usefully be exploited by engineers of an artificially conscious machine, because if no one knows what rules govern attentiveness in humans, then no one would know whether a machine was flouting them. Since unconsciousness in humans equates to total inattentiveness, an artificially conscious machine must have outputs that indicate where its attention is focused at any one time.
  • Personality. In the area of behavioural psychology, certain theorists argue that the personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, we would have no need of personalities, and personality as a human attribute would never have evolved. The artificially conscious machine will need to have a personality capable of expression such that human observers can interact with it in a meaningful way. If that is not possible then it is held that the test would fail.
  • Anticipation. An artificially conscious machine must appear able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, i.e. it must be possible to demonstrate that it possesses artificial consciousness in the present and not just in the past, and in order to do this it must itself operate coherently in an unpredictable environment such as the real world.
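
To make the attentiveness requirement above more concrete, the following is a minimal sketch, in Python, of rule-based attention filtering and switching: stimuli are scored, weak ones are filtered out, the current focus is reported as an output, and the focus switches only when another stimulus clearly wins. The Stimulus fields, the scoring weights and the switching margin are assumptions invented for illustration, not part of any established AC design.

    # Minimal sketch (Python): rule-based attention filtering and switching.
    # All rules, weights and field names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Stimulus:
        name: str
        intensity: float   # 0.0 .. 1.0
        novelty: float     # 0.0 .. 1.0

    class AttentionFilter:
        def __init__(self, switch_margin=0.2):
            self.focus = None                # stimulus currently attended to
            self.switch_margin = switch_margin

        def score(self, s: Stimulus) -> float:
            # Example rule: weight novelty slightly above raw intensity.
            return 0.6 * s.novelty + 0.4 * s.intensity

        def attend(self, stimuli):
            # Filter out weak stimuli, then pick the best-scoring one.
            candidates = [s for s in stimuli if self.score(s) > 0.3]
            if not candidates:
                return self.focus
            best = max(candidates, key=self.score)
            # Switch only if the newcomer clearly beats the current focus,
            # so attention does not flicker between near-equal stimuli.
            if self.focus is None or self.score(best) > self.score(self.focus) + self.switch_margin:
                self.focus = best
            return self.focus

    f = AttentionFilter()
    focus = f.attend([Stimulus("hum of fan", 0.4, 0.1), Stimulus("loud bang", 0.9, 0.9)])
    print("attention is on:", focus.name)   # the machine reports where its attention is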

Schools of thought

  • "Objective stronger AC". AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of average human, even if it needs not to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as we objectively understand about the subject. Because of the demand to be capable of achieving all these abilities, AC may considered to be a strong artificial intelligence, but this also depends on how strong AI is defined.
  • "Weak AC". Artificial consciousness will never be real consciousness, but merely an approximation to it, only a mimicking of something which only humans (and maybe some other sentient beings) can truly experience or manifest.
  • "Strong AC". Artificial consciousness is (or will be) real consciousness which just happens not to have arisen naturally. The argument in favour of strong AC is essentially this: If artificial consciousness is not real consciousness because it is exhibited by a machine then we must assume that human is not a machine. If there is something which is not a machine about a human then it must be soul or a magic spark and the Weak AC argument must then be made in religious or metaphysical terms. Alternatively, if the human is a machine, then the Church-Turing thesis applies and the possibility of strong AC must be admitted.
It is possible to argue that, until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that artificial consciousness will most likely be real consciousness. The Church-Turing thesis implies that new physics would be needed before two computing machines could differ in what they can compute, and by Occam's Razor we should not posit new physics without good reason. By the Copernican principle we should claim no special position for human beings without good reason. The only "good" reasons we have are arrogant ones: humans are supposedly too complicated, too special, too something for their brains to be built or copied artificially.
Human-like artificial consciousness. As is to be expected, the weak and strong schools of thought differ on whether artificial consciousness need be human-like or whether it could be of an entirely different nature. Proponents of strong AC are more likely to hold that artificial consciousness need be nothing like human consciousness. Holders of the weak view, who maintain that artificial consciousness can never be really conscious, argue that AC, not being real, will be human-like, because human consciousness is the only real model we are ever likely to have, and that (weak) AC will be modelled on real consciousness and tested against it.
  • "Another alternative view". It is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am", would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that, as it is a machine, it can not be conscious. Consciousness does not imply unfailing logical ability. No, if we retreat to the dictionary definition we find that consciousness is self-awareness, it is a totality of thought and experience. How rich or complete a consciousness is, whether something is more conscious than something else, is all that is open to question.
Today's computers running today's programs are not generally considered conscious. When, in response to the wc -w command, a Unix computer reports the number of words in a text file, this is not a particularly compelling manifestation of consciousness. But when, in response to the top command, the computer reports in a real-time, continuous fashion each of the tasks it is or is not busy on, how much spare CPU power is available, and so on, then this is a particular if very limited manifestation of self-awareness, of consciousness, by definition - a consciousness which even the above-average rock does not have. Computers are arguably better at monitoring their limited selves than humans are.
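
In the same spirit as the top example, the following is a toy sketch, in Python, of a program reporting on its own resource use through the standard library's resource module (Unix only). It is meant solely to illustrate the very limited, by-definition sense of self-monitoring discussed above; the fields reported and their labels are chosen for the example.

    # Toy sketch (Python, Unix): a process reporting on its own limited "self".
    # Uses only the standard library; the selection of fields is illustrative.
    import os, resource, time

    def self_report():
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return {
            "pid": os.getpid(),
            "cpu_seconds": usage.ru_utime + usage.ru_stime,  # time spent computing
            "max_rss_kb": usage.ru_maxrss,                   # peak memory (kilobytes on Linux)
            "wall_clock": time.time(),
        }

    if __name__ == "__main__":
        print(self_report())   # the program describes its own current state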

Testing AC

  • All aspects of consciousness (whatever they are) must be present before a device passes the test. An obvious problem with this point of view, which could nonetheless be correct, is that some capable humans might then not be judged conscious by the same comprehensive tests.
  • The Turing test is a proposal for identifying machine "intelligence" by testing a machine's capability to perform human-like conversation. The Chinese room argument is an attempt to debunk the validity of the Turing Test.
  • No one would deny that Christopher Nolan was conscious when he wrote Under The Eye of The Clock, and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book he wrote has been compared with the works of Yeats and Joyce. Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, by the Christopher Nolan argument the failure of any particular test would not disprove consciousness. It is therefore likely that the final test would be conducted in a manner similar to a Turing test, and the result would hence be subjective.
  • Integration tests. Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend; a toy sketch of how such criteria might be checked together follows this list.
As artificial consciousness models are defined, designs implemented and models deployed and refined, it is argued that a time will come when artificial consciousness will become indistinguishable from consciousness.
Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. Artificial consciousness proponents therefore have loosened this constraint and allow that simulating or depicting a conscious machine, such as the robots in Star Wars, could count - not as examples of artificial consciousness, since their personalities are generated by actors - but as models. Hence if someone were to produce an artificial C-3PO, which behaved just like the real one, but without needing to be controlled and animated by a human, then it could qualify as being artificially conscious.
In the opinion of those holding the weak AC hypothesis, AC must be capable of achieving some of the same abilities as the average human, because consciousness is generally described in reference to human abilities. This reasoning requires that AC be capable of achieving all verifiable aspects of consciousness of an average human, even if it need not have all of them at any particular moment. Therefore AC always remains artificial, and is only as close to consciousness as our objective understanding of the subject allows.
  • A test may fail simply because the system has not been developed to the necessary level or does not have enough resources, such as computer memory.
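
As a loose illustration of the integration-test idea mentioned above, the sketch below, in Python, checks two of the objectively observable criteria discussed earlier: that a system exposes an output saying where its attention is focused, and that it responds to an unexpected event within a real-time deadline. The CandidateSystem interface, the Dummy stand-in and the 0.1-second deadline are hypothetical assumptions made up for this example; as noted above, failing such a check would not disprove consciousness.

    # Hypothetical sketch (Python): a toy harness for two observable criteria.
    # The CandidateSystem protocol and the 0.1 s deadline are illustrative assumptions.
    import time
    from typing import Protocol

    class CandidateSystem(Protocol):
        def attention_focus(self) -> str: ...      # must report current attention focus
        def respond(self, event: str) -> str: ...  # must react to an event

    def run_integration_test(system: CandidateSystem, deadline_s: float = 0.1) -> dict:
        results = {}
        # 1. Attentiveness: the system must say where its attention is.
        results["reports_attention"] = bool(system.attention_focus())
        # 2. Anticipation / real-time behaviour: respond before the deadline.
        start = time.monotonic()
        system.respond("unexpected event")
        results["responds_in_time"] = (time.monotonic() - start) <= deadline_s
        # A failed check only shows the system did not meet this particular criterion.
        results["passed_all"] = all(results.values())
        return results

    class Dummy:
        # Stand-in system used only to show how the harness is called.
        def attention_focus(self) -> str: return "loud bang"
        def respond(self, event: str) -> str: return "noted: " + event

    print(run_integration_test(Dummy()))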

Artificial consciousness as a field of study

  • Artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.
  • The term "artificial consciousness" was used by several scientists including Igor Aleksander, a professor at the Imperial College London, who stated in his book Impossible Minds (IC Press 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand language.

Artificial consciousness in literature and movies

There is no common agreement that all these examples from science fiction depict artificial consciousness.

External links