
Artificial consciousness

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

This is an old revision of this page, as edited by Psb777 (talk | contribs) at 15:54, 13 March 2004. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This article's factual accuracy is disputed. Relevant discussion may be found on the talk page. Please help to ensure that disputed statements are reliably sourced.

An artificial consciousness (AC) system is an artefact capable of achieving verifiable aspects of consciousness.

Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic and may therefore be difficult to test; other measures may be easier. For example, recent work on measuring the consciousness of the fly has determined that it manifests aspects of attention which, at the neurological level, equate to those of a human; if attention is deemed a necessary prerequisite for consciousness, then the fly has a strong claim to at least some aspects of it.

It is asserted that one necessary ability of a conscious system is the prediction of external events, wherever such prediction is possible for an average human: that is, to anticipate events in order to be ready to respond to them when they occur, and to act so that the results of one's own actions can be anticipated.

Arguments calling this assertion into question include:

  1. A human in an entirely strange, confusing situation, where anticipation is impossible, could still be entirely conscious.
  2. It is not necessarily only humans that are conscious.
  3. Being conscious of the past, which involves no anticipation, makes sense.
  4. An entirely introspective consciousness, incapable of anticipation, seems possible.
  5. Being without senses, possibly only temporarily, and therefore being unable to gather the information needed for anticipation, while remaining conscious, also seems entirely possible.
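The anticipation criterion discussed above can be made concrete with a toy sketch. The following is not from the article and makes no claim about consciousness itself; the class and event names are hypothetical. It shows one minimal operational reading of "predicting external events": an agent that learns which event tends to follow which, and so can ready itself before the next event occurs — and that, like the human of argument 1, can anticipate nothing in an entirely unfamiliar situation.

```python
from collections import Counter, defaultdict

class EventAnticipator:
    """Toy sketch of the anticipation criterion: after observing a stream
    of events, predict the most likely next event so as to 'be ready to
    respond' before it occurs. Hypothetical illustration only."""

    def __init__(self):
        # Transition counts: event -> Counter of events seen to follow it.
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, event):
        """Record that `event` followed the previously observed event."""
        if self.last is not None:
            self.transitions[self.last][event] += 1
        self.last = event

    def anticipate(self):
        """Predict the event most likely to follow the last one seen.
        Returns None in an entirely unfamiliar situation (argument 1)."""
        following = self.transitions.get(self.last)
        if not following:
            return None
        return following.most_common(1)[0][0]

agent = EventAnticipator()
for e in ["dawn", "birdsong", "dawn", "birdsong", "dawn"]:
    agent.observe(e)
print(agent.anticipate())  # prints "birdsong": the agent expects it after "dawn"
```

Whether such behaviour counts as an aspect of consciousness is exactly what the numbered arguments above dispute; the sketch only shows that the prediction ability, unlike self-awareness, is straightforwardly verifiable.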

As a field of study, artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.

Examples of artificial consciousness from literature and movies are:

Professor Igor Aleksander of Imperial College, London, stated in his book Impossible Minds (Imperial College Press, 1996) that the principles for creating a conscious machine already existed, but that it would take forty years to train a machine to understand language. This is a controversial statement, given that artificial consciousness is thought by most observers to require strong AI. Some people deny the very possibility of strong AI; whether or not they are correct, certainly no artificial intelligence of this type has yet been created.
