
Artificial consciousness: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Revision as of 17:54, 13 March 2004

This article's factual accuracy is disputed. Relevant discussion may be found on the talk page.

An artificial consciousness (AC) system is an artefact capable of achieving verifiable aspects of consciousness.

Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test; other measures may be easier. For example, recent work measuring consciousness in the fly has found that it manifests aspects of attention which, at the neurological level, equate to those of a human, and, if attention is deemed a necessary prerequisite for consciousness, the fly can be said to have a substantial claim to it.

It is asserted that one necessary ability of consciousness is the ability to predict external events wherever an average human could do so, i.e. to anticipate events in order to be ready to respond to them when they occur, and to act so that the results of one's own actions can be anticipated.

Arguments calling this assertion into question include:

  1. A human in an entirely strange, confusing situation where anticipation is impossible could still be entirely conscious.
  2. It is not necessarily only humans which are conscious.
  3. Being conscious of the past, which involves no anticipation, makes sense.
  4. An entirely introspective consciousness incapable of anticipation seems possible.
  5. Being without senses, possibly only temporarily, and therefore unable to gather the information needed for anticipation, while still being conscious, also seems entirely possible.

Another reason to doubt the assertion that predictability is a necessary attribute of consciousness can be found on Dr. David M. J. Tax's web page on artificial consciousness:

So can we create a situation/experiment/test, such that when the thing passes this test, we can assume it is conscious? (Not passing the test thus does not say it is NOT conscious!) Most of us agree that the test should check the ability to learn, to predict and to generalize previous experiences.

So, failing the test (whatever Dr Tax has in mind) does not show the device is not conscious, even if the test that failed was a test of predictability.
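
The one-sided character of such a test can be illustrated with a short Python sketch. Everything in it is hypothetical: the learn/predict/generalize interface and the toy SequenceLearner merely stand in for whatever concrete test Dr Tax has in mind, and passing is read only as being consistent with the proposed criterion, never as proof of consciousness, while failing yields no conclusion at all.

# Illustrative sketch only. The learn/predict/generalize interface and the
# SequenceLearner toy below are hypothetical, not anything specified by
# Dr Tax or by this article.

class SequenceLearner:
    """Toy candidate that memorises a constant step between numbers."""

    def __init__(self):
        self.step = None

    def learn(self, sequence):
        # Infer the step from the first two training examples.
        self.step = sequence[1] - sequence[0]

    def predict(self, value):
        # Predict the element that follows a previously seen value.
        return value + self.step

    def generalize(self, value):
        # Apply the learned rule to a value outside the training range.
        return value + self.step


def apply_criterion(candidate):
    """One-sided test: passing supports the criterion; failing licenses
    no conclusion about whether the candidate is conscious."""
    candidate.learn([2, 4, 6, 8])
    passed = candidate.predict(8) == 10 and candidate.generalize(100) == 102
    return "consistent with the criterion" if passed else "no conclusion can be drawn"


print(apply_criterion(SequenceLearner()))  # -> consistent with the criterion

The asymmetry lives entirely in the return values: a failure is reported as inconclusive rather than as evidence against consciousness.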

As a field of study, artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.

Examples of artificial consciousness from literature and movies are:

Professor Igor Aleksander of Imperial College, London, stated in his book Impossible Minds (IC Press 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand language. This is a controversial statement, given that artificial consciousness is thought by most observers to require strong AI. Some people deny the very possibility of strong AI; whether or not they are correct, certainly no artificial intelligence of this type has yet been created.
