This article is intended to follow the NPOV ("neutral point of view") principle of Wikipedia. Because there are many different views of the subject, this article is organised so that the different views are described separately, without an attempt to change one to correspond to the other.
Artificial consciousness is equivalent to digital sentience and simulated consciousness.
Description
- In computer science, digital sentience is the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, will be a form of strong artificial intelligence. One generally accepted criterion for sentience is self-awareness.
- An artificial consciousness (AC) is an artificial system theoretically capable of achieving all known, objectively observable abilities of consciousness, where consciousness is defined as the state of being conscious: capable of thought, will, or perception (dictionary.com).
- An artificial consciousness (AC) system is an artifact capable of achieving verifiable aspects of consciousness. Here the alternative Shorter Oxford English definition, which does not include thought, might be preferred in accordance with some views, if artificial consciousness is to be deemed realisable.
Abilities (aspects) of consciousness relevant to AC
- Ability to predict external events in every possible environment in which an average human could predict them. The ability to predict has been considered necessary for AC by several scientists, including Igor Aleksander.
- Consciousness is sometimes defined as self-awareness. Self-awareness is a subjective characteristic which may be difficult to test, and other measures may be easier. For example, recent work in measuring the consciousness of the fly has determined that it manifests aspects of attention which equate, at the neurological level, to those of a human. If attention is deemed a necessary prerequisite for consciousness, then the fly would have an advantage.
- Attentiveness. Another test should include a demonstration of the machine's ability to filter out certain stimuli in its environment so as to give the impression that it is being attentive to other stimuli, and to switch its attention according to certain rules. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could usefully be exploited by engineers of an artificially conscious machine: if no one knows what rules govern attentiveness in humans, then no one would know if a machine were flouting them. Since unconsciousness in humans equates to total inattentiveness, an artificially conscious machine must have outputs that indicate where its attention is focused at any one time.
- Awareness. David Chalmers, a philosopher of mind, has argued that even a thermostat can be considered conscious in a minimal sense: it "knows" whether it is too hot, too cold or just right.
- Personality. In the area of behavioural psychology, certain theorists argue that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with we would have no need of personalities, and personality as a human attribute would never have evolved. An artificially conscious machine will need to have a personality capable of expression, such that human observers can interact with it in a meaningful way; if that is not possible, it is held that the test would fail.
- Anticipation. An artificially conscious machine must appear able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, i.e. it must be possible to demonstrate that it possesses artificial consciousness in the present and not just in the past, and in order to do this it must itself operate coherently in an unpredictable environment such as the real world. A minimal sketch of such an attention-and-anticipation loop follows this list.
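As an illustration only, and not a claim about how artificial consciousness should actually be built, the following minimal Python sketch shows one way the prediction, attentiveness and anticipation criteria above could be made concrete: a toy agent keeps a running prediction for each stimulus, switches its attention to whichever stimulus deviates most from its prediction, and reports where its attention is focused. All names in it (SimpleAgent, observe, predict) are hypothetical.

```python
# Toy illustration of the prediction / attentiveness / anticipation criteria.
# Nothing here is claimed to be conscious; all names are hypothetical.

class SimpleAgent:
    def __init__(self, stimuli):
        # One running prediction (an exponential moving average) per stimulus.
        self.predictions = {name: 0.0 for name in stimuli}
        self.focus = None

    def predict(self, name):
        """Anticipation: the value the agent expects to observe next."""
        return self.predictions[name]

    def observe(self, readings):
        """Attend to the stimulus that deviates most from its prediction."""
        surprise = {name: abs(value - self.predict(name))
                    for name, value in readings.items()}
        self.focus = max(surprise, key=surprise.get)
        # Update the predictions so that future anticipation improves.
        for name, value in readings.items():
            self.predictions[name] = 0.8 * self.predictions[name] + 0.2 * value
        # An output that indicates where attention is currently focused.
        return self.focus


agent = SimpleAgent(["temperature", "sound", "light"])
print(agent.observe({"temperature": 21.0, "sound": 80.0, "light": 0.4}))  # prints: sound
```

Such a loop is of course nowhere near consciousness; it only makes the criteria of prediction, attention switching and externally visible focus precise enough to be tested.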
Schools of thought
- "Objective stronger AC". AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of average human, even if it needs not to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as we objectively understand about the subject. Because of the demand to be capable of achieving all these abilities, AC may considered to be a strong artificial intelligence, but this also depends on how strong AI is defined.
- "Weak AC". Artificial consciousness will never be real consciousness, but merely an approximation to it, only a mimicking of something which only humans (and maybe some other sentient beings) can truly experience or manifest.
- "Strong AC". Artificial consciousness is (or will be) real consciousness which just happens not to have arisen naturally. The argument in favour of strong AC is essentially this: If artificial consciousness is not real consciousness because it is exhibited by a machine then we must assume that human is not a machine. If there is something which is not a machine about a human then it must be soul or a magic spark and the Weak AC argument must then be made in religious or metaphysical terms. Alternatively, if the human is a machine, then the Church-Turing thesis applies and the possibility of strong AC must be admitted.
- It is possible to argue that, until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that artificial consciousness will most likely be real consciousness. The Church-Turing thesis implies that, without new physics, all sufficiently powerful computing machines are equivalent in what they can compute; by Occam's Razor we should not posit new physics without good reason. By the Copernican principle we should claim no special position for human beings without good reason. The only "good" reasons we have are arrogant ones: humans are supposedly too complicated, too special, too something for their brains to be built or copied artificially.
- Human-like artificial consciousness. As is to be expected, the weak and strong schools of thought differ on the question of whether artificial consciousness need be human-like or whether it could be of an entirely different nature. Proponents of strong AC are more likely to hold the view that artificial consciousness need be nothing like human consciousness. Holders of the weak view, who maintain that artificial consciousness can never be really conscious, hold that AC, not being real, will be human-like, because human consciousness is the only real model of consciousness we are ever likely to have, and that (weak) AC will be modelled on real consciousness and tested against it.
- "Another alternative view". It is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am", would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that, as it is a machine, it can not be conscious. Consciousness does not imply unfailing logical ability. No, if we retreat to the dictionary definition we find that consciousness is self-awareness, it is a totality of thought and experience. How rich or complete a consciousness is, whether something is more conscious than something else, is all that is open to question.
- Today's computers running today's programs are not generally considered conscious. When, in response to the wc -w command, a Unix computer reports the number of words in a text file, this is not a particularly compelling manifestation of consciousness. But when, in response to the top command, the computer reports in a real-time, continuous fashion each of the tasks it is or is not busy on, how much spare CPU power is available, and so on, then this is a particular, if very limited, manifestation of self-awareness, and so of consciousness by definition: a consciousness which even the above-average rock does not have. Computers are arguably better at monitoring their limited selves than humans are. A minimal self-report sketch in the same spirit follows.
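Purely as an illustration of the limited self-monitoring described above, the following sketch uses only the Python standard library (the resource module, available on Unix systems) to have a process report on its own state, somewhat as top does. Whether such self-reporting amounts to even a limited form of self-awareness is exactly the question in dispute.

```python
# A process reporting on its own limited state, loosely analogous to top.
import os
import time
import resource  # part of the standard library on Unix systems

def self_report():
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),                              # which process "I" am
        "cpu_seconds": usage.ru_utime + usage.ru_stime,  # how busy I have been
        "peak_memory": usage.ru_maxrss,                  # peak memory used (kilobytes on Linux)
        "reported_at": time.time(),                      # when this report was made
    }

print(self_report())
```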
Testing AC
- All aspects of consciousness (whatever they are) must be present before a device passes the test. An obvious problem with this point of view, which could nonetheless be correct, is that some capable humans might then not be judged conscious by the same comprehensive tests.
- The Turing test is a proposal for identifying machine "intelligence" by testing a machine's capability to perform human-like conversation. The Chinese room argument is an attempt to debunk the validity of the Turing Test.
- No one would deny that Christopher Nolan was conscious when he wrote Under The Eye of The Clock, and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book he wrote has been compared with the works of Yeats and Joyce. Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, by the Christopher Nolan argument the failure of any particular test would not disprove consciousness, and it is likely that the final test would be conducted in a similar manner to a Turing test, and the result would hence be subjective.
- Integration tests. Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.
- As artificial consciousness models are defined, designs implemented, and systems deployed and refined, it is argued that a time will come when artificial consciousness becomes indistinguishable from consciousness.
- Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. Artificial consciousness proponents have therefore loosened this constraint and allow that simulating or depicting a conscious machine, such as the robots in Star Wars, could count not as examples of artificial consciousness (since their personalities are generated by actors) but as models. Hence if someone were to produce an artificial C-3PO, which behaved just like the real one but without needing to be controlled and animated by a human, then it could qualify as being artificially conscious.
- In the opinion of those holding the weak AC hypothesis, AC must be capable of achieving some of the same abilities as the average human, because consciousness is generally described in reference to human abilities. This reasoning requires that AC be capable of achieving all verifiable aspects of consciousness of an average human, even if it need not have all of them at any particular moment. Therefore AC always remains artificial, and is only as close to consciousness as we objectively understand the subject.
- A test may also fail simply because the system has not been developed to the necessary level, or does not have enough resources such as computer memory. A schematic sketch of such a test battery follows this list.
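To make the testing discussion concrete, the following hypothetical Python sketch shows the general shape such a battery of tests might take. The individual checks are placeholders invented for illustration; as noted above, there is no agreed, comprehensive list of criteria.

```python
# Schematic shape of a battery of AC tests; the checks are placeholders.

def run_ac_tests(candidate, tests, require_all=True):
    """Run each named test against the candidate and summarise the results.

    A failure may mean the candidate lacks the ability, or merely that it is
    under-developed or short of resources (see the note above).
    """
    results = {name: bool(test(candidate)) for name, test in tests.items()}
    passed = all(results.values()) if require_all else any(results.values())
    return passed, results

# Hypothetical placeholder checks against a toy candidate description.
tests = {
    "predicts_events": lambda c: c.get("prediction_score", 0) > 0.5,
    "switches_attention": lambda c: c.get("attention_switches", 0) > 0,
    "reports_focus": lambda c: "focus" in c,
}
candidate = {"prediction_score": 0.7, "attention_switches": 3, "focus": "sound"}

print(run_ac_tests(candidate, tests))  # (True, {...})
```

A negative result from such a harness would remain ambiguous for the reasons given above: the candidate may lack the ability, or may simply lack development or resources.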
Artificial consciousness as a field of study
- Artificial consciousness includes research aiming to create and study such systems in order to understand corresponding natural mechanisms.
- The term "artificial consciousness" was used by several scientists including Igor Aleksander, a professor at the Imperial College London, who stated in his book Impossible Minds (IC Press 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand language.
- Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning and possibility of digital sentience, and the question of what would constitute it.
Artificial consciousness in literature and movies
There is no common agreement that all of these examples from science fiction depict artificial consciousness.
- Vanamonde in Arthur C. Clarke's The City and the Stars
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and The Investment Counselor
- HAL in 2001: A Space Odyssey
- R2-D2 in Star Wars
- C-3PO in Star Wars
- Data in Star Trek
- Robots in Isaac Asimov's Robot Series