Misplaced Pages

Artificial consciousness

Article snapshot taken from Wikipedia with creative commons attribution-sharealike license. Give it a read and then ask your questions in the chat. We can research this topic together.
This article is intended to follow the NPOV ("neutral point of view") principle of Misplaced Pages. Because there are many different views of the subject, the article is organised so that the different views are described separately, without an attempt to change one to correspond to the other.

'''Artificial consciousness''' is a concept equivalent to digital sentience and simulated consciousness. The term describes artificial creatures or systems that possess some degree of consciousness or sentience, as discussed throughout this article.


==Description==


In futurology, the synonymous term '''digital sentience''' is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, will be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness; to support this concept, a definition of ''conscious'' can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts".


In more general terms, an artificially conscious (AC) system should be able to exhibit various verifiable, known, objective, and observable aspects of consciousness. Another definition of the word ''conscious'' is "being conscious, capable of thought, will, or perception" (dictionary.com).


An alternative formulation holds that an AC system is an artifact capable of achieving verifiable aspects of consciousness. Here the alternative Shorter Oxford English definition of ''conscious'', which does not include ''thought'', might be preferred under some views, if artificial consciousness is to be deemed realisable.

==Aspects of consciousness relevant to AC==

There are various aspects and abilities generally considered necessary, or at least very useful, for a machine to be regarded as artificially conscious. Only the most commonly cited are covered here; there are many others.


One related aspect is the ability to predict external events in those situations in which an average human could predict them. The ability to predict has been considered necessary for AC by several scientists.


Consciousness is sometimes defined as self-awareness. While self-awareness is very important in determining the consciousness of a machine, it is a subjective characteristic and generally difficult to test; other measures may be easier. For example, recent work in measuring the consciousness of the fly has determined that it manifests aspects of attention which equate to those of a human at the neurological level. If attention is deemed a necessary prerequisite for consciousness, then the fly has an advantage.


Another test should include a demonstration of the machine's ability to filter out certain stimuli in its environment, to focus on other stimuli, and to switch its attention according to certain rules. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of an artificially conscious machine: since no one knows what rules govern attentiveness in humans, no one could say whether a machine was flouting them. Since unconsciousness in humans equates to total inattentiveness, an artificially conscious machine should have outputs that indicate where its attention is focused at any one time.
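
A minimal sketch of what such an observable attention output might look like is given below in Python. This example is illustrative only and not part of the original article; the stimuli, salience scores, and threshold are invented assumptions standing in for whatever rules a real design might use.

<syntaxhighlight lang="python">
# Toy attention filter: score incoming stimuli, ignore those below a
# threshold, and report where "attention" is currently focused.
# All stimuli and scores are invented for illustration.

stimuli = [
    ("background hum", 0.1),
    ("moving object", 0.8),
    ("own low battery", 0.9),
    ("distant noise", 0.2),
]

ATTENTION_THRESHOLD = 0.5  # assumed rule for filtering out stimuli


def report_attention(observations):
    """Filter stimuli and report the current focus of attention."""
    salient = [(name, score) for name, score in observations
               if score >= ATTENTION_THRESHOLD]
    if not salient:
        return "attention: idle (no salient stimuli)"
    # Focus on the single most salient stimulus; everything else is filtered out.
    focus_name, _ = max(salient, key=lambda item: item[1])
    ignored = [name for name, score in observations if score < ATTENTION_THRESHOLD]
    return f"attention: {focus_name} (ignoring: {', '.join(ignored)})"


print(report_attention(stimuli))
</syntaxhighlight>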


Awareness could be another required characteristic of an artificially conscious "organism". However, again, there are problems with the exact definition of ''awareness''. One philosopher of mind has argued that even the thermostat can be considered conscious in a minimal sense: it knows if it is too hot, too cold, or at the correct temperature.


Personality is another characteristic generally considered vital to consciousness. In the area of behavioural psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people: the argument is that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine will need a personality capable of expression, such that human observers can interact with it in a meaningful way; if that proves impossible, it is held that the test of its consciousness would fail.


Anticipation is a final characteristic that could be used to define artificial consciousness. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication is that the machine needs real-time components, so that it can be shown to possess artificial consciousness in the present and not just in the past; to do this, it must operate coherently in an unpredictable environment such as the real world.
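
A comparably small sketch of anticipation, again in Python and again purely illustrative (the event stream, the frequency-based prediction rule, and the prepared responses are assumptions, not a design from the article), shows the idea of preparing a response before the next event arrives:

<syntaxhighlight lang="python">
from collections import Counter

# Toy anticipation: predict the next event from the frequency of recent
# events and prepare a response in advance. Events and responses are
# invented for illustration only.

history = ["ball thrown", "ball thrown", "whistle", "ball thrown"]
responses = {"ball thrown": "raise catching arm", "whistle": "stop moving"}


def anticipate(recent_events):
    """Return the most likely next event and the response prepared for it."""
    if not recent_events:
        return None, "observe"
    prediction = Counter(recent_events).most_common(1)[0][0]
    return prediction, responses.get(prediction, "observe")


predicted, prepared = anticipate(history)
print(f"expecting: {predicted!r}; prepared action: {prepared!r}")
</syntaxhighlight>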


==Schools of thought==


* "Objective stronger AC". AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of average human, even if it needs not to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as we objectively understand about the subject. Because of the demand to be capable of achieving all these abilities, AC may considered to be a strong ], but this also depends on how strong AI is defined.
There are three commonly stated views regarding the plausibility and likelihood of AC occurring, and some alternative views.


* "Weak AC". Artificial consciousness will never be real consciousness, but merely an ] to it, only a mimicking of something which only humans (and maybe some other ] beings) can truly ] or manifest.
===Objective Stronger AC===


* "Strong AC". Artificial consciousness is (or will be) real consciousness which just happens not to have arisen ]ly. The argument in favour of strong AC is essentially this: If artificial consciousness is not real consciousness because it is exhibited by a machine then we must assume that ] is not a ]. If there is something which is not a machine about a human then it must be ] or a magic spark and the Weak AC argument must then be made in religious or metaphysical terms. Alternatively, if the human is a machine, then the ] applies and the possibility of strong AC must be admitted.
AC should theoretically be capable of achieving all known, objectively observable abilities of consciousness of the average human, even if it need not have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as we objectively understand the subject. Because of the demand that it be capable of achieving all these abilities, AC may be considered a strong artificial intelligence, although this also depends on how strong AI is defined.


===Weak AC===


Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and perhaps some other sentient beings) can truly experience or manifest. This is, at present, the state of artificial intelligence: no computer has been able to pass the somewhat vague Turing test, which would be a first step towards an AI with a "personality", and that in turn would be the most likely path to strong AC.


* "Another alternative view". It is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument ''"I think, therefore I am"'', would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that, as it is a machine, it can not be conscious. Consciousness does not imply unfailing logical ability. No, if we retreat to the dictionary definition we find that consciousness is self-awareness, it is a totality of thought and experience. How ''rich'' or ''complete'' a consciousness is, whether something is ''more'' conscious than something else, is all that is open to question.
===Strong AC===


Proponents of this view believe that artificial consciousness is (or will be) real consciousness, albeit one that has not arisen naturally. The argument in favour of strong AC is essentially this: if artificial consciousness is not real consciousness because it is exhibited by a machine, then we must assume that the human is not a machine. If there is something about a human which is not a machine, then it must be the soul or a magic spark, and the weak AC argument must then be made in religious or metaphysical terms; such involvements generally weaken the argument and its factual basis. Alternatively, if the human is a machine, then the Church-Turing thesis applies and the possibility of strong AC must be admitted.


However, it is argued that until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that AC will most likely be real consciousness. The Church-Turing thesis implies that we would need new physics before two computing machines could be fundamentally different; by Occam's Razor, we should not posit new physics without good reason. The Copernican principle states that we should claim no special position for human beings without good reason. The only "good" reasons we have are those of arrogance: humans are supposedly too complicated, too special, too ''something'' for their brains to be built or copied artificially.


====Human-like AC====

As is to be expected, the weak and strong schools of thought differ on the question of whether artificial consciousness needs to be human-like or whether it could be of an entirely different nature. Proponents of strong AC generally hold the view that artificial consciousness does not need to be similar to human consciousness. Those who hold that artificial consciousness can never be really conscious (i.e., weak AC proponents) hold that AC, not being ''real'', can only be human-like because that is the only "''true''" model of consciousness that we will (most likely) ever have, and that this weak AC will be modeled on real consciousness and tested against it.

===Alternative Views===

One alternative view states that it is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument ''"I think, therefore I am"'' would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that, because it is a machine, it cannot be conscious; consciousness does not imply unfailing logical ability. If we look to the dictionary definition, we find that consciousness is self-awareness: a totality of thought and experience. How ''rich'' or ''complete'' a consciousness is, and whether something is ''more'' conscious than something else, remain open questions, and will likely remain so for some time.

Today's computers are not generally considered conscious. A Unix (or derivative) computer's response to the <code>wc -w</code> command, reporting the number of words in a text file, is not a particularly compelling manifestation of consciousness. However, its response to the <code>top</code> command, in which the computer reports in a continuous, real-time fashion which tasks it is or is not busy on, how much spare CPU power is available, and so on, is a particular if very limited manifestation of self-awareness (and therefore, by this definition, of consciousness). Computers are arguably better at monitoring their limited selves than humans are.
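
As a toy illustration of this very limited, <code>top</code>-like kind of self-monitoring, the following Python sketch (not part of the original article; the task names and utilisation figures are simulated rather than measured) has a program continuously report what it is busy on and how much spare capacity remains:

<syntaxhighlight lang="python">
import random
import time

# Toy self-report in the spirit of the Unix `top` command: the program
# describes its own (simulated) workload to an observer. Task names and
# utilisation levels are invented for illustration.

TASKS = ["index mailbox", "compress logs", "idle"]


def self_report(cycles=5):
    for _ in range(cycles):
        # Pretend to schedule some work: pick a task and a utilisation level.
        task = random.choice(TASKS)
        busy = 0.0 if task == "idle" else random.uniform(0.2, 0.9)

        # The "self-aware" part, in the article's limited sense: the system
        # reports its own current state.
        print(f"now doing: {task:15s}  busy: {busy:4.0%}  spare: {1 - busy:4.0%}")
        time.sleep(0.1)


self_report()
</syntaxhighlight>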

==Testing AC==


There are many methods and tests proposed for determining AC, or the existence thereof. The aspects of consciousness described above should be present before a device passes the test, but there is some debate over how many of them must be present. The majority opinion is, by nature, "all of them", yet this is very demanding, and there is no comprehensive catalogue of tests that can determine artificial consciousness; one must assume that if a machine even close to artificial consciousness were designed, the tests would have to be reviewed and applied accordingly. A further problem is that some capable humans might not be judged conscious if subjected to all of these tests: infants, for example, would obviously fail many of them.


The Turing test, as mentioned before, is a test proposed by Alan Turing to identify machine "intelligence" by testing a machine's capability to carry on human-like conversation. The Chinese room argument is an attempt to debunk the validity of the Turing test; nowadays, many computer scientists agree that the test is not by itself a very useful or comprehensive way to determine intelligence, personality, or consciousness.


No one would deny that Christopher Nolan was conscious when he wrote ''Under the Eye of the Clock'', and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book has been compared with the works of Yeats and Joyce, two highly acclaimed authors.


Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, by the Christopher Nolan argument the failure of any particular test would not disprove consciousness, and it is likely that the final test would be conducted in a similar manner to a Turing test, and the result would hence be subjective.


Integration tests combine several of these criteria. Anticipation, attentiveness, and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.


In the opinion of those holding the weak AC view, AC must be capable of achieving some of the same abilities as the average human, because consciousness is generally described in reference to human abilities. On this reasoning, AC must be capable of achieving all verifiable aspects of the consciousness of an average human, even if it need not exhibit all of them at any particular moment; it therefore always remains artificial, and is only as close to consciousness as we objectively understand the subject.

As artificial consciousness models are defined, designs implemented and models deployed and refined, it is argued that a time will come when artificial consciousness will become indistinguishable from consciousness.


A test may also fail simply because the system is not developed to the necessary level, or does not have enough resources, such as computer memory.

Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. One could counter that it is nonetheless conscious of its environment and aware of its actual nature as an artificial machine, analogous to our knowledge that we are humans. Artificial consciousness proponents have therefore loosened this constraint and allow that simulating or depicting a conscious machine, such as the robots in ''Star Wars'', could count not as examples of artificial consciousness, since their personalities are generated by actors, but as models. Hence, if someone were to produce an ''artificial'' C-3PO which behaved just like the ''real'' one, but without needing to be controlled and animated by a human, it could qualify as artificially conscious.


==Artificial consciousness as a field of study==


Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand the corresponding natural mechanisms.


The term "artificial consciousness" was used by several scientists including ] Igor Aleksander, a faculty member at the ] in ], ], who stated in his book ''Impossible Minds'' that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand ]. * The term "artificial consciousness" was used by several scientists including ], a professor at the ], who stated in his book Impossible Minds (IC Press 1996) that the principles for creating a conscious machine already existed but that it would take forty years to train a machine to understand ].


Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the early days of computing, computer scientists, philosophers, and science fiction writers have debated its meaning, its possibilities, and the question of what would constitute digital sentience.


==Artificial consciousness in literature and movies==


The following examples from science fiction are considered by some, though not by general agreement, to be fictional instances of artificial consciousness:


*] in ]'s '']''
*] in ]'s '']'', '']'', '']'', and '']''
*] in '']''
*] in '']''
*] in '']''
*] in '']''
*Robots in ]'s '']''
