
Artificial consciousness: Difference between revisions

Article snapshot taken from Wikipedia with creative commons attribution-sharealike license.

Revision as of 19:04, 3 May 2004

Artificial consciousness encompasses digital sentience and simulated consciousness. It is a term describing artificial systems (e.g. robots) that simulate some degree of consciousness or sentience.

Simulated consciousness cannot be real consciousness, by definition. Yet one school of thought (see below) holds that AC might be genuinely conscious. Therefore the terms artificial consciousness and simulated consciousness are not equivalent. Digital sentience assumes that the artificial consciousness is exhibited by a computer (or a system with a computer as its "brain"): the possibility of a man-made yet biological system being conscious shows that artificial consciousness is not equivalent to digital sentience.

Description

In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness and this is also one of the definitions of consciousness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com).

In more general terms, an AC system should be theoretically capable of achieving various (or, on a stricter view, all) verifiable, known, objective, and observable aspects of consciousness. Another definition of the word conscious is: "being conscious, capable of thought, will, or perception" (dictionary.com).

Aspects of AC

There are various aspects and/or abilities that are generally considered necessary for an AC system, or that an AC system should be able to learn; these are very useful as criteria for determining whether a certain machine is artificially conscious. Only the most cited are covered here; there are many others.

One aspect is the ability to predict external events in any environment in which a capable human could predict them. The ability to predict has been considered necessary for AC by several scientists, including Igor Aleksander.

Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test.

Another test should include a demonstration that the machine is capable of learning to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC: since we don't understand attentiveness in humans, we do not have specific, known criteria by which to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test.
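The attentiveness test described above can be illustrated with a toy sketch. This is not an implementation of any proposed AC architecture; the function name, the salience weights, and the threshold are all hypothetical, chosen only to show what "filtering out certain stimuli, focusing on one, and reporting where attention is focused" might look like as an observable output:

```python
def attention_focus(stimuli, salience):
    """Report which stimulus the agent currently attends to.

    stimuli:  list of stimulus names present in the environment
    salience: dict mapping stimulus name -> learned salience weight;
              unknown stimuli get a small default weight
    Returns the focused stimulus, or None (total inattentiveness).
    """
    if not stimuli:
        return None
    # Filter: ignore stimuli whose learned salience falls below a threshold.
    candidates = [s for s in stimuli if salience.get(s, 0.1) >= 0.2]
    if not candidates:
        return None
    # Focus: attend to the most salient remaining stimulus.
    return max(candidates, key=lambda s: salience.get(s, 0.1))

weights = {"alarm": 0.9, "hum": 0.05, "voice": 0.6}
print(attention_focus(["hum", "voice", "alarm"], weights))  # alarm
print(attention_focus(["hum"], weights))                    # None (filtered out)
```

The point of the sketch is only that the focus is an explicit, testable output: an observer can check at any moment what the system claims to be attending to, which is exactly what the test requires.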

Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness.

Personality is another characteristic that is generally considered vital within consciousness. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer considered generally useful.

Anticipation is the final characteristic that could possibly be used to define artificial consciousness. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
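The anticipation requirement above can likewise be sketched as a toy. The class name and the frequency-counting scheme are hypothetical illustrations, not a proposed AC design; the sketch only shows what it means for a machine to form an expectation online, in the present, so that its prediction can be checked against what actually occurs:

```python
from collections import Counter, defaultdict

class Anticipator:
    """Toy online predictor: learns event transitions as they happen
    and anticipates the most likely next event, so its expectation can
    be compared in real time against the environment."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev event -> next-event counts
        self.prev = None

    def observe(self, event):
        """Record an event as it occurs, updating transition counts."""
        if self.prev is not None:
            self.transitions[self.prev][event] += 1
        self.prev = event

    def anticipate(self):
        """Predict the next event given the last one seen, or None."""
        counts = self.transitions.get(self.prev)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

a = Anticipator()
for e in ["light", "bell", "food", "light", "bell", "food", "light", "bell"]:
    a.observe(e)
print(a.anticipate())  # food: in this history, "food" always followed "bell"
```

A frequency counter like this fails in a genuinely unpredictable environment, which is precisely why the text insists the test be run in one: anticipation that only works on replayed history demonstrates nothing about the present.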

Schools of thought

There are several commonly stated views regarding the plausibility and capability of AC, and the likelihood that AC will ever be real consciousness. Note that the terms Genuine and Not-genuine refer not to the capability of the artificial consciousness but to its reality (how close it is to real consciousness). Believers in Genuine AC think that AC can (one day) be real. Believers in Not-genuine AC think it never can be real.

Objective less Genuine AC

By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is an alternative view to "Genuine AC", in which AC is less genuine only because the study of AC must be as objective as the scientific method demands, while, following Thomas Nagel, consciousness includes subjective experience, which cannot be objectively observed. It does not intend to restrict AC in any other way.

An AC system must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as we objectively understand the subject. Because of the demand to be capable of achieving all these abilities, AC may be considered a strong artificial intelligence, though this also depends on how strong AI is defined.

Not-genuine AC

Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and some other sentient beings) can truly experience or manifest. Currently, this is the state of artificial intelligence, and holders of the Not-genuine AC hypothesis believe that this will always be the case. No computer has been able to pass the somewhat vague Turing test, which would be a first step toward an AI that contains a "personality"; this would perhaps be one path to a Genuine AC. Strictly, subjects belonging to other fields such as AI should not be subjects of AC; on this view, only study that cannot be categorized anywhere else, such as artificial emotions, is "Not-genuine AC".

Genuine AC

Proponents of this view believe that artificial consciousness is (or will be) real consciousness, albeit one that has not arisen naturally.

Nihilistic view

It is impossible to test whether anything is conscious.

Testing AC

Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation. Depending upon one's standpoint regarding what constitutes consciousness and the particular attributes of sentience and sapience necessary to demonstrate it, there are various views on what the test acceptance criteria for artificial consciousness should be.

The Turing test, as mentioned before, is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. The Chinese room argument attempts to debunk the validity of the Turing Test by showing that a machine can pass the test and yet not be sapient. In the Turing test one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
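The protocol of the Turing test described above can be sketched schematically. Everything here is a deliberately bare illustration of the test's structure (hidden entity, transcript, judge's guess), with hypothetical names; a real test would involve open-ended interactive dialogue, not canned replies:

```python
import random

def turing_test(judge, human_reply, machine_reply, questions):
    """Minimal sketch of one Turing-test session: the judge questions a
    hidden entity and must guess whether it is the machine or the human.
    Returns True if the judge identified the entity correctly."""
    entity_is_machine = random.choice([True, False])  # hidden assignment
    reply = machine_reply if entity_is_machine else human_reply
    transcript = [(q, reply(q)) for q in questions]
    guess_machine = judge(transcript)  # True means the judge says "machine"
    return guess_machine == entity_is_machine

# If the machine's replies were indistinguishable from the human's,
# no judge could do better than chance over many sessions.
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
naive_judge = lambda transcript: False  # always guesses "human"
result = turing_test(naive_judge, human, machine, ["Do you dream?"])
```

The sketch makes the text's point concrete: the test only measures whether the judge can tell the entities apart through interaction, not whether the hidden entity is sapient, which is exactly the gap the Chinese room argument exploits.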

Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.

Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. However, one could then argue that it was conscious of its environment, and it was self-aware of its actual nature: an artificial machine (analogous to our knowledge that we are humans).

Finally, an AC system may fail certain tests simply because the system is not developed to the necessary level or doesn't have enough resources; for example, a computer could fail an attentiveness test simply because it does not have enough memory.

Artificial consciousness as a field of study

Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.

The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.

Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning, possibilities and the question of what would constitute digital sentience.

Artificial consciousness in literature and movies

Fictional instances of artificial consciousness:

External links

See also