Artificial consciousness (AC) is a form of consciousness equivalent to digital sentience and simulated consciousness. The term describes artificial creatures (generally termed robots) that possess some degree of consciousness or sentience, as discussed throughout this article.
Description
In computer science, the synonymous term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, will be a form of strong artificial intelligence. A generally accepted criterion for sentience is self-awareness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com).
In more general terms, an AC system should be able to exhibit various verifiable, known, objective, and observable aspects of consciousness. Another definition of the word conscious is: "being conscious, capable of thought, will, or perception" (dictionary.com).
Aspects of AC
Various aspects and abilities are generally considered necessary, or at least very useful, for regarding a machine as artificially conscious. Those discussed below are only the most frequently cited; there are many others that are not covered here.
One related aspect is the ability to predict external events in certain environments in which a human could predict them. The ability to predict has been considered necessary for AC by several scientists, including Igor Aleksander.
Consciousness is sometimes defined as self-awareness. While self-awareness is very important in determining whether a machine is conscious, it is generally difficult to test. For example, recent work on measuring consciousness in the fly has found that it exhibits aspects of attention that correspond, at the neurological level, to those of a human.
Another test should include a demonstration of the machine's ability to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of an "artificially conscious" machine; since we don't understand attentiveness in humans, we do not have specific and known criteria to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an artificially conscious machine should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test.
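As a purely illustrative sketch of the kind of observable attention output described above (the stimulus names, salience scores, and threshold are assumptions invented for this example, not part of any proposed AC test), a machine could filter a set of stimuli and report which one currently holds its focus:

```python
# Hypothetical sketch: filter stimuli by salience and report where
# "attention" is focused at this moment.

from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    salience: float  # assumed scale: 0.0 (ignorable) to 1.0 (urgent)

def attend(stimuli, threshold=0.5):
    """Discard weak stimuli and return the most salient remaining one."""
    candidates = [s for s in stimuli if s.salience >= threshold]
    if not candidates:
        return None  # nothing worth attending to
    return max(candidates, key=lambda s: s.salience)

if __name__ == "__main__":
    frame = [
        Stimulus("background hum", 0.1),
        Stimulus("moving object", 0.7),
        Stimulus("loud noise", 0.9),
    ]
    focus = attend(frame)
    # The explicit report of the current focus is the testable output
    # called for above.
    print(f"attention focused on: {focus.name if focus else 'nothing'}")
```

The point of the sketch is only that the machine's focus is externally reported and therefore testable, not that salience thresholding is how attention actually works.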
Awareness could be another required characteristic of an artificially conscious "organism". However, again, there are some problems with the exact definition of awareness. The philosopher David Chalmers argues that even a thermostat can be considered conscious in a minimal sense: it knows whether it is too hot, too cold, or at the correct temperature.
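Chalmers' thermostat example can be made concrete in a few lines; the sketch below is only an illustration of that minimal, three-state kind of "knowing" (the setpoint and tolerance values are arbitrary assumptions):

```python
# Minimal thermostat sketch: the device distinguishes exactly three
# states of its world, which is the sense of "awareness" at issue.

def thermostat_state(current_temp, setpoint, tolerance=0.5):
    """Classify the current temperature relative to the setpoint."""
    if current_temp > setpoint + tolerance:
        return "too hot"
    if current_temp < setpoint - tolerance:
        return "too cold"
    return "correct temperature"

if __name__ == "__main__":
    for reading in (18.0, 21.0, 24.5):
        print(reading, "->", thermostat_state(reading, setpoint=21.0))
```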
Personality is another characteristic that is generally considered vital to consciousness. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine will need a personality capable of expression, such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which probes a machine's ability to display a human-like personality in conversation, is no longer generally considered useful.
Anticipation is the final characteristic that could possibly be used to define artificial consciousness. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
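As a hypothetical illustration of anticipation in this sense (the event stream, the frequency-based predictor, and the weights are assumptions for this sketch, not a proposed design), a machine could commit to a prediction before each event arrives and then score itself against what actually happens:

```python
# Illustrative sketch: anticipate the next event from observed history,
# committing to the guess before the (unpredictable) event occurs.

import random
from collections import Counter

class Anticipator:
    def __init__(self):
        self.history = Counter()

    def observe(self, event):
        self.history[event] += 1

    def anticipate(self):
        """Predict the most frequently seen event so far, if any."""
        if not self.history:
            return None
        return self.history.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    events = ["door opens", "phone rings"]
    agent = Anticipator()
    hits = 0
    for _ in range(20):
        guess = agent.anticipate()                           # made in advance
        actual = random.choices(events, weights=[3, 1])[0]   # unpredictable environment
        hits += (guess == actual)
        agent.observe(actual)
    print(f"correct anticipations: {hits}/20")
```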
Schools of thought
There are three commonly stated views regarding the plausibility and likelihood of AC occurring, and some alternative views.
Objective Stronger AC
AC should theoretically be capable of achieving certain known, objective abilities of human consciousness, although not necessarily all of them simultaneously. On this view, AC is objective and always remains artificial, and is only as close to consciousness as humans' objective understanding of the subject allows. AC may be considered a form of strong artificial intelligence, given the demanding requirements the concept imposes, but this also depends on how strong AI is defined.
Weak AC
Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and some other sentient beings) can truly experience or manifest. Currently, this is the state of artificial intelligence. No computer has been able to pass the somewhat vague Turing test, which would be the first step to an AI that contains a "personality"; this would be the most likely path to a strong AC.
Strong AC
Proponents of this view believe that artificial consciousness is (or will be) real consciousness, albeit one that has not arisen naturally. The argument in favour of strong AC is essentially this: if artificial consciousness is not real consciousness merely because it is exhibited by a machine, then we must assume that the human is not a machine. If there is something about a human that is not a machine, then it must be a soul or a magic spark, and the weak AC argument must then be made in religious or metaphysical terms; such commitments generally weaken the argument and its factual basis. Alternatively, if the human is a machine, then the Church-Turing thesis applies and the possibility of strong AC must be admitted.
It is further argued that, until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that AC will most likely develop into real consciousness. The Church-Turing thesis suggests that, barring new physics, one computing machine can do whatever another can; by Occam's Razor, we should not posit new physics without good reason. The Copernican principle states that we should claim no special position for human beings without good reason. The only "good" reasons we have are those of arrogance: humans are supposedly too complicated, too special, or otherwise exceptional for their brains to be built or copied artificially.
Human-like AC
As is to be expected, the weak and strong schools of thought differ on the question of whether artificial consciousness needs to be human-like or whether it could be of an entirely different nature. Proponents of strong AC generally hold that artificial consciousness does not need to resemble human consciousness. Those who hold that artificial consciousness can never be really conscious (i.e., weak AC proponents) argue that AC, not being real, can only be human-like, because human consciousness is the only "true" model we are ever likely to have, and that weak AC will therefore be modelled on real consciousness and tested against it.
Alternative views
One alternative view states that it is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am" would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that because it is a machine, it cannot be conscious. Consciousness does not imply unfailing logical ability. If we look at the dictionary definition, we find that consciousness is self-awareness: a totality of thought and experience. The richness or completeness of consciousness, degrees of consciousness, and many other related topics are under discussion, and will be for some time (possibly forever).
Today's computers are not generally considered conscious. A UNIX (or derivative) computer's response to the wc -w command, reporting the number of words in a text file, is not a particularly compelling manifestation of consciousness. However, the response to the top command, in which the computer reports in a real-time, continuous fashion each of the tasks it is or is not busy with, how much spare CPU power is available, and so on, is a particular, if very limited, manifestation of self-awareness (and therefore, by definition, of consciousness).
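In the same spirit, the sketch below (a made-up, UNIX-only illustration using the Python standard library; the "spare capacity" figure is a rough assumption) has a program report on the load of the machine it runs on, which is the kind of limited self-report the top example describes:

```python
# Illustrative sketch (UNIX-only): a program reporting, in a very limited
# way, on the state of the machine it is running on -- analogous to top.

import os
import time

def self_report():
    """Return a one-line summary of current load versus capacity."""
    load_1min, _, _ = os.getloadavg()   # average runnable processes, last minute
    cpus = os.cpu_count() or 1
    spare = max(0.0, 1.0 - load_1min / cpus)
    return f"load {load_1min:.2f} on {cpus} CPUs, roughly {spare:.0%} spare capacity"

if __name__ == "__main__":
    for _ in range(3):
        print(self_report())
        time.sleep(1)
```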
Testing AC
There are many proposed methods and tests for determining whether AC exists in a device. The aspects of consciousness described above should be present before a device passes such a test. There is some debate over how many of these characteristics must be present; the majority opinion is, naturally, "all of them", but this is implausible in practice, and there is no comprehensive catalogue of tests that can determine artificial consciousness. One must therefore assume that if a machine even close to artificial consciousness is ever designed, the available tests will have to be reviewed and applied accordingly. Another point is that many humans would not be considered conscious if subjected to all of these tests; infants, mentally handicapped persons, and others would obviously fail many of them.
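One way to picture such a battery of tests is sketched below; the individual criteria, the candidate description, and the scoring are hypothetical placeholders, since, as noted above, no comprehensive catalogue of AC tests exists:

```python
# Hypothetical sketch: run a battery of AC criterion checks and report
# per-criterion results rather than a single pass/fail verdict.

def run_battery(candidate, tests):
    """Apply each named test to the candidate and collect the outcomes."""
    return {name: bool(test(candidate)) for name, test in tests.items()}

if __name__ == "__main__":
    # Placeholder checks; a real battery would probe the behaviours
    # discussed in this article (attention, anticipation, and so on).
    tests = {
        "reports focus of attention": lambda c: c.get("attention", False),
        "anticipates events":         lambda c: c.get("anticipation", False),
        "expresses a personality":    lambda c: c.get("personality", False),
    }
    candidate = {"attention": True, "anticipation": True, "personality": False}
    results = run_battery(candidate, tests)
    print(results)
    print(f"passed {sum(results.values())} of {len(tests)} criteria")
```

Reporting per-criterion results, rather than a single verdict, reflects the point above that failing one test need not disqualify a candidate.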
The Turing test, mentioned above, was proposed by Alan Turing to identify machine "intelligence" by testing a machine's capacity for human-like conversation. The Chinese room argument is an attempt to debunk the validity of the Turing test; nowadays, most computer scientists agree that the test is not a very useful or comprehensive way to determine intelligence, personality, or consciousness.
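For readers unfamiliar with the mechanics of the test, the following is a heavily simplified, hypothetical rendering of the imitation game (the canned respondent and the questions are invented for illustration): a judge poses questions to a hidden respondent and must guess whether the replies come from a person or a program.

```python
# Heavily simplified, hypothetical sketch of the imitation game: the judge
# sees only text replies and must guess the respondent's nature.

def machine_respondent(question):
    # A trivial canned-reply stand-in for a conversational program.
    return "That's an interesting question. What do you think?"

def judge(ask, questions):
    """Collect replies from an unseen respondent and show the transcript."""
    for q in questions:
        print(f"Q: {q}\nA: {ask(q)}")
    # A real judge weighs the whole conversation; the verdict is a human
    # judgement, not an objective measurement.
    return "verdict: the judge must guess 'human' or 'machine'"

if __name__ == "__main__":
    print(judge(machine_respondent,
                ["What is your favourite memory?", "Why do you like it?"]))
```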
People would not deny that Christopher Nolan was conscious when he wrote Under the Eye of the Clock, and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book has been compared with the works of William Butler Yeats and James Joyce, two highly acclaimed authors.
Since there is an enormous range of human behaviours, all of which are deemed conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for testing a machine's consciousness, the Christopher Nolan argument shows that the failure of any particular test would not disprove consciousness. The final test would therefore likely be conducted in a manner similar to a Turing test, and its result would hence be subjective.
Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.
As artificial consciousness models are defined, designs implemented and models deployed and refined, it is argued that a time will come when artificial consciousness will become indistinguishable from consciousness.
Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. However, one could then argue that it is conscious of its environment and aware of its actual nature as an artificial machine, analogous to our knowledge that we are humans. Artificial consciousness proponents have therefore loosened this constraint and allow that a simulation or depiction of a conscious machine, such as the robots in Star Wars, could count, not as an example of artificial consciousness (since their personalities are generated by actors), but as a model. Hence, if someone were to produce an artificial C-3PO which behaved just like the real one, but without needing to be controlled and animated by a human, then it could qualify as being artificially conscious.
Artificial consciousness as a field of study
Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.
The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.
Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning and possibility of digital sentience, and what would constitute it.
Artificial consciousness in literature and movies
There is no general agreement that all of these examples from science fiction depict artificial consciousness:
- Vanamonde in Arthur C. Clarke's The City and the Stars
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and The Investment Counselor
- HAL in 2001: A Space Odyssey
- R2-D2 in Star Wars
- C-3PO in Star Wars
- Data in Star Trek
- Robots in Isaac Asimov's Robot Series