
Artificial consciousness

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

This is an old revision of this page, as edited by Tkorrovi (talk | contribs) at 22:35, 3 May 2004 (Major edit. You exceed your limit to revert 3 times. You hsve no right to offend me in the article.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


Artificial consciousness encompasses digital sentience and simulated consciousness. The term describes artificial systems (e.g. robots) that simulate some degree of consciousness or sentience.

Simulated consciousness cannot be real consciousness, by definition. Yet one school of thought (see below) holds that AC might be genuinely conscious. Therefore the terms artificial consciousness and simulated consciousness are not equivalent. Digital sentience assumes that the artificial consciousness is exhibited by a computer (or a system with a computer as its "brain"). The possibility of a man-made yet biological system being conscious demonstrates that artificial consciousness is not equivalent to digital sentience either.

Description

In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness, which is also one of the definitions of consciousness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com).

In more general terms, an AC system should be theoretically capable of achieving various (or, on a stricter view, all) verifiable, known, objective, and observable aspects of consciousness. Another definition of the word conscious is: "being conscious, capable of thought, will, or perception" (dictionary.com).

Aspects of AC

There are various aspects and/or abilities that are generally considered necessary for an AC system, or which an AC system should be able to learn; these are very useful as criteria to determine whether a certain machine is artificially conscious. Only the most frequently cited are covered here; there are many others.

One aspect is the ability to predict external events in any environment in which a capable human could predict them. The ability to predict has been considered necessary for AC by several scientists, including Igor Aleksander.

Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test.

Another test should include a demonstration that the machine can learn to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC: since we do not understand attentiveness in humans, we have no specific, known criteria by which to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC system should have outputs that indicate where its attention is focused at any one time, at least during such a test.
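To make the requirement concrete, the following is a minimal, purely illustrative sketch of the kind of output such a test might inspect: a toy module that filters stimuli by a salience score and reports a single focus of attention. The scoring rule, the threshold, and all names are hypothetical assumptions for illustration, not part of any established AC test.

```python
# Toy "attention" sketch: filter stimuli by salience, report the focus.
# The salience formula and threshold are illustrative assumptions only.

def salience(stimulus):
    # Toy scoring: intense, novel stimuli score higher.
    return stimulus.get("intensity", 0) * stimulus.get("novelty", 1)

def focus_of_attention(stimuli, threshold=5):
    # Filter out low-salience stimuli, then report the single stimulus
    # the system is currently "attending" to, if any.
    candidates = [s for s in stimuli if salience(s) >= threshold]
    if not candidates:
        return None
    return max(candidates, key=salience)["name"]

stimuli = [
    {"name": "ticking clock", "intensity": 1, "novelty": 1},
    {"name": "doorbell", "intensity": 4, "novelty": 3},
    {"name": "hum of fridge", "intensity": 2, "novelty": 1},
]
print(focus_of_attention(stimuli))  # -> doorbell
```

The point of the sketch is only that the system's focus is externally observable at any one time, which is what the test above demands; how attention is actually computed is exactly the open question the passage describes.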

Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness.

Personality is another characteristic that is generally considered vital within consciousness. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer generally considered useful.

Anticipation is the final characteristic that could possibly be used to define artificial consciousness. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
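The anticipation requirement above can be illustrated with a minimal sketch: a toy component that learns transition frequencies between observed events and predicts the most likely successor, so that it is "ready" for an event before it occurs. The learning rule and all names here are illustrative assumptions, not a proposed AC mechanism.

```python
# Toy anticipation sketch: learn which event tends to follow which,
# then predict the most frequent successor of the last event seen.
from collections import defaultdict, Counter

class Anticipator:
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, event):
        # Record the transition from the previous event to this one.
        if self.last is not None:
            self.transitions[self.last][event] += 1
        self.last = event

    def anticipate(self):
        # Predict the most frequent successor of the last event seen.
        followers = self.transitions.get(self.last)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

a = Anticipator()
for event in ["rain", "umbrella", "rain", "umbrella", "rain"]:
    a.observe(event)
print(a.anticipate())  # -> umbrella
```

A frequency table like this only works in a predictable environment; the passage's demand for coherent operation in an unpredictable, real-time environment is precisely what separates genuine anticipation from a lookup of past statistics.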

Schools of thought

There are several commonly stated views regarding the plausibility and capability of AC, and the likelihood that AC will ever be real consciousness. Note that the terms Genuine and Not-genuine refer not to the capability of the artificial consciousness but to its reality (how close it is to real consciousness). Believers in Genuine AC think that AC can (one day) be real. Believers in Not-genuine AC think it never can be real.

Objective less Genuine AC

By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". This is an alternative to the "Genuine AC" view, under which AC is less genuine only because the study of AC must be as objective as the scientific method demands, whereas, following Thomas Nagel, consciousness includes subjective experience that cannot be objectively observed. The view does not intend to restrict AC in any other way.

An AC system must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it need not have all of them at any particular moment. Therefore AC is objective and always remains artificial, and is only as close to consciousness as our objective understanding of the subject allows. Because of the demand to be capable of achieving all these abilities, AC may be considered a strong artificial intelligence, but this also depends on how strong AI is defined.

Not-genuine AC

Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and some other sentient beings) can truly experience or manifest. Currently, this is the state of artificial intelligence, and holders of the Not-genuine AC hypothesis believe that this will always be the case. No computer has been able to pass the somewhat vague Turing test, which would be a first step toward an AI that contains a "personality"; this would perhaps be one path to a Genuine AC. Strictly speaking, subjects belonging to other fields, such as AI, should not be subjects of AC; on this view, only study that cannot be categorized anywhere else, such as artificial emotions, counts as "Not-genuine AC".

Genuine AC

Proponents of this view believe that artificial consciousness is (or will be) real consciousness, albeit one that has not arisen naturally.

It is argued that until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that AC can be real consciousness and that the building of AC which is real consciousness is likely: The human being is nothing but a machine. The Church-Turing thesis states that we would need new physics before two computing machines could differ in what they can compute; by Occam's Razor, we should not posit new physics without good reason. The Copernican principle states that we should claim no special position for human beings without good reason. The only "good" reasons we have are those of arrogance: Humans are supposedly too complicated or special (or some other similar term) for their brains to be built or copied artificially, or for an alternative artificial architecture to the brain to be truly capable of consciousness.

Nihilistic view

It is impossible to test whether anything is conscious.

Alternative Views

One alternative view states that it is possible for a human to deny its own existence and thereby, presumably, its own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am", would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that because it is a machine, it cannot be conscious. Consciousness does not imply unfailing logical ability. If we look at the dictionary definition, we find that consciousness is self-awareness: a totality of thought and experience. The richness or completeness of consciousness, degrees of consciousness, and many other related topics are under discussion, and will be so for some time (possibly forever). That one entity's consciousness is less "advanced" than another's does not prevent each from considering its own consciousness rich and complete.

Testing AC

Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation. Depending upon one's standpoint regarding what constitutes consciousness and the particular attributes of sentience and sapience necessary to demonstrate it, there are various views on what the test acceptance criteria for artificial consciousness should be.

The Turing test, as mentioned before, is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. The Chinese room argument attempts to debunk the validity of the Turing Test by showing that a machine can pass the test and yet not be sapient. In the Turing test one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
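The blind protocol of the Turing test can be sketched in miniature: a judge receives only the text of a reply from an unidentified respondent and must guess "human" or "machine". The canned respondents and the judge's heuristic below are illustrative assumptions only; they show the shape of the protocol, not a serious test.

```python
# Toy imitation-game sketch: the judge sees only the reply text,
# never the identity of the respondent. All behavior here is canned
# and purely illustrative.
import random

def machine_respondent(question):
    return "I am not sure how to answer that."

def human_respondent(question):
    return "Hmm, let me think... it depends, honestly."

def naive_judge(reply):
    # A deliberately weak heuristic judge.
    return "human" if "hmm" in reply.lower() else "machine"

def imitation_game(judge, respondents, question):
    # Pair the judge with a randomly chosen, unidentified respondent.
    label, respond = random.choice(respondents)
    guess = judge(respond(question))
    return label, guess

respondents = [("machine", machine_respondent), ("human", human_respondent)]
label, guess = imitation_game(naive_judge, respondents, "What is love?")
print(label == guess)  # -> True for these canned respondents
```

The Chinese room argument applies directly to such a setup: a respondent could satisfy any judge of this kind by symbol manipulation alone, which is why passing the protocol is not taken to establish sapience.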

Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.

Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. However, one could then argue that it was conscious of its environment, and it was self-aware of its actual nature: an artificial machine (analogous to our knowledge that we are humans).

Finally, an AC system may fail certain tests simply because the system is not developed to the necessary level or doesn't have enough resources; for example, a computer could fail an attentiveness test simply because it does not have enough memory.

Artificial consciousness as a field of study

Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.

The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.

Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning, possibilities and the question of what would constitute digital sentience.

Artificial consciousness in literature and movies

Fictional instances of artificial consciousness:

External links

See also